In the preceding chapter, you learned about using Microsoft Azure with PowerShell. Now let’s see what we can do with Amazon Web Services (AWS). In this chapter, you’ll go deep into using PowerShell with AWS. Once you’ve learned how to authenticate to AWS with PowerShell, you’ll learn how to create an EC2 instance from scratch, deploy an Elastic Beanstalk (EB) application, and create an Amazon Relational Database Service (Amazon RDS) Microsoft SQL Server database.
Like Azure, AWS is a juggernaut in the cloud world. Chances are high that if you’re in IT, you’ll be working with AWS in some way in your career. And as with Azure, there’s a handy PowerShell module for working with AWS: AWSPowerShell.
You can install AWSPowerShell from the PowerShell Gallery the same way you installed the AzureRm module, by calling Install-Module AWSPowerShell. Once this module is downloaded and installed, you’re ready to go.
I’m assuming you already have an AWS account and that you have access to the root user. You can sign up for an AWS free tier account at https://aws.amazon.com/free/. You won’t need to do everything as root, but you will need it to create your first identity and access management (IAM) user. You’ll also need to have the AWSPowerShell module downloaded and installed, as noted earlier.
In AWS, authentication is done using the IAM service, which handles authentication, authorization, accounting, and auditing in AWS. To authenticate to AWS, you must have an IAM user created under your account, and that user must have access to the appropriate resources. The first step to working with AWS is creating an IAM user.
When an AWS account is created, a root user is automatically created, so you’ll use the root user to create your IAM user. Technically, you could use the root user to do anything in AWS, but that is highly discouraged.
Let’s create the IAM user you’ll use throughout the rest of the chapter. First, however, you need to somehow authenticate it. Without another IAM user, the only way to do that is with the root user. Sadly, this means you have to abandon PowerShell for a moment. You’ll have to use the AWS Management Console’s GUI to get the root user’s access and secret keys.
Your first move is to log into your AWS account. Navigate to the right-hand corner of the screen and click the account drop-down menu, shown in Figure 13-1.
Figure 13-1: My Security Credentials option
Click the My Security Credentials option. A screen will pop up, warning that messing with your security credentials isn’t a good idea; see Figure 13-2. But you need to do it here to create an IAM user.
Figure 13-2: Authentication warning
Click Continue to Security Credentials, then click Access Keys. Clicking Create New Access Key should present a way to view your account’s access key ID and secret key. It should also give you an option to download a key file containing both. If you haven’t already, download the file and put it in a safe spot. For now, though, you need to copy the access key and secret key from this page and add them to your default profile in your PowerShell session.
Pass both of these keys to the Set-AWSCredential command, which saves them so they can be reused by the commands that’ll create an IAM user. Check out Listing 13-1 for the full command.
PS> Set-AWSCredential -AccessKey 'access key' -SecretKey 'secret key'
With that done, you’re ready to create an IAM user.
Now that you’re authenticated as the root user, you can create an IAM user. Use the New-IAMUser command, specifying the name of the IAM user you’d like to use (in this example, Automator). When you create the user, you should see output like that in Listing 13-2.
PS> New-IAMUser -UserName Automator
Arn : arn:aws:iam::013223035658:user/Automator
CreateDate : 9/16/2019 5:01:24 PM
PasswordLastUsed : 1/1/0001 12:00:00 AM
Path : /
PermissionsBoundary :
UserId : AIDAJU2WN5KIFOUMPDSR4
UserName : Automator
The next step is to give the user the appropriate permission. You do that by assigning this user a role that’s assigned a policy. AWS groups certain permissions in units called roles, which allow the administrator to more easily delegate permissions (a strategy known as role-based access control, or RBAC). The policy then determines what permissions a role has access to.
You can create a role by using the New-IAMRole command, but first you need to create what AWS calls a trust relationship policy document: a string of text in JSON that defines the services that this user can access and the level at which they can access them.
Listing 13-3 is an example of a trust relationship policy document.
{
    "Version": "2012-10-17",
    "Statement": [
        {
            "Effect": "Allow",
            "Principal": { "AWS": "arn:aws:iam::013223035658:user/Automator" },
            "Action": "sts:AssumeRole"
        }
    ]
}
This JSON changes the role itself (modifying its trust policy) to allow your Automator user to use it, by giving the AssumeRole permission to your user. This is required to create the role. Note that the Version field is the fixed version of the IAM policy language, not today’s date. For more information about how to create a trust relationship policy document, refer to https://docs.aws.amazon.com/IAM/latest/UserGuide/id_roles_manage_modify.html.
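Rather than hand-typing the JSON string, you can also build the policy from a PowerShell hashtable and let ConvertTo-Json serialize it. This is just a sketch using the example account ID and username from this chapter:

```powershell
# A sketch: build the trust policy from a hashtable instead of a raw string.
# The ARN below is the example account/user used throughout this chapter.
$policy = @{
    Version   = '2012-10-17'
    Statement = @(
        @{
            Effect    = 'Allow'
            Principal = @{ AWS = 'arn:aws:iam::013223035658:user/Automator' }
            Action    = 'sts:AssumeRole'
        }
    )
}

# -Depth ensures the nested hashtables aren't truncated during serialization
$json = $policy | ConvertTo-Json -Depth 5
```

This approach avoids quoting mistakes in long JSON strings and makes it easy to tweak the policy programmatically.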
Assign this JSON string to a $json variable and then pass it as the value of the AssumeRolePolicyDocument parameter in New-IamRole, as shown in Listing 13-4.
PS> $json = '{
>>     "Version": "2012-10-17",
>>     "Statement": [
>>         {
>>             "Effect": "Allow",
>>             "Principal" : { "AWS": "arn:aws:iam::013223035658:user/Automator" },
>>             "Action": "sts:AssumeRole"
>>         }
>>     ]
>> }'
PS> New-IAMRole -AssumeRolePolicyDocument $json -RoleName 'AllAccess'

Path RoleName  RoleId                CreateDate
---- --------  ------                ----------
/    AllAccess AROAJ2B7YC3HH6M6F2WOM 9/16/2019 6:05:37 PM
Now that the IAM role is created, you need to give it permission to access the various resources you’ll be working with. Rather than spend the next 12 dozen pages detailing AWS IAM roles and security, let’s do something simple and give the Automator full access to everything (effectively making it a root user).
Note that in practice, you should not do this. It’s always best to limit access to only the permissions necessary. Check out the AWS IAM Best Practices guide (https://docs.aws.amazon.com/IAM/latest/UserGuide/best-practices.html) for more information. But for now, let’s assign this user the AdministratorAccess managed policy by using the Register-IAMUserPolicy command. You’ll need the Amazon Resource Name (ARN) of the policy. To get it, you can use the Get-IAMPolicies command and filter by policy name, storing the resulting ARN in a variable and passing the variable to Register-IAMUserPolicy (all of which you can see in Listing 13-5).
PS> $policyArn = (Get-IAMPolicies | where {$_.PolicyName -eq 'AdministratorAccess'}).Arn
PS> Register-IAMUserPolicy -PolicyArn $policyArn -UserName Automator
The last thing you need to do is generate an access key that will let you authenticate your user. Do this with the New-IAMAccessKey command, as shown in Listing 13-6.
PS> $key = New-IAMAccessKey -UserName Automator
PS> $key

AccessKeyId     : XXXXXXXX
CreateDate      : 9/16/2019 6:17:40 PM
SecretAccessKey : XXXXXXXXX
Status          : Active
UserName        : Automator
Your new IAM user is all set up. Now let’s authenticate it.
In an earlier section, you authenticated with the root user, but that was a temporary measure: you need to authenticate as your IAM user before you can do just about anything in AWS. You’ll again use the Set-AWSCredential command to update your profile with your new access and secret keys. Change the command a bit, though, by using the StoreAs parameter, as shown in Listing 13-7. Because you’ll be using this IAM user throughout the rest of the chapter, you’ll store the access and secret keys in the AWS default profile so you don’t have to run this command again for every session.
PS> Set-AWSCredential -AccessKey $key.AccessKeyId -SecretKey $key.SecretAccessKey -StoreAs 'Default'
The final command to run is Initialize-AWSDefaultConfiguration -Region 'your region here', which saves you from having to specify the region every time you call a command. This is a one-time step. You can run Get-AWSRegion to list all available regions and find the one closest to you.
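For example, to list the regions and then set a default, you’d run something like the following (us-east-1 is just an example; substitute whichever region Get-AWSRegion shows is closest to you):

```powershell
# List every available region with its friendly name
PS> Get-AWSRegion | Format-Table -Property Region, Name

# Store a region as the default for this machine (run once)
PS> Initialize-AWSDefaultConfiguration -Region 'us-east-1'
```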
That’s it! You now have an authenticated session in AWS and can move on to working with AWS services. To confirm, run Get-AWSCredentials with the ListProfileDetail parameter to look for all saved credentials. If all is well, you will see the default profile show up:
PS> Get-AWSCredentials -ListProfileDetail
ProfileName StoreTypeName ProfileLocation
----------- ------------- ---------------
Default NetSDKCredentialsFile
In Chapter 12, you created an Azure virtual machine. Here, you’ll do something similar by creating an AWS EC2 instance, which offers the same learning opportunity that an Azure virtual machine does; creating VMs is an extremely common task, whether you’re using Azure or AWS. However, provisioning a VM in AWS works differently than in Azure: the underlying APIs are different, so the commands you run will be different, and it doesn’t help that AWS has its own lingo! In a nutshell, though, you’ll be performing essentially the same task: creating a virtual machine. I’ve tried to mirror the steps we took to create the VM in the preceding chapter, but of course, because of the architectural and syntactic differences between Azure and AWS, you will see some noticeable differences.
Luckily, just as with Azure, the AWSPowerShell module saves you from having to write everything from scratch. Just as you did in the preceding chapter, you’ll build from the ground up: setting up all the dependencies you need and then creating the EC2 instance.
The first dependency you need is a network. You can use an existing network or build your own. Because this book is hands-on, you’ll build your own network from scratch. In Azure, you did this with a vNet, but in AWS, you’ll work with a virtual private cloud (VPC), the network fabric that allows the virtual machine to connect with the rest of the cloud. To replicate the settings an Azure vNet might have, you’ll simply create a VPC with a single subnet set to its most basic level. Because there is such a wide range of configuration options to choose from, it’s best to mirror our Azure network configuration as closely as possible.
Before you get started, you need to decide on the network you’d like to create. Let’s use 10.0.0.0/16 as our example network. You’ll store that information in a variable and use the New-EC2Vpc command, as shown in Listing 13-8.
PS> $network = '10.0.0.0/16'
PS> $vpc = New-EC2Vpc -CidrBlock $network
PS> $vpc

CidrBlock                   : 10.0.0.0/16
CidrBlockAssociationSet     : {vpc-cidr-assoc-03f1edbc052e8c207}
DhcpOptionsId               : dopt-3c9c3047
InstanceTenancy             : default
Ipv6CidrBlockAssociationSet : {}
IsDefault                   : False
State                       : pending
Tags                        : {}
VpcId                       : vpc-03e8c773094d52eb3
Once you create the VPC, you have to enable DNS support manually (Azure did this for you automatically). Enabling DNS support points the servers attached to this VPC to an internal Amazon DNS server. Likewise, you need to enable DNS hostnames manually so your instances receive a public hostname (another thing Azure took care of for you). Do both of these by using the code in Listing 13-9.
PS> Edit-EC2VpcAttribute -VpcId $vpc.VpcId -EnableDnsSupport $true PS> Edit-EC2VpcAttribute -VpcId $vpc.VpcId -EnableDnsHostnames $true
Notice that you use the Edit-EC2VpcAttribute command for both. As its name suggests, this command lets you edit several of your EC2 VPC’s attributes.
The next step is creating an internet gateway. This allows your EC2 instance to route traffic to and from the internet. Again, you need to do this manually, here using the New-EC2InternetGateway command (Listing 13-10).
PS> $gw = New-EC2InternetGateway
PS> $gw

Attachments InternetGatewayId     Tags
----------- -----------------     ----
{}          igw-05ca5aaa3459119b1 {}
Once the gateway is created, you have to attach it to your VPC by using the Add-EC2InternetGateway command, as shown in Listing 13-11.
PS> Add-EC2InternetGateway -InternetGatewayId $gw.InternetGatewayId -VpcId $vpc.VpcId
With the VPC out of the way, let’s take the next step and add a route to your network.
With the gateway created, you now need to create a route table and a route so that the EC2 instances on your VPC can access the internet. A route is a path that network traffic takes to find the destination. A route table is a, well, table of routes. Your route needs to go in a table, so you’ll create the route table first. Use the New-EC2RouteTable command, passing in your VPC ID (Listing 13-12).
PS> $rt = New-EC2RouteTable -VpcId $vpc.VpcId
PS> $rt

Associations    : {}
PropagatingVgws : {}
Routes          : {}
RouteTableId    : rtb-09786c17af32005d8
Tags            : {}
VpcId           : vpc-03e8c773094d52eb3
Inside the route table, you create a route that points to the gateway you just created. You’re creating a default route, or default gateway, meaning a route that outgoing network traffic will take if a more specific route isn’t defined. You’ll route all traffic (0.0.0.0/0) through your internet gateway. Use the New-EC2Route command, which will return True if successful, as shown in Listing 13-13.
PS> New-EC2Route -RouteTableId $rt.RouteTableId -GatewayId $gw.InternetGatewayId -DestinationCidrBlock '0.0.0.0/0'
True
As you can see, your route was created successfully!
Next, you have to create a subnet inside your larger VPC and associate it with your route table. Remember that a subnet defines the logical network that your EC2 instance’s network adapter will be a part of. To create one, you use the New-EC2Subnet command, and then use the Register-EC2RouteTable command to register the subnet to the route table you built earlier. First, though, you need to define an availability zone (where AWS datacenters will be hosting your subnet) for the subnet. If you’re not sure which availability zone you want to use, you can use the Get-EC2AvailabilityZone command to enumerate all of them. Listing 13-14 shows what should happen if you do.
PS> Get-EC2AvailabilityZone
Messages RegionName State ZoneName
-------- ---------- ----- --------
{} us-east-1 available us-east-1a
{} us-east-1 available us-east-1b
{} us-east-1 available us-east-1c
{} us-east-1 available us-east-1d
{} us-east-1 available us-east-1e
{} us-east-1 available us-east-1f
If it’s all the same to you, let’s use the us-east-1d availability zone. Listing 13-15 shows the code to create the subnet using the New-EC2Subnet command, which takes the VPC ID you created earlier, a CIDR block (the subnet), and the availability zone you just found. It also shows the code to register the subnet with the route table (using the Register-EC2RouteTable command).
PS> $sn = New-EC2Subnet -VpcId $vpc.VpcId -CidrBlock '10.0.1.0/24' -AvailabilityZone 'us-east-1d'
PS> Register-EC2RouteTable -RouteTableId $rt.RouteTableId -SubnetId $sn.SubnetId
rtbassoc-06a8b5154bc8f2d98
Now that you have the subnet created and registered, you’re all done with the network stack!
After building the network stack, you have to assign an Amazon Machine Image (AMI) to your VM. An AMI, which is a “snapshot” of a disk, is used as a template to prevent having to install the operating system on EC2 instances from scratch. You need to find an existing AMI that suits your needs: in this case, one that can support a Windows Server 2016 instance, so first you need to find the name of that image. Enumerate all of the available images with the Get-EC2ImageByName command, and you should see an image called WINDOWS_2016_BASE. Perfect.
Now that you know the image name, use Get-EC2ImageByName again, and this time, specify the image you’d like to use. Doing so will tell the command to return the image object you need, as you can see in Listing 13-16.
PS> $ami = Get-EC2ImageByName -Name 'WINDOWS_2016_BASE'
PS> $ami

Architecture        : x86_64
BlockDeviceMappings : {/dev/sda1, xvdca, xvdcb, xvdcc...}
CreationDate        : 2019-08-15T02:27:20.000Z
Description         : Microsoft Windows Server 2016...
EnaSupport          : True
Hypervisor          : xen
ImageId             : ami-0b7b74ba8473ec232
ImageLocation       : amazon/Windows_Server-2016-English-Full-Base-2019.08.15
ImageOwnerAlias     : amazon
ImageType           : machine
KernelId            :
Name                : Windows_Server-2016-English-Full-Base-2019.08.15
OwnerId             : 801119661308
Platform            : Windows
ProductCodes        : {}
Public              : True
RamdiskId           :
RootDeviceName      : /dev/sda1
RootDeviceType      : ebs
SriovNetSupport     : simple
State               : available
StateReason         :
Tags                : {}
VirtualizationType  : hvm
Your image is stored and ready to go. Finally, you can create your EC2 instance. All you need is the instance type; unfortunately, you can’t get a list of them with a PowerShell cmdlet, but you can find them at https://aws.amazon.com/ec2/instance-types/. Let’s use the free one: t2.micro. Load up your parameters—the image ID, whether you want to associate with a public IP, the instance type, and subnet ID—and run the New-EC2Instance command (Listing 13-17).
PS> $params = @{
>>     ImageId           = $ami.ImageId
>>     AssociatePublicIp = $false
>>     InstanceType      = 't2.micro'
>>     SubnetId          = $sn.SubnetId
>> }
PS> New-EC2Instance @params

GroupNames    : {}
Groups        : {}
Instances     : {}
OwnerId       : 013223035658
RequesterId   :
ReservationId : r-05aa0d9b0fdf2df4f
It’s done! You should see a brand-new EC2 instance in your AWS Management Console, or you can use the Get-EC2Instance command to return your newly created instance.
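For example, to pull back the instance you just launched, you might filter Get-EC2Instance by the reservation ID it returned. This is a sketch; the reservation ID below is the one from Listing 13-17, so yours will differ:

```powershell
# Look up the reservation created above and inspect its instance(s)
$reservation = Get-EC2Instance | Where-Object { $_.ReservationId -eq 'r-05aa0d9b0fdf2df4f' }
$reservation.Instances | Format-Table -Property InstanceId, InstanceType, State
```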
You nailed down the code to create the EC2 instance, but, as is, the code is cumbersome to use. Let’s make this code easier to use over and over again. Chances are, creating an EC2 instance will be a frequent occurrence, so you’ll create a custom function to avoid doing everything one step at a time. At a high level, this function works the same way as the one you created in Chapter 12 in Azure; I won’t go through the specifics of the function here, but the script can be found in the book’s resources, and I highly recommend you try to build the function on your own.
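To give you a feel for the shape of such a function, here is a rough sketch (not the book’s actual script) that reuses any dependencies that already exist before creating new ones. The parameter names match those used in Listing 13-18; the check-then-create pattern for the gateway, route table, and subnet follows the same structure as the VPC step and is elided here:

```powershell
# A sketch of a custom EC2 provisioning function. Assumes the AWSPowerShell
# module is loaded and a default region/credential profile is configured.
function New-CustomEC2Instance {
    [CmdletBinding()]
    param(
        [Parameter(Mandatory)][string]$VpcCidrBlock,
        [Parameter(Mandatory)][string]$SubnetCidrBlock,
        [Parameter(Mandatory)][string]$SubnetAvailabilityZone,
        [Parameter(Mandatory)][string]$OperatingSystem,
        [Parameter(Mandatory)][string]$InstanceType,
        [bool]$EnableDnsSupport
    )

    ## Reuse the VPC if one with this CIDR block already exists
    $vpc = Get-EC2Vpc | Where-Object { $_.CidrBlock -eq $VpcCidrBlock }
    if (-not $vpc) {
        $vpc = New-EC2Vpc -CidrBlock $VpcCidrBlock
    } else {
        Write-Verbose "A VPC with the CIDR block [$VpcCidrBlock] has already been created."
    }

    if ($EnableDnsSupport) {
        Write-Verbose "Enabling DNS support on VPC ID [$($vpc.VpcId)]..."
        Edit-EC2VpcAttribute -VpcId $vpc.VpcId -EnableDnsSupport $true
        Edit-EC2VpcAttribute -VpcId $vpc.VpcId -EnableDnsHostnames $true
    }

    ## The internet gateway, route table, default route, and subnet steps
    ## follow the same check-then-create pattern and are omitted for brevity.

    ## Map the friendly OS name to an AMI name (only one mapping shown here)
    $imageName = @{ 'Windows Server 2016' = 'WINDOWS_2016_BASE' }[$OperatingSystem]
    $ami = Get-EC2ImageByName -Name $imageName
    $sn  = Get-EC2Subnet | Where-Object { $_.CidrBlock -eq $SubnetCidrBlock }

    Write-Verbose 'Creating EC2 instance...'
    New-EC2Instance -ImageId $ami.ImageId -InstanceType $InstanceType -SubnetId $sn.SubnetId
}
```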
When the script is called and all dependencies already exist except for the EC2 instance itself, you’ll see output similar to Listing 13-18 when you run it with the Verbose parameter.
PS> $parameters = @{
>>     VpcCidrBlock           = '10.0.0.0/16'
>>     EnableDnsSupport       = $true
>>     SubnetCidrBlock        = '10.0.1.0/24'
>>     OperatingSystem        = 'Windows Server 2016'
>>     SubnetAvailabilityZone = 'us-east-1d'
>>     InstanceType           = 't2.micro'
>>     Verbose                = $true
>> }
PS> New-CustomEC2Instance @parameters
VERBOSE: Invoking Amazon Elastic Compute Cloud operation 'DescribeVpcs' in region 'us-east-1'
VERBOSE: A VPC with the CIDR block [10.0.0.0/16] has already been created.
VERBOSE: Enabling DNS support on VPC ID [vpc-03ba701f5633fcfac]...
VERBOSE: Invoking Amazon EC2 operation 'ModifyVpcAttribute' in region 'us-east-1'
VERBOSE: Invoking Amazon EC2 operation 'ModifyVpcAttribute' in region 'us-east-1'
VERBOSE: Invoking Amazon Elastic Compute Cloud operation 'DescribeInternetGateways' in region 'us-east-1'
VERBOSE: An internet gateway is already attached to VPC ID [vpc-03ba701f5633fcfac].
VERBOSE: Invoking Amazon Elastic Compute Cloud operation 'DescribeRouteTables' in region 'us-east-1'
VERBOSE: Route table already exists for VPC ID [vpc-03ba701f5633fcfac].
VERBOSE: A default route has already been created for route table ID [rtb-0b4aa3a0e1801311f rtb-0aed41cac6175a94d].
VERBOSE: Invoking Amazon Elastic Compute Cloud operation 'DescribeSubnets' in region 'us-east-1'
VERBOSE: A subnet has already been created and registered with VPC ID [vpc-03ba701f5633fcfac].
VERBOSE: Invoking Amazon EC2 operation 'DescribeImages' in region 'us-east-1'
VERBOSE: Creating EC2 instance...
VERBOSE: Invoking Amazon EC2 operation 'RunInstances' in region 'us-east-1'

GroupNames    : {}
Groups        : {}
Instances     : {}
OwnerId       : 013223035658
RequesterId   :
ReservationId : r-0bc2437cfbde8e92a
You now have the tools you need to automate the boring task of creating EC2 instances in AWS!
Much like Microsoft Azure’s Web App service, AWS has a web app service of its own. Elastic Beanstalk (EB) is a service that allows you to upload web packages to be hosted on the AWS infrastructure. In this section, you’ll see what it takes to create an EB application and then deploy a package to one. This process requires five steps:
Create the application.
Create the environment.
Upload the package to make it available to the application.
Create a new version of the application.
Deploy the new version to the environment.
Let’s start by creating a new application.
To create a new application, use the New-EBApplication command, providing the application’s name. Let’s call it AutomateWorkflow. Run the command, and you should see something like Listing 13-19.
PS> $ebApp = New-EBApplication -ApplicationName 'AutomateWorkflow'
PS> $ebApp

ApplicationName         : AutomateWorkflow
ConfigurationTemplates  : {}
DateCreated             : 9/19/2019 11:43:56 AM
DateUpdated             : 9/19/2019 11:43:56 AM
Description             :
ResourceLifecycleConfig : Amazon.ElasticBeanstalk.Model.ApplicationResourceLifecycleConfig
Versions                : {}
The next step is creating the environment, which is the infrastructure the application will be hosted on. The command to create a new environment is New-EBEnvironment. Unfortunately, creating the environment isn’t quite as straightforward as creating the application. A couple of the parameters, such as the application name and name of the environment, are up to you, but you need to know the SolutionStackName, Tier_Type, and Tier_Name. Let’s look at these a little more closely.
You use the SolutionStackName to specify the operating system and IIS version you’d like your app to run under. For a list of available solution stacks, run the Get-EBAvailableSolutionStackList command and inspect the SolutionStackDetails property, as shown in Listing 13-20.
PS> (Get-EBAvailableSolutionStackList).SolutionStackDetails

PermittedFileTypes SolutionStackName
------------------ -----------------
{zip}              64bit Windows Server Core 2016 v1.2.0 running IIS 10.0
{zip}              64bit Windows Server 2016 v1.2.0 running IIS 10.0
{zip}              64bit Windows Server Core 2012 R2 v1.2.0 running IIS 8.5
{zip}              64bit Windows Server 2012 R2 v1.2.0 running IIS 8.5
{zip}              64bit Windows Server 2012 v1.2.0 running IIS 8
{zip}              64bit Windows Server 2008 R2 v1.2.0 running IIS 7.5
{zip}              64bit Amazon Linux 2018.03 v2.12.2 runni...
{jar, zip}         64bit Amazon Linux 2018.03 v2.7.4 running Java 8
{jar, zip}         64bit Amazon Linux 2018.03 v2.7.4 running Java 7
{zip}              64bit Amazon Linux 2018.03 v4.5.3 running Node.js
{zip}              64bit Amazon Linux 2015.09 v2.0.8 running Node.js
{zip}              64bit Amazon Linux 2015.03 v1.4.6 running Node.js
{zip}              64bit Amazon Linux 2014.03 v1.1.0 running Node.js
{zip}              32bit Amazon Linux 2014.03 v1.1.0 running Node.js
{zip}              64bit Amazon Linux 2018.03 v2.8.1 running PHP 5.4
--snip--
As you can see, you have a lot of options. For this example, choose 64-bit Windows Server Core 2012 R2 running IIS 8.5.
Now let’s look at Tier_Type, which specifies the kind of environment your web service will run under. The Standard type is required if you’ll be using this environment to host a website.
And finally, for the Tier_Name parameter, you have the options of WebServer and Worker. Choose WebServer here because you’d like to host a website (Worker is meant for background-processing applications that pull work from a queue rather than serve web requests).
Now that your parameters are all figured out, let’s run New-EBEnvironment. Listing 13-21 shows the command and the output.
PS> $parameters = @{
>>     ApplicationName   = 'AutomateWorkflow'
>>     EnvironmentName   = 'Testing'
>>     SolutionStackName = '64bit Windows Server Core 2012 R2 running IIS 8.5'
>>     Tier_Type         = 'Standard'
>>     Tier_Name         = 'WebServer'
>> }
PS> New-EBEnvironment @parameters

AbortableOperationInProgress : False
ApplicationName              : AutomateWorkflow
CNAME                        :
DateCreated                  : 9/19/2019 12:19:36 PM
DateUpdated                  : 9/19/2019 12:19:36 PM
Description                  :
EndpointURL                  :
EnvironmentArn               : arn:aws:elasticbeanstalk:...
EnvironmentId                : e-wkba2k4kcf
EnvironmentLinks             : {}
EnvironmentName              : Testing
Health                       : Grey
HealthStatus                 :
PlatformArn                  : arn:aws:elasticbeanstalk...
Resources                    :
SolutionStackName            : 64bit Windows Server Core 2012 R2 running IIS 8.5
Status                       : Launching
TemplateName                 :
Tier                         : Amazon.ElasticBeanstalk.Model.EnvironmentTier
VersionLabel                 :
You’ll notice that the status shows Launching. This means the app isn’t available yet, so you may have to wait a bit for the environment to come up. You can periodically check on the status of the app by running Get-EBEnvironment -ApplicationName 'AutomateWorkflow' -EnvironmentName 'Testing'. The environment may stay in a Launching state for a few minutes.
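If you’d rather not check by hand, a small polling loop can wait for the environment on your behalf. This is just a sketch; the 30-second interval is an arbitrary choice:

```powershell
# Poll the environment every 30 seconds until its status reaches Ready
while ((Get-EBEnvironment -ApplicationName 'AutomateWorkflow' -EnvironmentName 'Testing').Status -ne 'Ready') {
    Write-Host 'Environment is still launching. Waiting...'
    Start-Sleep -Seconds 30
}
Write-Host 'Environment is ready!'
```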
When you see the Status property turn to Ready, the environment is up, and it’s time to deploy a package to the site.
Let’s deploy. The package you’ll deploy should contain any files you want your website to host. You can put whatever you’d like in there—for our purposes, it doesn’t matter. All you have to make sure of is that it’s in a ZIP file. Use the Compress-Archive command to zip up whatever files you want to deploy:
PS> Compress-Archive -Path 'C:\MyPackageFolder\*' -DestinationPath 'C:\package.zip'
With your package nice and zipped up, you need to put it somewhere the application can find. You could put it in a couple of places, but for this example, you’ll put it in an Amazon S3 bucket, a common way to store data in AWS. But to put it in an Amazon S3 bucket, you first need an Amazon S3 bucket! Let’s make one in PowerShell. Go ahead and run New-S3Bucket -BucketName 'automateworkflow'.
With your S3 bucket up and waiting for contents, upload the ZIP file by using the Write-S3Object command, as shown in Listing 13-22.
PS> Write-S3Object -BucketName 'automateworkflow' -File 'C:\package.zip'
Now you have to point the application to the S3 key you just created and specify a version label for it. The version label can be anything, but typically, you use a unique number based on the time. So let’s use the number of ticks representing the current date and time. Once you have the version label, run New-EBApplicationVersion with a few more parameters, as shown in Listing 13-23.
PS> $verLabel = [System.DateTime]::Now.Ticks.ToString()
PS> $newVerParams = @{
>>     ApplicationName       = 'AutomateWorkflow'
>>     VersionLabel          = $verLabel
>>     SourceBundle_S3Bucket = 'automateworkflow'
>>     SourceBundle_S3Key    = 'package.zip'
>> }
PS> New-EBApplicationVersion @newVerParams

ApplicationName        : AutomateWorkflow
BuildArn               :
DateCreated            : 9/19/2019 12:35:21 PM
DateUpdated            : 9/19/2019 12:35:21 PM
Description            :
SourceBuildInformation :
SourceBundle           : Amazon.ElasticBeanstalk.Model.S3Location
Status                 : Unprocessed
VersionLabel           : 636729573206374337
Your application version has now been created! It’s time to deploy this version to your environment. Do that by using the Update-EBEnvironment command, as shown in Listing 13-24.
PS> Update-EBEnvironment -ApplicationName 'AutomateWorkflow' -EnvironmentName 'Testing' -VersionLabel $verLabel -Force

AbortableOperationInProgress : True
ApplicationName              : AutomateWorkflow
CNAME                        : Testing.3u2ukxj2ux.us-ea...
DateCreated                  : 9/19/2019 12:19:36 PM
DateUpdated                  : 9/19/2019 12:37:04 PM
Description                  :
EndpointURL                  : awseb-e-w-AWSEBL...
EnvironmentArn               : arn:aws:elasticbeanstalk...
EnvironmentId                : e-wkba2k4kcf
EnvironmentLinks             : {}
EnvironmentName              : Testing
Health                       : Grey
HealthStatus                 :
PlatformArn                  : arn:aws:elasticbeanstalk:...
Resources                    :
SolutionStackName            : 64bit Windows Server Core 2012 R2 running IIS 8.5
Status                       : ❶Updating
TemplateName                 :
Tier                         : Amazon.ElasticBeanstalk.Model.EnvironmentTier
VersionLabel                 : 636729573206374337
You can see that the status has gone from Ready to Updating ❶. Again, you need to wait a bit until the status turns back to Ready as you can see in Listing 13-25.
PS> Get-EBEnvironment -ApplicationName 'AutomateWorkflow' -EnvironmentName 'Testing'

AbortableOperationInProgress : False
ApplicationName              : AutomateWorkflow
CNAME                        : Testing.3u2ukxj2ux.us-e...
DateCreated                  : 9/19/2019 12:19:36 PM
DateUpdated                  : 9/19/2019 12:38:53 PM
Description                  :
EndpointURL                  : awseb-e-w-AWSEBL...
EnvironmentArn               : arn:aws:elasticbeanstalk...
EnvironmentId                : e-wkba2k4kcf
EnvironmentLinks             : {}
EnvironmentName              : Testing
Health                       : Green
HealthStatus                 :
PlatformArn                  : arn:aws:elasticbeanstalk:...
Resources                    :
SolutionStackName            : 64bit Windows Server Core 2012 R2 running IIS 8.5
Status                       : ❶Ready
TemplateName                 :
Tier                         : Amazon.ElasticBeanstalk.Model.EnvironmentTier
VersionLabel                 :
As you check in, the status is Ready again ❶. Everything looks good!
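To see the deployed site in a browser, you can pull the environment’s CNAME property and open it. This is a sketch; the hostname comes from your environment object, so it will differ from the truncated value shown in the listings:

```powershell
# Grab the environment's public hostname and open it in the default browser
$ebEnv = Get-EBEnvironment -ApplicationName 'AutomateWorkflow' -EnvironmentName 'Testing'
Start-Process "http://$($ebEnv.CNAME)"
```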
As an AWS administrator, you may need to set up different types of relational databases. AWS provides the Amazon Relational Database Service (Amazon RDS), which allows administrators to easily provision several types of databases. There are a few options, but for now, you’ll stick with SQL Server.
In this section, you’ll create a blank Microsoft SQL Server database in RDS. The main command you’ll use is New-RDSDBInstance. Like New-AzureRmSqlDatabase, New-RDSDBInstance has a lot of parameters, more than I can possibly cover in this section. If you’re curious about other ways to provision RDS instances, I encourage you to review the help contents for New-RDSDBInstance.
For our purposes, though, you need the following information:
The name of the instance
The database engine (SQL Server, MariaDB, MySQL, and so on)
The instance class that specifies the type of resources the SQL Server runs on
The master username and password
The size of the database (in GB)
A few of these things you can figure out easily: the name, username/password, and size. The others require further investigation.
Let’s start with the engine version. You can get a list of all available engines and their versions by using the Get-RDSDBEngineVersion command. When run with no parameters, this command returns a lot of information—too much for what you’re doing. You can use the Group-Object command to group all the objects by engine, which will provide a list of all engine versions grouped by the engine name. As you can see in Listing 13-26, you now have a more manageable output that shows all the available engines you can use.
PS> Get-RDSDBEngineVersion | Group-Object -Property Engine

Count Name            Group
----- ----            -----
    1 aurora-mysql    {Amazon.RDS.Model.DBEngineVersion}
    1 aurora-mysql-pq {Amazon.RDS.Model.DBEngineVersion}
    1 neptune         {Amazon.RDS.Model.DBEngineVersion}
--snip--
   16 sqlserver-ee    {Amazon.RDS.Model.DBEngineVersion, Amazon.RDS.Model.DBEngineVersion, Amazon.RDS.Model.DBEngineVersion, Amazon.RDS.Mo...
   17 sqlserver-ex    {Amazon.RDS.Model.DBEngineVersion, Amazon.RDS.Model.DBEngineVersion, Amazon.RDS.Model.DBEngineVersion, Amazon.RDS.Mo...
   17 sqlserver-se    {Amazon.RDS.Model.DBEngineVersion, Amazon.RDS.Model.DBEngineVersion, Amazon.RDS.Model.DBEngineVersion, Amazon.RDS.Mo...
   17 sqlserver-web   {Amazon.RDS.Model.DBEngineVersion, Amazon.RDS.Model.DBEngineVersion, Amazon.RDS.Model.DBEngineVersion, Amazon.RDS.Mo...
--snip--
You have four sqlserver entries, representing SQL Server Express, Web, Standard Edition, and Enterprise Edition. Since this is just an example, you’ll go with SQL Server Express; it’s a no-frills database engine and, most important, it’s free, so you can tune and tweak it as much as you like. Select the SQL Server Express engine by using sqlserver-ex.
After picking an engine, you have to specify a version. By default, New-RDSDBInstance provisions the latest version (which you’ll be using), but you can specify a different version by using the EngineVersion parameter. To see all the available versions, you’ll run Get-RDSDBEngineVersion again, limit the search to sqlserver-ex, and return only the engine versions (Listing 13-27).
PS> Get-RDSDBEngineVersion -Engine 'sqlserver-ex' | Format-Table -Property EngineVersion

EngineVersion
-------------
10.50.6000.34.v1
10.50.6529.0.v1
10.50.6560.0.v1
11.00.5058.0.v1
11.00.6020.0.v1
11.00.6594.0.v1
11.00.7462.6.v1
12.00.4422.0.v1
12.00.5000.0.v1
12.00.5546.0.v1
12.00.5571.0.v1
13.00.2164.0.v1
13.00.4422.0.v1
13.00.4451.0.v1
13.00.4466.4.v1
14.00.1000.169.v1
14.00.3015.40.v1
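If you do want to pin a specific version, note that the version strings don’t sort correctly as plain text. One way around that (a sketch; it assumes every version string ends in a .vN suffix, as in the listing above) is to strip the suffix and cast to [version] before sorting:

```powershell
# A sketch: pick the highest available engine version by casting the
# version strings (minus the trailing '.v1'-style suffix) to [version]
$latest = Get-RDSDBEngineVersion -Engine 'sqlserver-ex' |
    Sort-Object { [version]($_.EngineVersion -replace '\.v\d+$') } |
    Select-Object -Last 1
$latest.EngineVersion
```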
The next parameter value you need to provide to New-RDSDBInstance is the instance class. The instance class represents the performance of the underlying infrastructure—memory, CPU, and so forth—that the database will be hosted on. Unfortunately, there’s no PowerShell command to easily find all available instance class options, but you can check out this link to get a full rundown: https://docs.aws.amazon.com/AmazonRDS/latest/UserGuide/Concepts.DBInstanceClass.html.
When selecting an instance class, it’s important to verify that it’s supported by the engine you chose. Here, you’ll use the db.t2.micro instance class to create your RDS DB, but many of the other options will not work. For a full breakdown of which instance classes are supported under which RDS DB, refer to the AWS RDS FAQs (https://aws.amazon.com/rds/faqs/). If you choose an instance class that’s not supported by the engine you’re using, you’ll receive an error as in Listing 13-28.
New-RDSDBInstance : RDS does not support creating a DB instance with the following combination: DBInstanceClass=db.t1.micro, Engine=sqlserver-ex, EngineVersion=14.00.3015.40.v1, LicenseModel=license-included. For supported combinations of instance class and database engine version, see the documentation.
Once you’ve selected a (supported) instance class, you have to decide on a username and password. Note that AWS will not accept any old password: you cannot have a slash, @ sign, comma, or space in your password, or you will receive an error message like the one in Listing 13-29.
New-RDSDBInstance : The parameter MasterUserPassword is not a valid password. Only printable ASCII characters besides '/', '@', '"', ' ' may be used.
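You can save yourself a round trip to AWS by checking the password locally first. Here’s a minimal sketch that tests for the characters the error message calls out:

```powershell
# Reject passwords containing '/', '@', '"', or a space before calling AWS
$password = 'p@ssword'   # an example password that would be rejected
if ($password -match '[/@" ]') {
    Write-Warning 'Password contains a character RDS will not accept.'
}
```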
With that, you have all the parameters needed to fire off New-RDSDBInstance! You can see the expected output in Listing 13-30.
PS> $parameters = @{
>>     DBInstanceIdentifier = 'Automating'
>>     Engine               = 'sqlserver-ex'
>>     DBInstanceClass      = 'db.t2.micro'
>>     MasterUsername       = 'sa'
>>     MasterUserPassword   = 'password'
>>     AllocatedStorage     = 20
>> }
PS> New-RDSDBInstance @parameters

AllocatedStorage        : 20
AutoMinorVersionUpgrade : True
AvailabilityZone        :
BackupRetentionPeriod   : 1
CACertificateIdentifier : rds-ca-2015
CharacterSetName        :
CopyTagsToSnapshot      : False
--snip--
Congratulations! Your AWS account should now have a shiny, new RDS database.
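Provisioning takes several minutes, so the database won’t be usable right away. You can check on it with Get-RDSDBInstance (a sketch; the status passes through states such as creating before reaching available):

```powershell
# Check the provisioning status of the new instance; it will read
# 'creating' at first and 'available' once the database is ready
(Get-RDSDBInstance -DBInstanceIdentifier 'Automating').DBInstanceStatus
```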
This chapter covered the basics of using AWS with PowerShell. You looked at AWS authentication and then went through several common AWS tasks: creating EC2 instances, deploying Elastic Beanstalk web applications, and provisioning an Amazon RDS SQL database.
After this chapter and the preceding one, you should have a good sense of how to use PowerShell to work with the cloud. Of course, there’s much more to learn—more than I could ever cover in this book—but for now, you’ll be moving on to the next part of this book: creating your own fully functional PowerShell module.