Getting AWS Certified, Some Tips!

If you work daily with AWS – building, deploying, running, and debugging on AWS – then you probably already have all the skills needed to pass any one of the Associate-level exams (Solutions Architect, Developer, SysOps).

Having recently taken the Solutions Architect, SysOps and Developer Associate exams, I wanted to share some insights and my top tips for the exam:

  1. Do an online training course (details below)
  2. Do some practice exams
  3. Relax, you got this!

The exam blueprints provide useful details about how the exams are structured.

1. Do an online training course

I can personally recommend the AWS Certified Solutions Architect – Associate 2019 course; it is the one I used and I was quite happy with it. A few tips:

  • Each section has a summary/recap that covers the most important exam related points, worth watching a few times.
  • The course is cheaper on Udemy, and you can then migrate it to the author’s own platform, where the course is updated more frequently and the quizzes have more questions.

Another great resource (and regular source of questions) is the AWS service FAQ pages (e.g. the S3 FAQ, Lambda FAQ, etc.).

2. Do some practice exams

It is very useful to get a feel for how the exam is run. The real exam consists of 65 questions, and you have 130 minutes to complete it. I purchased some practice exams, specifically: AWS Certified Solutions Architect – Associate Practice Tests. These questions were useful, though perhaps a little easier than the real ones. They are still well worth taking, to identify areas where you might need further research. It is well worth taking the practice exams multiple times, until you score 95%+ on them all.

3. Relax, you got this!

If you have done all of the above, then you are well prepared to take the exam! Schedule one via your AWS Certification account.

Most exams are remotely proctored, which is to say it is just a computer in a room with a webcam. You will need to identify yourself (bring your passport and driver’s license) via a chat agent on the computer, and then you will start your exam.

At the end of the exam, it will tell you immediately whether you passed; however, the official certification won’t be available for “up to 5 days” after the exam (although it usually takes ~24 hours).

Bonus tip!

If you are not a native English speaker, you can request a 30-minute extension for your exam. To do so, please log into your AWS Certification Account (not the PSI account) and take the following steps:

  1. From the top navigation, click Upcoming Exams
  2. On the right, click the Request Exam Accommodations button
  3. Click the Request Accommodation button
  4. Select ESL +30 Minutes from the accommodation dropdown
  5. Click Create

Now when you go to schedule your exam the time will be 30 minutes longer than normal. Note that you MUST request the accommodation BEFORE you schedule the exam.

Best of Luck!

One line docker bash

If you want to quickly install something, or try something out, without polluting your environment, then Docker is a great way to do that. Here is a one-line command that will start a new Docker bash terminal (and mount the current working directory inside the Docker container):

docker run -v "$(pwd)":/opt -w="/opt" --rm -it debian:latest bash

Add this to your .bashrc file as an alias so it can be run very quickly any time:

alias dock='docker run -v "$(pwd)":/opt -w="/opt" --rm -it debian:latest bash'

Customizing Your Container bash:

If you often run this, then you might find it handy to have your own custom .bashrc file mounted inside the container, so that all your aliases and other settings are available by default. You can do this by simply mounting your .bashrc file:

-v "$HOME"/.config/.docker_bash:/root/.bashrc

Naturally, “~/.config/.docker_bash” would be the path to the .bashrc file that you want to use inside the container.

So the full command that I use myself:

alias dock='docker run -v "$(pwd)":/opt -w="/opt" --rm -v "$HOME"/.config/.docker_bash:/root/.bashrc -it debian:latest bash'

Catch Me Speaking at AWS Stockholm

I will be giving a talk about how we use serverless (AWS Lambda, API Gateway, etc…) at the AWS Stockholm event (May 3rd 2017). My talk is on the “Deep Dive on Serverless Stack” track. Presentation starts at 15:00 in room A 2.

For my part I will be going into:

  • AWS Lambda and how we use AWS SAM to make deployment easier
  • Amazon API Gateway integration with AWS Marketplace
  • How aws-serverless-express makes our lives easier
  • A few tips and pointers from our serverless adventure

Feel free to ask questions during the talk, or come up to me after the presentation – I will hang around for a while to answer questions.

Photobox Downloader Updated

I have updated my Photobox Downloader application with a bunch of new features and fixes. Photobox recently implemented some heavy throttling of photo downloads; this update addresses that and more.


Change log for 0.3.2:

  • New retry logic will retry failed (timeout) downloads automatically
  • Can now skip already downloaded files (helps greatly with interrupted downloads)
  • New debug mode (pass “-d”) gives extensive logging
  • Fewer concurrent photo downloads, to avoid throttling
  • Improved the documentation
  • Updated example
  • Albums with slashes are handled better

Update by running:

  $ npm update -g photobox-downloader

The source code and API usage documentation are available in the GitHub project repository.

API Gateway testing permissions tip

Just a tip: in order for the API Gateway test/sandbox area to be able to execute (invoke) a Lambda function that was created by CloudFormation, you need to explicitly grant the sandbox permission in your CloudFormation template. As this is not documented, and there is currently no way to “export” a manually created API as a CloudFormation template, it is easy to overlook. The simple solution is to add a new Lambda permission, with the “stage name” set to “null”.

Here is a complete example of a Lambda Permission Resource in CloudFormation:

        "ApiGatewaySandboxPermission" : {
            "Type" : "AWS::Lambda::Permission",
            "Properties" : {
                "FunctionName" : { "Ref" : "MyFunctionAlias" },
                "Action" : "lambda:InvokeFunction",
                "Principal" : "apigateway.amazonaws.com",
                "SourceArn" : { "Fn::Join": [ "", [
                    "arn:aws:execute-api:",
                    { "Ref" : "AWS::Region" }, ":",
                    { "Ref" : "AWS::AccountId" }, ":",
                    { "Ref" : "MyRestApiId" }, "/null/GET/*"
                ] ] }
            }
        }

The interesting part here is the last few lines. This grants the sandbox (stage name “null”) permission to invoke all GET-based methods, starting at the root (/*) of your API; tweak the path as needed.

Hope this helps.

Nginx Proxy Pass, resolving “No required SSL certificate was sent”

If you are using Nginx as a reverse proxy and trying to inject client certificates, you may run into a 400 “No required SSL certificate was sent” error. I spent a few hours debugging this issue and thought I’d share my findings. The problem is fairly subtle, easy to overlook, and numerous other people have run into it. The cause is that the backend server is using SNI (Server Name Indication, which basically allows multiple SSL/TLS certs on a single IP). You must explicitly tell Nginx to pass the domain name forward in the TLS handshake, so that the final destination (your backend) knows which SSL/TLS cert to serve.

Following the Nginx proxy documentation, you would set the required directives and expect it to work, so your configuration might look something like:

location / {
        proxy_pass                 https://backend.example.com;  # placeholder: your backend URL here
        proxy_ssl_certificate      /etc/nginx/conf.d/ssl/client.crt;
        proxy_ssl_certificate_key  /etc/nginx/conf.d/ssl/client.key;
}

The solution is to add one extra directive to enable SNI: “proxy_ssl_server_name”. A working example would be:
location / {
        proxy_pass                 https://backend.example.com;  # placeholder: your backend URL here
        proxy_ssl_server_name      on;
        proxy_ssl_certificate      /etc/nginx/conf.d/ssl/client.crt;
        proxy_ssl_certificate_key  /etc/nginx/conf.d/ssl/client.key;
}

Restart Nginx and test with your browser again; all should be working!

Another year, another SSL cert, another bullet dodged!

Just renewed my SSL certificate thanks to StartSSL (using their very nice and fairly straightforward automated system), and it got me thinking about all the issues I have encountered and heard about regarding SSL certificate and domain renewals.

In a previous job there was an incident where an SSL certificate expired. This went unnoticed (it was an analytics service, not a core service) for 5 weeks – not too bad, right? Well, unfortunately, due to an additional oversight in the client code, if a request failed, it would retry the request after a timeout of 0.5 seconds, indefinitely! This led to hundreds of thousands of clients hammering the (auto-scaling) load balancer with requests that were dropped as SSL verification failed. The end result: a large bill that could have been avoided. This issue is hardly unique; even the largest cloud providers have been hit by similar issues (see Windows Azure Service Disruption from Expired Certificate, a domain getting transferred, or Microsoft losing its Hotmail domain).
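The client-side fix for this class of problem is to retry with exponential backoff and a retry cap, instead of a fixed 0.5-second delay forever. A minimal sketch in shell (the function name and limits are illustrative, not from the original incident):

```shell
# Illustrative sketch: retry a command with exponential backoff and a cap,
# rather than hammering the server every 0.5 seconds indefinitely.
retry_with_backoff() {
  local max_attempts=5 delay=1 attempt=1
  until "$@"; do
    if [ "$attempt" -ge "$max_attempts" ]; then
      echo "giving up after $attempt attempts" >&2
      return 1
    fi
    sleep "$delay"
    delay=$((delay * 2))        # double the wait after every failure
    attempt=$((attempt + 1))
  done
}
```

With a cap like this, a fleet of broken clients backs off and eventually stops, instead of generating ever-growing load (and an ever-growing bill).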

It is hard to have a universal list of best practices, as it depends on the size of the organisation, but here are some good ideas:

  • Avoid single points of failure: never have an SSL cert (or domain name) associated with an individual developer’s email address. Instead use a dedicated mailing list.
  • Monitoring: even secondary services (those considered ‘best effort’) need active monitoring.
    • Monitoring results should be sent to a mailing list, not a specific individual
    • When setting up monitoring, don’t forget to monitor costs as well as uptime!
  • Consolidate all domains and SSL certs with a single trusted provider
  • Purchase with a company credit card, with auto-renew enabled
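As a concrete example of the monitoring point, certificate expiry is easy to check from a cron job with openssl. A sketch (demonstrated on a throwaway self-signed cert; point it at your real certificate and mail the result to the list):

```shell
# Illustrative sketch: warn when a certificate is within 14 days of expiry.
# A throwaway 30-day self-signed cert stands in for the real one here.
openssl req -x509 -newkey rsa:2048 -nodes -keyout /tmp/demo.key \
        -out /tmp/demo.crt -days 30 -subj "/CN=demo" 2>/dev/null
openssl x509 -in /tmp/demo.crt -noout -enddate
# -checkend exits 0 if the cert is still valid N seconds from now
if openssl x509 -in /tmp/demo.crt -noout -checkend $((14*24*3600)); then
    echo "OK: more than 14 days left"
else
    echo "WARNING: expires within 14 days"
fi
```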

What best practices would you recommend? What bullets have you dodged (or got hit by!)? Any services out there that you use to avoid issues?


Setup OpenVPN using OpenWRT

Note 2016/12/07: Since this article was written, PIA have updated their config. While I have not personally tested this, some commenters have reported success with 2 additional steps:

  • With the new CA file, you now need to specify the port under VPN settings (1198).
  • Specify encryption (AES-128-CBC) and authentication (SHA1).

Configuring OpenVPN to work on OpenWRT is relatively easy and straightforward, just not very well documented. This is an in-depth, step-by-step guide to configuring OpenVPN (with VPN provider Private Internet Access, commonly called PIA) on an OpenWRT router.


This guide assumes that:

  • You already have OpenWRT installed on your router
  • You know how to connect to your router via SSH and the web panel
  • Your router is connected to another device (modem, other router, or directly to the ISP) that supplies internet access

Let’s begin…

This tutorial will cover:

  • Installing and configuring OpenVPN
  • Configuring a network interface
  • Setting up some firewall rules & DNS Leak protection
  • Verify everything works


  1. First step: open an SSH connection to your router and log in as root. You should see something like Figure 1 below.

    Figure 1 – SSH Login


  2. Next we have to update the package lists and install some required packages. Enter the following commands in the terminal:
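Assuming the standard OpenWRT OpenVPN setup (an assumption – check the current OpenWRT wiki for the package names), the commands would be something like:

```shell
# Run on the router over SSH: refresh package lists, then install
# OpenVPN (OpenSSL variant) and its LuCI web-panel integration.
opkg update
opkg install openvpn-openssl luci-app-openvpn
```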
  3. Next, download the OpenVPN config files from PIA to somewhere on your local machine, and extract all the files from the zip file. You are only interested in 2 files (ca.crt and crl.pem; we will get back to them later). You can safely delete the *.ovpn files.
  4. Now open your browser and go to your router’s web panel; by default this should be http://192.168.1.1.
  5. Once logged in, you should notice a new menu item called Services. Go to it and click the OpenVPN option; see Figure 2 below.

    Figure 2 – OpenVPN menu


  6. Time to add our new configuration. At the bottom, in the text field, enter a new name “pia_client”, select “Simple client configuration for a routed point-to-point VPN” and click the Add button (Figure 3).

    Figure 3 – Create config


  7. You will immediately be taken to the config page, click the link “Switch to advanced configuration”

    Figure 4 – Advanced Menu


  8. All settings on the Service page should be fine. Click the “Networking” link at the top.
  9. See Figure 5 below for how the settings should look. A few notes:
    • If a line is missing, use the “Additional Field” drop-down at the bottom, select the missing field and press the Add button
    • Ensure that “dev” is set to “tun” and not “tap”
    • If there is a field called “ifconfig” with an IP address, remove the address (i.e. make field blank)

    Figure 5 – Networking configuration


  10. Click the blue Save button at the bottom
  11. Now click on the “VPN” link to change to the VPN tab. As with Networking, some fields will be missing; use the “Additional Field” drop-down at the bottom again to add them. A few notes:
    • The “auth_user_pass” field value should be “/etc/openvpn/userpass.txt” (it doesn’t exist yet, but we will get back to it in a few minutes)
    • The “remote” field should be the hostname of whichever exit node you want to use – see the PIA Networking page for a complete list.

      Figure 6 – VPN configuration


  12. Now click on “Cryptography”. As before, use the “Additional Field” drop-down at the bottom to add any missing fields.
    • IMPORTANT: for the “ca” field, you will need to browse to the location of the ca.crt file from the zip you downloaded in step 3.
    • The “crl_verify” path should be set to “/etc/openvpn/crl.pem”

      Figure 7 – Cryptography configuration


  13. We have the VPN configuration done now, but we still need to configure the interface as well as the Firewall.
  14. From the menu at the top, select Network -> Interfaces.
  15. Click the “Add new interface…” button.
    • Name: “PIA_VPN” (IMPORTANT: the name must be exactly this)
    • Protocol of the new interface: Unmanaged
    • Cover the following interface: Custom Interface: tun0

      Figure 8 – Create Interface


  16. Enter in the details and click the Save button.
  17. For the final few steps, we will switch back to SSH.
  18. Next we have to create a file to store your PIA username and password. It is just a simple text file, with your username on the first line and your password on the second. Then we chmod it to set the correct permissions.

    Figure 9 – Create username and password file
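Assuming the path from the VPN tab (“/etc/openvpn/userpass.txt”), the commands would look something like this (the credentials are placeholders; run it on the router and use the real path):

```shell
# Illustrative sketch: create the credentials file that auth_user_pass points at.
# Username on line 1, password on line 2; restrict it to the owner (root) only.
# On the router the file should be created at /etc/openvpn/userpass.txt.
printf '%s\n%s\n' 'your_pia_username' 'your_pia_password' > userpass.txt
chmod 600 userpass.txt
```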

  19. Now we have to add the crl.pem file (from the zip downloaded in step 3). Just open it in a text editor like Notepad, copy the contents, and paste them into a new file at /etc/openvpn/crl.pem on the router.

    Figure 10 – Create CRL file


  20. Now we need to set up some firewall rules to forward the VPN traffic.
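A typical set of rules for this kind of setup (an assumption – adjust zone names to your own configuration) adds a firewall zone covering the PIA_VPN interface with masquerading enabled, plus a forwarding from lan, in /etc/config/firewall:

```
config zone
        option name     'vpn'
        option network  'PIA_VPN'
        option input    'REJECT'
        option output   'ACCEPT'
        option forward  'REJECT'
        option masq     '1'
        option mtu_fix  '1'

config forwarding
        option src      'lan'
        option dest     'vpn'
```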
  21. Almost done!
  22. In order to protect against DNS leaks, we need to update the DHCP server to hand out the PIA DNS servers instead of your ISP’s DNS.
  23. From the main menu, go to: Network -> Interfaces -> LAN -> DHCP Server (below the “Common Configuration” section) -> Advanced Settings. In the “DHCP-Options” field enter the value “6,x.x.x.x,y.y.y.y”, where the two addresses are PIA’s DNS servers (listed on their support pages).
  24. Click “Save & Apply”

    Figure 11 – Interfaces – > LAN


    Figure 12 – DNS settings


  25. All done! Now we can start the VPN connection.
  26. Go to: Services -> OpenVPN, check the Enabled checkbox beside our “pia_client” entry, then press the Start button; your VPN should now start up.

    Fig 13 – VPN Started


Verify it works…

  • To verify your traffic is going over the VPN, you can use the PIA “What is My IP” tool


    Figure 14 – Success! VPN working

  • If it isn’t working, then you may have missed a step. Try going to Status -> System Log in the main menu; it may contain useful information.
  • To verify your DNS is not leaking, use something like the DNS Leak site (you may have to release and renew your DHCP lease before this will work)

Congratulations, your VPN tunnel is now set up!

Fix ‘node-gyp rebuild’ error on windows

While playing around with Flux & React, I ran into some issues using a Yeoman Flux generator. It kept failing on “node-gyp rebuild”. If you do any development on Windows, you’ve likely run into issues with node-gyp before. The core of the problem is that node-gyp is no longer being actively developed, so it has some old dependencies that a modern development environment might not have.

node-gyp rebuild failed

How to fix?

  1. Go to Control Panel -> Programs and Features and uninstall “Microsoft Visual C++ 2010 x64 Redistributable” and “Microsoft Visual C++ 2010 x86 Redistributable” (if present)
  2. Download and install Python 2.7.3 (if you have Python 3.x already installed, just leave it, both can coexist)
  3. Download and install Visual C++ 2010 Express or Visual Studio 2010
  4. Download and install the Windows SDK 7.1
  5. Download and install Visual Studio 2010 SP1
  6. Download and install the Visual C++ 2010 SP1 Compiler Update for the Windows SDK 7.1

IMPORTANT: The order of steps above is important!

Now open a command window/console and enter the following commands:

npm config set python /Python27/python.exe --global
npm config set msvs_version 2010 --global

Final Step

Finally, go to Start -> All Programs -> Microsoft Windows SDK v7.1 -> Windows SDK 7.1 Command Prompt


From this command window the `node-gyp rebuild` command will work.

NOTE: You only need to use the Windows SDK 7.1 Command Prompt when running `npm install`; once installed, you can go back to a normal command window.