
Passed AWS CSA Professional

2 weeks ago I passed the AWS CSA Professional certification exam. Here are my thoughts, hoping they might help others who are studying for, or planning to earn, the certification.

First off, to be eligible for the AWS CSA PRO you need to hold a valid, non-expired AWS CSA Associate-level certification. In my case I got the Associate 2 years ago and it would have expired on Feb 5th 2018.

Both the PRO and Associate certifications expire after 2 years. AWS Training will email you a reminder roughly 6 months before the expiration. The notification email also highlights all the available upgrade paths, or just the name of the recertification exam.

That means I could either recertify at the Associate level or take the upgrade path to the Professional level. To nudge me onto the 'right' path, AWS Training included a discount on the Professional certification exam.

So around September '17 I decided to take the AWS CSA Professional certification and started studying. Through my employer I got a one-year subscription to Linux Academy, and planned a schedule.

Being busy and having a family, I did not start studying seriously until Christmas 2017, which left me with little more than a month before my hard deadline: Feb 5th 2018.

In mid-January I took a practice exam online, which I failed rather spectacularly with just a 40% score. At that point I had barely 3 weeks left.

So I intensified my studying and went through most of the Linux Academy material a second time, taking notes and writing down the areas where I felt I was not confident enough.

I took the Linux Academy practice exam a ton of times, each time noting the areas where my preparation was weak, studying them, and then taking the practice exam again.
At this point I could consistently pass the LA practice exam with a 90% score.

But I wanted to be super-sure, so I kept searching the web until I came across a course on Braincert. The course offers one introductory lesson and then 5 practice exams. The price was heavily discounted, so I decided to sign up.

I took all 5 practice exams, once again using the strategy I outlined above: noting down the specific areas where I felt unsure so that I could go back and study them later. This course also gives solutions with explanations for all questions, which makes the whole experience much more valuable.

Exactly one week before the deadline I booked the exam and passed it with a 91% score.
The exam is 170 minutes long and I used nearly all of that time: I think about 15 minutes were left when I decided to submit.

You will be given a notepad that you can use to apply the elimination process described below. This method is really effective because it allows you to "save" the context of a question and come back to it later, if you need to.

Basically, for most questions I would write the question number at the top and then all the possible answers below:

 31
----

A X
B X
C
D
E X

I would then place a cross next to the obviously wrong answers (A, B and E in the example). When I could not identify the right answer I would do one of two things:

  1. put a tick next to the answer I thought was the correct one: this would be my candidate if I had to come back to the question
  2. or put nothing next to all the answers that could be correct; in this case I would apply the same elimination process later when reviewing the question

I would also suggest selecting one of the 'not obviously wrong' answers in the exam software anyway. If you run out of time, at least you will have submitted an answer, increasing your chances of success, since the exam does not penalize wrong answers.
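
If it helps to see the bookkeeping spelled out, here is a tiny Python sketch of the method (purely illustrative, and obviously not something you can bring into the exam room; the marks and question 31 mirror the notepad example above):

    # Toy model of the notepad: 'x' = eliminated, a tick = current candidate,
    # None = still open. Question 31 matches the example above: C and D open.
    notepad = {
        31: {"A": "x", "B": "x", "C": None, "D": None, "E": "x"},
    }

    def needs_review(marks):
        # Revisit a question while more than one answer is not crossed out.
        return sum(1 for m in marks.values() if m != "x") > 1

    to_revisit = [q for q, marks in notepad.items() if needs_review(marks)]
    print(to_revisit)  # [31]: C and D are still in play, so come back to it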

In closing, IMHO these were the keys to passing the exam (in no particular order):
  1. the excellent study material at Linux Academy, both videos and practice labs: these were invaluable for laying a foundation of knowledge that I could then build on
  2. the LA guide to taking the exam: a few very simple and practical tips (mark questions, always start by excluding answers you are sure are wrong, focus on what the question is really asking of you, and so on)
  3. the list of whitepapers on the AWS site that LA provides: I read them all and then used some as a starting point for more whitepapers
  4. for some services I had no familiarity with (DynamoDB, Direct Connect) I watched keynotes and/or videos from AWS re:Invent, which can easily be found on YouTube. These are probably the best way to gain deeper knowledge of topics where you can't get first-hand experience, so they're highly recommended.
    I would recommend watching them even if you're not taking the certification, because there's so much valuable information on design, reliability, and performance. There's a list with links at the bottom of this post
  5. the Braincert practice exams: these are incredibly detailed and so cheap you shouldn't even think twice about getting them

Links

AWS re:Invent 2016: Deep Dive on Amazon DynamoDB (DAT304)

AWS re:Invent 2015: Deep Dive in AWS Direct Connect and VPNs (NET406)
