AWS re:Invent Updates and Thoughts

December 17, 2019


By John Valentine, Kovarus Cloud Practice Manager

Every AWS re:Invent conference has a theme that the keynotes follow, and this year was no different. On Tuesday, I sat down to watch the keynote and the main message quickly emerged: how businesses can transform their IT organizations and ultimately gain a competitive edge in the market. Andy Jassy began by laying out how organizations may either rapidly adopt public cloud or slowly tiptoe into it. However, the businesses that truly disrupt the market are the born-in-the-cloud startups that inherently have the flexibility and agility so many traditional shops lack and strive to achieve.

This leads to the main question that many organizations are asking themselves: how do we transform and innovate? AWS believes the answer lies in its ever-expanding services, tooling, and on-demand nature, and I share that view. Andy walked through the six major areas an organization must address before it can truly transform its business, but I’m going to reduce that further and say there are really two: leadership and technology.

Let’s first focus on the leadership aspect, since this is really the most important piece. Within the realm of leadership, there are four major differentiators that an organization serious about adopting cloud will focus on. These are:

  • Senior leadership team conviction and alignment.
  • Top-down aggressive goals.
  • Train your builders.
  • Give developers access to all the tools.

As most of us know, if our senior leadership isn’t on board with a project, we will never get the support or sponsorship we need to be successful. Having leadership aligned behind our goals, the process, and the end state is absolutely critical, as leaders will ultimately be the ones to push the initiative across the entire organization. Once leadership supports the IT transformation, we must begin training our infrastructure team, or our builders as AWS calls them. This ensures we can actually build and run the cloud environment to support our developers (alternatively, an organization can use a managed service provider such as Kovarus to assist here). Lastly, giving our developers access to any and all tools in the AWS portfolio will greatly reduce their time to value. This is often done by providing a shared sandbox account where developers can test tools at minimal risk; if a tool proves valuable, they can promote it to production (a minimal sketch of that guardrail pattern follows below).
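To make the sandbox idea concrete, here is a minimal, hypothetical sketch of one common guardrail: a Service Control Policy that denies activity outside an approved region in the shared sandbox account. The policy name, region list, and account ID are my own placeholder assumptions, not anything AWS or Kovarus prescribes.

```python
import json

import boto3

# Assumed setup: this runs with AWS Organizations management-account
# credentials, and "111122223333" stands in for the sandbox account ID.
orgs = boto3.client("organizations")

# Deny every action outside us-west-2, except global services that have
# no regional endpoint (IAM, Organizations, Support).
sandbox_guardrail = {
    "Version": "2012-10-17",
    "Statement": [{
        "Sid": "DenyOutsideApprovedRegions",
        "Effect": "Deny",
        "NotAction": ["iam:*", "organizations:*", "support:*"],
        "Resource": "*",
        "Condition": {"StringNotEquals": {"aws:RequestedRegion": ["us-west-2"]}},
    }],
}

policy = orgs.create_policy(
    Name="sandbox-region-guardrail",
    Description="Keep the shared developer sandbox inside approved regions",
    Type="SERVICE_CONTROL_POLICY",
    Content=json.dumps(sandbox_guardrail),
)

orgs.attach_policy(
    PolicyId=policy["Policy"]["PolicySummary"]["Id"],
    TargetId="111122223333",  # placeholder sandbox account ID
)
```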

Once our leadership is on board and we have trained our team, we must focus on the technology, so let’s start with compute. AWS has made a ton of performance improvements using a relatively new hardware and hypervisor suite called Nitro, which I’ll be writing a follow-up blog on. Nitro allowed AWS to offer new instance types, as well as improve the compute, memory, and network throughput of existing instances. In essence, Nitro strips unneeded features out of the hypervisor and offloads that work onto dedicated Nitro chips to free up as many resources as possible. This also allows AWS to create instance types for specialized use cases such as scale-out, machine learning, and bare metal. Because of this, you’re going to see a rapid increase in new instance types and features going forward. AWS released four times as many instance types this year as last, and I expect that number to grow each year.
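If you’re curious which instance types already run on Nitro, the EC2 DescribeInstanceTypes API can tell you. Here is a minimal sketch; the region choice is my own assumption.

```python
import boto3

ec2 = boto3.client("ec2", region_name="us-west-2")  # region is an assumption

# Page through every instance type whose hypervisor is Nitro.
paginator = ec2.get_paginator("describe_instance_types")
pages = paginator.paginate(
    Filters=[{"Name": "hypervisor", "Values": ["nitro"]}]
)

for page in pages:
    for itype in page["InstanceTypes"]:
        vcpus = itype["VCpuInfo"]["DefaultVCpus"]
        mem_mib = itype["MemoryInfo"]["SizeInMiB"]
        print(f"{itype['InstanceType']}: {vcpus} vCPUs, {mem_mib} MiB")
```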

AWS also released new features within the container space. The current container offerings are Amazon Elastic Container Service (ECS), a deeply integrated Docker-based container service; Elastic Kubernetes Service (EKS), a fully managed Kubernetes offering; and Fargate, which lets customers manage containers at the task level (a serverless container offering). Forty percent of new container customers start with Fargate due to its ease of use, which drove AWS to release Fargate for EKS, giving customers the same experience as traditional Fargate but for Kubernetes-based container environments.
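In practice, adopting Fargate for EKS comes down to creating a Fargate profile that tells the cluster which pods to run serverlessly. Here is a hedged sketch using the EKS CreateFargateProfile API; the cluster name, role ARN, and subnet IDs are placeholder assumptions.

```python
import boto3

eks = boto3.client("eks", region_name="us-west-2")  # region is an assumption

# Any pod created in the "default" namespace of this cluster will be
# scheduled onto Fargate instead of EC2 worker nodes.
eks.create_fargate_profile(
    clusterName="demo-cluster",
    fargateProfileName="default-namespace",
    podExecutionRoleArn="arn:aws:iam::111122223333:role/EKSFargatePodRole",
    subnets=["subnet-0abc1234", "subnet-0def5678"],  # private subnets
    selectors=[{"namespace": "default"}],
)
```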

Coming from an on-premises storage consulting background, I’ve seen a lot of pain around siloed storage and the inefficiencies that accompany it. AWS recommends Simple Storage Service (S3) as the solution and released S3 Access Points to simplify storage management and help customers move away from those siloed architectures toward the ideal data lake. The main goal of S3 Access Points is to ensure that only the correct resources have access to specific S3 buckets. Traditionally, IAM was the solution for this, but it gets challenging to manage as the environment grows. S3 Access Points greatly simplify the management of access permission rules for each application. Any access point can be restricted to a Virtual Private Cloud (VPC) to firewall S3 data access within a customer’s private network, and AWS Service Control Policies (SCPs) can be used to ensure all access points are VPC-restricted.
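Creating a VPC-restricted access point is a one-call operation. The sketch below is illustrative only; the account ID, bucket name, and VPC ID are placeholders.

```python
import boto3

s3control = boto3.client("s3control", region_name="us-west-2")

# Give one application its own named entry point into the shared bucket,
# reachable only from inside the specified VPC.
s3control.create_access_point(
    AccountId="111122223333",
    Name="analytics-app",
    Bucket="example-data-lake-bucket",
    VpcConfiguration={"VpcId": "vpc-0abc1234"},
)
```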

To further help customers build out a powerful data lake solution, AWS released several new services. Here are the releases I found most interesting:

  • Redshift Spectrum with AWS Lake Formation: Think of Lake Formation as a CloudFormation-style template for managing permissions and control policies across all of the data in your data lake. Combine this with Spectrum, which actually queries the data lake, and you have a simple way to manage both who has access to the data and the querying of the data itself.
  • Federated Query: This nifty feature lets users run a single query across multiple services, such as Redshift, S3, and RDS/Aurora PostgreSQL.
  • Data Lake Export: This lets customers export Redshift query results back into S3 in Apache Parquet format, in an automated fashion, so other analytics tools can access the data (see the sketch after this list).
  • RA3 instances: The overwhelming ask has been to scale storage and compute separately within a Redshift cluster. RA3 instances with managed storage enable this and intelligently move least recently used data to S3.
  • AQUA (Advanced Query Accelerator): AWS claims up to 10x better query performance than other cloud data warehouses. Think of this as a high-speed cache in front of S3. Remember the AWS Nitro chips mentioned previously? Each AQUA node uses Nitro chips to offload compression and encryption, plus field-programmable gate arrays (FPGAs) that speed up aggregations and filtering. AQUA can perform computations on raw data in place, saving the work of building data-movement pipelines.
  • UltraWarm: This is a new warm storage tier for Amazon Elasticsearch Service that addresses the pain of massive log data, which is often expensive to store at scale. Because of that cost, customers often limit how much data they retain for analysis and miss out on valuable insights. UltraWarm stores data in S3 and caches only the blocks actually being accessed, which AWS says can cost up to 90% less than the existing hot tier.
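To make Data Lake Export concrete, here is a hedged sketch of running Redshift’s UNLOAD with Parquet output from Python. The connection details, table, bucket, and IAM role are all placeholder assumptions.

```python
import psycopg2  # standard PostgreSQL driver; Redshift speaks the same protocol

conn = psycopg2.connect(
    host="example-cluster.abc123.us-west-2.redshift.amazonaws.com",
    port=5439,
    dbname="analytics",
    user="awsuser",
    password="example-password",
)

# Export query results to S3 as partitioned Parquet files that Spectrum,
# Athena, or EMR can then read directly.
unload_sql = """
    UNLOAD ('SELECT order_id, order_date, total FROM sales')
    TO 's3://example-data-lake-bucket/exports/sales_'
    IAM_ROLE 'arn:aws:iam::111122223333:role/RedshiftUnloadRole'
    FORMAT AS PARQUET
    PARTITION BY (order_date);
"""

with conn, conn.cursor() as cur:
    cur.execute(unload_sql)
```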

In addition to building out a rich suite of data lake tools, AWS is also continually improving its machine learning portfolio. One major area of focus is the three major ML frameworks that data scientists use (TensorFlow, PyTorch, and MXNet), as roughly 90 percent of data scientists use more than one framework. AWS has built out dedicated teams that focus on each framework and work to improve its performance and integration with other AWS services. Because of this, AWS is seeing some impressive performance numbers. The example AWS used compared the results of a Mask R-CNN training test that a certain company based in Mountain View ran on its private beta hardware, which took 35 minutes to complete, against AWS’s own optimized builds of the frameworks on Nitro-based P3 instances. The results were fairly impressive: TensorFlow on AWS took 28 minutes to complete the test, and both PyTorch and MXNet took 27 minutes.

AWS also released a ton of new features for SageMaker, AWS’s managed platform for machine learning and AI. They are summarized as follows:

  • SageMaker Studio is the first fully integrated development environment (IDE) for machine learning. It’s a web-based workspace that stores and collects everything you need, such as code, notebooks, and project folders, making it easier to manage and build a model.
  • SageMaker Notebooks lets customers spin up a notebook within seconds. If you want to grow that notebook, you simply specify the compute you need; there are no instances to manage, and migration from a smaller notebook to a larger one is automated.
  • SageMaker Experiments captures every step of model tuning and training. It lets customers capture all input variables, input parameters, outputs, and so on, and saves them as an experiment. It also stores all experiments so you can easily search historical experiments and share them with other users.
  • SageMaker Debugger improves the accuracy of machine learning models, is now on by default, and works with all three frameworks within SageMaker. Feature prioritization provides visibility into what’s driving the model, which dimensions are being left out, and whether models are biased and need to be changed.
  • SageMaker Model Monitor detects concept drift in deployed models.
  • And lastly, SageMaker Autopilot provides automatic model training with no loss of visibility or control (see the sketch after this list).
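For a feel of how Autopilot is invoked, here is a hedged sketch using the CreateAutoMLJob API: point it at training data in S3, name the target column, and let it explore candidate models. The bucket, column name, and role ARN are placeholder assumptions.

```python
import boto3

sm = boto3.client("sagemaker", region_name="us-west-2")

# Autopilot infers the problem type (e.g., binary classification on the
# "churned" column), then builds, tunes, and ranks candidate models.
sm.create_auto_ml_job(
    AutoMLJobName="churn-autopilot-demo",
    InputDataConfig=[{
        "DataSource": {"S3DataSource": {
            "S3DataType": "S3Prefix",
            "S3Uri": "s3://example-ml-bucket/churn/train/",
        }},
        "TargetAttributeName": "churned",
    }],
    OutputDataConfig={"S3OutputPath": "s3://example-ml-bucket/churn/output/"},
    RoleArn="arn:aws:iam::111122223333:role/SageMakerExecutionRole",
)
```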

AWS is also using machine learning to improve the functionality of other services within its portfolio. Let’s take a look at a few of these, spanning fraud detection, code review, enterprise search, and application profiling.

  • Amazon Fraud Detector is a fully managed fraud detection service that runs machine learning over historical data from a variety of activities to flag potentially fraudulent ones.
  • Amazon CodeGuru is a new ML-enabled offering that automates code review. It works like this: write your code and commit it, then add CodeGuru to the pull request (it integrates with GitHub and CodeCommit). From there, CodeGuru reviews the code, drawing on millions of code reviews across open-source projects and Amazon’s internal codebases, assesses it, and provides feedback on things like best practices, concurrency issues, incorrect resource handling, and input validation.
  • Amazon Kendra provides enterprise search functionality powered by machine learning. It’s fairly simple to use, only requiring that you provide your data sources and credentials. From there, Kendra pulls in the data and indexes it. The result is an intelligent enterprise search solution that doesn’t rely on simple keyword matching, but rather uses ML to understand the data and how it relates, ultimately providing faster and better results (see the sketch after this list).
  • Lastly is CodeGuru Profiler, which observes your application and, every five minutes, reports latency, CPU use, and so on to identify the most expensive lines of code and where to optimize.
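Querying a Kendra index looks like ordinary natural-language search from code. A minimal sketch, assuming an index already exists; the index ID and question are placeholders.

```python
import boto3

kendra = boto3.client("kendra", region_name="us-west-2")

# Ask a natural-language question; Kendra returns answers and ranked
# documents rather than raw keyword hits.
response = kendra.query(
    IndexId="3b9aeexample-0000-0000-0000-000000000000",  # placeholder
    QueryText="How do I rotate IAM access keys?",
)

for item in response["ResultItems"]:
    title = item.get("DocumentTitle", {}).get("Text", "")
    print(item["Type"], "-", title)
```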

The last service AWS focused on addresses customers that want to move to the cloud but are either too far from an AWS region to get acceptable latency, or can’t utilize AWS Outposts due to data center power and cooling constraints. For those customers, AWS released Local Zones, which place compute, storage, database, and analytics services close to large population centers as an extension of an AWS region to deliver low latency. This offering builds on Outposts and is targeted at those who don’t want hardware on-site. The first Local Zone is in Los Angeles, with more planned for other metro areas.
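Local Zones are opt-in per account. Here is a hedged sketch of opting in to the Los Angeles zone group and confirming it, assuming the documented us-west-2-lax-1 group name.

```python
import boto3

ec2 = boto3.client("ec2", region_name="us-west-2")

# Opt this account in to the Los Angeles Local Zone group.
ec2.modify_availability_zone_group(
    GroupName="us-west-2-lax-1",
    OptInStatus="opted-in",
)

# Confirm the zones in the group are now visible to the account.
zones = ec2.describe_availability_zones(
    AllAvailabilityZones=True,
    Filters=[{"Name": "group-name", "Values": ["us-west-2-lax-1"]}],
)
for zone in zones["AvailabilityZones"]:
    print(zone["ZoneName"], zone["ZoneType"], zone["OptInStatus"])
```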

The previous services are only a fraction of what AWS released this year, but you can see that the focus for AWS is helping customers transform their IT organization and ultimately the business. Even with all these services available, it’s important to find a partner that can provide the guidance, experience, and assistance to take advantage of all that AWS has to offer. Kovarus is an AWS Advanced Consulting Partner that helps customers on this IT transformation journey.


Looking to learn more about modernizing and automating IT? We created the Kovarus Proven Solutions Center (KPSC) to let you see what’s possible and learn how we can help you succeed. To learn more about the KPSC, go to the KPSC page.

Also, follow Kovarus on LinkedIn for technology updates from our experts along with updates on Kovarus news and events.