Cloud has won. More businesses than ever rely on cloud computing to deliver services reliably at scale. As businesses have shifted to cloud-native technologies, microservices, and abstraction, the battle for security has shifted with them. Security is no longer about firewall rules and access control; it's about data.
Your average scaling technology company now uses multiple cloud providers, a dozen or more infrastructure vendors, and hundreds of microservices. Each of these technologies comes with its own configurations, policies, and monitoring requirements. Gone are the days when we could expect security personnel to monitor and configure a few perimeter and DMZ firewalls to secure our infrastructure.
With the cloud come huge advantages in scalability and the ability to rightsize workloads. As a result, we no longer live in a world where we log into a VM and see runtime measured in years. A typical cloud service now gets stood up by API, runs a task, delivers a result to the next service, and spins down. This has caused an explosion in log volume, and we cannot expect security teams to manually parse billions of logs per month. Security must evolve along with the rest of our infrastructure teams.
Fundamentally, security detections are all about identifying anomalies in datasets and investigating their causes and effects. But how do you identify the abnormal if you can no longer pin down exactly what normal is?
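To make the idea concrete, here is a minimal sketch of what "identifying anomalies in a dataset" can look like at its simplest: establish a statistical baseline of normal, then flag observations that fall far outside it. The metric (hourly login counts), the sample values, and the three-sigma threshold are all illustrative assumptions, not a prescription.

```python
from statistics import mean, stdev

def flag_anomalies(baseline, new_values, threshold=3.0):
    """Flag values more than `threshold` standard deviations from the baseline mean."""
    mu = mean(baseline)
    sigma = stdev(baseline)
    return [v for v in new_values if abs(v - mu) > threshold * sigma]

# Hypothetical baseline: hourly login counts observed during a normal week.
baseline = [98, 102, 95, 110, 105, 99, 101, 97, 103, 100]

# New observations include a spike that might warrant investigation.
print(flag_anomalies(baseline, [104, 96, 450]))  # -> [450]
```

Real detection pipelines are far more sophisticated than a z-score, but the principle is the same: without a trustworthy baseline, there is nothing to measure deviation against.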
We must start by bringing all of our data together into one centralized location, and this cannot be just security data: it must include all of our infrastructure monitoring data. Performance impact is one of the most pivotal triggers for a security investigation; how do we act on it if our DevOps team is working from a different dataset than our security team?
Once all of our data lives under one roof, we can start with the easy stuff: rules. What do we know we don't want? We know we don't want databases exposed to the internet, and we don't want overly permissive roles attached to internet-facing compute resources. But how do we build detections for the risks we don't know about? With all of our data in one place, we can map out everything our infrastructure should be doing and use more advanced ML-based detection mechanisms to alert us to anything outside the bounds of what we expect. At that point, we've built a machine that alerts us to any new behavior in our environment. Fundamentally, new behavior should include any security incident, which gives us a great jumping-off point for our IR processes.
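The two layers described above can be sketched in a few lines: a rule layer that encodes the risks we already know we don't want, and a novelty layer that alerts on any behavior not seen during a learning period. The resource fields (`type`, `public`, `roles`) and the event signature (`principal`, `action`, `target`) are hypothetical names chosen for illustration; a real system would learn richer baselines, but the shape is the same.

```python
def check_known_risks(resource):
    """Rule layer: flag configurations we already know we don't want."""
    findings = []
    if resource.get("type") == "database" and resource.get("public", False):
        findings.append("database exposed to the internet")
    if resource.get("internet_facing") and "admin" in resource.get("roles", []):
        findings.append("overly permissive role on internet-facing resource")
    return findings

class NoveltyDetector:
    """Baseline layer: alert on any behavior signature not seen while learning."""

    def __init__(self):
        self.seen = set()

    def learn(self, event):
        """Record an event observed during normal operation."""
        self.seen.add(self._signature(event))

    def alert(self, event):
        """Return True if this event's signature has never been seen before."""
        return self._signature(event) not in self.seen

    @staticmethod
    def _signature(event):
        # Coarser or finer signature keys trade false positives for coverage.
        return (event["principal"], event["action"], event["target"])

# Rule layer catches the known-bad configuration.
print(check_known_risks({"type": "database", "public": True}))

# Novelty layer: learn normal behavior, then alert only on new behavior.
detector = NoveltyDetector()
detector.learn({"principal": "ci-bot", "action": "write", "target": "artifacts"})
print(detector.alert({"principal": "ci-bot", "action": "write", "target": "artifacts"}))   # False
print(detector.alert({"principal": "ci-bot", "action": "create-user", "target": "iam"}))   # True
```

The design point is that neither layer is sufficient alone: rules only cover risks we can name in advance, and the novelty detector only works if the centralized dataset captures a complete picture of normal behavior first.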
Just because we can no longer wrap a firewall around our on-prem infrastructure and call it good enough does not mean we can't effectively secure the cloud. Once we learn to leverage the data at our disposal to make informed, real-time decisions, we can drive better security outcomes for our stakeholders and customers. Organizations that treat the data problem as part of their cloud security strategy will be far better equipped to navigate a continuously evolving threat landscape.
Get in touch with our team to discuss your cloud security strategy.