Top Security Predictions that WON’T (But Should) Happen in 2021

If you’re scoffing at the predictability of a trend-related blog post in January, we couldn’t agree more. In an effort to be slightly less predictable, we’re taking a different approach by letting you in on what the cyber security community predicts will not happen this year. Rafal Los, industry veteran and Vice President of Security Strategy at Lightstream, recently took to social media to ask, “What’s the thing that probably won’t happen in cyber security in 2021?” Some of the responses from his followers were expected; others, not so much. So, without further ado…let’s take a look at their anti-trends for the coming months.

Tied for #1: Password Elimination & Meaningful Asset Management

Let’s face it, passwords and asset management seem like they’ve been a thorn in the side of the security industry since the invention of the computer. In fact, the first computer password was developed in 1961 at the Massachusetts Institute of Technology, for use with the Compatible Time-Sharing System (CTSS). Yet 60 years later – long after CTSS has given way to the modern Windows and macOS systems in use today – the general consensus is that passwords won’t be going away anytime soon. What is driving this skepticism?

For starters, we still don’t have a better way to protect our personal and enterprise data. Fingerprints and facial recognition are promising, but they still haven’t proven themselves to be ironclad. Adding to that are the security challenges COVID-19 has forced enterprises to overcome. With many companies now operating in work-from-home (WFH) environments and the very real possibility that this will be an ongoing strategy in the post-pandemic economy, remote workers are at a huge risk for identity-related breaches. Corporate IT is struggling to maintain control of computer-related assets, including software, unauthorized devices, and lapses in security controls.

Knowing that passwords are here for the foreseeable future and that asset management has never been more challenging, 2021 presents an opportunity for IT leaders. This is a critical time to adopt new ways to improve the identification, tracking and management of employees, applications and devices that access resources.

#2: Widespread Zero Trust Adoption

It’s hard to argue that the adoption of Zero Trust principles is anything but required for cybersecurity to advance. So, despite Zero Trust being at the foundation of Lightstream’s offerings, and what analysts and professionals feel is the future of security, there appears to be a lack of confidence in it being widely adopted in the coming months. It could be that many see Zero Trust as a tool or a widget to be installed – when in fact it’s a rethinking of the way systems interact and behave. Zero Trust goes to the root of security – identity and data – oddly the two things cyber security understands the least. There is something of significance here, but we’ll save that for a future article.

Enterprises should widely embrace a model that shuns the assumption that everything behind the corporate firewall is safe, or that there is such a thing as “behind the corporate firewall” anymore. The security of every organization depends on a new way of thinking, and the Zero Trust model of “never trusting, always verifying” would be hugely beneficial in an environment where remote working is becoming the norm. Lightstream’s Managed Security Services platform incorporates automation, Zero Trust concepts, best practices and industry-specific compliance to help IT leaders manage costs effectively, reduce complexity and improve the efficiency and efficacy of data center, network and cloud security.

#3: Fully Patched Environments/Systems

“Patching. It was a problem in 1999, and the social media responses prove that it continues to be a problem in 2021. What makes this such a difficult task?” ponders Rafal Los. Patching is the process of applying ‘fixes’ to existing deployed software packages, most often from the vendor, when flaws are identified and resolved. Similar to applying a physical patch to a garden hose to prevent water from leaking out, the purpose of the cyber security patch is to cover the vulnerability, keeping attackers from exploiting the flaw. Much like how water usually finds a way to break through that patch in your garden hose, attackers are experts in finding ways to circumvent applied patches when the underlying cause is not fully remediated. Therefore, enterprises must ramp up their vulnerability management strategies in the coming year.

The process of identifying, categorizing, prioritizing, and resolving vulnerabilities in operating systems, enterprise applications (whether in the cloud or on-premises), browsers and end-user applications is no small feat. It’s an ongoing process that requires considerable time and resources, which makes it an initiative that enterprise IT might best consider outsourcing.
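To make the prioritization step a little more concrete, here is a minimal sketch of ranking findings for remediation; the field names, weights, and sample data are illustrative assumptions rather than a prescribed methodology.

```python
from dataclasses import dataclass

@dataclass
class Finding:
    asset: str            # hostname or application name
    cve: str              # vulnerability identifier
    cvss: float           # base severity score, 0.0-10.0
    internet_facing: bool  # exposed to untrusted networks?
    patch_available: bool  # has the vendor shipped a fix?

def priority(f: Finding) -> float:
    """Simple weighted score: severity, boosted when the asset is
    Internet-facing and a fix already exists."""
    score = f.cvss
    if f.internet_facing:
        score += 2.0
    if f.patch_available:
        score += 1.0
    return score

# Hypothetical findings from a vulnerability scan.
findings = [
    Finding("web-01", "CVE-2021-0001", 9.8, True, True),
    Finding("db-02", "CVE-2021-0002", 6.5, False, True),
    Finding("hr-app", "CVE-2021-0003", 7.2, True, False),
]

# Work the remediation queue from the highest-priority finding down.
for f in sorted(findings, key=priority, reverse=True):
    print(f"{f.asset:8} {f.cve} priority={priority(f):.1f}")
```

In practice the scoring would also draw on asset criticality, exploit availability, and compliance deadlines, which is part of why the process consumes so much time and expertise.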

#4: Elimination of Phishing

It’s clear that no one expects phishing to go away, nor do we expect people to stop clicking on phishing lures – yes, this includes you, security professionals. So, it’s not a huge shock that this is among the anti-trends predicted for 2021. Phishing scams are becoming more sophisticated every day, so it’s imperative that corporate IT do their best to stay one step ahead of attackers. This may involve taking a multi-faceted security approach to lessen the number of phishing attacks and reduce the impact when attacks do occur.

#5: Unification of C-Suite & Security Professionals

There are many cynics in the cyber security industry who see a lack of executive accountability (with the exception of the Chief Information Security Officer). This threatens to further deepen a dangerous rift between security professionals and the companies/boards they support. Therefore, there is a major opportunity for companies to develop strategies to ensure accountability “goes both ways,” as we like to say.

#6: Effective Use of Machine Learning

People are still broadly skeptical of Machine Learning in cyber security. This subset of artificial intelligence has been significantly hyped since its inception, yet it still hasn’t fully come to fruition. Rafal Los suggests that while it sounds exciting, we may be a bit premature in believing that systems can learn from data, identify patterns and make decisions without human intervention. Besides, we’ve all seen what happens when machines become “artificially intelligent” – and we’re pretty sure we don’t like the ending of that movie.

Other Notable Anti-Predictions

There were several other responses to Raf’s social media posts worth mentioning. While you’d be hard-pressed to find someone who expects the number of breaches to go down or malware volume to decrease, they surprisingly didn’t make it into the top six predictions. Could that be due to industry optimism, or perhaps it’s just that we’re all tired of talking about these topics? On the flip side, the elimination of Windows XP systems made the top 10, which is astonishing since it officially became unsupported way back in 2014 (seriously, what’s it going to take?).

Software-security-related items appear several times, making it obvious that there are some who still have little faith in software security. Rafal Los blames the contentious relationship between security professionals and developers. According to Raf, a typical security professional/developer exchange [still] goes something like this:

Security professional: “You’re doing it wrong.”

Developer: “You don’t know what you’re talking about. Show me.”

Security professional: “It’s not my problem. Fix it.”

Clearly, this is another area where there is major room for unification in 2021.

Contact Lightstream to find out how we can help you unify strategies to build secure, generational capabilities that can help your organization accomplish its goals for 2021 and beyond.

The Red Herrings of Cybersecurity – Blog 3 of 4

Welcome to 2021.

I felt like I needed to write that we survived 2020 and are now well on our way to whatever things this year holds. In this series, I’m addressing the things that your vendors do or say that are “red herrings” – that is, they sound good but aren’t quite right.

In this installment, I’m going to address complexity. Having been involved in selling cybersecurity solutions since roughly 2007, I believe I know a few things about this.

I believe with all my heart the following statement to be true.

“The value of any security solution is inversely proportional to its complexity.”

Give that a think for a second.

The more pieces of a solution your vendor has to virtually duct-tape together for you, the less real security value the solution holds overall. I have no doubt in my mind that this is true. The reason for that – I’ve seen it with my very own eyes. I’ve witnessed 100+ page solution specifications that were so complex I don’t think anyone truly understood what was happening. Forget about actually explaining it.

I think customers sometimes believe that because a solution they’re being presented with is exceptionally complex, it must be better. That has something to do with the level of knowledge of the buyer. I’ve seen opportunistic sales teams take advantage of this, and it’s unfortunate.

The truth of the matter is simplicity always wins. It is difficult to debate that rationally. The more steps there are in a process, the higher the chance that there will be a failure along that chain of events. As a buyer, you should be looking for the simplicity of the overall solution. Additionally, look for simplicity in the various technology components, processes, and outcomes.

Rejecting complexity and insisting on simplicity is critical in security. It is particularly critical when you’re dealing with managed services. Here are three of the most important pieces when it comes to keeping it simple:

  1. Engagement process – the process by which a customer engages with the vendor for specific tasks, workflows, or requests; for example, requesting changes or working incidents
  2. Integrations – connecting technologies together to maximize their effectiveness must be simplified to keep the system from becoming brittle and incurring unexpected outages
  3. Technical solution – the various technical pieces of the solution should minimize complexity by limiting the number of specialized components, and the number of times that a workflow passes from one technical system to another

There you go, part 3 on complexity. In a nutshell – if you don’t understand the solution someone is trying to sell you because it’s uber-complex … it’s probably not right for you.

The Evolution of the CIO: The Convergence of Technology and Operations and How Enterprises Must Adapt

In a recent report entitled Gartner Top 10 Strategic Predictions for 2021 and Beyond, a Gartner contributor boldly stated that by 2024, 25% of traditional large-enterprise CIOs will be held accountable for digital business operational results, effectively becoming “COO by proxy.” There is no arguing that as enterprise processes have become digitized, today’s CIOs are being challenged to shoulder many tasks that traditionally fell under the operations umbrella. Over the past few decades, technology has helped streamline processes and create efficiencies across the enterprise, making IT support integral to every organizational silo, from marketing to finance to customer support.

How the role of the CIO changed in 2020

In 2020, the COVID-19 pandemic forced organizations worldwide to rethink the way they do business. IT teams scrambled to set up remote working capabilities for the majority of staff, which was no small feat from an operational standpoint. As we enter 2021, many are still successfully working from home thanks to operational controls, technology and the support staff that maintain it.

All of this has taught us how important ‘composability’ is in business. According to Gartner, one of the keys to enabling business success in 2021 and beyond is to engineer your organization for real-time adaptability and resilience in the face of uncertainty. That means accelerating digital business initiatives so that you’re able to quickly and smartly react to external circumstances and optimize business processes accordingly.

Companies that are cloud-native already have an advantage. However, as CIOs are increasingly being called on to enhance operations and help make their organizations more nimble, they have less time to focus on important initiatives such as cloud management and security.

The impact of digital transformation in the enterprise

External pressures are forcing the C-suite to evolve, and new roles keep popping up in response to this digital transformation. The organizational silos that have always existed are now becoming somewhat obsolete. Enterprises that were once vertical in nature are being flattened by digitization. As they become more horizontal, they’re increasingly resistant to the vertical roles that once governed them.

So how are business leaders supposed to overcome these challenges and equip their organizations with the composability they need to accomplish future goals? In the wake of such a drastic digital conversion in 2020, how do you build a C-suite that works with this new model? Who should report to whom? And finally, how should CIOs think differently in the coming year? Future-proofing the enterprise won’t be easy, and it will likely require significant changes.

Closing the gaps in what technology can do and what your business wants to do

Since people are often opposed to such change, it is not recommended that an internal leader conduct such a drastic shakeup. Instead, it is recommended that you engage a project management organization or other third-party consultant to analyze your business and technical processes. It’s also wise to partner with a culture consultant who can bring an outside view and help facilitate a smooth transition. You may find that outsourcing some of your IT services will free up your CIO and support staff so that they can focus on their core business, which is now heavily centered on enhancing operations.

When outsourcing, it’s best to find a partner with multiple views of the environment in order to address any gaps in service. Keep in mind that what you knew yesterday about the tech stack is not necessarily what you’ll need to know in the future. IT professionals should no longer consider themselves purely technologists but rather business optimization professionals, and outsourcing the baseline technology set will allow for that shift. The ideal partner can expertly manage your cloud environment and provide value through technical and operational best practices, cost optimization and a specific focus on security and compliance.

While Gartner’s view is that the roles of CIO and COO will merge in the coming years, it is unlikely that internal IT teams are ready for a total transformation. The breaking down of operations and IT silos has been a very slow process that may never be complete. Some CIOs don’t believe it would be entirely appropriate, as there are still many COO responsibilities that do not quite fit into the CIO’s business model. So, we may see a new title taking over this role in the future. As with everything, there will be early adopters, such as cloud-native businesses and others whose operations already live largely in the digital environment, as well as organizations, such as those running legacy systems, that may never adopt it.

The bottom line is that a CIO’s role, and that of their support staff, is no longer just about technology. Holistic thinkers know that as we move forward, the focus should be more about the overall business and culture of an organization. COVID-19 forced the operational model to change overnight, and it’s impossible to go back to the way it was before. The past year highlighted how CIOs can drive digitalization across the organization — and how their shift in focus from purely IT to contributing to overall business operations is integral to future success.

To learn about how Lightstream can help your organization overcome complex technology convergence challenges through a flexible mix of consulting, integration and managed services, visit www.lightstream.tech.

What the SolarWinds Compromise Means to You

A Summary Analysis of the SolarWinds Breach

What happened?

In the simplest of terms, SolarWinds – a company synonymous with Network Management Systems (NMS), whose software is used almost universally across ~300,000 customers worldwide – was compromised through what is being labeled a “supply-chain” attack. This means that attackers from what appears to be a nation state-sponsored APT (Advanced Persistent Threat) group executed an attack against the software company that allowed them to insert code into SolarWinds’ most popular platform, Orion.

Between March and June 2020, the attackers were able to insert code into the build system of SolarWinds’ Orion tool and push out updates containing what is effectively a Trojan horse. The attackers were then able to use the compromised Orion update to pull down malware (Sunburst) onto the systems that received the update. From there, attackers had what was effectively free rein on the victim’s network. Since they were operating from a tool that is meant to reach out and monitor/manage network and system infrastructure, the compromise allowed them virtually limitless capabilities on most networks they infected.

It should be noted that while the attack appears to be targeted at the government sector and its providers, such as FireEye in one documented case, it is being reported that any customer who had the relevant software installed should assume compromise.

*Please keep in mind that this situation is actively evolving with significant global effort to provide more information as it becomes available. The information contained in this advisory is subject to change at any time, and we encourage you to do additional research.

 

Why is the situation critical?

SolarWinds’ Orion is one of the most popular NMS (Network Management System) platforms out there. As a result, its confirmed install base is some 18,000+ networks worldwide. If you have the tool installed, you are advised to assume breach and immediately enact your breach protocols and procedures. Work through your incident response processes; in the event that you find no evidence of compromise, you will have peace of mind and certainty. Even if your organization does not have the tool installed, it’s highly likely that one of your partners or suppliers does, leading to a third-party risk management nightmare that requires urgent attention. Now is the time to reach out to your close partners, particularly those that have assets connected to your physical or virtual network, and obtain certainty about their current state relative to this compromise.

Organizations that find themselves compromised by this attack should assume that the attacker has had full access to all NMS and connected systems, assets, and data, and could move around the network undetected and exfiltrate sensitive data at will. There were no detection capabilities prior to this breach going public, and new indicators of compromise (IOCs) are being published as researchers around the world work to uncover them. It should be noted that attackers will adapt and change their signatures to avoid detection. It is highly advised that companies review their logs for signs of long-term compromise based on the IOCs known at this point.

 

What should you do now?

If your organization does have SolarWinds’ Orion installed, you can take immediate steps to mitigate while you investigate. At a minimum, we urge all customers to review their logging, network access, and security strategies at this time to minimize potential impact and mitigate risk. Additionally, we provide the following suggestions:

  1. If you have Orion installed on your network and rely on it for monitoring/management, you must immediately disable its access to the Internet. If you are unable to do so, limit access to only those IP addresses that are strictly required for operation.
    1. Additionally, perform in-depth log analysis going back to March for the IOCs being published, including domains being used in the attack (see the sketch after this list). Keep in mind that now that the attack has been uncovered, these will likely change as the attackers pivot to avoid discovery.
    2. Closely monitor all Orion NMS network activity, and perform packet-capture logging for evidentiary purposes, if possible.
  2. If you do not have Orion installed, you should not necessarily assume your organization is safe. Consider your third-party suppliers and connected partners and perform due diligence to understand whether they have the tool installed and could have a potential compromise.
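As a starting point for the log-analysis step above, something as simple as scanning exported proxy or DNS logs for known indicator domains can surface leads quickly. The sketch below is a minimal illustration; the indicator domains and sample log lines are placeholders, and real IOC lists should come from the advisories being published (and be refreshed as they change).

```python
import re

# Placeholder indicator domains for illustration only; substitute the IOC
# lists being published by researchers and vendors, and refresh them often.
IOC_DOMAINS = [
    "example-malicious-domain.com",
    "another-indicator.example.net",
]

PATTERN = re.compile("|".join(re.escape(d) for d in IOC_DOMAINS), re.IGNORECASE)

def scan_lines(lines):
    """Yield (line_number, line) for every log line matching a known indicator."""
    for lineno, line in enumerate(lines, start=1):
        if PATTERN.search(line):
            yield lineno, line.rstrip()

# In practice you would pass an open log export going back to March,
# e.g. scan_lines(open("proxy.log", errors="ignore")).
sample = [
    "2020-04-02 10:15:01 host-17 GET http://example-malicious-domain.com/update",
    "2020-04-02 10:15:02 host-17 GET https://intranet.example.com/portal",
]
for lineno, line in scan_lines(sample):
    print(f"possible IOC hit at line {lineno}: {line}")
```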

 

What can Lightstream do for you?

 

Right now

  • Lightstream’s security team can assist in assessment or analysis of the situation to understand potential impact to your organization
  • Lightstream’s teams should be alerted immediately via ticket if your organization has SolarWinds’ Orion installed so that we can take additional measures for investigation
  • If your organization has minimal, nonexistent, or insufficiently operationalized endpoint or network security monitoring and response capabilities, Lightstream can help by deploying and managing them, and by detecting and responding to threats such as this both today and in the future

Near and Long-term

  • Lightstream’s Edge Defense and Endpoint Defense services are optimized to identify, protect against, detect and respond to, and recover from threats to your organization’s IT infrastructure, systems, and applications. Enterprises large and small can use our fully managed platform to supplement their own security operations center (SOC) or fully outsource management, detection, and response 24x7x365
  • Lightstream’s expertise in Zero Trust architecture can be used to evolve your physical and virtual network to minimize the damage and business impact from even sophisticated attackers. We offer this service to our managed and new customers
  • Lightstream’s Security Advisory Services can perform a Security Strategy Program Framework (SSPF) assessment to understand how your existing security strategy would be impacted in cases such as this. This is offered to both new and existing customers.

 

Additional Links and Resources

Why IT is Rethinking Best of Breed Management Tools

Most of today’s IT leaders take a Best of Breed approach to procuring solutions and toolsets for their enterprises. They seek out the top-tier providers for each service, whether for bandwidth, managed network, managed security services, managed firewall, or endpoint protection. The commonly held belief is that by partnering with the leading vendor for each service, they can build an end-to-end IT environment that’s bullet-proof. Makes sense, right? Not exactly.

While it’s very common for silos to develop within infrastructure and support, this can create major obstacles for already strained IT departments. Co-managing multiple systems and vendors with little cross connectivity and integration between them is a complex and resource-draining process.

Inevitably, each provider has its own shared responsibility model that it strictly adheres to. Each model is unique, and providers do not communicate or compromise with other service providers to make up for their inherent differences. This leaves considerable disparities and gaps in service that internal IT departments are required to fill. Furthermore, deploying and maintaining an app that works within each of these environments can be extremely complicated and time consuming.

Real-world scenario #1

A major retail website experiences a breach in its data security. This breach causes millions of buyers to have their financial records exposed to the public Internet. Individuals scramble to disable accounts, dispute unauthorized charges, change passwords, and expedite getting new credit cards. Meanwhile, fingers are pointed at the retailer and the reputation of their brand is at stake. The company’s cloud service provider promptly releases a statement that due to the nature of its shared service model, the breach was due to no fault of its own, but rather the result of negligence or an error made by the managers of the retail site. In addition to helping its victimized customers, the retail company is faced with significant legal consequences followed by months or even years working to restore its reputation. The painful reality for this company is that all of this could have been avoided by having a single provider manage its network and cloud security and eliminate the gaps in service that put customer data at risk.

Real-world scenario #2

A tier 1 manufacturing company has a global network that it is being forced to operate in a remote-work environment due to restrictions caused by the COVID-19 pandemic. From the CTO’s perspective, they are having trouble supporting business units through network outages and moves, adds, and changes. They believe they need a network services provider; however, that provider must be able to work seamlessly with their other IT vendors. They soon discover that while it is possible for different service providers to work with one another, it comes at an exorbitant, unnecessary cost to their internal IT department. In short, the enterprise’s disparate vendors – all working in silos – are causing network connectivity problems, security issues, and an overly complicated move/add/change process.

A practical solution

By taking an integrated, “big picture” approach in each of these scenarios, the right service provider can customize an end-to-end solution that consists of any combination of bundled services. For instance, the manufacturer originally seeking network services is offered a comprehensive solution consisting of network and managed services, managed firewall, and managed endpoint security. Going forward, they will have one NOC handling any potential issues and ensuring their environment’s uptime. A single SOC to examine security and mitigate any potential threats. One provider managing all aspects of their IT environment with one single interface for them to work with. When presented with an integrated solution that is single-handedly responsible for Managed Security Services, SOC as a Service, SD-WAN solutions, and NOC as a Service, this customer has an “Aha!” moment. As the solution is being rolled out, the organization is looking forward to working within a safer, more streamlined environment designed to eliminate service gaps and help them realize greater efficiencies while cutting costs.

Expert advice

More than ever, IT departments within organizations are being burdened by securing and retaining talent, procuring and maintaining toolsets, and overcoming budgetary constraints. To assist with this arduous process, the experts at Lightstream recommend taking a three-step approach to your solution procurement strategy.

First, recognize the challenges your organization is having difficulty overcoming. Next, have a general idea of what the solution to your problems might be. And finally, understand the roadmap for your technology. Will a mobile workforce be a factor in the future? Where is your enterprise currently, and where would you like it to go? Don’t be afraid to use a whiteboard mentality, and don’t be constrained by what you think technology can offer. Clearly articulate your goals and allow the provider to come up with a customized solution that supports all of your business units globally.

The current pandemic has forced us all to think differently about the future. It has exposed challenges across industries, and within telecom and IT it has exposed gaps in the network community. In this new world, customers are struggling to promote connectivity and security in networks that weren’t designed to support mobile workforces. Security has been forced to take a back seat, and network capacity is being stretched very thin.

It is critical to partner with a provider who understands all of this and who can customize a solution to not only help you overcome your current challenges, but who can also help you reach your future goals.  Contact us to learn more about Lightstream’s integrated offerings and how we can help you simplify the procurement process and decomplicate your IT environment.

SD-WAN Benefits in the Time of COVID

It’s no secret that the COVID-19 pandemic has changed the way we work. Enterprises globally have had to shift their operations and shut their offices to help slow the spread of the disease. The result: In the blink of an eye, millions of employees began tapping corporate networks and applications that mostly reside in the cloud from their home-based remote offices.

To say this work revolution has been a strain is certainly an understatement.

Ill-prepared businesses are having to adjust to this new norm while ensuring they can secure, support, and manage their remote users and avoid unpredictable user experiences in the last mile. At the same time, security is more important than ever as corporate data moves into unsecured spaces.

This “forced” digital IT transformation in just under two months has had a notable impact on enterprise networking. In fact, an IDC survey of 250 large-to-medium-size companies in June found that almost half of all respondents (48%) reported they will increase investment in advanced automation platforms to reduce the manual management of the network.

These new challenges highlight why now is the perfect time for enterprises to consider an SD-WAN (Software-defined Wide Area Network) architecture to power their businesses and networks. SD-WAN is a WAN overlay architecture that allows enterprises to leverage any combination of transport—including MPLS, dedicated Internet, broadband, and LTE services—to securely connect users to applications.

SD-WAN improves cloud and on-premises application performance by optimizing enterprise network connectivity, in turn maximizing user experience and boosting productivity. SD-WAN platforms also provide greater visibility into what’s happening across the network. At the same time, SD-WAN solutions can proactively recognize and remediate many network issues in real time, thus reducing the impact on productivity and collaboration.

A Higher Level of Service & Agility

SD-WAN gives enterprises a higher level of service and more intelligence into what the WAN is doing. That means when the enterprise tasks the CIO with adding features such as live streaming across the WAN, SD-WAN provides that flexibility, often with little to no intervention required from the IT organization.

SD-WAN typically provides greater application intelligence, examining network traffic, identifying the application, and making classification and forwarding decisions accordingly. Network management teams can use this application-awareness to prioritize their business traffic across the entire WAN or for individual branches or remote users.

That’s key because the sources of application and network issues across a network can be numerous. Branch information can be collected and centrally processed in the SD-WAN policy engine, and technologies like machine learning and artificial intelligence can perform proactive diagnostics of network reliability and application performance.
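To illustrate what application-aware classification and forwarding can look like in practice, here is a simplified sketch of the kind of per-application path selection an SD-WAN edge might perform; the application names, link names, and latency thresholds are illustrative assumptions, not any particular vendor’s policy engine.

```python
# Illustrative per-application forwarding policy, keyed on measured link health.
POLICY = {
    "voip":     {"preferred": "mpls",      "max_latency_ms": 150, "fallback": "broadband"},
    "video":    {"preferred": "broadband", "max_latency_ms": 250, "fallback": "lte"},
    "saas-crm": {"preferred": "broadband", "max_latency_ms": 400, "fallback": "mpls"},
}

DEFAULT_RULE = {"preferred": "broadband", "max_latency_ms": 500, "fallback": "mpls"}

def choose_link(app: str, link_latency_ms: dict) -> str:
    """Pick the preferred link for an application, failing over
    when the preferred link's measured latency exceeds the policy threshold."""
    rule = POLICY.get(app, DEFAULT_RULE)
    preferred = rule["preferred"]
    if link_latency_ms.get(preferred, float("inf")) <= rule["max_latency_ms"]:
        return preferred
    return rule["fallback"]

# Example: current latency measurements reported by the edge device.
measurements = {"mpls": 40, "broadband": 320, "lte": 90}
for app in ("voip", "video", "saas-crm"):
    print(app, "->", choose_link(app, measurements))
```

A real policy engine would also weigh jitter, packet loss, and business priority, but the basic pattern of classifying traffic and steering it per application is the same.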

All of this can save IT organizations significant time and effort in deploying, reconfiguring, and troubleshooting, improving the remote IT operational experience as well as the end user experience.  In short, by employing SD-WAN, enterprises can get a better handle on their connectivity, their bandwidth, their network, and their applications—allowing businesses to operate smarter and more efficiently, especially during this new remote era.

Security Considerations

While greater application intelligence and visibility can be useful for security teams, SD-WAN technology can also open the door to security challenges if not properly addressed.

Because SD-WAN solutions bring distributed Internet to multiple locations within an enterprise, firewall technology is necessary to keep data and applications safe. Many SD-WAN providers have already integrated firewall technology and other security features into their products. Industry consolidation between SD-WAN providers and security providers is also on the rise, such as Palo Alto Networks’ recent acquisition of CloudGenix.

The key is to make sure you work with a partner who understands clearly how to secure SD-WAN solutions effectively with clear KPIs that work well with your IT organization.

Layering in Managed Services

The added network automation provided by SD-WAN affords organizations significant benefits, but enterprises can extend those operational gains by layering managed services on top. This allows an organization to redirect valuable IT personnel’s attention from “keeping the lights on” to activities that drive additional value for the company.

These services can range from Managed SD-WAN solutions to Managed Security Services that address security from the network’s edge all the way to the cloud, and incorporate automation, Zero Trust architecture, and best practices for security and industry-specific compliance.

To be sure, anything an organization can do to get its people doing more of what drives value to the enterprise sets a business apart from its competitors. SD-WAN is built to do just that.

From tapping an improved cloud-based delivery system to maximizing scalability and productivity to seamless security, SD-WAN will make smart businesses look, work, and perform smarter.

In these uncertain times, that can be a game changer.

 

The Red Herrings of Cybersecurity – Blog 2 of 4

Hello again.

In the previous blog in this series, I set things up for you. I explained the three things that I believe are “red herrings” in our industry – and now we’re going to dive into the first. Let’s go for a short, pointed, and honest ride.

There has been a consistency about managed services providers in the years I’ve worked for them. While not particularly comforting, the consistency of failings at least meant that we were all doing it wrong together. There is cold comfort in that.

One of those things that killed me for years is the speed of implementation. Or should I say, the complete lack thereof? In my years with HP, one of the managed services accounts that I worked with directly was grumpy because it had taken over 9 months to get an IDS successfully implemented. Yes, you read that right. Nine months. It’s not like security is a real-time battle of good and evil, and losing seconds is cause for concern, right?

I swore that I’d work to improve this, but ultimately I was unsuccessful. Then I left the company. But this stayed in my mind for a while. In my next role, I was too far removed from this situation to be able to affect it. That said, it never left my mind as my team and I advised CISOs on strategy and program development. The goal was always to decrease the time that elapsed between signing a contract and getting “security value.”

Fast-forward a bit to when I joined my previous role at Armor. The company was touting “2 minutes to deploy” and given my previous experience I thought I hit the jackpot. I’d learn over the next two years why I had been chasing a false dream. I’d recognize that faster is not necessarily better, although rapid time to value is desirable.

So what changed that swayed my thinking? Experience.

You see, I had the opportunity to witness a few “2-minute” deployments. They were categorically a disaster. Why? The answer lies in another question.

“How much protection can you expect from a security tool that does near-zero customization?”

If you answered the above with “about that, near-zero,” you are now in my headspace. One of the reasons there were so many install failures and missed issues downstream – and this is personal opinion now – was that we were going for speed over security. Sure, we had it installed in two minutes. But did it serve any value? That was debatable, at best.

The lesson is this – to provide a valuable outcome to your customers, you need to do the work. There is a multi-step process that needs to be followed that I’ll readily share with you, here.

1. Understand your customer, their environment, and their challenges. Without this, you’re applying peanut butter. There are no two customers that share the same strategy, architecture, network topology, and security response needs. This I can guarantee. So why would you pretend that a single stock configuration would do anything but provide for the most basic of controls? I would argue that without this step you’ll be doing more harm than good.

2. Prototype and test your configurations. Once you think you know your customer, develop the defensive model, policies, and response actions. Work hard to identify not just the 80% case but those 20% outliers that are going to cause trouble once you deploy. Here’s a hint – one of the most difficult things to get right is handling the disruptive cases. The situations where something happens to upset the customer’s ecosystem due to a configuration you’ve made are irreversible – especially during initial deployment. If you can’t get it right from the start, you’ll lose your customer’s trust before you ever get to protect them. Minimize your unknowns; that’s the best advice I can give.

3. Expertly guided deployment is essential. Far too many times I’ve heard customers say, “We got this,” and then proceed to bungle everything because of ego or something else. But I promise, if your provider is offering you assistance to deploy – take it. If they’re not, ask why they’re not helping you be successful.

Expect this effort to take you north of forty hours for a mid-size implementation. That’s my estimation. You, the provider, should spend a week of solid work to get to a deployment stage. That’s a far cry from 2 minutes but provides infinitely more security value.

While I still believe that deploying as quickly as possible to get security value is critical, I no longer believe that doing so at the expense of customization and testing is viable. Everything comes at a price, and in cybersecurity, the price for protection is time. And effort. It takes effort, planning, patience, and expertise on your part and your customers’. I don’t care how you present it – those are things you can’t rush.

Next up, removing complexity. I welcome your comments in the meantime.

Top 5 Azure Mistakes your Security Team is Making

With its scalable structure, pay-as-you-go pricing, and 99.95% SLAs, it’s no wonder Microsoft Azure is a long-time leader in the IaaS space. Its popularity is also due to the fact that it not only offers Infrastructure as a Service (IaaS) but also Software as a Service (SaaS) and Platform as a Service (PaaS). With Azure, clients can use the services purely in the cloud or combine them with any existing applications, data center or infrastructure already in place. But with all of this flexibility and reliability comes responsibility. It is critical that IT professionals understand Azure’s shared responsibility model as well as which security tasks are handled by the cloud provider and which tasks are handled by you.

Here are five common security mistakes that typically result from a rushed build/setup process and inadequate management, as well as tips on how you can avoid them when designing, deploying, and managing your Azure cloud solution.

1. Misconfiguration of Roles & Administration

Misconfiguration is a common occurrence in situations where an Azure solution is implemented without proper planning.

One aspect of misconfiguration is the assignment of roles to users. It is recommended that you follow the principle of least privilege and select a role that provides the user with only the amount of permission they need to do their job. Failing to follow this best practice leads to excess access permissions, which can easily be avoided by taking the time to properly assign these roles at the outset.

The old adage that “too many cooks spoil the broth” applies to countless scenarios, and Azure is no exception. Assigning too many administrators, failing to establish least-privilege permissions for those administrators, and not enabling Azure’s Multi-Factor Authentication (MFA) are risky oversights. MFA provides an extra layer of security by requiring administrators to provide authentication via phone call, text, or mobile app before they can log into the portal. This helps prevent the administrator’s account from being compromised or misused.
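As a simple illustration of auditing against the principle of least privilege, the sketch below compares current role assignments with the roles each job function actually needs; the role names echo Azure’s built-in roles, but the job-function mapping and sample assignments are assumptions made for illustration.

```python
# Hypothetical mapping of job functions to the roles they actually need.
ALLOWED_ROLES = {
    "developer": {"Contributor"},
    "auditor":   {"Reader"},
    "ops":       {"Contributor", "Reader"},
}

# Hypothetical export of current role assignments
# (for example, gathered with 'az role assignment list').
assignments = [
    {"user": "alice", "function": "developer", "role": "Owner"},
    {"user": "bob",   "function": "auditor",   "role": "Reader"},
    {"user": "carol", "function": "ops",       "role": "User Access Administrator"},
]

# Flag anyone holding a broader role than their job function requires.
for a in assignments:
    allowed = ALLOWED_ROLES.get(a["function"], set())
    if a["role"] not in allowed:
        print(f"review: {a['user']} ({a['function']}) holds '{a['role']}', "
              f"expected one of {sorted(allowed)}")
```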

2. Weak, Mismanaged Passwords

This misstep may seem obvious, but regardless of how many times people are warned against setting weak passwords, far too many people still use them. According to Microsoft, it sees over 10 million username/password pair attacks every day across its platforms. Failing to assign strong passwords and requiring them to be frequently updated creates vulnerabilities that are easily avoidable.

In setting up Azure services, Microsoft recommends the following to IT administrators (a small validation sketch follows this list):

  • Maintain an 8-character minimum length requirement (and longer is not necessarily better).
  • Eliminate character-composition requirements.
  • Eliminate mandatory periodic password resets for user accounts.
  • Ban common passwords, to keep the most vulnerable passwords out of your system.
  • Educate your users not to re-use their password for non-work-related purposes.
  • Enforce registration for multi-factor authentication.
  • Enable risk-based multi-factor authentication challenges.
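A few of these recommendations lend themselves to mechanical checks. The sketch below validates a candidate password against the length rule and a banned-password list; the banned list is a tiny placeholder, and this is a simplified illustration rather than Azure AD’s actual banned-password logic.

```python
# Placeholder banned-password list; real deployments use much larger,
# continuously updated lists of commonly attacked passwords.
BANNED = {"password", "123456", "qwerty", "letmein"}

def check_password(candidate: str, min_length: int = 8) -> list:
    """Return a list of policy violations (an empty list means acceptable)."""
    problems = []
    if len(candidate) < min_length:
        problems.append(f"shorter than {min_length} characters")
    lowered = candidate.lower()
    if any(banned in lowered for banned in BANNED):
        problems.append("contains a commonly used (banned) password")
    return problems

# Quick demonstration on sample inputs.
for pw in ("Jan2021", "password123", "correct horse battery staple"):
    issues = check_password(pw)
    print(pw, "->", issues or "ok")
```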

3. Not Enabling or Managing Logging

Failing to turn on the logging feature is another common misstep in the building process. First, logging must be turned on to provide visibility into access. But it doesn’t stop there. The Azure Activity Log must be regularly monitored to gain insight into who is accessing and managing your Azure subscription and to track all create, update, delete, and action activities performed. In addition, an investment in Sentinel – Azure’s cloud-native security information and event management (SIEM) platform – can go a long way, as it uses built-in artificial intelligence to quickly analyze large volumes of data across an enterprise.
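As a small example of the kind of review described above, the sketch below filters exported Activity Log records for write, delete, and action operations and tallies them by caller; the record fields loosely mirror the Activity Log’s general shape, and the sample records are assumptions made for illustration.

```python
from collections import Counter

# Hypothetical export of Activity Log records (for example, JSON exported
# from the portal or a log workspace), reduced to a few illustrative fields.
SAMPLE = [
    {"operationName": "Microsoft.Compute/virtualMachines/delete",
     "caller": "alice@example.com", "status": "Succeeded"},
    {"operationName": "Microsoft.Network/networkSecurityGroups/write",
     "caller": "bob@example.com", "status": "Succeeded"},
    {"operationName": "Microsoft.Compute/virtualMachines/read",
     "caller": "alice@example.com", "status": "Succeeded"},
]

def is_change(record: dict) -> bool:
    """Treat write, delete, and action operations as changes worth reviewing."""
    op = record["operationName"].lower()
    return op.endswith(("/write", "/delete", "/action"))

changes = [r for r in SAMPLE if is_change(r)]
by_caller = Counter(r["caller"] for r in changes)

for r in changes:
    print(r["caller"], r["operationName"], r["status"])
print("changes per caller:", dict(by_caller))
```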

4. Misconfiguration of Security Controls

Haste and lack of expertise in the configuration of your security tools can mean huge exposure risks for your organization. Failing to enable Azure Security Center and its highly valuable native security tools is a big no-go, as it leaves your data open to breaches.

Network Security Groups (NSGs) are the foundation of all network security designs in Azure and therefore should always be applied to safeguard the subnets of a virtual machine-based web application deployment. In a typical design, there is a virtual network and subnets. The subnets should not be assigned a public IP that could open unwanted ports. NSGs control access by permitting or denying network traffic between different workloads on a VNet, traffic from an on-premises environment into Azure, or direct Internet connections.
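To make the exposure point concrete, here is a small sketch that scans a list of inbound rules for anything allowing management ports from any source; the rule fields loosely mirror NSG security-rule properties, and the sample rules are assumptions made for illustration.

```python
MANAGEMENT_PORTS = {"22", "3389"}  # SSH and RDP

# Hypothetical inbound rules, loosely shaped like NSG security rules.
rules = [
    {"name": "allow-https", "direction": "Inbound", "access": "Allow",
     "source": "Internet", "destination_port": "443"},
    {"name": "allow-rdp-any", "direction": "Inbound", "access": "Allow",
     "source": "*", "destination_port": "3389"},
    {"name": "deny-all", "direction": "Inbound", "access": "Deny",
     "source": "*", "destination_port": "*"},
]

def risky(rule: dict) -> bool:
    """Flag inbound allow rules that open management ports (or everything)
    to any source or the Internet at large."""
    return (rule["direction"] == "Inbound"
            and rule["access"] == "Allow"
            and rule["source"] in ("*", "Internet")
            and rule["destination_port"] in MANAGEMENT_PORTS | {"*"})

for rule in rules:
    if risky(rule):
        print("review:", rule["name"], "opens port",
              rule["destination_port"], "from", rule["source"])
```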

5. Lack of Oversight

IT administrators often view their Azure cloud solution as just a data center, but it’s essential to remember that this isn’t a case of “set it and forget it.” In fact, your job is far from over once the migration or build is complete; ongoing management and security are critical to the success of your Azure environment.

Proper management of your solution requires a multi-faceted approach. In addition to maintaining compliance with organizational and regulatory security requirements, you must continuously monitor the machines, networks, storage, data services, and applications to protect against potential security issues. Prioritize security alerts and incidents so you can zero in on the most critical threats first. Troubleshooting will be easier if you track changes and create alerts to proactively monitor critical components. Managing update schedules will ensure that your solution is equipped with the latest tools to support ongoing operations.

The bottom line is that your Azure solution is only as strong as the team supporting it. Therefore, IT professionals must do everything in their power to remediate security vulnerabilities before attackers have a chance to take advantage of them. If security and technical expertise and staffing have become obstacles to the effective implementation of your cloud strategy, turn to Lightstream’s Cloud Managed Services (CMS) for help overcoming these challenges.