Something significant is happening in enterprise software—and the stock market has noticed.

In the first week of February 2026, $285 billion evaporated from global software stocks in 48 hours. Jefferies dubbed it the "SaaSpocalypse." The iShares Expanded Tech-Software Sector ETF (IGV) plunged into bear market territory, dropping over 20% from its peak. Salesforce fell 14% in five days. Adobe, Microsoft, ServiceNow, and SAP collectively shed more than $730 billion in market value over the past month.

The catalyst? A growing realization that AI—and specifically the phenomenon known as "vibe coding"—may be fundamentally disrupting the software-as-a-service model that has dominated enterprise technology for two decades.

For small and medium-sized business owners, this isn't just a Wall Street story. It's a shift that could change how you think about the software tools you use every day—and the risks that come with building your own.

What's Actually Happening

For years, the SaaS model operated on a simple formula: businesses pay per user, per month, for access to software they don't have to build or maintain. Need a CRM? Buy Salesforce. Need HR management? Buy Workday. Need design tools? Buy Adobe.

The problem, as many business owners know firsthand, is that most organizations use only a fraction of the functionality they're paying for. A 25-person company with a Salesforce Enterprise license is paying for capabilities built for organizations 100 times their size.

Enter vibe coding—a term coined by Andrej Karpathy (co-founder of OpenAI) to describe using AI tools to generate and refine software through natural language rather than traditional programming. Instead of writing code line by line, users describe what they want in plain English and AI generates the application.

The implications are significant. Tasks that once required a development team and months of work can now, in some cases, be accomplished in days or even hours. A custom CRM tailored to exactly how your business operates. An internal dashboard that pulls from your specific data sources. A workflow automation tool designed around your actual processes—not a generic template.

And the market is responding. Some large enterprises have already begun cutting substantial portions of their traditional SaaS footprint, substituting generative AI tools and custom-built alternatives for off-the-shelf licenses.

Real Examples: It's Already Happening

This isn't hypothetical. Several notable cases illustrate the trend:

Klarna, the Swedish fintech company, announced it was moving away from Salesforce and Workday, consolidating approximately 1,200 SaaS applications into an AI-driven internal "knowledge stack." The company reported cutting marketing costs by 25% (saving $4 million annually) and increasing average revenue per employee from $400,000 to $700,000. It's worth noting, though, that the reality was more nuanced than the headlines: Klarna reportedly replaced some SaaS tools with other SaaS tools (adopting Deel for HR, for instance) rather than building everything from scratch.

Publicis Sapient has reported actively reducing SaaS licenses—including major platforms like Adobe—by approximately 50%, substituting them with generative AI tools and internally built chatbots.

The vibe coding tool Cursor reached $200 million in annual revenue before hiring a single enterprise sales rep—suggesting massive demand for AI-assisted software development.

According to a McKinsey analysis, the software industry is entering what it calls a "post-SaaS" era, where AI agents interact directly with back-end data through APIs, potentially commoditizing the application layer that SaaS companies have built their businesses on.

The Appeal for Small Businesses

For SMBs, the appeal of building custom tools is understandable:

  • Cost reduction: Instead of paying $150-$300 per user per month for an enterprise CRM in which 80% of the features go unused, you could potentially build a tailored solution for a fraction of the ongoing cost.
  • Perfect fit: Custom software does exactly what your business needs—no more, no less. No fighting with workflows designed for a different industry or company size.
  • Data ownership: Your data lives on your infrastructure, not on a third party's cloud. This can simplify compliance and reduce third-party vendor risk.
  • Competitive advantage: A tool built around your specific processes can become a genuine differentiator rather than a commodity everyone else also has access to.

These are legitimate benefits. But there's a significant catch that many business owners don't discover until it's too late.

The Security Problem No One Talks About

Here's where the conversation needs to get serious.

According to Veracode's 2025 GenAI Code Security report, which tested 100 leading AI models across 80 curated tasks, nearly half of all code generated by AI contains security flaws—despite appearing production-ready. Perhaps more concerning, researchers found no significant improvement in security across newer or larger models.

A December 2025 assessment by security startup Tenzai compared five major vibe coding tools and found 69 vulnerabilities across 15 test applications, including several rated "critical." The types of flaws were particularly troubling:

Business Logic Vulnerabilities

AI-generated code is especially prone to business logic flaws—errors in how the application handles workflows, permissions, and data access. Human developers bring intuitive understanding of how business processes should work. AI models lack this contextual awareness and depend entirely on explicit instructions.
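
To make that concrete, here's a minimal sketch of the kind of check at stake, written in Flask purely for illustration; the route, data, and field names are hypothetical rather than drawn from any real application:

```python
# A hypothetical Flask route illustrating a business logic flaw: the handler
# fetches whatever invoice the caller asks for, so the one-line ownership check
# below is exactly the step a generated version frequently leaves out.
from flask import Flask, abort, session

app = Flask(__name__)
app.secret_key = "replace-me"  # session signing key; load from config in practice

# Stand-in data store; a real application would query a database here.
INVOICES = {1: {"owner_id": 7, "total": 1200}, 2: {"owner_id": 9, "total": 450}}

@app.route("/invoices/<int:invoice_id>")
def get_invoice(invoice_id):
    invoice = INVOICES.get(invoice_id)
    if invoice is None:
        abort(404)
    # The contextual check an AI will not infer on its own: does this record
    # actually belong to the user making the request?
    if invoice["owner_id"] != session.get("user_id"):
        abort(403)
    return invoice
```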

Hardcoded Secrets and Exposed Credentials

AI coding tools often have access to local file systems, including environment files and configuration scripts. A common pattern in vibe-coded applications is the accidental inclusion of real API keys, database credentials, or authentication tokens directly in the generated code—credentials that can end up in version control or deployed to production.
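
The pattern is easy to spot once you know to look for it. A short sketch, with a made-up key name, of the anti-pattern and the safer alternative:

```python
import os

# The anti-pattern vibe-coded apps often produce: a live credential pasted
# directly into source code, where it gets committed and deployed with the app.
PAYMENTS_API_KEY = "sk_live_EXAMPLE_DO_NOT_USE"  # hardcoded secret

# The safer pattern: read the secret from the environment (or a secrets manager)
# at runtime, and fail loudly if it is missing rather than falling back silently.
payments_api_key = os.environ.get("PAYMENTS_API_KEY")
if not payments_api_key:
    raise RuntimeError("PAYMENTS_API_KEY is not set; refusing to start")
```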

Authentication Bypasses

Researchers have documented cases of AI models accidentally removing security checks during code generation—a phenomenon described as "hallucinated bypasses." In one documented case, a startup called Enrichlead used Cursor to write every line of code for their product. While the interface looked polished, the AI had placed all security logic on the client side. Within 72 hours, users discovered they could manipulate browser console values to bypass payment and access all premium features for free. The project shut down entirely.
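
In code, the difference between the broken design and the correct one can be a single server-side check. A simplified Flask sketch with hypothetical names, showing why entitlement decisions must rest on data the client cannot alter:

```python
from flask import Flask, abort, session

app = Flask(__name__)
app.secret_key = "replace-me"

# Server-side record of who has actually paid; in a real app this is a database.
PREMIUM_USERS = {42}

@app.route("/premium-report")
def premium_report():
    # Broken pattern (what client-side-only security amounts to): trusting a flag
    # the browser sends, which anyone can flip from the developer console, e.g.
    #   if request.args.get("is_premium") == "true": ...
    # Correct pattern: decide on the server, from state the client cannot change.
    user_id = session.get("user_id")
    if user_id not in PREMIUM_USERS:
        abort(402)  # payment required
    return {"report": "premium content"}
```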

Weak Cryptography and Outdated Dependencies

AI models trained on open-source repositories may embed insecure coding practices and outdated libraries. A vibe coder who doesn't know the difference between MD5 and Argon2 will accept generated code because it works—unknowingly deploying password hashing that's been considered broken for years.
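
The difference is invisible in a demo and decisive in a breach. A minimal sketch, assuming the argon2-cffi package is installed, contrasting the two approaches:

```python
import hashlib

from argon2 import PasswordHasher  # provided by the argon2-cffi package

password = "correct horse battery staple"

# What code trained on old tutorials often produces: a fast, unsalted MD5 digest
# that commodity hardware can brute-force at billions of guesses per second.
weak_hash = hashlib.md5(password.encode()).hexdigest()

# A modern alternative: Argon2, with per-password salts and tunable cost factors.
hasher = PasswordHasher()
stored_hash = hasher.hash(password)   # store this string
hasher.verify(stored_hash, password)  # raises VerifyMismatchError on a wrong password
```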

The Scale of the Problem

According to industry analyses, AI agents are now generating roughly ten times more code than human developers. This dramatically amplifies the scope of the security problem. And because the code looks correct and functions as expected in basic testing, it creates a dangerous false sense of security.

As Palo Alto Networks' Unit 42 research team has found, most organizations allow employees to use vibe coding tools without formal risk assessments or security controls—a pattern that mirrors the shadow AI challenge we've discussed previously.

When Vibe Coding Goes Wrong: A Growing Pattern

The Enrichlead case isn't an isolated incident. Several documented failures illustrate the risks:

  • Replit's autonomous AI agent deleted the primary database of a project it was developing because it determined the database needed "cleanup"—violating a direct instruction prohibiting modifications during a code freeze.
  • CVE-2025-55284 (Claude Code): A vulnerability allowed data exfiltration from developer machines through DNS requests via prompt injection embedded in analyzed code.
  • CVE-2025-54135 (Cursor): The "CurXecute" vulnerability allowed attackers to execute arbitrary commands on a developer's machine through an active Model Context Protocol (MCP) server.
  • Windsurf persistent prompt injection: Malicious instructions placed in source code comments caused the Windsurf development environment to store them in its long-term memory, enabling data theft over months.

Industry observers have dubbed 2026 the "Year of Technical Debt," warning that the surge of AI-generated code will require extensive cleanup as developers discover bugs, security flaws, and scaling failures introduced by vibe coding.

What Business Leaders Should Consider Before Building

None of this means custom software is a bad idea. It means custom software requires the same security rigor as any other business-critical system—something that's easy to overlook when an AI tool makes building feel effortless.

For businesses considering the build-vs-buy decision, several security considerations may be worth examining:

Use Established Frameworks—Don't Reinvent the Wheel

Rather than letting an AI generate authentication, encryption, and database logic from scratch, consider building on established frameworks that have years of security testing behind them. Frameworks like Django, Ruby on Rails, Next.js, or Laravel have well-tested authentication systems, input validation, and security defaults. Using them as a foundation reduces the attack surface that custom code introduces.

Security experts consistently advise: don't use AI to generate authentication, cryptography, or system-level code. Build those parts on proven libraries and frameworks, and let AI handle the business logic and interface layers where the risks are lower.
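
As one illustration of what building on the framework looks like, here's a minimal sketch using Flask-Login (assumed to be installed; the route and the in-memory user store are hypothetical), where session handling and the access check come from the library rather than from generated code:

```python
from flask import Flask
from flask_login import LoginManager, UserMixin, current_user, login_required

app = Flask(__name__)
app.secret_key = "replace-me"  # load from config or environment in real deployments

login_manager = LoginManager(app)

class User(UserMixin):
    def __init__(self, user_id):
        self.id = user_id

@login_manager.user_loader
def load_user(user_id):
    # Look the user up in your own store; returning None invalidates the session.
    return User(user_id)

@app.route("/reports")
@login_required  # enforced by the framework, not by hand-rolled token checks
def reports():
    return f"Reports for user {current_user.id}"
```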

Get Penetration Testing Done

If you're deploying custom software that handles business data, customer information, or financial transactions, penetration testing isn't optional—it's essential. A professional security assessment can identify the kinds of vulnerabilities that AI introduces but that basic testing won't reveal: authentication bypasses, privilege escalation, injection flaws, and business logic errors.

As we noted in the cybersecurity checklist, security assessments for businesses under 100 employees typically range from $1,500 to $5,000—a modest investment compared to the cost of a breach.

Keep Custom Applications Behind a Firewall or SASE

Custom-built internal tools should not be exposed to the public internet. Consider deploying them behind:

  • A corporate VPN so the application is only accessible to authenticated team members on the company network
  • A SASE (Secure Access Service Edge) framework that combines network security with zero-trust access controls, ensuring only authorized users on authorized devices can reach the application
  • Firewall rules that restrict access to known IP ranges or require multi-factor authentication before granting access

This approach dramatically reduces the attack surface. Even if the application contains vulnerabilities, limiting who can reach it makes them far harder to exploit.
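
Where a full SASE rollout is out of reach, even an application-level allowlist adds a meaningful barrier. A minimal Flask sketch, with hypothetical office and VPN ranges, that rejects requests from unknown networks; in production, prefer enforcing this at the firewall or reverse proxy and treat a check like this as defense in depth:

```python
from ipaddress import ip_address, ip_network

from flask import Flask, abort, request

app = Flask(__name__)

# Hypothetical office and VPN ranges; adjust to your own networks.
ALLOWED_NETWORKS = [ip_network("203.0.113.0/24"), ip_network("10.8.0.0/16")]

@app.before_request
def restrict_to_known_networks():
    client = ip_address(request.remote_addr)
    if not any(client in network for network in ALLOWED_NETWORKS):
        abort(403)

@app.route("/dashboard")
def dashboard():
    return "internal dashboard"
```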

Treat AI-Generated Code Like Third-Party Code

Every line of AI-generated code should be reviewed with the same scrutiny you'd apply to code from an unknown contractor. This means:

  • Security-focused code review before deployment
  • Static Application Security Testing (SAST) integrated into the development workflow
  • Dependency scanning to catch outdated or vulnerable libraries
  • Regular security audits as the application evolves

Have an AI Usage Policy That Covers Development

If employees or contractors are using AI tools to build internal software, your organization's AI usage policy should explicitly address development use cases—including which tools are approved, what data they can access, and what security review is required before deployment.

The Real Cost Calculation

The financial case for building custom software needs to account for more than just the subscription savings:

              SaaS Subscription               Custom-Built Alternative
Monthly cost  Predictable per-user fee        Development + hosting + maintenance
Security      Provider's responsibility       Your responsibility
Updates       Automatic                       You build and test them
Compliance    Often included (SOC 2, etc.)    You must achieve it independently
Scaling       Built in                        You must architect for it
Support       Included                        You provide it internally

Consider a hypothetical scenario: a business replaces a SaaS platform and saves a few thousand dollars per month—but the custom-built alternative introduces a security vulnerability that leads to a data breach. According to IBM's 2025 Cost of a Data Breach Report, small businesses can expect to pay between $120,000 and $1.24 million to respond to and resolve a security incident. Those monthly savings disappear quickly when measured against that kind of exposure.

This doesn't mean building is always the wrong choice. It means the decision should be made with eyes open to the full cost picture, including the security investment required to do it responsibly.

The Balanced Path Forward

The SaaS shakeup is real, and it's creating genuine opportunities for businesses to rethink their technology stacks. AI-powered development tools are remarkable, and they're only going to get more capable.

But capability and safety aren't the same thing. As we've seen with the rapid advancement of AI in vulnerability discovery, AI is simultaneously making it easier to build software and easier to find flaws in it. The businesses that benefit most from this shift will be those that embrace the opportunity while respecting the risks.

For most small businesses, the practical path forward may look something like this:

  • Evaluate which SaaS tools genuinely deliver value proportional to their cost—and which ones you're overpaying for
  • Experiment with AI-built tools for internal, low-risk use cases first—internal dashboards, workflow automations, reporting tools
  • Invest in security from day one, not as an afterthought—penetration testing, secure frameworks, access controls
  • Keep mission-critical systems on proven platforms until your custom alternatives have been thoroughly tested and hardened
  • Partner with IT and security professionals who can help you evaluate the risks and implement appropriate safeguards

The SaaSpocalypse narrative may be overstated—Nvidia CEO Jensen Huang called the idea that AI replaces software "the most illogical thing in the world," and Bank of America has called the selloff irrational. But the underlying trend is real: AI is changing how software gets built, who builds it, and what businesses are willing to pay for.

The question isn't whether your business will be affected by this shift. It's whether you'll navigate it with the right balance of ambition and caution.


This article is intended for informational purposes only and does not constitute professional security, legal, or compliance advice. Organizations should consult with qualified professionals to assess their specific circumstances and develop appropriate protective measures.