Finite State Blog

Opaque Vendors: How to Secure Components Without Source Code Access

Written by Edwin Shuttleworth | Jul 4, 2025 9:44:18 PM

Let’s start by imagining an all-too-common scenario.

Your CISO has asked you to certify the security of a critical IoT component in preparation for the next wave of evolving regulations that you need to comply with. The vendor can’t or won't share source code, their SBOM looks surface-level at best, and your traditional security tools can only scan what they can see, which in this case isn’t much. Sound familiar?

If you're a product security leader in a highly regulated industry like automotive, healthcare, or critical infrastructure, this scenario probably keeps you up at night. If it doesn’t, it’s only a matter of time until it does.

The rise of "opaque vendors", that is, suppliers who provide minimal transparency into their components, has created a massive blind spot in modern supply chain security. And with new regulations demanding unprecedented visibility into every piece of code in your products, the stakes have never been higher.

The Opaque Vendor Crisis: It's Worse Than You Think

In today’s global supply chain, most OEMs depend on third-party software and hardware to bring products to market. These upstream suppliers range from well-maintained open-source projects to vendors who provide little more than a compiled binary blob. But it doesn’t matter who makes a component: if it ships in your product, it’s your responsibility to secure it, and you need to solve your visibility problem quickly.

An “opaque vendor” isn’t just someone who won’t give you source code; opacity exists on a spectrum. Even vendors who provide source code often deliver it without build materials, documentation, or dependencies needed to understand what you’re actually getting.

The truly opaque vendors? They give you a binary blob, minimal documentation, and maybe a basic SBOM that lists “uses PostgreSQL” without mentioning any of PostgreSQL’s dozens of dependencies, extensions, or dependencies of those extensions. Either way, you’re flying blind and on the hook with regulators and customers if (or should I say, when) security issues develop. 


Unfortunately, this problem is proliferating at an alarming rate for several interconnected reasons: 

Global supply chain complexity has reached a breaking point. Today's products don't just have vendors; they have vendors of vendors, and vendors of those vendors, often stretching five or more tiers deep. That specialized chip in your medical device contains software from a company that uses libraries from another company that depends on open source components maintained by volunteers. Even running an NVIDIA graphics card on Linux requires pulling in closed-source binary blobs that become part of your product's attack surface.

Specialization is driving fragmentation. No one builds everything in-house anymore; it’s not practical or cost-effective, and in fast-paced markets like IoT, it’s just not possible. The expertise required for different components from wireless chipsets to cryptographic libraries has become so specialized that companies inevitably rely on external suppliers for critical functionality.

Regulatory pressure is forcing uncomfortable conversations. The EU Cyber Resilience Act, FDA medical device cybersecurity mandates, and automotive UNECE WP.29 requirements aren't asking nicely for transparency; they're demanding it. Companies that can't demonstrate security visibility into their entire product stack face real regulatory and liability risks.

 

Why Your Security Stack Fails Against Opaque Components

Most enterprise security tools assume you have source code access. Your static analysis tools, vulnerability scanners, and dependency checkers are designed for a world where you’re writing every line of code you’re shipping. But when you aren't, those tools fail to get a complete picture, leaving swaths of your software supply chain unaccounted for. 

I've worked with security teams who are already struggling to audit their own first-party code, let alone dive deep into every upstream component they're integrating. The natural response is to rely on vendor attestations and self-provided SBOMs. However, in our experience, these documents are frequently incomplete, outdated, or surface-level at best. You'll see an SBOM that lists major components but completely omits their dependencies, which essentially gives you a one-layer-deep view of what might be a ten-layer-deep dependency stack.
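That "one-layer-deep view" problem is easy to see with a small sketch. The dependency graph below is entirely hypothetical (the package names are illustrative, not a real SBOM), but walking it shows how a top-level component list undercounts what actually ships:

```python
# Illustrative dependency graph: top-level component -> direct dependencies.
# All names are made up for this sketch; a real SBOM would be far larger.
deps = {
    "camera-firmware": ["web-server", "postgresql"],
    "web-server": ["openssl", "zlib"],
    "postgresql": ["openssl", "libxml2", "icu"],
    "openssl": [],
    "zlib": [],
    "libxml2": ["zlib"],
    "icu": [],
}

def transitive_deps(component, graph):
    """Depth-first walk collecting every transitive dependency."""
    seen = set()
    stack = list(graph.get(component, []))
    while stack:
        dep = stack.pop()
        if dep not in seen:
            seen.add(dep)
            stack.extend(graph.get(dep, []))
    return seen

one_layer = deps["camera-firmware"]
full = transitive_deps("camera-firmware", deps)
print(f"one-layer SBOM view: {len(one_layer)} components")  # 2
print(f"full transitive view: {len(full)} components")      # 6
```

Even in this toy example, the surface-level SBOM shows a third of what's really there; in a real ten-tier stack, the gap is far wider.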

This is where the biggest misconception comes in. Many teams operate under the assumption that if something goes wrong with an upstream component, "it's not my fault, it's the vendor's fault." That might feel logical, but it's a dangerous assumption, especially in a post-CRA age where regulators don't particularly care whose code introduced a vulnerability, just who shipped it. 

Example: The Real-World Cost of This Assumption

We recently saw a rash of vulnerabilities in high-end pan-tilt-zoom cameras traced back to a control chip manufactured by HiSilicon. That chip was sold to VHD, a white-label camera manufacturer, which then sold its assemblies to multiple camera vendors who integrated them into finished products. Somewhere along the way, a pair of vulnerabilities was introduced that allowed remote code execution in the embedded web server.

When threat actors discovered and exploited this vulnerability, it wasn't HiSilicon's or VHD’s customers who were compromised, but the end users of those cameras. The camera manufacturers bore the brunt of the security incident, despite the vulnerability originating upstream in a component they didn't write.

 

When Source Isn’t an Option, Binary Analysis is Your X-Ray Vision for Black Box Components

Binary analysis examines the actual executable code that runs on devices without needing the original source code. When dealing with opaque vendors, it isn’t just helpful, it’s your only option. 


You can’t secure what you can’t see, but binary analysis shines a light in all the dark, forgotten corners of your code.

At Finite State, our approach treats every component equally, regardless of whether it's your first-party code or came from an upstream vendor five tiers deep in your supply chain. We don't have a concept of "this is your code, so we'll analyze it" versus "this is third-party code, so we'll trust it." Everything gets the same level of scrutiny.

Deep Firmware Unpacking and Component Mapping

The process starts with firmware unpacking using a variety of mechanisms and strategies to pull apart binary images into their constituent components. The goal is to offer complete visibility by mapping every library, version, and dependency without requiring source code access. This includes detecting obfuscated or deliberately hidden components that might not appear in vendor documentation.

We use a wide variety of unpacking mechanisms depending on the exact system we're dealing with, then deeply inspect every aspect of what we find.
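As a rough illustration of the component-mapping step, here is a minimal sketch that fingerprints files in an already-unpacked firmware tree by SHA-256 and matches them against a database of known library builds. This is a deliberate simplification: production pipelines also use string and symbol matching, fuzzy hashing, and many other signals, and the database here is a hypothetical stand-in.

```python
# Sketch: map components in an unpacked firmware tree via file fingerprints.
# KNOWN_COMPONENTS is a hypothetical fingerprint database; real component
# identification uses many more signals than exact hashes.
import hashlib
import os

# sha256 hex digest -> (component name, version)
KNOWN_COMPONENTS = {}

def sha256_file(path):
    h = hashlib.sha256()
    with open(path, "rb") as f:
        for chunk in iter(lambda: f.read(65536), b""):
            h.update(chunk)
    return h.hexdigest()

def map_components(root, known=KNOWN_COMPONENTS):
    """Return identified components plus files that need deeper inspection."""
    identified, unknown = {}, []
    for dirpath, _, filenames in os.walk(root):
        for name in filenames:
            path = os.path.join(dirpath, name)
            digest = sha256_file(path)
            if digest in known:
                identified[path] = known[digest]
            else:
                unknown.append(path)  # candidates for deeper analysis
    return identified, unknown
```

The interesting output is often the `unknown` list: files that match nothing in any database are exactly the obfuscated or hidden components that deserve the closest scrutiny.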
 

Vulnerability Correlation at Scale

Once we have that component map, we correlate it against over 200 vulnerability data sources and threat intelligence feeds we check daily. This isn't just about known CVEs; we're looking for patterns that indicate potential zero-day vulnerabilities, licensing risks, misconfigurations, outdated dependencies, and dangerous defaults.

When a new critical vulnerability is disclosed, you get immediate alerts because we automatically correlate it with the components in your products. If a vulnerability comes out tomorrow that affects your supply chain, you'll know immediately.
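The correlation step itself is conceptually simple, even if doing it across 200+ sources is not. A minimal sketch, using made-up advisory IDs and a toy "fixed-in version" model (real feeds use ecosystem-specific version range semantics):

```python
# Sketch: correlate an extracted component map against a vulnerability feed.
# Advisory IDs are deliberately fake placeholders, not real CVEs.
VULN_FEED = [
    # (component, fixed_in_version, advisory_id)
    ("openssl", (1, 0, 2), "CVE-XXXX-0001"),
    ("busybox", (1, 31, 0), "CVE-XXXX-0002"),
]

def parse_version(v):
    return tuple(int(part) for part in v.split("."))

def correlate(components, feed=VULN_FEED):
    """components: dict of name -> version string. Returns alerts to raise."""
    alerts = []
    for name, version in components.items():
        for vuln_name, fixed_in, advisory in feed:
            if name == vuln_name and parse_version(version) < fixed_in:
                alerts.append((name, version, advisory))
    return alerts

firmware_components = {"openssl": "1.0.1", "busybox": "1.32.0", "zlib": "1.2.11"}
print(correlate(firmware_components))
# openssl 1.0.1 predates the fix and triggers an alert; busybox 1.32.0 does not
```

Because the component map already exists, a newly disclosed advisory only needs one pass of this correlation to tell you which products are affected, which is what makes same-day alerting possible.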

Penetration Testing: Proving Security Beyond Documentation

Penetration testing also plays a critical role in reducing risk from opaque components. If binary analysis tells you what's in your product, then penetration testing proves how secure it is. 

Full-scope penetration testing becomes essential when dealing with opaque systems because it analyzes your product holistically rather than examining individual components in isolation.

Dynamic Analysis of Complete Systems

Finite State's red team conducts full-scope tests that holistically assess all aspects of the product, including firmware, hardware interfaces, APIs, and cloud integrations. When something's opaque, we treat it like an adversary would: testing, poking, and probing until we surface threats that static scans can't detect.

By looking at your system as a complete entity, penetration testing can uncover previously unknown threats such as missing controls, debug functionality, and insecure operations that first-party or third-party components may have introduced. This is particularly valuable for opaque vendors because it reveals vulnerabilities at the integration points between components, shedding light on issues that might not be apparent when examining components separately.

Reverse Engineering Hidden Functionality

The testing approach doesn't discriminate between your code and vendor code. If there's a command injection vulnerability buried in a binary blob from an upstream supplier, comprehensive penetration testing will find it. More importantly, it will demonstrate the real-world impact of that vulnerability in the context of your specific product.

Regulatory Compliance Evidence

From a regulatory compliance perspective, penetration testing provides the evidence packages that auditors want to see. It demonstrates due diligence and offers crucial liability protection by showing that you've taken reasonable steps to identify and address security risks, even in components where you don't have source code visibility.

 

The "Opaque Vendor Playbook": A Practical Risk Management Framework

Based on our experience working with regulated industries, here's a practical approach to operationalizing security for opaque components:

Before Procurement

  • Vendor security questionnaire 2.0: Go beyond compliance checkboxes to ask specific questions about build processes, dependency management, and vulnerability response
  • Binary analysis as part of vendor evaluation: Don't just ask vendors about their security practices—verify them with actual analysis
  • Contractual requirements for security transparency: Build in the right to perform binary analysis and penetration testing on components you're purchasing

During Integration

  • Generate and validate your own SBOMs: Rather than relying solely on vendor-provided documentation, create comprehensive bills of materials through binary analysis
  • Automated policy enforcement: Set clearly defined thresholds for acceptable risk levels—outdated libraries, licensing violations, cryptographic issues
  • Continuous vulnerability monitoring setup: Ensure you're immediately alerted when new threats emerge that affect your components
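The "automated policy enforcement" step above can be sketched as a simple gate in your build pipeline: every finding from the analysis is checked against explicit thresholds, and any violation fails the integration. The thresholds and field names below are illustrative assumptions, not a prescribed schema.

```python
# Sketch: policy gate over analysis findings. Thresholds are example values;
# tune them to your own risk appetite and regulatory obligations.
POLICY = {
    "max_cvss": 7.0,                   # fail on High/Critical severity
    "denied_licenses": {"AGPL-3.0"},   # licenses you can't ship under
    "max_component_age_years": 5,      # flag long-unmaintained components
}

def evaluate(findings, policy=POLICY):
    """findings: list of dicts with 'component', 'cvss', 'license', 'age_years'."""
    violations = []
    for f in findings:
        if f["cvss"] >= policy["max_cvss"]:
            violations.append((f["component"], f"CVSS {f['cvss']} exceeds threshold"))
        if f["license"] in policy["denied_licenses"]:
            violations.append((f["component"], f"denied license {f['license']}"))
        if f["age_years"] > policy["max_component_age_years"]:
            violations.append((f["component"], "component appears unmaintained"))
    return violations

findings = [
    {"component": "libssh", "cvss": 9.8, "license": "BSD-2-Clause", "age_years": 1},
    {"component": "old-parser", "cvss": 3.1, "license": "AGPL-3.0", "age_years": 7},
]
for component, reason in evaluate(findings):
    print(f"POLICY VIOLATION: {component}: {reason}")
```

The point of codifying the policy is consistency: the same thresholds apply to your code and to an opaque vendor's binary blob, so nothing gets a pass just because you didn't write it.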


Ongoing Management

  • Regular re-scanning for new vulnerabilities: The security landscape changes daily, so your component analysis needs to be ongoing rather than one-time
  • Vendor communication protocols: Establish clear escalation paths for when vulnerabilities are discovered in their components
  • Remediation validation and tracking: When vulnerabilities are found, you have several options:
    • Push issues upstream to vendors for patches (ideal but often challenging)
    • Implement binary-level patching when possible
    • Deploy compensating controls locally (e.g., input sanitization middleware for vulnerable components)
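To make the last option concrete, here is a minimal sketch of a local compensating control: when an upstream component can't be patched, wrap it so that only validated input ever reaches it. The "vulnerable parser" is a stand-in for any opaque component you can call but not fix, and the allowlist pattern is an example you would tailor to the component's legitimate inputs.

```python
# Sketch: input-sanitization wrapper around an unpatched upstream component.
# vulnerable_parser() stands in for an opaque vendor binary you cannot fix.
import re

MAX_INPUT_LEN = 256
SAFE_PATTERN = re.compile(r"^[A-Za-z0-9_.:\-]+$")  # allowlist, not denylist

def vulnerable_parser(payload):
    """Stand-in for the opaque component with a known injection flaw."""
    return f"parsed:{payload}"

def guarded_parse(payload):
    """Reject anything outside the allowlist before the component sees it."""
    if len(payload) > MAX_INPUT_LEN:
        raise ValueError("input too long")
    if not SAFE_PATTERN.match(payload):
        raise ValueError("input contains disallowed characters")
    return vulnerable_parser(payload)
```

For example, `guarded_parse("device-42")` passes through, while `guarded_parse("$(reboot)")` is rejected before the vulnerable code ever runs. An allowlist is the safer choice here: you only need to enumerate what legitimate input looks like, not anticipate every attack string.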

 

Breaking Through the Opacity Barrier

The opaque vendor problem isn’t going away. If anything, increasing specialization and global supply chain complexity will make it worse. But it’s a solvable problem with the right approach and tools. 

You can attempt to negotiate transparency requirements into contracts, but market realities don't always make this feasible. The most successful security teams we work with have shifted from trying to eliminate opaque vendors (impossible) to building robust processes for managing the risks they introduce (achievable). 

“Binary analysis and penetration testing aren't nice-to-have security enhancements; they are essential capabilities for modern product development.”

The key insight is that opaque doesn't have to mean insecure. With deep binary analysis and comprehensive penetration testing, you can achieve security visibility into any component, regardless of whether the vendor provides source code. The techniques and tools exist; it's just a matter of incorporating them into your security workflow.

The regulatory environment will only get more demanding around supply chain transparency. The companies that get ahead of this trend by implementing robust binary analysis and verification processes will have a significant competitive advantage. They'll be able to work with a broader range of suppliers, meet compliance requirements more easily, and most importantly, ship more secure products.

The Path Forward

Here's what we recommend:

  1. Assess your current opaque vendor exposure: Identify the components in your products where you have limited visibility
  2. Prioritize based on risk and regulatory requirements: Focus first on critical systems or those facing regulatory scrutiny
  3. Pilot binary analysis on your highest-priority components: Start small and demonstrate value
  4. Contact Finite State for a proof-of-concept: See how deep binary analysis works with your actual firmware and components

The goal isn't perfect transparency; it's to offer sufficient visibility to make informed security decisions and meet your compliance obligations. With the right approach, you can turn your opaque vendor relationships from a source of anxiety into a managed aspect of your security posture.

Ready to illuminate your supply chain blind spots? Contact Finite State to discover how we can help you secure components without source code access.