AI is everywhere now, but many businesses still hesitate to trust it. One major concern? Sensitive data like intellectual property and personal information might leak when used to train AI models. That fear is holding back enterprise adoption.
DataKrypto believes it has the answer. The company just launched FHEnom for AI, a homomorphic encryption framework designed to protect company data and custom models without slowing performance. Unlike traditional encryption, which requires data to be decrypted before it can be processed, FHEnom operates on encrypted data in real time, allowing AI models to function normally without ever seeing the raw data.
Keeping AI Blind to Your Data
At the core of FHEnom is fully homomorphic encryption (FHE) combined with trusted execution environments (TEEs). These two layers ensure that all data stays encrypted throughout processing—even while the AI is working on it.
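DataKrypto hasn't published FHEnom's internals, but the property that makes this possible is easy to demonstrate. Below is a minimal sketch using the open-source phe library, which implements Paillier encryption; Paillier is only additively homomorphic (a simpler cousin of full FHE), but it shows the essential trick of computing on ciphertext that only the key holder can read:

```python
# Minimal sketch of homomorphic computation using the open-source `phe`
# library (Paillier). Paillier supports only addition and scalar
# multiplication on ciphertexts -- a simpler cousin of the fully
# homomorphic encryption FHEnom is described as using.
from phe import paillier

public_key, private_key = paillier.generate_paillier_keypair()

# The processing side receives only ciphertexts...
enc_a = public_key.encrypt(42)
enc_b = public_key.encrypt(8)

# ...and can still compute on them without ever seeing 42 or 8.
enc_sum = enc_a + enc_b      # ciphertext + ciphertext
enc_scaled = enc_sum * 2     # ciphertext * plaintext scalar

# Only the key holder can read the results.
print(private_key.decrypt(enc_sum))     # 50
print(private_key.decrypt(enc_scaled))  # 100
```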
Here’s how it works: when a user sends a query, the client first connects to a TEE hosted by a third party such as AWS and receives an encryption key. The question is encrypted before it ever reaches the AI. The model then processes the encrypted input and sends back an encrypted answer, which the user decrypts on their end. The AI never sees the question, the answer, or the underlying data.
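FHEnom's actual API isn't public, so the following is only a rough sketch of that round trip using the open-source TenSEAL library (CKKS scheme); the query embedding and the model `weights` are illustrative placeholders, not anything DataKrypto has published:

```python
# Rough sketch of the encrypted round trip described above, using the
# open-source TenSEAL library (CKKS scheme). This is NOT FHEnom's API;
# the query embedding and `weights` are illustrative placeholders.
import tenseal as ts

# --- Client side: create keys and encrypt the query ---
context = ts.context(
    ts.SCHEME_TYPE.CKKS,
    poly_modulus_degree=8192,
    coeff_mod_bit_sizes=[60, 40, 40, 60],
)
context.global_scale = 2 ** 40
context.generate_galois_keys()

query_embedding = [0.12, -0.87, 0.45, 0.33]   # stand-in for an embedded query
enc_query = ts.ckks_vector(context, query_embedding)

# --- Server side: compute on ciphertext without ever seeing plaintext ---
weights = [0.5, -0.1, 0.9, 0.2]               # stand-in model parameters
enc_answer = enc_query.dot(weights)           # homomorphic dot product

# --- Client side: only the key holder can decrypt the answer ---
print(enc_answer.decrypt())                   # approximately the plaintext dot product
```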
Inside the TEE, DataKrypto places core parts of the model—like tokenizers and embedding layers—ensuring everything stays protected. The encryption keys remain locked inside the TEE, which means even if someone steals the model, all they’ll get is unreadable data.
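To see why a stolen model is worthless without that key, here's a toy stand-in using ordinary symmetric encryption from Python's cryptography library rather than FHE; the weights payload is hypothetical:

```python
# Toy illustration of why exfiltrated model data is useless without the
# TEE-held key. Ordinary symmetric encryption (Fernet) stands in for
# FHEnom's scheme; `model_weights` is a hypothetical payload.
from cryptography.fernet import Fernet, InvalidToken

enclave_key = Fernet.generate_key()           # never leaves the TEE
enclave = Fernet(enclave_key)

model_weights = b"serialized model weights"   # stand-in payload
sealed_weights = enclave.encrypt(model_weights)

# An attacker who steals `sealed_weights` but not the key gets nothing:
attacker = Fernet(Fernet.generate_key())      # wrong key
try:
    attacker.decrypt(sealed_weights)
except InvalidToken:
    print("ciphertext is unreadable without the enclave's key")
```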
Stops AI Poisoning Before It Starts
One of the biggest threats to AI systems is poisoning, in which an attacker feeds a model corrupted data to skew its results. FHEnom blocks this entirely: since only the TEE holds the decryption key, no outsider can inject usable training data. Without the key, the AI can't even understand the input.
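The same toy symmetric-encryption stand-in from above sketches that gatekeeping: anything not encrypted under the TEE-held key simply fails to decrypt and is rejected. The `ingest_training_example` helper is hypothetical, not a real FHEnom API:

```python
# Sketch of poisoning resistance: the TEE only ingests data that decrypts
# under its own key, so an outsider's injected bytes are rejected.
# Fernet stands in for FHEnom's scheme; `ingest_training_example` is a
# hypothetical helper, not a real FHEnom API.
from cryptography.fernet import Fernet, InvalidToken

tee_key = Fernet.generate_key()   # held only inside the TEE
tee = Fernet(tee_key)

def ingest_training_example(ciphertext: bytes) -> None:
    try:
        example = tee.decrypt(ciphertext)
    except InvalidToken:
        print("rejected: not encrypted under the TEE's key")
        return
    print("accepted:", example)

# A legitimate client encrypts under the TEE's key...
ingest_training_example(tee.encrypt(b"clean example"))

# ...while an attacker's forged payload can't even be parsed.
attacker = Fernet(Fernet.generate_key())
ingest_training_example(attacker.encrypt(b"poisoned example"))
```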
As DataKrypto’s CTO Luigi Caramico explains, “You can’t train or fine-tune the AI unless it can decrypt the data. If you lock down the TEE, you eliminate the risk of poisoning altogether.”
Even better, FHEnom keeps both model weights and user data in ciphertext at all times, while the TEE isolates sensitive processes inside a cryptographically verified zone. The result? A model that functions normally but remains sealed off from outside attacks.
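That "cryptographically verified zone" refers to attestation: before anyone trusts an enclave with keys, they check that its measured code matches a known-good value. Real TEEs such as AWS Nitro Enclaves or Intel SGX return signed attestation documents; the sketch below reduces the idea to a bare hash comparison, with the expected measurement and enclave images as hypothetical stand-ins:

```python
# Bare-bones sketch of the attestation idea behind a "cryptographically
# verified zone": provision keys only to an enclave whose measured code
# matches a known-good hash. Real TEEs use signed attestation documents;
# EXPECTED_MEASUREMENT and the enclave images are hypothetical stand-ins.
import hashlib

EXPECTED_MEASUREMENT = hashlib.sha256(b"trusted enclave build v1.0").hexdigest()

def verify_enclave(enclave_image: bytes) -> bool:
    """Return True only if the enclave's measurement matches the known-good hash."""
    return hashlib.sha256(enclave_image).hexdigest() == EXPECTED_MEASUREMENT

if verify_enclave(b"trusted enclave build v1.0"):
    print("measurement matches: safe to provision keys")

if not verify_enclave(b"tampered enclave build"):
    print("measurement mismatch: withhold keys")
```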
This isn’t a full enterprise data security system—it won’t protect files before they enter the TEE. Companies still need access controls and encryption for internal systems. But within the AI workflow, FHEnom offers an airtight shield.
As Caramico puts it, “We solve AI’s core vulnerabilities—data leakage, model theft, and tampering—without compromising speed.” In an AI world driven by speed and scale, FHEnom might be the missing piece that lets companies finally trust the tools they build.