Posted on

AI coding assistants write a lot of JavaScript. They install packages, modify files, and run commands without you reviewing every line. That's convenient until a compromised dependency decides to read your SSH keys or exfiltrate your environment variables.

Deno is a modern JavaScript and TypeScript runtime created by the same person who built Node.js. While Node.js became the standard for server-side JavaScript, it wasn't designed with security in mind. Deno was built from the ground up to address this, most notably through its permission system.

Node.js runs everything with full system access by default. When you type npm install, you're trusting hundreds of transitive dependencies with your entire machine. In an era where AI generates and executes code automatically, that's a problem.

Deno takes a different approach. It's secure by default.

hooded

So... it just blocks everything? How does that work?

Unless you explicitly grant permission, Deno code can't touch the filesystem, make network requests, or access environment variables.

How Deno's Permissions Work

When you run a Deno script, you decide what it's allowed to do:

# No permissions - code runs in a sandbox
deno run script.ts

# Read-only access to specific files
deno run --allow-read=./data script.ts

# Network access to specific domains only  
deno run --allow-net=api.github.com script.ts

# Environment variables with wildcards
deno run --allow-env="AWS_*" script.ts

Compare that to Node.js, where npm install and node script.js immediately grant full system access. A malicious package can read any file, make any network request, or spawn subprocesses without asking.

Here's how Deno blocks unauthorized access:

  flowchart LR
    User["User runs
deno install"] --> Fetch["Fetches
malicious code"] Fetch --> Run["Code runs with
--allow-net only"] Run --> Check{"Deno checks
permissions"} Check -->|no --allow-read| Block1["❌ Blocked:
read secrets"] Check -->|no --allow-write| Block2["❌ Blocked:
write files"] Check -->|not in list| Block3["❌ Blocked:
fetch evil.com"] Check -->|in allow list| Allow["✅ Allowed:
fetch api.github.com"] Allow --> Network["api.github.com"] style Block1 fill:#ffcccc style Block2 fill:#ffcccc style Block3 fill:#ffcccc style Allow fill:#ccffcc

Why This Matters for AI Code

When Claude or Cursor generates code and runs it automatically, you don't see every import statement. You don't audit every transitive dependency. You're running untrusted code on your machine with your credentials.

hooded

Wait, so if an AI-generated script tries to read my .env file, Deno blocks it?

Exactly. Unless you explicitly granted --allow-read for that file or directory, the code can't access it. The script crashes instead of silently stealing your secrets.

This is a game-changer for supply chain attacks. Remember the colors.js incident where a maintainer pushed malicious updates that wiped files? In Node.js, those attacks work because packages have unlimited access.

In Deno, even if you install a compromised package, it can't do much without permissions you explicitly granted.

Real Attack Scenarios

Scenario 1: The Axios supply chain compromise (April 2026)

An attacker hijacked an Axios maintainer's npm account and published malicious versions (1.14.1 and 0.30.4). The compromised package installed a hidden dependency called plain-crypto-js that dropped a cross-platform Remote Access Trojan (RAT) on install via a post-install script.

  • Node.js: The npm install axios command triggered the post-install script automatically. The malware downloaded platform-specific RAT payloads, established persistence, and started beaconing to a command-and-control server at sfrclak[.]com. The attack completed in roughly 15 seconds before cleaning up its traces.
  • Deno: Even if you import the compromised package via npm:axios, the post-install script wouldn't execute without --allow-scripts. The malware's attempts to download additional payloads would fail without --allow-net, file system persistence mechanisms would be blocked without --allow-write, and credential theft attempts would crash without --allow-env or --allow-read.

Scenario 2: The Shai-Hulud 2.0 worm (November 2025)

A self-replicating worm infected npm packages by injecting malicious pre-install scripts. The malware harvested credentials from .npmrc, GitHub PATs, AWS/GCP/Azure keys, and used stolen npm tokens to propagate by infecting other packages owned by victims. It even had a "dead man's switch" that would destroy the user's home directory if it lost access to its infrastructure.

  • Node.js: The pre-install script (setup_bun.js) executed automatically during npm install. It downloaded Trufflehog to scan the entire home directory for secrets, created public GitHub repositories to exfiltrate stolen credentials, and propagated to other packages maintained by the victim. If the malware lost access to GitHub and npm simultaneously, it triggered destructive commands to overwrite and delete all user files.
  • Deno: The pre-install script wouldn't run without explicit permission. Trufflehog couldn't scan the home directory without --allow-read=/home/user. Exfiltration to GitHub would fail without --allow-net. The worm couldn't create new repositories or publish infected packages without --allow-run for git and npm CLI access. The destructive payload would be blocked by the same permission boundaries.

Scenario 3: The dependency confusion attack

An internal package name gets squatted on npm. Your AI agent installs the wrong package.

  • Node.js: Package executes with full privileges immediately.
  • Deno: Package can load, but can't act on your system without permission flags you didn't provide.

Practical Trade-offs

Deno's security isn't free. You have to think about what permissions each script actually needs:

# Instead of this (Node.js style)
deno run -A script.ts  # Grants all permissions

# You do this
deno run --allow-read=./src --allow-net=api.github.com script.ts

That friction is the point. It forces you to consider what code can do before it runs.

For AI workflows specifically, you can create constrained profiles:

# Safe AI code execution profile
deno run \
  --allow-read=./project \
  --allow-write=./project/tmp \
  --allow-net=api.github.com,jsr.io \
  --deny-env=AWS_*,SECRET_* \
  ai-generated-script.ts

Even if the AI writes malicious code or a dependency is compromised, the blast radius is limited.

What About npm?

Deno can run npm packages via npm: specifiers, but it still respects permission boundaries:

import axios from "npm:axios";

The npm package runs in the same Deno sandbox. If it tries to read files or make network calls without permission, it fails just like any other code.

Deno also skips npm post-install scripts by default (--allow-scripts required), eliminating a common attack vector where packages execute arbitrary code during installation. This alone would have blocked both the Axios and Shai-Hulud attacks, which relied on post-install and pre-install scripts respectively.

Where Deno's Permissions Fall Short

Deno's security model isn't bulletproof. There are gaps worth knowing about:

Subprocesses break the sandbox. When you grant --allow-run, the spawned process runs completely outside Deno's restrictions. A malicious script with --allow-run=deno could spawn a new Deno process with full permissions, essentially bypassing everything.

Native code isn't restricted. The --allow-ffi flag lets you load dynamic libraries (C, Rust, etc.). Those libraries run without any sandbox, same as subprocesses. If a compromised npm package uses native addons, they execute at full privilege.

Code can still execute arbitrarily. Deno doesn't restrict eval(), new Function(), or dynamic imports at the same permission level. If you grant --allow-read=./project, malicious code can read any file in that directory and execute it via eval() or import().

It's not a complete solution. The Deno manual explicitly recommends using Deno permissions as just one layer: combine them with OS-level sandboxing (chroot, seccomp), containers, or VMs for truly untrusted code.

hooded

So it's not perfect?

No. It's a significant improvement over Node.js, but you still need to think about what permissions you grant. The model forces you to be explicit, which is the point.

The Bottom Line

Node.js wasn't designed for an era where code is generated and executed automatically. Its "everything allowed by default" model made sense when you wrote and audited every line yourself. It doesn't make sense when AI agents are writing and running code you never see.

The 2026 Axios compromise and the Shai-Hulud worm demonstrate how quickly supply chain attacks can spread when dependencies have unrestricted access to your system. Both attacks would have been significantly hampered by Deno's permission model.

Deno's permission model isn't perfect. It adds friction. You have to specify what code can do. But that friction is exactly what prevents supply chain attacks from becoming system compromises.

If you're running AI-generated JavaScript, or installing dependencies you don't personally audit, Deno's approach deserves a serious look.

hooded

So I could run an AI agent in Deno and know it can't trash my system?

If you set the permissions correctly, yes. That's more than Node.js can offer.

References

Comments

Table of Contents