Nodepp is the end of excuses. For too long, developers have settled for fragmented glue-ware or managed runtimes that treat hardware resources like garbage. Nodepp is a vertically-integrated C++ framework that proves you don't need a massive Virtual Machine or a bloated Garbage Collector to write high-level async code.
While other runtimes are busy thrashing the CPU, burning millions of cycles on garbage collection, context switching, and runtime management, Nodepp focuses on Pure Execution. It provides a unified architecture where every module shares the same high-efficiency DNA, scaling from an 8-bit Arduino to an Intel XEON cloud server.
```
NODEPP UNIFIED ARCHITECTURE: CO-DESIGNED COMPONENT MODEL
=========================================================
 [ APPLICATION LAYER ]   Logic: High-Level Async
          ||
+---------||--------------------------------------------+
|         ||         UNIFIED ptr_t DATA CARRIER         |
|         ||       (Zero-Copy / Reference Counted)      |
|         \/                                            |
|  [ PROTOCOL LAYER ]  Protocol Layer: HTTP / WS / TLS  |
|         ||           Parser: ptr_t Slicing            |
|         ||                                            |
|         \/                                            |
|  [ REACTOR LAYER ]   Reactor Layer: kernel_t          |
|         ||           Engine: Epoll/KQUEUE/IOCP/NPOLL  |
+---------||--------------------------------------------+
          ||
          \/          OS Layer: LINUX / WINDOWS / MAC
 [ HARDWARE / KERNEL ]  Source: Sockets / Registers
```
Nodepp: Closing the Gap Between Bare-Metal Performance and Scripting Agility through Silicon-Logic Parity.

Read the full technical breakdown, including architectural deep-dives into `ptr_t`, `kernel_t`, and `coroutine_t`.
- 1. Deterministic RAII (`ptr_t`): Eliminates the unpredictable latency spikes (Stop-the-World pauses) of Garbage Collectors. By combining Small Stack Optimization (SSO) with reference counting, memory is reclaimed with microsecond precision.
- 2. Cooperative Multitasking (`coroutine_t`): Stackless coroutines eliminate context-switching overhead. This allows for massive connection density on low-power hardware, from 8-bit industrial sensors to cloud-scale reactors.
- 3. Platform-Agnostic Reactor (`kernel_t`): A unified abstraction over native kernel I/O (Epoll, Kqueue, IOCP, and Npoll). It provides a consistent non-blocking interface across Linux, Windows, Mac, and Bare-Metal, ensuring that I/O multiplexing is always native to the silicon.
We didn't test this on a supercomputer. We tested it on an educational-grade dual-core Apollo Lake potato. If your framework can't perform here, it's not "scalable", it's just hiding behind hardware.
1 - Performance Benchmark: HTTP Throughput vs. Resource Tax
Test: 100k requests | 1k concurrency | Environment: localhost (see benchmark)
| Metric | Bun (v1.3.5) | Go (v1.18.1) | Nodepp (V1.4.0) | Impact |
|---|---|---|---|---|
| Requests / Sec | 5,985 | 6,139 | 6,851.33 | +11.6% Performance |
| Memory (RSS) | 69.5 MB | 14.1 MB | 2.9 MB | 95.8% Reduction |
| Max Latency | 1,452 ms | 326 ms | 245 ms | Elimination of GC Spikes |
| p99 Latency | 1,159 ms | 249 ms | 187 ms | High-precision SLA stability |
| Energy Efficiency | Low | Medium | Extreme | Maximum hardware utilization |
2 - Performance Benchmark: Resource Management & Latency Jitter Analysis
Test: 1k cycles | 100k allocations (see benchmark)
| Runtime | Avg. Cycle Time | VIRT (Address Space) | RES (Physical RAM) | Memory Model |
|---|---|---|---|---|
| Nodepp | 3.0 ms (±0.1 ms) | 6.1 MB | 2.7 MB | Deterministic RAII |
| Bun | 7.2 ms (5-11 ms range) | 69.3 GB | 72.6 MB | Generational GC |
| Go | < 1.0 ms* | 703.1 MB | 2.2 MB | Concurrent GC |
Note: Go's <1 ms figure is misleading: it measures only allocation latency. Reclamation is deferred to concurrent GC cycles, creating "ghost" resource pressure.
3 - Performance Benchmark: High-Concurrency 100k Task Challenge
Test: 100k asynchronous tasks (see benchmark)
| Runtime | RSS (Memory) | CPU Load | VIRT Memory | Strategy |
|---|---|---|---|---|
| Nodepp (Balanced) | 59.1 MB | 75.9% | 153 MB | Multi-Worker Pool |
| Nodepp (Single) | 59.0 MB | 59.9% | 62 MB | Single Event Loop |
| Bun | 64.2 MB | 24.2% | 69.3 GB | JavaScriptCore Loop |
| Go | 127.9 MB | 169.4% | 772 MB | Preemptive Goroutines |
4 - Performance Benchmark: Nodepp Stability & Memory
Test: 4 Valgrind-based stress tests (see benchmark)
| Test Case | Objective | Iterations / Load | Memory Leaks | Result |
|---|---|---|---|---|
| Atomic Longevity | High-concurrency HTTP | 100k requests | 0 bytes | PASSED |
| Rapid Lifecycle | Smart Pointer stress | 1M object cycles | 0 bytes | PASSED |
| Broken Pipe | Resilience to I/O failure | 100k interruptions | 0 bytes | PASSED |
| Multi-Thread Atomicity | Race-condition stress | 100k messages * 2 workers | 0 bytes | PASSED |
The Nodepp Project did not originate in a laboratory; it was forged in the trenches of mission-critical Edge Computing and WASM development. While architecting ecosystems that bridge ESP32 hardware, web browsers, and cloud infrastructure, we identified a systemic crisis: the forced fragmentation of a single body of business logic across three incompatible execution environments.
- The Edge: Native C/C++ for low-level hardware (High performance, near-zero agility).
- The Frontend: JavaScript/WASM for browser interfaces (High agility, massive memory churn).
- The Infrastructure: Managed Runtimes like Python, Go, or Node.js for server-side orchestration (High operational cost, unpredictable latency due to Garbage Collection).
Nodepp was built to collapse these silos. By providing a unified, asynchronous C++ runtime that mirrors the productivity of scripting languages, we enable Resource-Dense Computing.
```cpp
#include <nodepp/nodepp.h>
#include <nodepp/http.h>

using namespace nodepp;

void onMain() {

    fetch_t args;
    args.method = "GET";
    args.url    = "http://ip-api.com/json/?fields";

    args.headers = header_t({
        { "Host", url::host(args.url) }
    });

    http::fetch( args )

    .then([]( http_t cli ){
        auto data = stream::await( cli );
        console::log("->", data.value());
    })

    .fail([]( except_t err ){
        console::error( err );
    });

}
```

We restore the direct relationship between code and hardware through Deterministic RAII and Stackless Coroutines, allowing you to deploy the same high-level logic from an 8-bit microcontroller to a 64-core cloud reactor without changing your mental model.
Still Skeptical?
Watch logic parity in action. This isn't a "concept": it's a fully functional Enigma Machine running on a literal potato board:
ezgif-7f4dec232396a556.mp4
Nodepp abstracts complex socket management into a clean, event-driven API.
```cpp
#include <nodepp/nodepp.h>
#include <nodepp/regex.h>
#include <nodepp/http.h>
#include <nodepp/date.h>
#include <nodepp/os.h>

using namespace nodepp;

void onMain() {

    auto server = http::server([]( http_t cli ){

        cli.write_header( 200, header_t({
            { "content-type", "text/html" }
        }) );

        cli.write( regex::format( R"(
            <h1> hello world </h1>
            <h2> ${0} </h2>
        )", date::fulltime() ));

        cli.close();

    });

    server.listen( "0.0.0.0", 8000, []( socket_t /*unused*/ ){
        console::log("Server listening on port 8000");
    });

}
```

The Nodepp project is supported by a suite of modular extensions designed to follow the same unified design patterns:
- Data Parsing: XML.
- Tor: Torify, JWT.
- Security: Argon2.
- Web: ExpressPP, ApifyPP.
- IoT/Embedded: SerialPort, Bluetooth.
- Databases: Redis, Postgres, MariaDB, SQLite.
Nodepp is the only framework that lets you share logic between the deepest embedded layers and the highest web layers.
- Hardware: Nodepp for Arduino
- Desktop: Nodepp for Desktop
- Browser: Nodepp for WASM
- IoT: Nodepp for ESP32
Nodepp is an open-source project that values Mechanical Sympathy and Technical Excellence.
- Sponsorship: Support the project via Ko-fi.
- Bug Reports: Open an issue via GitHub.
- License: MIT.
Nodepp is distributed under the MIT License. See the LICENSE file for more details.