Interactive explorer for the EML operator eml(x, y) = exp(x) − ln(y), which together with the constant 1 generates every elementary function found on a scientific calculator.

EML Explorer

One binary operator, one constant, and every elementary function falls out.

In March 2026, Andrzej Odrzywołek posted a short paper showing that a single binary operator — he calls it eml, for exp-minus-log — is enough, together with the constant 1, to express every elementary function on a scientific calculator. Addition, multiplication, roots, exp, ln, sine, cosine, and the constants e, π, i all reduce to nested applications of eml(x, y) = exp(x) − ln(y). The grammar is as small as a grammar gets: S → 1 | eml(S, S). Every expression is a binary tree of identical nodes.
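The grammar is small enough to run directly. Below is a minimal evaluator sketch (my own illustration, not the paper's eml_compiler_v4.py) that represents an expression as nested tuples, with the playground's extension that leaves may also be the variables x and y:

```python
import math

def eml(x, y):
    # The single operator: eml(x, y) = exp(x) - ln(y)
    return math.exp(x) - math.log(y)

def evaluate(expr, env=None):
    """Evaluate a tree from the grammar S -> 1 | eml(S, S),
    extended with variable leaves as in the playground.

    An expression is the literal 1, a variable name like 'x',
    or a tuple ('eml', left, right).
    """
    env = env or {}
    if expr == 1:
        return 1.0
    if isinstance(expr, str):
        return env[expr]
    op, left, right = expr
    assert op == 'eml'
    return eml(evaluate(left, env), evaluate(right, env))

# exp(x) is the one-node tree eml(x, 1), since ln(1) = 0
tree = ('eml', 'x', 1)
print(evaluate(tree, {'x': 2.0}))  # ≈ 7.389, i.e. exp(2)
```

Every expression in the explorer is some such tuple tree; the compiler's job is only to find a tree whose value matches a target function.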

Playground
Type eml(...), the literal 1, and the variables x and y. The y slider appears when your expression uses it.
Readouts: Value · Tree depth · Nodes · Pure (only 1, x, y, eml)

Expression tree


Plot · f(x) over [−3, 3]

Dependency map · every scientific-calculator primitive, derived from eml + 1

This is the paper's Figure 1, recreated live. The center is eml, paired with the terminal 1. Every other node is an elementary primitive — a constant, a function, or a binary operation — expressible as a tree of eml calls over 1 and the input variables. The thin lines between nodes are the dependency graph, extracted directly from the paper's eml_compiler_v4.py: an edge from a to b means a appears in b's compiled definition. Hover any node to see its full subgraph of dependencies (faded edges drop away); click to load its compiled EML form into the playground above.
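The edge rule ("a appears in b's compiled definition") is simple enough to sketch. The dictionary below is an illustrative stand-in, not the paper's actual definition table; only the scanning logic matters:

```python
import re

# Hypothetical stand-in for compiled definitions keyed by
# primitive name; bodies here are placeholders, not the
# paper's real compiled forms (except exp and ln).
definitions = {
    'exp': 'eml(x, 1)',
    'ln':  'eml(1, eml(eml(1, x), 1))',
    'inv': 'mul(exp(neg(1)), ...)',  # placeholder body
}

def dependency_edges(defs):
    """Yield (a, b) whenever primitive a appears as an
    identifier inside b's definition string."""
    for b, body in defs.items():
        tokens = set(re.findall(r'[A-Za-z_]\w*', body))
        for a in defs:
            if a != b and a in tokens:
                yield (a, b)

print(sorted(dependency_edges(definitions)))  # [('exp', 'inv')]
```

Running this over the full definition table would reproduce the thin dependency lines in the map above.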

The outer ring gets large: tan(x) compiles to a tree of nearly 600 nodes. The paper shows (Table 4) that direct search finds much shorter forms, so these compiled sizes are upper bounds on complexity, not minima.
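The node and depth counts reported by the explorer are one-line recursions. A sketch, assuming expressions are nested tuples ('eml', left, right) with leaves 1, 'x', or 'y' (a hypothetical encoding, not the explorer's internal one):

```python
def size_and_depth(expr):
    """Return (node count, depth) of a tree whose internal nodes
    are ('eml', left, right) and whose leaves are 1, 'x', or 'y'."""
    if not isinstance(expr, tuple):
        return 1, 0                 # a leaf: one node, depth 0
    _, left, right = expr
    ln_, ld = size_and_depth(left)
    rn, rd = size_and_depth(right)
    return 1 + ln_ + rn, 1 + max(ld, rd)

# The seven-node ln(x) tree derived later on this page:
ln_tree = ('eml', 1, ('eml', ('eml', 1, 'x'), 1))
print(size_and_depth(ln_tree))  # (7, 3)
```

The same count on the compiled tan(x) tree is what yields the ~600-node figure quoted above.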
How to translate a function you know

You type expressions in the playground using three things only: the operator eml(·, ·), the literal 1, and the variables x and y.

To express a function you already know, rewrite it using only exp and ln, then collapse those into eml nodes. Here are the core moves — click any row to load it into the playground:

Familiar · EML form · Why it works

Worked derivation · ln(x)

1. Start with what eml does: eml(a, b) = exp(a) − ln(b).
2. We want the whole tree to equal ln(x). Try fixing the left arm to 1: then eml(1, b) = e − ln(b).
3. For this to equal ln(x), we need ln(b) = e − ln(x), i.e. b = exp(e − ln(x)) = exp(e) / x.
4. We still need a sub-tree that computes exp(e)/x. Notice: eml(eml(1, x), 1) = exp(eml(1, x)) − ln(1) = exp(e − ln(x)) − 0 = exp(e)/x. ✓
5. Plug it back in as b.
ln(x) = eml(1, eml(eml(1, x), 1))  — depth 3, 7 nodes. A one-button function on your calculator becomes a small tree of identical gates.
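The derivation is easy to check numerically. A quick sketch verifying both the exp identity (eml(x, 1) = exp(x), since ln 1 = 0) and the seven-node ln tree derived above:

```python
import math

def eml(x, y):
    # eml(x, y) = exp(x) - ln(y)
    return math.exp(x) - math.log(y)

def ln_eml(x):
    # ln(x) = eml(1, eml(eml(1, x), 1)), the tree derived above:
    # inner eml(1, x) = e - ln(x); wrapping it in eml(., 1) gives
    # exp(e - ln(x)) = exp(e)/x; the outer eml(1, .) recovers ln(x).
    return eml(1, eml(eml(1, x), 1))

for x in (0.5, 1.0, 2.0, 10.0):
    assert math.isclose(ln_eml(x), math.log(x), rel_tol=1e-9, abs_tol=1e-9)
    assert math.isclose(eml(x, 1), math.exp(x), rel_tol=1e-9)
print("identities hold")
```

Note the intermediate value exp(e)/x: the construction is exact algebraically but routes through large magnitudes, a small taste of the numerical awkwardness discussed below.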
Is this actually new?

Not as new as it might look. The pattern of finding a minimal generating set for everything is a recurring obsession in 20th-century math and CS, and eml is best read as a clever new entry in a long line. The Boolean version dates to 1913: Sheffer showed that NAND alone (equivalently, Peirce's NOR) generates all of propositional logic, which is why any digital circuit can, in principle, be built entirely from NAND gates. The continuous-math version came in 1956–57, when Kolmogorov and Arnold answered Hilbert's 13th problem by proving that every continuous multivariate function on a compact set is a finite composition of continuous univariate functions and addition: exactly the "all math is addition" intuition, formalized. That theorem is currently having a moment in machine learning as the theoretical basis for Kolmogorov–Arnold Networks, proposed in 2024 as an alternative to standard MLPs.

The same impulse produced an embarrassment of one-operator computational universality results: Schönfinkel's combinators (1924) and the later single-combinator Iota; lambda calculus; SUBLEQ and other one-instruction-set computers that hobbyists have built actual silicon for; Conway's FRACTRAN; Cook's proof that Wolfram's rule 110 is Turing-complete. The "everything reduces to one thing" reflex is more than a century old and has been productive across logic, computation, and continuous analysis alike.

What's specifically new here is the particular fusion. exp and ln were already known to be sufficient for elementary functions — via tricks like x · y = exp(ln x + ln y) — but Odrzywołek bundles them into a single binary operation and writes the compiler that produces the trees. The existence proof was essentially folklore in symbolic computation; the explicit translator is the contribution. The branch-cut wobble around i that the paper flags in §4.1 hints at why nobody had bothered earlier: the construction is clean but numerically awkward, with no obvious payoff over keeping a normal function library. Its value is pedagogical and aesthetic — it makes the "everything is one operator" idea concrete in a way the older results don't, because you can click on sin(x) and watch 799 identical eml nodes assemble into a sine wave.

Paper: Odrzywołek (2026), All elementary functions from a single binary operator, arXiv:2603.21852. Compiled expressions from VA00/SymbolicRegressionPackage. Explorer: chiragpatnaik.com.