High Performance 3D Animation with React + rxjs

Ben (@vivavolt), August, 2021

You know when you load a website and it has a bunch of fancy visualisations that respond to mouse and scroll position with animation? For most of the web's history creating experiences like these has either been impossible or required masochistic determination.

Trying to replicate an apple.com product page is an extreme test of sanity

It used to be difficult to create pretty much any interface in the browser. Efforts like React, Vue, Svelte, Solid and friends have trivialised the jQuery battles of the past. Now we can express our interfaces declaratively, as a function of state -> view.

In fact React has even let us port this idea to the third dimension with react-three-fiber, a wonderful library that uses three.js as a custom React render target.

const ColoredBox = () => {
  const [toggled, setToggled] = useState(false)
  return (
    <mesh onClick={() => setToggled(!toggled)}>
      <boxGeometry args={[1, 1]} />
      <meshStandardMaterial color={toggled ? 'blue' : 'red'} />
    </mesh>
  )
}

Try panning, scrolling and clicking

This is staggeringly little code to implement in-browser 3D. We get a lot for free here courtesy of React's Virtual DOM (VDOM) diffing, suspense and state management. There is, however, a sticking point.

VDOM-based renderers are a perfect match for managing changes to a render tree but hold no advantage when rapidly changing state on a single node. When it comes to rendering 60, 120 or 240 times a second every millisecond adds up fast. In React changing a single DOM attribute still incurs the entire VDOM render and comparison process, when we could simply assign that property directly. This is essentially what React animation libraries do internally, hiding behind a kinder API.

The Future of User Interaction on the Web


declarative (adj.): Denoting high-level programming languages which can be used to solve problems without requiring the programmer to specify an exact procedure to be followed. "Focus on the what, not the how."

We love our VDOM renderers because we can be declarative about our interfaces, keeping the focus of our code on what we want to happen rather than how to accomplish it. I've been wondering, with libraries like react and react-three-fiber combined with rising support for webgl, wasm and wgpu, are we on the path to far richer interactions in the browser? If so, does this mean we'll have to give up our lovely declarative components for imperative animation logic?

As a game developer I work with a few common game engines and none of them can be considered declarative. In a typical game the graph of data dependency is far wider and denser than a web app and as a result most game engines prioritise performance over clarity. So, how can we get the best of both worlds? Declarative, composable animation logic with as-fast-as-possible state updates.

There are many possible answers here. Programmatic animation is a whole sub-specialty within user interface development: tweens, timelines, easing functions, springs, cancellation, the FLIP method... There's a lot of jargon 😵‍💫.

Thankfully we have great libraries like framer-motion, react-spring and GSAP that offer us different abstractions for wrangling animation logic. That said, we can learn a lot more about animation by implementing our own approach. I find that animation libraries tend to trade-off between flexibility and ease-of-use, which is perfectly fine, but if we're going to bring videogame quality interactions to the web we need both.

Is there another conceptual approach to animation yet to be popularised? Recently I came across samsarajs, a library designed for continuous user interfaces. That is, interfaces that may never be "at rest" and are constantly reacting to changes in data. The project is rooted in functional reactive programming or FRP.

Briefly, FRP is focused on one main concept: the data stream.


A sequence of values distributed over some amount of time

What values? How much time? Those are up to the specific instance. We can have a stream of anything: keyboard events, chunks of data from a websocket, jobs in a worker queue or even a single constant number. Streams can be infinite or they can end at our choosing and in turn we can take just a few items from a stream or subscribe to updates indefinitely. Libraries like rxjs provide an algebra for working with streams, letting us mix them together, pluck out select elements and aggregate data over time. If you want to go deeper into streams I recommend this guide from Andre Staltz.

In my experience reactions to FRP are mixed. Many people are scared away by its abstract nature, some fear it encourages tightly wound spaghetti code and a dedicated few believe it is the future of programming. I think it's all of the above, FRP is powerful and like any powerful abstraction it is open to abuse. When you have a nuclear-powered ultra-hammer everything looks like an ultra-nail.

Regardless, samsarajs's fundamental insight is that the layout of an application can be modelled as a stream[ref]. Selfishly, I immediately wondered if I could apply this to my problem.

Animation can also easily be modelled as a stream[ref]; it's almost in the definition:


A series of frames shown in succession over time, creating the illusion of movement.

Combining this with input streams from the user we can create a unified model of user intention -> data mutation -> animated visualisation.

The "dialogue" abstraction, image provided by the cycle.js docs.

This model is heavily inspired by cycle.js which is one of the most mindblowing frameworks around even after 7+ years of development. The cycle described by cycle.js from sources to sinks is a conceptual model that I find myself using in every interface, generative artwork or game I create. It keeps UX at the front-of-mind by including the user as a formal entity, unified with other sources of input like the disk, network or clock.

So with all that said, is there a way to use FRP and react-three-fiber to create performant, declarative animations? Let's find out.


Alright, here's the meaty part. I'm using React and react-three-fiber for rendering and rxjs to provide our streams. My implementation focuses on three core concepts:

  • useObservable: values to animate
  • interpolator: how to transition between values
  • useAnimation: performant rendering of animations


You might've heard of observables before; the base concept is simple:


A variable which notifies subscribed listeners when the internal value is changed. Listeners can subscribe and unsubscribe from change notifications on demand.

We can declare a new observable value using our hook, useObservable:

const scale = useObservable(1)

Once we declare an observable we can update it by calling scale.set(2) or scale.swap(x => x + 1). This will change the underlying value and send an update event down the scale.changes stream.

// Log the change stream, but only if scale > 1
scale.changes
  .pipe(filter(x => x > 1))
  .subscribe(x => console.log(`it's ${x}!`));

scale.set(2);
// => it's 2!
scale.swap(x => x + 0.5);
// => it's 2.5!

In ReactiveX terminology, this is a Subject<T> wrapped up for easy consumption from React.
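To make those semantics concrete, here's a minimal, dependency-free sketch of what such an observable could look like. This is purely illustrative; the names `observable`, `set`, `swap` and the listener bookkeeping are stand-ins for what the real rxjs-backed `useObservable` provides:

```typescript
// A minimal observable sketch: holds a value and notifies
// subscribers on every change. Illustrative only -- the article's
// useObservable wraps an rxjs Subject instead.
type Listener<T> = (value: T) => void

const observable = <T>(initial: T) => {
  let value = initial
  const listeners = new Set<Listener<T>>()
  const notify = () => listeners.forEach((l) => l(value))
  return {
    value: () => value,
    // replace the value outright
    set: (next: T) => {
      value = next
      notify()
    },
    // derive the next value from the current one
    swap: (fn: (current: T) => T) => {
      value = fn(value)
      notify()
    },
    // subscribe to the change stream; returns an unsubscribe function
    subscribe: (listener: Listener<T>) => {
      listeners.add(listener)
      return () => listeners.delete(listener)
    },
  }
}

const scale = observable(1)
const seen: number[] = []
scale.subscribe((v) => seen.push(v))
scale.set(2)
scale.swap((x) => x + 0.5)
// seen is now [2, 2.5]
```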


type Interpolator = {
  end: number,
  sample: (t: number) => number
}

const demo: Interpolator =
  interpolator(0, 1, 'easeOutCubic')

An interpolator acts as a translation layer between different numerical ranges. They typically take the form of functions accepting a value t from 0...1 and outputting another value from 0...1. This might sound familiar if you've heard of easing functions, which are almost ubiquitous in programmatic animation:

Comparison of various common easing functions, via Noisecrime on the Unity Forums
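An easing function is just a shaping function on the normalised 0...1 interval. As a quick sketch (using the standard easeOutCubic formula from the common easings catalogue, not necessarily the article's exact implementation):

```typescript
// easeOutCubic: starts fast, decelerates into the target.
// Maps t in [0, 1] to an eased value in [0, 1].
const easeOutCubic = (t: number): number => 1 - Math.pow(1 - t, 3)

// A linear "easing" for comparison -- the identity function.
const linear = (t: number): number => t

easeOutCubic(0)   // => 0
easeOutCubic(0.5) // => 0.875
easeOutCubic(1)   // => 1
```

Both endpoints stay fixed at 0 and 1; only the shape of the journey between them changes.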

Our interpolators are almost identical except for two important properties:

1. Remapping

const linear = interpolator(0, 1, 'linear')
console.log(linear(0), linear(0.5), linear(1))
// => 0, 0.5, 1
const warped = mapInterpolator(linear, -2, 4)
console.log(warped(0), warped(0.5), warped(1))
// => -2, 1, 4

This is important when we apply an animation. We'll animate values with certain curves between 0...1 but in practice we want to translate that into whatever range is relevant. We might want to animate a box's width between 32px and 400px but until the point of actually applying the animation we can preserve our sanity by using the normalised 0...1 range.
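Remapping is a single linear transform applied after the easing step. A minimal sketch (this standalone `mapInterpolator` is a hypothetical simplification of the article's version, taking a bare sampling function rather than an Interpolator object):

```typescript
// Remap a normalised 0...1 curve onto an arbitrary [start, end] range.
const mapInterpolator =
  (sample: (t: number) => number, start: number, end: number) =>
  (t: number): number =>
    start + (end - start) * sample(t)

const linear = (t: number): number => t
const warped = mapInterpolator(linear, -2, 4)
warped(0)   // => -2
warped(0.5) // => 1
warped(1)   // => 4
```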

2. Composition

You can combine interpolators in many useful ways. We might want to add them together, subtract them, multiply them or sequence them one after the other.

Currently I've only written the sequence composition, but it demonstrates the principle.

const bounce = sequence(
  interpolator(0, 1.2, 'easeOutCubic'),
  interpolator(1.2, 1, 'easeOutCubic')
)
console.log(bounce(0), bounce(0.5), bounce(1))
// => 0, 1.2, 1
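One way to implement `sequence` is to split the time domain evenly and delegate each half to one child interpolator. Here's a sketch under two assumptions that may not match the library: easings are passed as plain functions rather than by name, and both children get equal duration:

```typescript
// An interpolator reduced to a plain sampling function over [0, 1].
type Sample = (t: number) => number

// Build a 0...1 curve that travels from `start` to `end`.
const interpolator =
  (start: number, end: number, ease: Sample): Sample =>
  (t) =>
    start + (end - start) * ease(t)

// Play two interpolators back to back, each taking half the time.
const sequence =
  (first: Sample, second: Sample): Sample =>
  (t) =>
    t < 0.5 ? first(t * 2) : second(t * 2 - 1)

const linear: Sample = (t) => t
const bounce = sequence(
  interpolator(0, 1.2, linear),
  interpolator(1.2, 1, linear)
)
bounce(0)   // => 0
bounce(0.5) // => 1.2
bounce(1)   // => 1
```

The midpoint of the sequence lands exactly on the shared endpoint (1.2), which is what makes the overshoot-then-settle "bounce" read as one continuous motion.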


Finally, the hook that connects it all together. useAnimation takes an observable value, an interpolator, the duration in milliseconds and a function to apply the animated value.

useAnimation(scale, bounce, 500, value => {
  mesh.scale.x = mesh.scale.y = value;
})
The value => {} callback is the application site of our side effects, in FRP terms this is known as a sink. Before this function is called all we are doing is changing some numbers in memory over time using an animation curve defined by our interpolator, but our sink is where we connect to our output. This may feel a little "bare metal" on first inspection, but I would argue this approach is vital for practical usage. A simple adjustment allows us to use this same animation with react-three-fiber or react-dom, retargeting only the binding layer.

const bounce = sequence(
  interpolator(0, 1.2, 'easeOutCubic'),
  interpolator(1.2, 1, 'easeOutCubic')
)
const scale = useObservable(1);

// react-three-fiber
const mesh = useRef();
useAnimation(scale, bounce, 500, value => {
  mesh.current.scale.x = mesh.current.scale.y = value;
});

// react-dom
const element = useRef();
useAnimation(scale, bounce, 500, value => {
  element.current.style.transform = `scale(${value})`;
});

This approach gives us maximum control and flexibility without compromising on performance. You can imagine packaging these value => {} callbacks into common pieces: scaleDom, rotateDom, setShaderUniform, etc.

const scaleDom =
  (el, v) => el.current.style.transform = `scale(${v})`;
const rotateDom =
  (el, v) => el.current.style.transform = `rotateZ(${v}deg)`;
const setShaderUniform =
  (shader, uniform, value) => {
    shader.current.uniforms[uniform].value = value;
  };

Here's an example sketch I made using this API (try moving your mouse around, panning, zooming etc.):

The source for this entire article including the above sketch is public on github.

How does useAnimation work?

I'm not ready to publish useAnimation as a library on npm just yet; I'd like to explore the API surface more and put together documentation / examples. That said, you can poke around the source code yourself on github and come back if you're confused / curious to know more.

I started with, "what happens when a value we want to animate changes?" Well, we emit a change event on our .changes stream. Okay, so then from that change event we need to start an animation from the current value to the new value. As expressed earlier, an animation is a stream of frames... So we need to get one of those.

Thankfully Subject<T> from rxjs has us covered yet again. If we create a new Subject, we can call .next() on it to emit a new value whenever we want. So, if we combine a Subject with requestAnimationFrame we will have a new value published on every renderable frame the browser gives us.

This is a little gnarly in practice, but luckily I found an example from learnrxjs.com that worked perfectly. My version is in frameStream.ts and is identical except I don't clamp the framerate to 30.
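The idea can be simulated without a browser: anything that lets you push one value per rendered frame works as a frame stream. Here's a dependency-free stand-in where a manual `tick()` plays the role of requestAnimationFrame firing (the real frameStream.ts uses an rxjs Subject driven by requestAnimationFrame):

```typescript
// A hand-rolled frame stream: subscribers receive an elapsed-time
// value every time tick() is called. In the browser, tick() would
// be driven by requestAnimationFrame; here we drive it manually.
const frameStream = () => {
  const subscribers = new Set<(elapsed: number) => void>()
  let elapsed = 0
  return {
    subscribe: (fn: (elapsed: number) => void) => {
      subscribers.add(fn)
      return () => subscribers.delete(fn)
    },
    // simulate one rendered frame arriving dt milliseconds later
    tick: (dt: number) => {
      elapsed += dt
      subscribers.forEach((fn) => fn(elapsed))
    },
  }
}

const frame$ = frameStream()
const frames: number[] = []
frame$.subscribe((t) => frames.push(t))
frame$.tick(16)
frame$.tick(16)
frame$.tick(16)
// frames is now [16, 32, 48]
```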

The implementation for react-three-fiber turned out to be more challenging; I ran into issues when requesting multiple requestAnimationFrame loops. So, instead, I built on top of useFrame to construct a stream held in a React MutableRef<T> in a similar way:

export const useFrameStream = () => {
  const s = useRef<Subject<number>>(new Subject<number>())
  useFrame(({ clock }) => {
    // publish the clock time on every rendered frame
    s.current.next(clock.elapsedTime)
  })
  return s
}

Okay, so we've got our frame stream. Let's look at useAnimation and break it down piece by piece, starting by identifying some familiar concepts:

  • source is the return value of useObservable()
  • source.changes is the update stream to the underlying value
  • frame$ is the stream of requestAnimationFrames

export const useAnimation = (
  source: ObservableSource,
  interpolator: Interpolator,
  duration: number,
  sink: (v: Animatable) => void
) => {
  // first, store the current animation state separate to the observed value
  const underlying = React.useRef(source.value())
  const frame$ = useFrameStream()
  React.useEffect(() => {
    // update the render target upon mounting the component
    sink(underlying.current)
    // listen to the update stream from our observable value
    const sub = source.changes
      .pipe(
        // switchMap: the magic operator that enables cancellation
        // our value might change AGAIN mid-animation and
        // we need to cut over to target the updated value
        // switchMap has exactly these semantics, it'll cancel
        // an old stream and replace it with a new one whenever
        // it receives a value
        switchMap((v) => {
          // capture the time when the animation started
          const baseTime = Date.now()
          return concat(
            // take our frame stream at ~60Hz
            frame$.current.pipe(
              // calculate the % into the total duration we are at
              map((dt) => (Date.now() - baseTime) / duration),
              // only animate while we are < 100%
              takeWhile((t) => t < 1)
            ),
            // we append 1 to ensure we send an explicit frame at 100%
            of(1)
          ).pipe(
            // mapInterpolator warps an interpolator's domain from 0...1
            // to whatever we want
            // here we map [0<->1] to [prev<->current]
            map(mapInterpolator(interpolator, underlying.current, v).sample)
          )
        })
      )
      .subscribe((v) => {
        // finally we store the current value and call
        // the supplied update callback
        underlying.current = v
        sink(v)
      })
    return () => {
      // stop listening for changes when the component unmounts
      sub.unsubscribe()
    }
  }, [duration, source, sink, interpolator])
}

Wrapping Up

As stated above, all the code for this experiment is available on github with an MIT license.

If you want to go deeper again then check out the project README and samsarajs. I'd like to try @most/core instead of rxjs here since it boasts impressive performance[ref]. To me, this seems like a promising area for further investigation. I've begun to experiment with a similar approach in Unity3d, hopefully more to report soon!

This is the first post from my new project ⊙ fundamental.sh where I'm attempting to document my favourite abstractions and programming patterns. Please don't hesitate to get in touch with me with feedback, ideas for extension or questions. You can find me on twitter, discord (ben#6177) or around the web.

If you'd like to be notified of the next time I write about programming subscribe to the newsletter. I only post when I have something worth saying.

Subscribe to ⊙ fundamental.sh

You'll receive new posts straight to your inbox when they're ready. I'm not interested in spamming you, I'm interested in sharing my passion for programming with other likeminded people.

Powered by Buttondown.