Software is Eating the World... And Game Engines are Eating Software

August 21, 2022

This thought piece is a copy of one I published on Hacker Noon

Marc Andreessen famously claimed, as far back as 2011, that “software is eating the world” and that it would radically disrupt businesses across industries. This prediction has come true: software has infiltrated every facet of personal life, work life, and industry. A decade later, I claim that we are seeing signs that “game engines are eating software.”

 

But first, what is a game engine? A game engine is a suite of programmable software and tools that lets artists and developers create interactive virtual environments, for games but not only games. The software acts as a physics simulator, modeling kinematics, sound, and lighting. The obvious use case for this technology is creating 2D or 3D interactive environments for games, hence the name. In the early days of gaming, developers built their own proprietary engines, which took considerable development time and resources to build and maintain. Today, the market is dominated by Unity and Epic Games (the maker of Unreal Engine). Graphics chip maker NVIDIA is slowly inching toward building a game engine of its own, though it may focus more on industrial simulation than the incumbents do.
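
To make the “physics simulator” framing concrete, here is a minimal sketch of the core loop most engines are built around: a fixed-timestep physics update decoupled from rendering. Everything here (the `Ball` class, the constants, and the loop itself) is a hypothetical toy for illustration, not the API of any real engine.

```python
# Minimal sketch of a game engine's core loop: a fixed-timestep
# physics update decoupled from rendering. All names here are
# hypothetical; real engines implement far richer versions of this.

FIXED_DT = 1.0 / 60.0  # physics step: 60 updates per second

class Ball:
    """A toy rigid body: 1-D position and velocity under gravity."""
    def __init__(self, y=10.0, vy=0.0):
        self.y, self.vy = y, vy

    def step(self, dt, g=-9.81):
        self.vy += g * dt
        self.y += self.vy * dt
        if self.y < 0.0:              # crude ground collision: bounce
            self.y, self.vy = 0.0, -0.8 * self.vy

def run(frames=180):
    """Simulate 3 seconds of game time at a fixed 60 Hz timestep."""
    ball = Ball()
    for _ in range(frames):
        ball.step(FIXED_DT)           # physics simulation
        # render(ball)                # a real engine would draw here
    return ball

if __name__ == "__main__":
    final = run()
    print(f"ball height after 3s: {final.y:.2f} m")
```

In a real engine the render step would interpolate between physics states and run at its own rate; the point is only that the same simulation loop that drives a game can just as well drive an interactive UI or an industrial simulation.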

So why would a tool designed for game development capture the software market as a whole, and why now? A confluence of factors is arising that may signal exponential growth:

 

  • Real-time photorealism capabilities that now compete with special effects shops

  • Growing demand for simulation (especially for training AI)

  • Growing demand for responsive user interfaces

  • Vast availability of inexpensive GPUs in SoCs originally developed for smartphones

 

Game engines have reached a level of real-time photorealism comparable with special effects for big-budget movies that once took hours of render time per frame. Real-time ray tracing (such as NVIDIA's RTX) delivers a massive step change in lighting realism, while hair and water simulation packages from NVIDIA GameWorks and soft-tissue simulation tech from Ziva Dynamics enable extremely realistic environments and character animation. Real-time generation of photorealistic special effects and virtual environments may radically disrupt the cost structure of traditional entertainment, which currently relies on special effects shops. In fact, some studios are already using game engines to dramatically reduce the cost of filming. For example, Disney used Unreal Engine to create virtual backgrounds for The Mandalorian, and Unity has seen use as a rendering engine for many animated films.

 

More importantly, this technology opens up entirely new entertainment mediums, such as “choose your own adventure” TV shows or movies with customizable levels of violence and gore. Even more impactful: the use of game engines in entertainment enables the mass availability of volumetric video (video that carries 3D data and allows the viewer to move the camera and see a scene from different perspectives). VR is a particularly immersive way to enjoy volumetric video.

 

Game engines are effectively highly accurate and performant physics engines. As a result, they have great potential for digital twins and for generating training data (although there is always the risk of overfitting to the simulation). Tesla uses game engines to generate difficult or dangerous scenarios in which to train its Autopilot AI. NVIDIA created Isaac, a virtual-environment simulator, to train massive numbers of picking robots in parallel. However, there may be an even greater market opportunity in industrial and manufacturing settings, where sensor data can be compared against digital-twin simulations in real time to diagnose and preemptively maintain key machinery.
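
A hedged sketch of that industrial digital-twin idea, assuming we already have both a stream of real sensor readings and the values a physics simulation of the same machine predicts. The function name, the data, and the tolerance are all invented for illustration:

```python
# Hypothetical sketch of the digital-twin comparison described above:
# flag readings where the real machine drifts away from what its
# simulated twin predicts, as a cue for preemptive maintenance.
# All data and the tolerance are invented for illustration.

def flag_anomalies(sensor_readings, twin_predictions, tolerance=0.05):
    """Return indices where the real machine deviates from its twin
    by more than `tolerance` (relative error)."""
    flagged = []
    for i, (real, sim) in enumerate(zip(sensor_readings, twin_predictions)):
        rel_error = abs(real - sim) / max(abs(sim), 1e-9)
        if rel_error > tolerance:
            flagged.append(i)
    return flagged

# Invented example: bearing vibration amplitude (mm/s) vs. the twin.
real = [1.02, 1.05, 1.10, 1.45, 1.60]   # readings 3-4 drift upward
sim  = [1.00, 1.04, 1.09, 1.10, 1.12]   # twin expects stable operation

print(flag_anomalies(real, sim))  # → [3, 4]
```

A production system would replace the list comparison with streaming telemetry and a statistical model of acceptable drift, but the shape of the idea — simulate, compare, flag — is the same.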

 

These applications, although important and large opportunities in themselves, represent only a tiny fraction of the potential of game engines. I believe that game engines will eventually eat user interfaces. Users demand responsive applications and increasingly expect objects and interfaces to be interactive. In a maps app, for example, a user expects to pan and zoom fluidly, reorient the camera, and see 3D buildings react appropriately to virtual lights. In a car interface, users expect responsiveness and the ability to pan around their car's avatar to open the trunk or lock the doors (Tesla's UI is exceptional in this regard). Even simple devices like dishwashers, refrigerators, and other household appliances will need simple, clear, yet dynamic and fluid user interfaces. Game engines provide an easy-to-use WYSIWYG editor for building responsive, dynamic UIs. This gives non-experts a fast way to build a UI that customers love, rather than the more time-intensive process of building one from scratch.

One argument against using game engines for UI is the increased hardware requirement. After all, why should a car or a refrigerator have a GPU? However, the exponential decline in prices for ARM SoCs, plus the proliferation of Android into more home devices, means that these everyday objects not only have a GPU but are running an OS that game engines have relentlessly optimized for. In that case, not using a game engine for the UI effectively wastes the perfectly good silicon already in the device. These SoCs will likely have vastly underpowered GPUs compared to their smartphone counterparts, but they will still be more than capable of running a simple UI in a highly optimized game engine.

Unity and Epic Games are aggressively building out lessons and course plans for their software, hoping to become the de facto programming choice for new developers. Unity in particular has targeted children with its LEGO-themed Unity programming courses. The nature of game engines lends itself well to learning programming: users can visually “debug” their code, watching in real time how it behaves.

The confluence of these factors presents a bullish future for game engines. Will incumbent game engines iterate and innovate quickly enough to become the de facto simulation and user interface layer of the future, or will new startups built from the ground up for these roles seize the opportunity? Is it possible that we will live in a future where all front ends are built on Unity or Unreal and all back ends are built on AWS, Azure, or Google Cloud? Or, if you want to get really philosophical, are we living in a game engine?

 

A Wildcard: The Metaverse

 

The Metaverse means many different things to many different people and has become somewhat meaningless, just like AR, XR, and MR before it: once technical terms, now marketing ones. Some people believe the Metaverse must be a decentralized, permissionless network; others believe it must run entirely in virtual reality; and there are many shades in between. I have avoided discussing it for this reason, and because it remains unclear whether end users will actually prefer the Metaverse to the status quo, despite the advantages of working, shopping, and socializing in a virtual environment. Regardless of your definition, most interpretations of the Metaverse assume some sort of 2D or 3D interactive environment, and this presents a massive opportunity for game engines to eat apps and the entire web as we know it today.
