The most telling design decision at Dataland has nothing to do with screens.
Walk through the world's first AI art museum when it opens in spring, and you get the expected. Mirrors. Projections. Canvases at architectural scale. Then one gallery hits you with a smell, generated by the same AI that made the painting in front of you. One model. Two outputs. Two senses.
Most coverage skips that detail. It shouldn't. The smell is the answer to what Dataland actually is: a museum built around design problems only AI could solve. Every gallery runs on the same logic, and the design decisions behind it are worth looking at closely.
What Is Dataland?
Dataland is the world's first AI art museum, a 25,000-square-foot institution built around a single premise: that data, processed through artificial intelligence, can function as an artistic medium at the same level as oil, stone or light.
Co-founders Refik Anadol and Efsun Erkılıç designed it as a permanent home for AI-generated art, running multisensory design across five galleries powered by the Large Nature Model (LNM), the world's first open-source generative AI model trained exclusively on nature data.
The LNM ingests audio, visual, and ecological data from 16 rainforest locations worldwide. It has processed half a million scent molecules, and its infrastructure occupies nearly a third of the building's total footprint, running on servers in Oregon powered by 87 percent carbon-free energy.
Dataland Los Angeles opens June 20, 2026, at The Grand LA, a Frank Gehry-designed complex in Los Angeles' Grand Avenue Cultural District, adjacent to The Broad, MOCA, the Music Center and Walt Disney Concert Hall.
Its inaugural exhibition, "Machine Dreams: Rainforest," runs through Jan. 31, 2027, and draws entirely from the LNM's ecological archive. Institutional data partnerships include the Smithsonian, London's Natural History Museum, the Cornell Lab of Ornithology and the Getty.
Inside the Infinity Room: Why It's Not What You Think
The most reported feature of the Dataland museum is also the most misunderstood.
Gallery C, the Infinity Room, uses floor-to-ceiling mirrors, projectors and algorithms to generate what Anadol calls "machine hallucinations" in real time.
Yes, it looks like Yayoi Kusama's Infinity Mirror Rooms: the mirrors, the infinite reflections, the spatial disorientation. Most coverage stops there. But Kusama and Anadol are doing opposite things.

Kusama's rooms reflect the visitor. You walk in and see yourself repeating into the distance. The room is a self-portrait, her hallucinations made into a space you can stand inside.
Anadol's room, on the other hand, reflects the data, not the visitor. The mirrors face outward, and the subject is the machine's view of the world. Anadol calls it "a portal into our shared, digitized reality as interpreted through the mind of a machine."

According to the Dataland blog, the room has been in development since 2015, traveling to 35 cities across iterations. Ten years and 10 million visitors later, that brief is still the one the studio is solving.
Each version taught the studio something new about how people relate to data environments. The Dataland iteration is the most significant technical leap in that decade of work.
This iteration runs World Models, a type of generative AI that simulates real-world physics and spatial behavior. It also pipes in AI-generated scents from the LNM.
Anadol describes the shift in a single line: earlier rooms showed you a picture of a machine's dream. This one puts you inside a world the machine is imagining as you walk through it.

The visitor does not see the algorithm. They feel what the algorithm produces. That is the design decision. Not a technical upgrade. A deliberate choice about how to present AI to people who will never read a model architecture paper. The room works because it removes that requirement entirely.
The Infinity Room draws the photographs. Qualia draws the harder question.
Qualia: The Gallery Where AI Paints Your Emotions
The series began when Anadol and Erkılıç visited the Amazon rainforest. Anadol later described the moment this way:
"Nature was continuously computing its environment, taking invisible forces like light, wind, moisture, and time, and translating them into physical geometry."
This observation drove a question the studio spent a decade building toward: If nature can turn invisible forces into physical form, can a feeling do the same?
Qualia is the system they built to answer it. The studio records the biometric signals behind specific emotions: joy, nostalgia, memory. Those signals feed into the LNM alongside its ecological archive. Joy generates sweeping topographies. Nostalgia produces layered botanical structures.

The data transforms rather than encodes. A viewer cannot look at a finished canvas and reverse-engineer the heartbeat or breath rate that shaped it.
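That one-way relationship between signal and form can be sketched in a few lines. The sketch below is purely illustrative and is not the studio's pipeline: it assumes biometric readings are bucketed and then hashed into shape parameters, and every function name, threshold, and parameter here is invented.

```python
import hashlib

def emotion_to_topography(heart_rate_bpm, breath_rate_bpm, label):
    """Toy sketch: collapse biometric readings into a few shape
    parameters for a generated 'topography'. The mapping is many-to-one
    (readings are bucketed, then hashed), so a finished canvas cannot be
    reverse-engineered back to the exact heartbeat that shaped it."""
    # Bucket the raw signals: many nearby readings fall into one bucket.
    hr_bucket = round(heart_rate_bpm / 5)
    br_bucket = round(breath_rate_bpm / 2)
    seed = hashlib.sha256(f"{label}:{hr_bucket}:{br_bucket}".encode()).digest()
    # Derive shape parameters from the digest (one-way by construction).
    amplitude = 1 + seed[0] / 64   # sweep height of the terrain
    frequency = 0.5 + seed[1] / 128  # density of ridges
    layers = 2 + seed[2] % 5       # stacked botanical strata
    return amplitude, frequency, layers

# Two different heartbeats inside the same bucket yield the same form,
# which is exactly why the transform cannot be run backward:
print(emotion_to_topography(70, 14, "joy"))
print(emotion_to_topography(72, 14, "joy"))
```

The point of the sketch is the direction of the arrow: signal to form is easy, form back to signal is impossible by design.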
For the studio, computation became the only tool precise enough to give a feeling visible form. The series runs 365 unique editions, one per day, each printed on a 4-by-4-foot canvas.
One a day, every day, for a year. That cadence changes what a collection means.

Most permanent collections organize around scarcity: the singular canonical object, irreplaceable, frozen. Qualia organizes around time and repetition, running the same system against a different day's data. The collection is a year-long record of a system in motion.
Each acquisition ships with a fragrance engineered from the data behind that specific painting. The fragrance is what makes the encounter last. But the fragrance itself raises a question the paintings don't answer: how do you design with a material you can't see?
The Scent System: How AI Picks the Smell at Dataland
How do you treat ecological data as a shapeable material?

Every designer works with materials they can predict. Color carries hue, saturation, and value. Form carries mass and proportion. A perfumer knows how long an essential oil lasts and how far it travels in a room.
Anadol's studio trained the LNM to treat rainforest data the same way. Light, sound, humidity, scent molecules — all of it became raw material the model could combine and reshape into something a visitor could see, hear, or smell.
That is a quiet but huge claim: it puts ecological data on the same shelf as paint, stone, and ink.
But why scent? Because scent is the channel hardest to ignore. You can look away from a painting. You cannot look away from a smell. Smell hits memory faster than sight does and stays longer. If the goal is to make the visitor remember, scent is the shortest route there.
The studio chose the channel least likely to be filtered out.

But who gets credit when AI picks the scent?
That choice (picking the channel hardest to filter out) sounds like a normal design call. Until you ask who actually made it.
A perfumer would normally make this kind of decision. Years of training, personal taste, intent. At Dataland, no perfumer was in the room. The model picked from half a million scent molecules on its own, matching patterns it learned in training to the data behind a specific painting. Nobody handed it a recipe.
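Stripped of its scale, that pattern-matching step behaves like a nearest-neighbor search over a learned feature space. The sketch below is a toy, not the LNM: the molecule names are real aroma compounds, but the feature vectors and the matching rule are invented for illustration.

```python
import math

# Hypothetical learned features per molecule: (floral, earthy, green).
# Real aroma compounds, invented vectors.
MOLECULES = {
    "linalool":   (0.9, 0.2, 0.1),   # floral
    "geosmin":    (0.1, 0.9, 0.8),   # wet earth
    "eucalyptol": (0.3, 0.4, 0.9),   # camphor / leaf
    "vanillin":   (0.8, 0.1, 0.2),   # sweet
}

def distance(a, b):
    """Euclidean distance between two feature vectors."""
    return math.sqrt(sum((x - y) ** 2 for x, y in zip(a, b)))

def pick_scent(painting_embedding, k=2):
    """Return the k molecules whose learned features sit closest to
    the painting's embedding. No human supplies a recipe; the choice
    falls out of the geometry the training data produced."""
    ranked = sorted(MOLECULES, key=lambda m: distance(MOLECULES[m], painting_embedding))
    return ranked[:k]

# A painting generated from humid-forest data lands near earthy, green notes:
print(pick_scent((0.2, 0.8, 0.85)))
```

Even in this toy version, the authorship question is visible: the humans wrote the distance function and the table, but no line of the code names the winning molecules.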
So who designed the scent?
Three parties contributed to the result:
- The studio chose the datasets, built the model, and decided what to train on.
- The model made the selection.
- The 16 rainforests supplied the structural intelligence the model learned from.
None of them did it alone. The studio set the conditions but did not choose the ingredients. The model chose the ingredients but did not decide what to learn from. The rainforests gave the model its vocabulary but had no say in how it would be used. The data did most of the work.
That is not abstract philosophy. It is the day-to-day reality every studio using generative AI is now dealing with. And it is why Dataland stops being a story about AI art halfway through your visit and starts being a story about who gets credit for design.
The question is already on the table in studios everywhere. Dataland is just the first museum brave enough to build a room around it.
What Dataland Means for Designers Working With AI
Every tool in design history shaped outcomes its users did not fully control. Each time, designers adjusted authorship rather than surrendered it.
The LNM breaks that pattern in one specific way: it generates the brief. The Large Nature Model was not instructed to produce a scent profile. It was trained on ecological data, and the output became the material. The studio set the conditions. The brief emerged from the tool.
That is a genuinely new authorship model, and it puts a specific question to every design leader, creative agency, and studio working with generative systems: if the material generates itself, what is the designer's role? Curator of outputs? Architect of conditions? The person who decides what the system gets trained on in the first place?
Dataland does not answer that. It is the most expensive, most public attempt to ask it before the industry has agreed on the terms.
If a fragrance engineered from rainforest data counts as a design decision, the definition of a design decision has already changed. The people in this industry who make them for a living are the last ones who can afford to ignore that.
