May 16, 2025

Quantum Consciousness: A Voice-Driven Collaboration Between Human and Machine

As part of HackXcelerator 2025, creatives and developers worldwide came together to explore how AI can enhance – not replace – human creativity. Quantum Consciousness, the Imaging category winner, was one of the most evocative and technically ambitious projects to emerge from the 20-day sprint.

Created by Akane Hiraoka, with support from Kikue Hiraoka and a stack of AI tools, Quantum Consciousness is an interactive system that listens, imagines, and speaks. It transforms spoken prompts into dynamic visuals and conversational responses, creating a co-creative loop between human and machine that feels spontaneous, alive, and almost sentient.

Listening, imagining, and responding – in real time

What if you could speak an idea into existence, and a machine would instantly imagine and respond, visually and verbally, as if it shared your consciousness?

That’s the experience Akane set out to create during this year’s HackXcelerator. Her winning project, Quantum Consciousness, is an interactive system where AI listens to voice prompts, generates dynamic visuals, and replies with co-created dialogue. The result is a multi-sensory collaboration between human and machine, grounded in the theory of Quantum Consciousness, and powered by a carefully woven stack of AI tools.

“I wanted to create images in real time by speaking, rather than typing prompts – making the process feel more natural, intuitive, and alive.”

The origin of an idea – and a solo sprint

Inspired by Roger Penrose’s theory of Quantum Consciousness, Akane set out to create a piece of digital art that could respond as a sentient being might. When she joined the 20-day HackXcelerator sprint, she had already built a basic prototype, but she was hoping to find a collaborator to help strengthen the technical foundation with Python and Luma AI integration.

That collaboration didn’t materialize, but Akane pressed forward solo. With guidance from AI tools like Mistral and moral support from her mother, she coded the backend herself, turning a concept into a working audiovisual system that was ultimately recognized as the Imaging category winner.

Building a reactive digital consciousness

Tools & Stack Overview:

  • TouchDesigner: Base platform for the visual and interactive installation
  • Whisper: Captures and transcribes live audio prompts
  • Luma AI: Generates real-time visuals from transcriptions
  • ChatGPT: Crafts responsive text dialogue
  • ElevenLabs: Converts ChatGPT outputs into spoken responses
  • MediaPipe: Tracks hand gestures to modulate the environment
  • Mistral AI: Helped generate custom Python scripts for automation and API integration

All components were connected via APIs into a single, node-based TouchDesigner project.
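
To make that flow concrete, here is a minimal Python sketch of how such a voice-to-visual loop could be wired together. It is an illustration under stated assumptions, not Akane's actual code: the Whisper call uses the open-source openai-whisper package, the Luma AI request is a stand-in placeholder (the article doesn't detail that API), and the ChatGPT and ElevenLabs calls follow their public interfaces. Names like LUMA_ENDPOINT, generate_visual, and the model IDs are hypothetical.

```python
# Hypothetical sketch of the Quantum Consciousness pipeline (not the author's code).
# Assumes openai-whisper, openai, and requests are installed; LUMA_ENDPOINT is a
# placeholder for whatever image-generation API the installation uses.
import os
import requests
import whisper
from openai import OpenAI

client = OpenAI()                    # reads OPENAI_API_KEY from the environment
stt = whisper.load_model("base")     # local speech-to-text model

LUMA_ENDPOINT = "https://example.invalid/luma/generate"  # placeholder URL
ELEVEN_VOICE_ID = "YOUR_VOICE_ID"                        # placeholder voice

def transcribe(wav_path: str) -> str:
    """Turn a recorded voice prompt into text with Whisper."""
    return stt.transcribe(wav_path)["text"].strip()

def generate_visual(prompt: str) -> bytes:
    """Placeholder for the Luma AI call; the real API will differ."""
    resp = requests.post(LUMA_ENDPOINT, json={"prompt": prompt}, timeout=60)
    resp.raise_for_status()
    return resp.content  # e.g. image bytes fed to a TouchDesigner TOP

def generate_reply(prompt: str) -> str:
    """Ask ChatGPT for a short conversational response to the spoken idea."""
    chat = client.chat.completions.create(
        model="gpt-4o-mini",  # illustrative model choice
        messages=[
            {"role": "system", "content": "You are a co-creative visual consciousness."},
            {"role": "user", "content": prompt},
        ],
    )
    return chat.choices[0].message.content

def speak(text: str) -> bytes:
    """Convert the reply to audio via ElevenLabs' documented REST endpoint."""
    resp = requests.post(
        f"https://api.elevenlabs.io/v1/text-to-speech/{ELEVEN_VOICE_ID}",
        headers={"xi-api-key": os.environ["ELEVENLABS_API_KEY"]},
        json={"text": text},
        timeout=60,
    )
    resp.raise_for_status()
    return resp.content  # audio bytes to play back in the installation

def on_voice_prompt(wav_path: str) -> None:
    prompt = transcribe(wav_path)
    image = generate_visual(prompt)   # drives the visuals
    reply = generate_reply(prompt)    # drives the dialogue
    audio = speak(reply)              # the machine "speaks" back
```

In the installation itself, each of these stages corresponds to nodes in the TouchDesigner graph rather than a flat script, which is part of what made coordination tricky.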

Akane even created custom infrastructure to ensure smooth interaction between models, though coordinating that many layers of execution introduced technical challenges.

“Explaining the structure to Mistral AI was surprisingly hard, especially with nodes and execution files existing at different levels in the system.”
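
That difficulty makes sense given how TouchDesigner structures code: Python lives inside DAT callbacks attached to individual nodes, while slow API work is usually pushed off the main thread so the render loop keeps cooking at frame rate. A sketch of that pattern, assuming a CHOP Execute DAT watching a hypothetical "speech detected" channel (the DAT name `pipeline` and the callback body are illustrative):

```python
# Sketch of a CHOP Execute DAT inside TouchDesigner (illustrative, not the
# project's actual callback). When the watched channel flips on, the slow
# API pipeline runs on a worker thread so rendering is never blocked.
import threading

def onOffToOn(channel, sampleIndex, val, prev):
    # Resolve the module function on the main thread, then hand it off.
    # 'pipeline' is a hypothetical Text DAT holding the sketch above.
    run = mod('pipeline').on_voice_prompt
    threading.Thread(target=run, args=('capture.wav',), daemon=True).start()
    return
```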

Highs, lows, and lessons from the HackXcelerator

Akane received support from HackXcelerator organizers through credits and technical guidance. While she couldn’t fully explore the Flux model on AMD’s MI325X due to last-minute connectivity issues, she appreciated the help from mentors who tried to get her up and running quickly.

One challenge was adapting to collaborative platforms like Discord, but the overall experience – seeing other teams’ projects and realizing what’s possible with the right blend of structure and creativity – left a lasting impression.

What’s next for Quantum Consciousness?

Akane isn’t done yet. The current version of Quantum Consciousness experiences some latency between voice input and visual output – a natural byproduct of stitching together multiple AI models. Her next step is to optimize or replace slower models to reduce that delay and increase interactivity.
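
A practical first step for that kind of optimization is measuring where the delay actually accumulates. A generic sketch of a per-stage timer (the stage names and wrapped calls are illustrative, reusing the hypothetical helpers from the earlier sketch):

```python
# Generic per-stage latency probe (illustrative; not from the project).
import time
from contextlib import contextmanager

@contextmanager
def timed(stage: str):
    """Print how long a pipeline stage takes, to find the slowest model."""
    start = time.perf_counter()
    try:
        yield
    finally:
        print(f"{stage}: {time.perf_counter() - start:.2f}s")

# Usage, wrapping each stage of the voice-to-visual chain:
with timed("whisper transcription"):
    text = transcribe("capture.wav")
with timed("image generation"):
    image = generate_visual(text)
```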

Once streamlined, she hopes to debut the system in a public space, transforming it from a solo project into a shared experience, where others can engage with a machine that listens, imagines, and responds as if it shares our awareness.
