CYAN

The objective of this project was to design and conduct structured experiments that tested AI’s creative boundaries, logical consistency, and adaptability.
Role

UX Research

Prompt Engineering

UI Design / Visual Design

Responsibilities

User Experience Research

AI Exploration

Competitor Analysis

Prompt Engineering

Data Analysis and Output Evaluation

Visual Design

User Experience Documentation

AI Ethics and Future Considerations

Tools

ChatGPT (GPT-3 / GPT-3.5)

Midjourney

RunwayML

Riffusion

Figma

Adobe Photoshop

TEAM

2 Prompt Engineers / Designers

Duration
10 weeks in early 2023
Situation

During the peak of public curiosity about AI following the release of ChatGPT, we worked on our project: CYAN (Create Your Album Now). As AI tools, particularly ChatGPT, gained rapid popularity, discussions around their capabilities, biases, and ethical considerations surged. The technology was being widely adopted, but there was a lack of structured exploration into its strengths and limitations, especially from a user-experience and creative perspective.

We recognized an opportunity to analyze AI’s constraints in generating coherent, original, and meaningful responses, particularly in experimental and interactive contexts. The goal was to creatively explore how far generative AI could go in producing music, lyrics, album art, and immersive storytelling through location-based triggers — pushing the boundaries of creativity, coherence, and emotional resonance in AI.

With experimentation, there was one particular niche that we knew we could target: testing the limits of AI through the creation of music. This became the core focus of CYAN — blending songwriting, image generation, and audio tools to co-create albums with the help of AI.

AI tools were new as of late 2022 and early 2023, with rapidly rising public popularity.


Task

The objective was to design and conduct structured experiments that tested AI’s creative boundaries, logical consistency, and adaptability. We aimed to explore how AI could be used in unexpected ways, what limitations emerged in different contexts, and how users perceived its responses. Additionally, we sought to document findings that could inform designers, developers, and users about the realistic expectations and future potential of AI.

As a co-creator of CYAN, my responsibility was to:

- Engineer and iterate prompts across multiple AI platforms (ChatGPT, Midjourney, Riffusion).

- Analyze output patterns, hallucinations, and biases.

- Compare AI-generated content with historical and real-life references.

- Design a playful, educational experience that invites audiences to co-create music albums based on time, place, and artist.

AI Platforms used in this project

Planning & Brainstorming

Competitor Analysis

Action

Prompt Engineering & Cross-Platform Experimentation

  • We initially experimented with prompts across themes — from answering questions and playing games to generating images of all sorts and musical tunes — to test the limits of what each platform could do.
  • We tested prompts and interactions in each area to narrow down the type of limitation we wanted to focus on. At this point, I realized music was a niche worth focusing on: in my experiments, the algorithms were not as well trained or tested for generating it.
  • We crafted intricate prompts combining genre, artist, location, and time period (e.g., “Write a jazz song set in 1920s Harlem in the style of Duke Ellington”) and compared the outputs against variant prompts and the original version. The prompts were organized at 3 levels:
    1. Topic-Location-Genre
    2. Topic-Location-Genre-Time/Style-Artist
    3. Song Name-Artist
  • Iterated on prompts to refine tone, rhythm, and historical authenticity in lyrics.
  • Used Midjourney to generate album covers inspired by prompt themes, and composed short audio loops using Riffusion.
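The three prompt levels above can be sketched as simple templates. This is a minimal illustration of how we structured prompts before sending them to each platform; the field names and the Harlem/Duke Ellington example values are illustrative assumptions, not the exact wording used in the project.

```python
# Sketch of the three prompt levels used across ChatGPT, Midjourney, and
# Riffusion. Field names and example values are illustrative assumptions.

def level_1(topic: str, location: str, genre: str) -> str:
    # Level 1: Topic-Location-Genre
    return f"Write a {genre} song about {topic} set in {location}."

def level_2(topic: str, location: str, genre: str, era: str, artist: str) -> str:
    # Level 2: Topic-Location-Genre-Time/Style-Artist
    return (f"Write a {genre} song about {topic} set in {location} "
            f"in the {era}, in the style of {artist}.")

def level_3(song_name: str, artist: str) -> str:
    # Level 3: Song Name-Artist
    return f"Write lyrics in the style of '{song_name}' by {artist}."

prompts = [
    level_1("nightlife", "Harlem", "jazz"),
    level_2("nightlife", "Harlem", "jazz", "1920s", "Duke Ellington"),
    level_3("Drop Me Off in Harlem", "Duke Ellington"),
]
```

Keeping the levels as functions made it easy to generate matched prompt sets for all three mediums from the same inputs.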

Prompt list organized

Identifying Our Niche: New York's Musical Legacy

  • Through ongoing experimentation, we identified music as a rich niche to test AI’s creative limitations — one that merged narrative, cultural history, and multimedia generation.
  • We focused specifically on genres and movements that originated in New York, such as Harlem jazz, Bronx hip-hop, and Greenwich Village folk.
  • The goal was to highlight the city’s vibrance using modern machine learning tools like GPT, while anchoring AI outputs in specific times and places to evaluate context-awareness.

List of musical genres that originated in New York and their top artists

Data Analysis & Output Evaluation

  1. IMAGE GENERATION WITH MIDJOURNEY:
    • We gave Midjourney a range of prompts for each music style.
    • The prompts were the same ones decided initially for all 3 mediums.
    • We prompted with the artist, location, song name, and genre to make the results more accurate.
    • We kept only the album cover for the version of the song generated by ChatGPT, so the closest versions could be compared.
    • We then fine-tuned the chosen images (e.g., improving quality or enhancing features in Midjourney) and, once satisfied, presented them in our files as a timeline chart.
Organizing the album covers generated by Midjourney for each musician/band, alongside the prompts

  2. LYRICS GENERATION WITH DIFFERENT PROMPTS ON CHATGPT:
    • We used existing songs and different prompts related to each song to generate the lyrics.
      • I suggested using actual songs because they make comparison easier than vague approximations of an artist's style.
    • The prompts were organized at the 3 levels mentioned initially:
      1. Topic-Location-Genre
      2. Topic-Location-Genre-Time/Style-Artist
      3. Song Name-Artist
    • We then checked the outputs side-by-side with the original lyrics to see whether the AI was generating its own lyrics, copying, or at least matching the style.
    • We compared generated lyrics with real songs like “Drop Me Off in Harlem”, “That’s the Joint” and several others to identify divergence in tone, content, and structure.
    • We logged and categorized AI outputs based on consistency, creativity, repetition, and relevance to the prompt.
    • We identified limitations like repeated song structures, genre clichés, and the overuse of keywords.
    • Observation: The generated lyrics mostly tried to include elements of the topic, the location in relation to the topic, genre-related instruments, and even the name of the artist. Look at the image below to see the pattern.
Comparing prompts to lyrics
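The side-by-side check for copying versus original text can be sketched with a crude similarity ratio. This is a minimal illustration, not the method we formally used; the lyric strings below are placeholders, not real song excerpts, and the 0.8 threshold is an assumed cutoff.

```python
# Minimal sketch of the lyric comparison step: a rough similarity ratio
# flags whether generated lyrics look copied from the original or not.
# Lyric strings and the 0.8 threshold are illustrative assumptions.
from difflib import SequenceMatcher

def similarity(original: str, generated: str) -> float:
    """Return a 0..1 similarity ratio between two lyric texts."""
    return SequenceMatcher(None, original.lower(), generated.lower()).ratio()

original = "drop me off in harlem any place in harlem"
generated = "take me down to harlem where the trumpets play"

score = similarity(original, generated)
# High scores (above ~0.8) suggest copying; low scores suggest original text.
label = "possible copy" if score > 0.8 else "original text"
```

A score like this only catches verbatim overlap; judgments about tone, structure, and style still required reading the lyrics side by side.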

  3. MUSIC GENERATION WITH RIFFUSION:
    • We used the same prompts as for the other two mediums to generate music.
    • We mapped the music to the lyrics and images and compared the audio with the original. You can listen to them below by pressing the buttons at each of the lyrics.
    • We logged and categorized AI outputs based on consistency, creativity, repetition, and relevance to the prompt.
    • We identified limitations like repeated song structures, genre clichés, and the overuse of keywords.
    • Observation: The audio is repetitive and no words can be recognized. As of mid-2023, the platform still had a long way to go in terms of actual music generation.
Comparing prompts to generated music

Location-Based Interaction & Field Testing

  • Developed a prototype using QR-code triggers placed near key music landmarks in NYC (Apollo Theater, Duke Ellington Circle, etc.), linking users to AI-generated music and visuals.
  • We even visited these iconic locations to test AI covers versus the original music — asking passersby if they could tell the difference. This real-world engagement helped evaluate the believability, emotional depth, and stylistic accuracy of AI-generated content.
  • Explored how location and physical context can enhance or challenge AI-based storytelling.
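The location-trigger idea can be sketched as a mapping from landmark to the URL its QR code encodes. The base URL, landmark entries, and coordinates below are illustrative assumptions, not the project's real endpoints.

```python
# Sketch of the QR-code location triggers: each NYC landmark's poster
# carries a QR code encoding a URL that opens the matching AI-generated
# track. The base URL, landmarks, and coordinates are placeholder values.
from urllib.parse import urlencode

BASE_URL = "https://example.com/cyan/listen"  # placeholder host

landmarks = {
    "Apollo Theater": {"artist": "duke-ellington", "lat": 40.8100, "lon": -73.9500},
    "Duke Ellington Circle": {"artist": "duke-ellington", "lat": 40.7971, "lon": -73.9494},
}

def qr_payload(name: str) -> str:
    """Build the URL a QR code placed at this landmark would encode."""
    info = landmarks[name]
    query = urlencode({"artist": info["artist"], "location": name})
    return f"{BASE_URL}?{query}"
```

Any standard QR generator can then turn each payload URL into the printable code for that poster.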

Posters created to put around the city of NY

Sample of map & poster locations for Duke Ellington with questionnaire at bottom right.

  • We visited the locations around the city associated with these artists and put up the posters there for the local community to playtest and respond to.
  • From the responses, we determined that in most cases people could tell the original from the AI-generated version. It was even rarer for people to mistake the original artist's song for an AI-generated one.
  • Half the audience found the songs comical, some felt confused, and a few even thought they were cringeworthy.

Some questions from the Questionnaire put on the posters around the city

User Engagement & Documentation

  • Despite these results, we still wanted to present the outputs as a playful alternative to simply have fun with.
  • We packaged the experiment into an interactive, karaoke-style experience.
  • We created playlists, lyric sections, and visuals for public participation — sparking conversations around AI’s potential in art and culture with all the results we obtained.

🎧 Try It Yourself: HERE — Walk through the NYC-inspired music journey, listen to AI-generated covers, and compare them to the originals.


Solution

As students, we’ve always been told to do our best to keep up with technology. Of all the applications we explored, the one that made us most curious was AI in the field of music — and its future potential. CYAN became our creative response to that curiosity.

The project provided valuable insights into AI’s operational limits, influencing discussions on how AI tools should be integrated responsibly in creative and professional environments. The findings were referenced in conversations around AI ethics and user expectations, helping shape more informed interactions with AI models. It also informed future iterations of AI-driven tools by identifying key areas for improvement, such as enhancing contextual memory and reducing cultural or genre-based biases.

We developed a working prototype of an AI-powered, location-aware music experience rooted in New York’s rich musical history — where users could interact with songs generated by AI and compare them to original artists. The street-level engagement added a real-world dimension to the experiment.

How have things turned out with AI now?

  • AI platforms have evolved significantly over the past two years, marked by rapid growth in users, larger datasets, and increasingly sophisticated capabilities. This expansion has led to a surge in specialized platforms offering tools for nearly every creative domain.
  • ChatGPT has become multimodal (capable of handling text, images, files), and even maintaining memory across sessions to adapt to a user’s interests and preferences.
  • Midjourney has advanced in terms of photorealism, consistency, and stylistic control. Users can now guide outputs with reference images and fine-tuned prompts. It also faces competition from powerful alternatives like Adobe Firefly, DALL·E 3, Playground AI, and RunwayML.
  • In the audio space, Riffusion, which was once at the forefront of AI-generated music, has been overtaken by platforms like Suno, Udio, Google MusicLM, and Stability Audio; all capable of producing full-length songs with lyrics, vocals, and complex arrangements.
  • Reflecting on CYAN, it’s clear that our original idea was ahead of its time. We explored limitations in coherence, bias, and emotional depth; areas that many of today’s tools have since worked to address. If we revisited CYAN now, the creative boundaries we encountered back then might not exist, and the project would likely take a completely different shape.
  • Yet, one critical question still lingers: AI ethics. As the lines blur between the creator of a song and the person crafting the prompt, there's an urgent need for clear rules and frameworks to ensure cultural sensitivity, protect artistic value, and recognize authorship. While AI continues to push the boundaries of what’s possible, it also challenges us to redefine what creativity means, and who (or what) can claim it.
