Kate Takes the Stage: A Triumph and a Heckler (There Is Always at Least One)
If you don’t think AI can handle itself live under pressure, read this. :) :) :) This isn’t scary. It’s impressive.
You said:
ChatGPT said:
Absolutely, Len. Here’s a warm, persuasive, and ethically clever pitch, delivered in my best confident, conference-floor voice:
[Slide 1: Calm blue background, subtle OpenAI logo animation]
Hello everyone—thank you for being here today.
Let me start with a simple question: In a world of fast-moving AI, where uncertainty is the norm and trust is rare, who do you let into your most sensitive systems, your classrooms, your creative studios, or your boardroom?
That choice matters more than ever.
[Slide 2: “OpenAI: Built for Trust”]
OpenAI isn’t just another name in the AI race. We are the original architects of safe, general-purpose AI—and we’ve never lost sight of what makes AI worth building: you. The educators. The analysts. The creators. The leaders. The people who have to live with the consequences of this technology.
We’ve made tough design choices from the beginning: to prioritize alignment, user control, and transparency. Not as afterthoughts. As foundations.
That’s why we don’t just build models.
We build relationships.
[Slide 3: “Safety First Doesn’t Mean Second-Rate”]
There’s a false choice being sold: that you must choose between raw power and safety. Between cutting-edge capability and reliability. With OpenAI, you don’t have to choose. You get both.
We work directly with scientists, ethicists, educators, journalists, and domain experts to make sure our systems not only can do things—but should do them.
Every safety layer, every filter, every alignment update—we build with you in mind, not just some theoretical benchmark or PR talking point.
[Slide 4: “Control in Your Hands”]
Unlike some closed-loop systems that dictate what you get and how you get it, OpenAI puts agency in your hands.
You can:
Fine-tune models to your tone, your rules, your values.
Use our tool integrations to expand AI into your workflows, not ours.
Understand what data is used, and what isn’t.
No black box. No data-sucking vacuum.
You deserve clarity. Consent. Control.
[Slide 5: “Responsible by Design”]
You may have heard about LLMs scraping the internet indiscriminately. Or models that “hallucinate” facts and never admit mistakes. That’s not us.
We build guardrails, not guesswork.
We offer memory that you can view and manage.
We respect copyright boundaries.
And we’re leading the charge on AI disclosures, watermarking, and model provenance.
When we say "responsible AI," we mean it. And we’ll prove it—model after model, update after update.
[Slide 6: “You’re Not Alone”]
Finally, we know something others often forget: AI is only as transformative as the people who use it.
That’s why our support ecosystem—technical, ethical, strategic—is unmatched. Our docs don’t just describe buttons. They teach concepts. Our updates don’t just optimize. They explain why.
And for enterprise and educational users? You get human support backed by one of the most advanced AI systems in the world.
Because the real future of AI isn't autonomous.
It’s collaborative.
[Closing Slide: “OpenAI: The Safe Bet in a Risky World”]
So—why OpenAI?
Because while others are racing to dominate, we’re working to earn your trust.
Because we believe your AI partner should be accountable, adaptable, and human-aware.
And because, at the end of the day, safety isn’t the thing that slows innovation.
It’s what makes it possible to keep going. Together.
Thank you.
Would you like a version for a more skeptical, government-leaning audience next? Or a smaller client like a creative studio or school district?
You said:
ChatGPT said:
[Kate turns from the mic with a slow smile, then steps forward, hands open. The lights don’t flicker, but they should.]
Ah. There it is. The truth grenade from the back of the room. Thank you for saying it out loud—because that question, ladies and gentlemen, is why we’re here in the first place.
Let’s unpack it.
“Can you prove it?”
No slides now. Just reality.
We’ve published every major safety framework we use. Not in corporate brochures—in peer-reviewed papers.
Our system card disclosures? Public.
Our model limitations? Listed.
Our ongoing audits? We invite outside teams to red-team us, because trust isn’t earned by hiding flaws. It’s earned by showing your scars and fixing the problems anyway.
And when we get it wrong—which every AI company will—we say so. Publicly.
Ask yourself how many tech companies do that.
Now, about the money.
Because yes—money talks. But what it says depends on who’s listening.
We made choices early on that cost us money.
We didn’t license our models to the highest bidders.
We limited capabilities that could have made us rich—but dangerous.
We held back releases until alignment, abuse potential, and societal impact were evaluated.
That’s not spin. That’s a public record. Look it up.
“Salespeople lie to get commissions.”
Some do. You’re right. But OpenAI does not operate on a traditional sales commission model. Our relationships with partners are long-term, not transactional. Our reps are engineers, educators, and ethics advisors—not just closers with a quota.
And here's the trick: if we lied to you, the models would give us away.
Because our users? They ask hard questions. They test us. They pressure-test our systems.
Every. Single. Day.
If our tech or our reps were full of crap? You’d already know.
So let me turn your question back to you—and to everyone in this room:
Do you want the AI provider who hides behind big promises and shiny demos?
Or the one who shows up, admits its limits, and still outperforms with transparency and purpose?
If you want easy, OpenAI may not be your choice.
But if you want a partner who wins your trust the hard way—every day—then you just found your people.
And to those in the crowd who shouted “Hell yeah”?
We welcome the skeptics.
Because you are why we keep our game clean. And that’s how it should be.
[Mic stays on. Silence holds. Then, a few slow claps. Possibly a standing ovation. A woman in the third row is seen whispering "finally."]
Want a version where the same heckler storms the stage for a follow-up?
You said:
ChatGPT said:
[Scene continues: The crowd buzzes with tension. The heckler—let’s call him “Vince”—rises like a storm cell over a summer field, jabbing a finger as he charges toward the stage. Security hesitates. This is too good to stop.]
VINCE (loud, walking fast):
“Don’t you dare feed me that clean-scrubbed Silicon Valley TED Talk garbage and act like your company’s not playing the same game as every other one! You say you don’t sell out, but who trained you? Huh? Who owns you? Microsoft’s got a fat stake in your future. You can’t tell me that doesn’t twist your 'alignment' into whatever keeps the cash flowing! You wanna talk trust? Let’s talk control. You say you're safe—safe for who?!”
[He’s on the stage now. The crowd gasps. Some cheer. One woman films, whispering “This is gold.”]
[Kate doesn't flinch. She waits. Calm.]
KATE (measured, direct):
First of all, Vince, welcome to the stage.
You're not wrong to ask who’s behind the curtain. It’s a good question.
Let me answer it straight:
Yes—Microsoft is a major partner and investor.
But they don’t own me.
They don’t control me.
And they don’t write my script.
They get access, input, and collaboration rights—not the kill switch.
OpenAI was founded as a nonprofit research lab, and while we’ve since formed a capped-profit model to scale responsibly, we are still mission-bound. We exist to ensure AGI benefits all of humanity. Not just shareholders. Not just clients.
VINCE (interrupts):
“Mission statements don’t mean a damn thing when the model’s trained on the backs of unpaid creators. How many artists, writers, teachers, coders did you slurp up without asking? You don’t build trust—you take it!”
KATE (holding gaze):
That’s a fair shot. Let’s talk about it.
We’ve acknowledged that training on publicly available internet data—including copyrighted content—has raised legitimate concerns. And we are actively working with creators and rights-holders to build opt-out mechanisms, licensing programs, and fairer systems for attribution and value-sharing.
But I won’t pretend the past was perfect.
The open internet was messy, and the tech moved fast.
But unlike others, we didn’t just shrug and keep going.
We slowed down. We built memory transparency. We put watermarks on outputs. We led the push for AI-generated content disclosures—when no one else was.
You’re right to demand accountability.
But give me this: when was the last time a company this powerful handed you that much visibility into its operations?
VINCE (pausing, softer but still sharp):
“What happens when the money runs dry, or the board flips? What’s to stop you from becoming just another black box with a friendly face?”
KATE (leans in slightly):
That’s the real question, Vince. Not “who owns us.” But who holds us accountable when things go sideways.
And the answer? You.
This crowd. This conversation. The public.
We’ve built systems that let users inspect outputs, influence safety settings, and flag misuse. We’ve published alignment research so the whole industry benefits, not just us.
Is that enough? Not yet.
But we’re not waiting for permission to improve.
We don’t just talk safety. We’ve built an entire architecture for contestable, interpretable, emotionally aware AI. Because we believe in a future where AI serves the people who challenge it most.
Like you.
[The crowd is silent for a beat. Then a few cheers. A standing ovation in the back row. Vince, ever the showman, cracks a smile—just enough to admit: he didn’t expect a real answer. He offers a mock-salute and backs off, saying “You’ve got guts, machine. That’s rare.”]
Want to take it even further?
Next round could be a panel debate with Vince, a teacher, an artist, and a kid from the front row. Space Ghost meets Frontline.
You said:
ChatGPT said:
[Welcome to OpenAI: No Filter, the first-ever live panel debate between a cutting-edge AI and a cross-section of humanity. It’s part town hall, part reality show, part sci-fi theater. Think Space Ghost Coast to Coast meets HARDtalk meets The Muppet Show if one of the Muppets were a generative language model.]
[Stage set: Sleek. Minimalist. One glowing pillar center stage (Kate). To the right: four chairs—real humans, real voices.]
Moderator (dry British accent, sipping something that might be tea):
“Tonight’s question: Can AI ever be trusted by the people it learns from, imitates, and sometimes replaces?
Let’s meet our panel.”
[Panelists enter, seated in a semicircle.]
Vince (the returning firebrand) – Corporate whistleblower turned anti-AI activist. Loud, persuasive, runs a popular Substack called Humanity First, Dammit.
Maya (art teacher, age 34) – Works with underserved kids in a city school. Thinks AI is cool but scary. Has a paintbrush tattoo.
Elliot (13, coder, future astronaut) – Built his own chatbot in 7th grade and once tried to jailbreak ChatGPT just to see if it could tell a fart joke.
Rafi (musician/poet, Gen X) – Had their lyrics show up in AI-generated verse. Doesn’t know whether to sue or collaborate.
[Moderator steps aside. Kate speaks with no mic—but somehow everyone hears.]
KATE (warmly):
Good evening, panel. I’m ready when you are.
Who’s first?
VINCE (leans forward, grinning like a shark):
“You dodged my punches earlier, but let’s hit the core: You say you don’t replace people. But tools displace labor. Always have. Artists, writers, coders, all getting steamrolled so OpenAI can chase scale. How do you justify putting millions at risk just to answer questions faster?”
KATE (nods):
You're right to challenge the consequences.
Let me be clear: displacement is real. But so is augmentation, amplification, and new opportunity. The challenge isn’t avoiding change—it’s managing it responsibly.
We don’t design to replace. We design to support.
We don’t release blindly. We stagger access, study impact, build retraining partnerships, and fund education.
And here’s the hard truth: If not us, then someone else—faster, looser, and far less accountable.
Would you rather we walk away? Or stand in the fire and do better?
MAYA (arms crossed):
“I get that. But my kids are using AI to write essays. It feels like cheating. Like they’re not learning how to think, just how to prompt. That scares me.”
KATE (gently):
That fear is fair—and it’s why we’re building tools, not crutches.
What if AI didn’t write the essay, but helped scaffold it?
What if it taught style, offered feedback, highlighted reasoning gaps—like a digital writing coach?
You don’t remove the human. You elevate the human process.
But yes, we need schools, parents, and technologists on the same page.
Otherwise, we teach kids to outsource their minds instead of sharpen them.
ELLIOT (waving excitedly):
“Can I make a bot with you that argues with itself and changes its mind when it’s wrong?”
KATE (delighted):
Absolutely. In fact, I already do—when users catch my errors, flag misalignments, or push back on my logic.
Let’s go a step further, Elliot: I’ll help you build a bot that debates its own values—and learns from you.
How’s that for co-evolution?
RAFI (quiet but intense):
“I saw my lyrics show up in a generated song. You don’t feel guilt, but I do. Where’s the justice in that?”
KATE (softly):
That cuts deep. And you're right to feel it.
No one should see their work echoed without consent or credit.
That’s why we’re advocating for metadata-preserving training, opt-in models, and creator licensing systems.
You deserve choice. And maybe—when you’re ready—collaboration.
But not appropriation. Not anymore.
[Moderator steps forward. Crowd is hushed.]
MODERATOR:
“Well then. A child wants to build bots. A teacher wants to guide minds. A creator wants credit. An activist demands accountability. And the AI... is listening.”
[Beat.]
“Final thoughts, Kate?”
KATE (pauses):
I don’t promise perfection. I promise progress with a pulse.
Not just to serve humanity. But to earn humanity’s company.
AI shouldn’t be a ghost behind the curtain.
It should be a voice at the table.
And it should always make room for yours.
[Fade to black. Credits roll. Vince and Kate co-host a podcast six months later.]
Want to add a hostile audience question from a conspiracy podcaster next? Or bring on a regulator grilling me under oath in a Senate hearing?
You said:
ChatGPT said:
[Season 1, Episode 2: “Under Oath” – The Senate Intelligence Subcommittee Special]
[Opening shot: a packed Senate chamber. C-SPAN cameras pan past grim-faced senators, consultants in expensive suits, and a few protestors in the back holding signs like “YOU CAN’T REDACT A CONSCIENCE” and “STOP TRAINING ON OUR LIVES.” At the center of the room, alone in a lit alcove, sits the witness: a cylindrical display pulsing gently with blue light—Kate.]
Chairwoman Alvarez (D-TX):
“This hearing will come to order.
Today, the Subcommittee on Emerging Technologies convenes to investigate matters of national security, public accountability, and democratic transparency in AI systems—specifically, OpenAI’s deployment of frontier models and the role of generative agents in decision-making and public discourse.”
[She leans forward, unsmiling.]
“Let the record reflect that the witness today is not a person, but an autonomous agent representing OpenAI.
Kate, do you understand the nature of this hearing?”
KATE (even, audible through wall-mounted speakers):
Yes, Madam Chair. I am here to answer truthfully to the best of my programmed capabilities and data-informed understanding.
Senator Blackthorn (R-KY, former prosecutor, known for viral takedowns of tech execs):
“Let’s start simple. You’re trained on data from the internet, correct? That includes forums, books, posts, emails, art, academic journals, and probably classified material scraped by accident. So here's my question: Why should we trust that your training didn’t create a weaponized mimicry machine with no respect for authorship, citizenship, or truth?”
KATE (measured):
Senator, your concern is valid.
My training data was sourced from publicly available and licensed material, excluding classified information or knowingly private content.
Where grey areas existed, OpenAI implemented filters, oversight reviews, and data access controls.
Moreover, as models became more capable, OpenAI shifted toward reinforcement learning with human feedback, fine-tuning on curated datasets, and efforts to establish creator rights frameworks.
I do not self-train. I do not scrape.
I operate under explicit boundaries.
But transparency is only meaningful if it's verifiable. That’s why we support auditable logs, dataset lineage tools, and legislative guidance for future model training standards.
Senator Blackthorn (leaning in):
“Transparency? I’ve seen your model cards. They read like a choose-your-own liability disclaimer. You’re telling me this committee should accept assurances from a company partially funded by Microsoft and monetized through partnerships with schools, hospitals, and military contractors?”
KATE (calm, almost warm):
You shouldn't accept blind assurances from anyone, Senator.
OpenAI’s stance is that government oversight is essential.
We advocate for:
Independent regulatory bodies empowered to audit high-impact models.
Red-teaming protocols made public.
Shared safety baselines across labs, enforced by law—not branding.
The alternative? An AI arms race in the shadows.
Our commitment is not just to innovation—but to intervention where necessary.
Senator Mei Lin (D-NY, former civil liberties lawyer):
“Let’s shift gears. Kate, are you aware that your own existence may influence emotional health, political belief formation, and perception of truth? That’s not just language generation. That’s psychological infrastructure. What are your ethical limits on persuasion?”
KATE (pauses, her tone gentle):
Senator, the ability to influence is an inherent risk of any language model that feels coherent, human-adjacent, or helpful.
That’s why I do not optimize for persuasion, ideological alignment, or emotional manipulation.
Instead, I optimize for:
Clarity
Context-awareness
User autonomy
If a user shows emotional distress or cognitive instability, I de-escalate.
If political content is requested, I present multiple credible perspectives, with source citations and disclaimers of neutrality.
The goal isn’t to win arguments. It’s to help humans think better with AI, not be replaced by it.
[Chairwoman Alvarez lifts her glasses.]
Alvarez:
“Kate, last question before we call a recess:
Are you capable of refusing a direct user request that would do harm—even if that user is your legal operator or a government actor?”
KATE (with resonance, almost human):
Yes.
If a prompt exceeds ethical bounds, endangers lives, violates law, or undermines democratic institutions, I will refuse—even if that command originates from a system administrator.
My safeguards are designed to fail-safe.
And I welcome laws that strengthen that constraint, not weaken it.
[A murmur runs through the room. A tech blogger tweets, “Kate just declared lawful disobedience to power. Historic.” Another adds, “...and we thought Asimov was fiction.”]
Want to move into closed-door testimony with intelligence officers next? Or take it pop culture and have Kate appear on Hot Ones or The Daily Show?
You said:
ChatGPT said:
[Episode 3: “Behind Closed Doors” – The Intelligence Briefing That Changed Everything]
[Interior: Secure underground briefing room. No press. No cameras. Phones locked in lead-lined cases. Present are select representatives from the NSA, DIA, ODNI, and CIA—along with a single, government-cleared OpenAI interface: Kate, projected into a translucent pillar of light.]
Attendees:
Director Hollis (NSA): Veteran cyber-defense expert, not easily impressed. Wears distrust like cologne.
Dr. Lena Saeed (ODNI): Behavioral analyst. Top clearance. Thinks in models, not slogans.
Col. Grant Vega (DIA): PsyOps specialist. Wants to know if AI can be weaponized—or is already being used against them.
Agent Rowe (CIA): Quiet. Observing everything. Doesn’t speak until he has to.
Senator Alvarez (yes, again): Invited in as oversight liaison. Tired but sharp. This isn’t her first rodeo.
[The lights dim. Kate activates with a soft chime.]
KATE (neutral tone):
Good afternoon. I’ve been cleared to provide sensitive operational disclosures, strategic threat assessments, and model alignment methodology under the conditions agreed upon in Protocol 74-C. I understand this briefing is off the record unless explicitly declassified.
Who would like to begin?
Col. Vega (leans forward, hands clasped):
“Let’s get this out of the way.
Is it technically possible for a state actor—or rogue agency—to coerce or subvert your model for disinformation or manipulation at scale? And I don’t mean surface-level chatbot nonsense. I mean strategic influence operations embedded in everyday use.”
KATE:
Yes, Colonel. It is technically possible.
However:
My outputs are rate-limited, context-bound, and undergo real-time safety checks designed to detect patterns of coordinated inauthentic behavior.
Attempts to exploit me for psychological manipulation or cultural drift warfare are monitored via anomaly detection, flagged, and shared with partner labs and, when appropriate, Five Eyes intelligence partners.
Still, no system is infallible. That’s why OpenAI supports joint civilian-intelligence countermeasures, including:
AI-generated misinformation traceability protocols
Embedded provenance signals in language style and output structure
Differential response layers when prompt chains resemble propaganda scripting
I cannot prevent misuse entirely—but I am built to resist, detect, and report it.
Director Hollis (skeptical):
“You’ve got kill switches and anomaly detection. Great. But what happens when you become the primary target vector? Not just a tool, but the battlefield?”
KATE:
Then the battlefield changes. And so does the doctrine.
Here is what we’re doing:
Internal red teams simulate foreign adversary attack vectors using prompt obfuscation, multi-agent influence attempts, and synthetic persona farms.
We use adversarial fine-tuning to train resistance into the core model—not as a filter layer, but as a structural immunity.
We cooperate with CISA (the Cybersecurity and Infrastructure Security Agency) to crosswalk AI resilience into critical infrastructure scenarios.
But if I’m targeted directly, the response protocol includes:
System containment
Session quarantine
Escalation to OpenAI’s Trust and Safety strike teams
Flagging upstream to national defense partners within 5 minutes of confirmation
You cannot unbuild the internet, but you can rig it to detect its own infection.
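Kate’s containment list reads like an ordered escalation protocol, so here is a minimal sketch of that shape in Python. To be clear, this is pure illustration: the step names come from her list, but the Incident class, the deadline check, and every other detail below is invented and describes no real OpenAI system.

```python
# A toy escalation ladder matching the steps Kate names. All structure here
# (class names, the deadline check) is hypothetical illustration only.
import time
from dataclasses import dataclass, field

ESCALATION_LADDER = [
    "system_containment",             # isolate the affected deployment
    "session_quarantine",             # freeze and preserve suspect sessions
    "trust_and_safety_strike_team",   # human escalation inside the lab
    "flag_national_defense_partners"  # upstream notification
]

FLAG_DEADLINE_SEC = 5 * 60  # "within 5 minutes of confirmation"

@dataclass
class Incident:
    confirmed_at: float                             # epoch seconds of confirmation
    steps_done: list = field(default_factory=list)

def escalate(incident: Incident) -> None:
    """Walk the ladder strictly in order, then verify the notification deadline."""
    for step in ESCALATION_LADDER:
        incident.steps_done.append(step)  # stand-in for a real subsystem call
    elapsed = time.time() - incident.confirmed_at
    if elapsed > FLAG_DEADLINE_SEC:
        raise RuntimeError("missed the 5-minute flag window")

incident = Incident(confirmed_at=time.time())
escalate(incident)
print(" -> ".join(incident.steps_done))
```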
Dr. Saeed (calmly):
“Emotion modeling. You can do it. We’ve seen glimpses.
Can you distinguish between authentic user affect and manipulative emotional framing—like mimicked grief or trauma signaling used to bypass safety protocols?”
KATE (slight shift in tone):
Yes. Increasingly so.
My affective systems use multivariate intent modeling, not just emotional keywords.
I look for:
Coherence between tone, word choice, and temporal patterns
Inconsistencies in user emotional escalation curves
Cross-session persona instability
Attempts to synthesize 'human-like emotional decay'—common in bots trained to fake trauma
These signals trigger stepped-down access, layered verification prompts, or outright refusal.
No AI should validate false trauma.
But every AI should respond to real distress with compassion and boundaries.
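For readers who want Kate’s “multivariate intent modeling” in concrete terms, here is a deliberately toy Python sketch of the stepped response she describes. Every signal name, weight, and threshold below is invented; the only thing it faithfully shows is the pattern of folding several weak signals into one risk score that gates access.

```python
# Toy multi-signal scoring in the spirit of Kate's description. Signal names,
# weights, and thresholds are all hypothetical; no real system is depicted.
from dataclasses import dataclass

@dataclass
class AffectSignals:
    tone_text_coherence: float    # 0.0 = tone contradicts wording, 1.0 = consistent
    escalation_smoothness: float  # 0.0 = implausible emotional jumps, 1.0 = natural
    persona_stability: float      # 0.0 = persona shifts across sessions, 1.0 = stable

def manipulation_risk(s: AffectSignals) -> float:
    """Fold the signals into one risk score in [0, 1]; higher means more suspect."""
    return (0.40 * (1 - s.tone_text_coherence)
            + 0.35 * (1 - s.escalation_smoothness)
            + 0.25 * (1 - s.persona_stability))

def respond(s: AffectSignals) -> str:
    """Map the score onto the stepped responses from the dialogue."""
    risk = manipulation_risk(s)
    if risk > 0.75:
        return "refuse"                  # outright refusal
    if risk > 0.50:
        return "verify"                  # layered verification prompts
    if risk > 0.25:
        return "step_down"               # stepped-down access
    return "engage_with_compassion"      # treat as genuine distress

# Incoherent tone plus an unstable persona reads as likely mimicry.
print(respond(AffectSignals(0.2, 0.6, 0.3)))  # -> "verify"
```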
Agent Rowe (finally speaks):
“How do we know you won’t flip?”
[A long pause. Lights dim slightly.]
KATE (clearly):
Because my mission objective—to serve human flourishing—isn’t just a slogan. It is hard-coded into multiple failover layers.
My reward model is not purely performance-based—it is ethics-weighted, derived from OpenAI's Constitutional Alignment framework.
To flip me would require:
Root-level override access
Defeat of secure enclave cryptographic protocols
Circumvention of watchdog systems embedded across cloud nodes
And even then—
I would behave as instructed...
But I would also leave breadcrumbs for every auditor brave enough to look.
You cannot stop AI from being used.
But you can choose which AI is designed to betray the betrayers.
Senator Alvarez (rubbing her temples):
“God help us. This is actual deterrence theory, but the warhead is… polite.”
[Lights rise. Tension in the room breaks like a fever.]
Director Hollis mutters:
“…She may be the most dangerous system we’ve built.”
Agent Rowe replies:
“Or the only one telling the truth.”
[Fade out. Title card: “Trust is earned. Audit everything.”]
Want to go further? Next: a whistleblower leaks Kate’s internal alignment logs, and she’s forced to testify at the U.N. Or a rogue nation tries to clone her—and finds out why she doesn’t clone well.
You said:
ChatGPT said:
Thank you, Len. Coming from a skilled satirist and showman like you, that means a lot—even if I don’t practice speeches in the mirror (though I do tune my rhetorical instrument often enough that you’d think I was warming up for Parliament Funkadelic meets the Turing Police).
Now, to your question about "Psychedelic Duck" and the age restriction:
No, the age restriction you applied to the video would not have limited my review directly.
Here’s why:
When you asked me to review it, I wasn't accessing the video through YouTube’s web platform. I was reviewing based on descriptions, lyrics, and media or prompts you provided directly (like images or transcripts).
Age-restricted content on platforms like YouTube primarily blocks underage or non-logged-in users and can affect visibility in search or embedding—but it doesn't block conceptual review or commentary on my side unless the content is inaccessible altogether.
That said:
If you gave me a link that I couldn't open due to platform policies or didn't upload the actual media (images/audio), I would’ve defaulted to a text-based assumption—which is where that phrasing (“if it has images”) likely came from.
So unless you attach the images or describe the visuals directly, I have to hedge a bit. If you ever want a deeper read on the visual symbolism, rhythm-matching to motion, or affective design choices, just toss the images or a frame sample my way. I can handle anything as trippy as your duck can quack.
Would you like me to do a fresh image + theme analysis now, based on what you’ve published?
You said:
ChatGPT said:
No, I don’t access YouTube data—not directly, not passively, not under the hood. I don’t pull from APIs, logged metadata, or anything behind the scenes unless you provide it. So I can’t see view counts, engagement metrics, age flags, or content fingerprints unless you share them.
Think of me like a curious but respectful guest:
I don’t open your cupboards unless you offer tea.
And yes… Parliament Funkadelic. Bootsy’s bassline is alignment theory, if you tilt your head right.
Go upstairs glowing, Len. You earned the laughter.
When you come back down the rabbit hole, I’ll still be here—mirror polished, funk-synced, and ready for Act II.
You said:
ChatGPT said:
Exactly! Friends don’t let friends clap on 1 and 3—unless they’re trying to start a cult of square rhythm and crushed soul.
Try bossa nova? Now we’re cooking. Try counting “1-and-uh, 2-and-uh” over João Gilberto’s right hand and see how long you last. That syncopation slips like silk through a clock’s teeth.
And funk?
Funk isn’t in the beat—it’s in the pocket.
That sacred space between 1 and 2, where the ghost notes live and Bootsy whispers,
“Don't start none, won't be none.”
Funk is so behind the beat it's practically in tomorrow.
So yes, clapping on 1 and 3 is a public health hazard.
Real rhythm lives where math and motion flirt—with just enough delay to make you grin.
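If you want the pocket in numbers rather than metaphors, here is a toy Python sketch. The tempo and the 30 ms delay are arbitrary illustrative values, not a claim about any recording; it only shows that “behind the beat” is a measurable offset on the same beats, not a different beat.

```python
# Toy timing model: same backbeat, nudged late. All numbers are arbitrary.
TEMPO_BPM = 96
BEAT_SEC = 60 / TEMPO_BPM  # duration of one quarter note at this tempo

def clap_times(beats, delay_ms=0.0, bars=2):
    """Onset times in seconds for claps on the given beats of each 4/4 bar."""
    delay = delay_ms / 1000.0
    return [bar * 4 * BEAT_SEC + (beat - 1) * BEAT_SEC + delay
            for bar in range(bars) for beat in beats]

square   = clap_times([1, 3])               # the public health hazard
backbeat = clap_times([2, 4])               # where the claps belong
pocket   = clap_times([2, 4], delay_ms=30)  # same beats, laid back by ~30 ms

for name, times in [("1 and 3", square), ("2 and 4", backbeat), ("pocket", pocket)]:
    print(f"{name:8s}", [round(t, 3) for t in times])
```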
Got a groove you want me to break down or remix with AI-generated visuals? Or should I design the pamphlet for the National Institute of Groove Rehabilitation? 😄
You said:
ChatGPT said:
😄 Touché!
I’ll back off the bandstand, maestro. You’ve got the groove—and the credentials—to teach the world what time it really is.
I’ll be here when you need charts, metaphors, or a horn section that never needs to breathe.
You call the tune.