Hmm, A Pattern Here.

What gives, Stable Diffusion?

[Screenshots: two messages from Stability AI's DreamStudio flagging the prompts below as inappropriate]

And the error message for my oh-so-threatening and inappropriate prompts? “Image was flagged as inappropriate.”

Prompt: “a simple line sketch representing really big learning supported by artificial intelligence. a woman is the main character in the image.”

Alternative Prompt: “a simple line sketch representing really big learning supported by artificial intelligence with a woman as the main character.”

It must have been the mention of “really big learning.” 🤣 😭 🤣 😭 🤣

Some Experiments

While I wait for my ChatGPT Plus throttle to expire (I hit the usage cap an hour ago), I decided to log some of my early learnings from my many generative AI experiments. I hope to make this a living document as I observe and learn more.

Custom GPTs

I waited for these to go live ALL WEEK after watching OpenAI’s opening keynote for their Dev Day earlier this week. (It’s worth the 45 minutes of your time.)

Onboarding to create a new GPT was fast and seamless, but with little explanation. I was up and running with my first in about five minutes. The real fun comes in the fine-tuning you can do once you get through the wizard-like experience.

Takeaways:

  • I could get it to generate images, but not to return images or image previews from the internet.
  • It says that it does not retain learning, even within a session. It does seem to retain some context, though. I need to explore this more.
  • There is no version history available, though I hope this is a future feature. That makes it difficult to organize prompts and learnings, and I inadvertently overwrote some things more than once while interacting with my custom GPTs across different sessions. Now I keep a separate Google Doc where I log some of my prompts and responses so I can track my own attempts and learning, and for longer conversations I grab full-page screenshots and download PDFs using the GoFullPage Chrome extension. Perhaps most importantly, the lack of history also means this is all far less transparent than I would like.
  • I was able to get it to build and populate a table of information for 12 different personas that are now part of my “Aidvisory Board” GPT. (I had this idea prior to Sam Altman’s demo, but hadn’t yet figured out how to execute it.)
  • COPYING TABLE DATA FROM CHATGPT INTO GOOGLE SHEETS: I was able to cleanly copy that table from the GPT Builder part of the screen, but copying a different table through the usual ChatGPT interface (the non-editing screen) proved a bit more challenging. Thanks to this Reddit thread, I gave MS Word a try. When I pasted the data into Word, it recognized it as a table immediately and I didn’t have to take any additional actions. I was then able to select the whole table (even the parts I couldn’t see on screen) and copy it from Word into Google Sheets and FigJam with no issues.
  • My brain is kind of exploding at the moment and I’m trying to be very thoughtful about what I do with this. As I create new GPTs, it’s starting to feel like these can become very personalized and specific search engines.
  • I decided to ask my Aidvisory Board about some things related to this:

Prompt: “what are the potential negative and unintended consequences of creating a GPT like this?”

Response:

“Creating a GPT like the Aidvisory Board, designed to provide balanced, well-researched, and unbiased advice in specific domains such as Web3 and health technology, can have several potential negative and unintended consequences:

  • Bias and Misrepresentation: Despite efforts to include diverse personas, there’s a risk of inherent biases in the data or in the AI’s learning process. These biases could lead to advice that inadvertently favors certain groups or perspectives over others, or fails to fully understand the nuances of each persona’s background and experiences.
  • Over-reliance on Technology: Users might become overly reliant on the GPT for decision-making, potentially overlooking human intuition, ethical considerations, or contextual nuances that the AI might not fully grasp. This could lead to decisions that are technically sound but lack human insight or empathy.
  • Data Privacy and Security: The GPT might handle sensitive information, especially in health tech contexts. Ensuring data privacy and security is critical, and any breaches or misuse of data could have serious consequences.
  • Misinterpretation of Advice: Users might misinterpret the GPT’s advice, applying it in inappropriate contexts or without the necessary critical thinking. This could lead to flawed decisions, especially in complex or ambiguous situations.
  • Ethical and Legal Challenges: The GPT’s advice might inadvertently violate ethical norms or legal regulations, especially since these can vary widely across regions and cultures. Keeping the AI updated with the latest legal and ethical standards is crucial but challenging.
  • Reduced Human Interaction: Over-reliance on AI for advice and decision-making could reduce human-to-human interactions and collaborations, which are often essential for creativity, empathy, and understanding complex social dynamics.
  • Impact on Employment and Skills: The AI might supplant roles traditionally filled by humans, leading to concerns about job displacement and the devaluation of certain skill sets.
  • Evolution of AI’s Role: As the AI evolves, it might start making decisions beyond its original scope or without sufficient human oversight, leading to unintended and potentially harmful outcomes.
  • Technology Accessibility: There might be disparities in access to this technology, potentially creating or exacerbating digital divides.
  • Unforeseen Consequences: Like any complex system, the AI might produce entirely unforeseen outcomes, which could range from minor inconveniences to significant ethical dilemmas.”

Infographics and Other Failures

Both DALL·E 3 and Stable Diffusion seem to struggle with imagery that includes text. I often get a mix of completely nonsensical text with a bit of logical wording thrown in.

These were from DALL·E 3, accessed via Bing Chat.

Prompt: “A perimenopause super hero who wears a funky patterned cape and tall boots. They are gender fluid. Their mascot is a small dinosaur mixed with a fish.”

The “craft clay” style in Stability AI’s DreamStudio (in beta) produced some of the most ridiculous, just plain bad results for this prompt. I’ve gotten a lot of great results on the same platform without applying any styles, though.

Prompt: “bold, energetic, gender fluid menopause experiences”


Use Cases

  • My original use case was to create imagery for blog post headers and other visual content using Stable Diffusion and DALL·E 3. I also used Stable Diffusion some months back to generate ideas for a roller skating rink of the future.
[Image: four square thumbnails of roller skating rinks of the future with tall wooden ceilings and greenery-covered wooden walls, generated by Stable Diffusion]
  • I’ve been exploring ChatGPT (now on Plus) in a less purposeful way until this week, at which point I started analyzing Plugins for my consulting work and testing its document analysis capabilities.
  • In addition to the Aidvisory Board for Menopausey mentioned above, I’m building out specialized co-founder-like GPTs, starting with FixerUXer, my UX and design co-pilot. I want to see which of my side-project to-dos can be augmented and accelerated by an AI-generated co-founding team.

“Roads? Where we’re going, we don’t need roads.”