Stable DiNEWSion - Stable Diffusion News

Season 1, Episode 1

11.03.23: Biden's Executive Order, 3D Diffusion, and AI Barenaked Ladies


On today's episode, I'll dive into the Biden administration's new executive order on artificial intelligence and share how Google Brain co-founder Andrew Ng is fighting for the little guy. We'll also take a closer look at Stability AI's new generative 3D animation tech, Nvidia's GPU memory updates, and a groundbreaking paper on object customization. And in the world of AI allegations, we'll discuss a legal setback for artists in the copyright infringement case. Plus, my favorite 90s band just released a Stable Diffusion-powered music video! Put away your prompts and listen up. Stable DiNEWSion for the week of November 1st, 2023 starts now.

Support us on Patreon and get access to our Everly Heights Custom LoRAs for SDXL: https://www.patreon.com/everlyheights

📌 Links:
Watch this episode on YouTube
What's in Biden's AI executive order — and what's not
Google Brain cofounder claims companies stoke AI fears to capture the market
Meta’s Yann LeCun calls for more openness in AI development
Stability AI Previews New Features and Tools
NVIDIA fixes Stable Diffusion Bug In Latest Drivers
CustomNet: Zero-Shot Object Customization With Variable Viewpoints
FreeU Web Extension for AUTOMATIC1111
Amazon Rolls Out AI Product Shots
RTX 3090s on sale @ Newegg
Anderson v. Stability AI Ltd. Twitter Thread from Franklin Graves
Anderson v Stability AI Ltd. Artists Controversy
Barenaked Ladies Release AI-Powered "One Week" Anniversary Video
Subscribe to my YouTube channel for new episodes of Building Dreams.


Get $10 off StreamYard by clicking here
  • (00:00) - Intro
  • (01:28) - What's in Biden's AI executive order — and what's not
  • (03:11) - Google Brain cofounder claims companies stoke AI fears to capture the market
  • (04:46) - Meta’s Yann LeCun calls for more openness in AI development
  • (06:22) - Patreon Ad Read
  • (07:14) - Stability AI Previews New Features and Tools
  • (09:26) - NVIDIA fixes Stable Diffusion Bug In Latest Drivers
  • (10:36) - CustomNet
  • (11:47) - FreeU Web Extension for AUTOMATIC1111
  • (12:59) - Amazon Rolls Out AI Product Shots
  • (14:12) - RTX 3090s on sale @ Newegg
  • (14:44) - Anderson v. Stability AI Ltd. Updates
  • (16:58) - Barenaked Ladies Release AI-Powered "One Week" Anniversary Video
  • (18:48) - Outro

What is Stable DiNEWSion - Stable Diffusion News?

Welcome to Stable DiNEWSion, breaking down the latest stories in the world of Stable Diffusion and Generative AI. We fend off the fuss with facts. Hosted by Bill Meeks, writer, AI artist, and founder of Everly Heights Productions.

Unknown: On today's episode, I'll dive into the Biden administration's new executive order on AI and share how Google Brain co-founder Andrew Ng is fighting for the little guy. We'll also take a closer look at Stability AI's new generative 3D animation tech, NVIDIA's GPU memory updates, and a groundbreaking paper on object customization. And in the world of AI allegations, we'll discuss a legal setback for traditional artists in the copyright infringement case. Plus, my favorite nineties band just released a Stable Diffusion-powered music video. Put away your prompts and listen up. Stable DiNEWSion for the week of November 3rd, 2023, starts now.

This episode is brought to you by the Everly Heights Building Dreams Patreon. Join the artistic journey and be part of something very special by supporting the project at patreon.com/everlyheights. And by StreamYard. Use it like I am to bring your dreams to life. Get $10 in free credit by going to streamyard.everlyheights.tv to sign up.

Welcome to Stable DiNEWSion, where Bill Meeks breaks down the latest stories in the world of Stable Diffusion and generative AI. Get ready to dive into the latent space. Now, here's your host, Bill Meeks.

Welcome to Stable DiNEWSion, where we break down the latest stories in the world of Stable Diffusion and generative AI. I'm Bill Meeks. Okay, let's talk about the big stories in the Stable Scoop.

The Biden administration's executive order this week on artificial intelligence brings some government oversight to advanced AI projects in the U.S., though it lacks licensing requirements and mandates for disclosing AI model details. It's seen as an incentive-based approach to regulate AI and compete with international counterparts. Vice President Kamala Harris will also engage in dialogue with China on AI governance. The order focuses on testing requirements for high-risk AI models, but doesn't specify recourse for unsatisfactory results. Other provisions include detecting AI-generated content, designating chief AI officers, and enforcing consumer protection laws. However, the order omits rules like licensing, disclosure of model details, and guidance on AI-related intellectual property. Tech companies seem receptive to this framework. Legal challenges may arise, but without unified congressional action, this executive order is likely to be the primary AI regulation in the U.S. for the foreseeable future. Now, a lot of people were worried about this, but it seems pretty measured from what I saw. It seems like this is focused more on military applications of AI, with an eye towards maybe evaluating more creative models like Stable Diffusion in the future. Oh, trivia: apparently Biden was inspired by the AI-laden plot of the latest Mission: Impossible movie when drafting this executive order.

Google Brain co-founder and AI expert Andrew Ng has voiced concern that major tech companies are amplifying fears about AI risk to quash competition and advocate for stringent regulation. Ng argues that some tech giants aim to avoid competition with open-source AI by sowing concerns about AI causing human extinction, using these fears to lobby for legislation that could harm open-source AI initiatives. Prominent figures in AI have previously likened AI risk to nuclear war and pandemics, calling for swift regulation. Ng stresses that regulations should be thoughtful to avoid stifling innovation. Now, I agree with Ng here.
It feels like bigger operations like Adobe and OpenAI are trying to enact some regulatory capture. Regulatory capture, according to Wikipedia, is when a political entity, policymaker, or regulator is co-opted to serve the commercial, ideological, or political interests of a minor constituency, such as a particular geographic area, industry, profession, or ideological group. In this case, the entrenched tech industry. Basically, a powerful few swoop in and influence new laws in their favor, shutting out competition in the process. Now, I'm big on the open-source ethos, so I hope the high muckety-mucks the government offers a seat at the table don't shut out projects like Stable Diffusion from building tools for all of us.

On the same day, the UK hosted the AI Safety Summit, bringing together global corporate and political leaders. A letter published by Mozilla and signed by over 70 individuals emphasized the need for a more open approach to AI development. The letter underscores the critical juncture in AI governance and advocates for embracing openness, transparency, and broad access to mitigate potential AI-related harms. The open letter argues that increasing public access and scrutiny makes technology safer, and it calls for more openness in AI development. Notable signatories include Meta's Yann LeCun, Hugging Face co-founder Julien Chaumond, the previously mentioned Andrew Ng, and Linux Foundation CTO Brian Behlendorf, among others. The letter identifies three key areas where openness can enhance safe AI development: promoting independent research and collaboration, increasing public accountability, and reducing barriers to entry for newcomers in the AI field. It advocates that openness and transparency are vital components in achieving objectives related to safety, security, and accountability in AI development. Now, it's great to see such a diverse group of AI tech figures putting their voices behind keeping this tech open and free. You love to see it.

Okay, I have plenty more Stable Diffusion news lined up here in the doc, but first, a quick word from our sponsor: us. We'll be right back.

This episode is brought to you by the Everly Heights Building Dreams Patreon. At everlyheights.tv, Bill Meeks teaches you how to use the latest AI tech as he uses it himself to produce animated shows set in the fictional town of Everly Heights, Ohio. Support Bill on Patreon to gain access to an array of exclusive benefits, including his custom AI tools and models, behind-the-scenes content, and engaging Stable Diffusion Q&A sessions. Join the artistic journey and be part of something very special by supporting the project at patreon.com/everlyheights.

Okay, let's examine the latest breakthroughs in AI tech in a little segment I like to call Stable Surgeon. Watch out, Doctor, a stable surge is coming your way.

Stability AI, the makers of Stable Diffusion, have unveiled their latest advancements in text-to-image products, offering private previews of upcoming business offerings, including enterprise-grade APIs and enhanced image capabilities. These updates underscore Stability's commitment to empowering creative storytellers like me with tools to bring ideas to life while enhancing their core product: beautiful images. And now Stability is expanding into 3D imagery, too. Among the new tools introduced is Sky Replacer, ideal for real estate and various other industries, enabling users to easily alter the color and aesthetics of the sky in their photos.

Additionally, Stability is offering a private preview of Stable 3D, simplifying 3D content creation for graphic designers, digital artists, game developers, animators like me, and, you know, anyone else who wants to mess around with this. The tech allows the generation of draft-quality 3D objects with ease. Finally, the Stable Fine-Tuning private preview is going to empower enterprises and developers to fine-tune images and styles swiftly, catering to the entertainment, gaming, advertising, and marketing industries. Those interested in exploring these cutting-edge features and becoming exclusive testing partners during this initial phase can visit Stability AI's website. Now, as somebody using Stable Diffusion to make an animated series, and somebody who just finished training a prop turnaround model, generating 3D objects that I can position and light however I want in Blender sounds super appealing to me. I hope this makes its way to the open-source versions. It's a little unclear from the press release whether it's, you know, a corporate product or whether we're going to get to play with it too, and it also remains to be seen if the tech will run on consumer hardware like mine. Now, the Sky Replacer is cool, but not really that novel. You've been able to do that with, you know, seven or eight clicks in Photoshop for a couple of years now. I see this as Stability definitely upping their game as far as professional tools go.

NVIDIA's driver update 536.40 introduced a method to address GPU memory exhaustion by allowing applications to use shared memory, or the RAM in the computer, when needed. This feature prevents the crashes that occurred when GPU memory ran out, although it may lead to reduced application speeds when close to maxing out GPU memory. Stable Diffusion, which often requires nearly six gigabytes of GPU memory, triggers this mechanism for users with six gigabytes of GPU memory, resulting in reduced performance. In driver version 546.01 and later, NVIDIA has added the option to disable this shared memory fallback. This change should stabilize performance, albeit with the risk of crashing when using settings that demand more GPU memory. A guide is provided to help users make the adjustment. Now, I don't want to install this because I'm in the middle of production on a cartoon and I need my GPU, but I do have the driver version that uses RAM as a backup, and it's pretty handy, especially working with the larger and more intense SDXL models.
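If you're on one of those six-gigabyte cards, here is a minimal sketch (not from the episode) of how you might check free VRAM with PyTorch and opt into CPU offloading in the Hugging Face diffusers library, rather than leaning on the driver's shared-memory spillover. The 6 GB threshold, the checkpoint, and the prompt are illustrative assumptions.

```python
# Minimal sketch: check free VRAM and pick a low-memory strategy instead
# of relying on the driver's shared-memory fallback. Assumes an NVIDIA
# GPU plus PyTorch, diffusers, and accelerate installed; the 6 GB
# threshold, checkpoint, and prompt are illustrative assumptions.
import torch
from diffusers import StableDiffusionPipeline

pipe = StableDiffusionPipeline.from_pretrained(
    "runwayml/stable-diffusion-v1-5",
    torch_dtype=torch.float16,
)

free_bytes, _total_bytes = torch.cuda.mem_get_info()  # free / total VRAM in bytes
free_gb = free_bytes / 1024**3

if free_gb < 6:
    # Keep only the submodule currently running on the GPU;
    # everything else waits in system RAM between steps.
    pipe.enable_model_cpu_offload()
else:
    pipe = pipe.to("cuda")

image = pipe("a cartoon main street in a small Ohio town at dusk").images[0]
image.save("everly_heights_street.png")
```

enable_model_cpu_offload() trades speed for a much smaller VRAM footprint, which is roughly the same trade-off the driver fallback makes, just under your control.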
A groundbreaking paper titled "CustomNet: Zero-Shot Object Customization With Variable Viewpoints in Text-to-Image Diffusion Models" is set to be released this week by a team of researchers. The white paper introduces CustomNet, an innovative method that enables zero-shot customization of images while providing explicit control over viewpoint, location, and background. This approach aims to address limitations in existing object customization techniques, such as time-consuming optimization, identity preservation, copying and pasting, or training your own model for prop turnarounds like I did. Additionally, it offers location and background control through textual descriptions or user-defined images, making it a versatile solution for real-world objects and complex backgrounds. Just to note, CustomNet has not been implemented into AUTOMATIC1111 or ComfyUI yet. This feels kind of in the same vein as the model I mentioned a little bit ago. I will be playing with it, though, when it makes its way to AUTOMATIC1111.

Speaking of AUTOMATIC1111 (I think I put enough ones there), the AUTOMATIC1111 FreeU extension was updated to FreeU version 2 this week. FreeU is a framework that enhances the quality of images and videos generated from textual descriptions by optimizing the denoising process in U-Net architectures. FreeU modifies how much two different parts of the diffusion U-Net contribute to the final result. One part is for removing noise and making images clearer, and the other adds fine details to make them look more interesting or cohesive. By finding the right balance between these two parts, FreeU makes the tool work better without needing to retrain your models, like finding the perfect mix of ingredients to make a dish taste just right. To use a dad term, it's getting a free lunch. Now, I haven't tried any of the FreeU stuff yet because I did see some dips in image quality in the initial samples from version one, but v2 looks a lot better. I'm keeping an eye on this and might do a quick overview of the extension on my YouTube channel later this week.
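For anyone who wants to experiment with FreeU outside of the AUTOMATIC1111 extension, the Hugging Face diffusers library exposes it directly on its pipelines. Below is a minimal sketch; the s1/s2/b1/b2 values are the ones commonly suggested for SD 1.5 and are an assumption here (SDXL typically uses different ones), as are the checkpoint, seed, and prompt.

```python
# Minimal sketch: trying FreeU through Hugging Face diffusers.
# Assumes diffusers >= 0.22 and an NVIDIA GPU; the checkpoint, prompt,
# seed, and the s1/s2/b1/b2 values are illustrative assumptions.
import torch
from diffusers import StableDiffusionPipeline

pipe = StableDiffusionPipeline.from_pretrained(
    "runwayml/stable-diffusion-v1-5",
    torch_dtype=torch.float16,
).to("cuda")

prompt = "a 90s rock band playing on a rooftop, 35mm film photo"

# Baseline render with a fixed seed so the comparison is apples to apples.
baseline = pipe(
    prompt, generator=torch.Generator("cuda").manual_seed(7)
).images[0]

# FreeU: b1/b2 boost the U-Net backbone (denoising) features,
# s1/s2 damp the skip-connection (fine-detail) features.
pipe.enable_freeu(s1=0.9, s2=0.2, b1=1.5, b2=1.6)
freeu_render = pipe(
    prompt, generator=torch.Generator("cuda").manual_seed(7)
).images[0]

baseline.save("one_week_baseline.png")
freeu_render.save("one_week_freeu.png")
```

Because FreeU only rescales features at inference time, you can A/B the same seed with and without it (there is a matching disable_freeu()) without retraining anything.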
Amazon Ads has introduced a generative AI solution designed to simplify the creation of engaging and visually rich advertisements. In a survey conducted in March 2023, Amazon found that a significant challenge for advertisers was building ad creatives and selecting creative formats. Amazon's new tool aims to address this issue by allowing advertisers to easily generate lifestyle-themed images for their products without requiring technical expertise. This tool benefits advertisers of all sizes, enabling them to create compelling brand-themed imagery, potentially increasing click-through rates for their ads, and it's being rolled out to select advertisers with plans for further expansion based on customer feedback. Now, this is huge for Amazon sellers, as doing a professional photo shoot for a product can literally cost thousands or tens of thousands of dollars. I have other issues with Amazon's treatment of sellers and employees and all that, but this seems right in line with the company's moves back in the day that brought us the indie author revolution via the Kindle Store. I took part in that a little bit. Look for Dogboy on Amazon.

Now, I'm not being paid to tell you this, but if you want 24 gigabytes of VRAM to run the latest Stable Diffusion tools, you need to act fast. Newegg has refurbished RTX 3090s on sale for about $600. And again, this is not a paid ad. I just thought some folks out there might want to jump on these before they're gone.

Okay, let's look at the legal and ethical battles AI is facing in this segment called Order, Order in the Court. We've got some AI allegations. In a thread on Twitter, or X if you like, tech lawyer Franklin Graves broke down an update on the copyright infringement case of Anderson v. Stability AI Ltd., which came down just a couple of days ago. This was the big lawsuit where people were suing Stability for using the LAION dataset to train their model. The thread revealed that Stability has been denied a dismissal of the copyright infringement claim, while DeviantArt and Midjourney were granted a motion to dismiss, which is kind of weird because I think Midjourney is built on Stable Diffusion. The presiding judge, William Orrick, found the complaint to be defective in numerous respects and granted the motion to dismiss the direct copyright infringement claims against DeviantArt and Midjourney.
The thread also noted that updated complaints might be filed by Anderson and the other plaintiffs in the future, indicating that the legal battle is far from over. The lead plaintiff, Sarah Anderson, now has 30 days to amend her complaint and continue the copyright dispute. The lawsuit encompasses various allegations, including direct and vicarious copyright infringement and violations of the DMCA, or Digital Millennium Copyright Act, as well as California laws related to unfair competition and rights of publicity. The artists' vicarious infringement theory was also questioned, with Orrick emphasizing the need for specificity in the claim. The big takeaway here is that people who are anti-AI are rushing to court without having a legal leg to stand on. There's a lot of emotion in these legal cases, but not a lot of facts. There's probably somebody out there who could bring a legit case to a judge at some point concerning this stuff, but so far, the judge's rulings have all seemed pretty sensible. I know I'm supposed to be all "the sky is falling" as a pundit here, as somebody, you know, talking to you about generative AI, but this week's stories have me feeling pretty hopeful for the future of generative AI.

Finally, let's explore how AI is invading popular culture in Artificial Pop. Now, this last story really thrilled me, and you'll see why. In celebration of the 25th anniversary of their chart-topping hit "One Week," Barenaked Ladies have unveiled a new music video titled "One Week 25th Anniversary" that creatively merges AI technology and human prompts. The video, set to a live rendition of "One Week" from their album BNL Rocks Red Rocks, embarks on a whimsical time-travel journey from 1998 to 2023 and beyond. Through a collaboration with Flux 88 Studios, known for their work in virtual environments, avatars, and videos, the band has ingeniously integrated modern AI innovation into this nostalgic celebration, delivering a unique viewing experience filled with pop-ups of information and trivia, you know, like on Pop-Up Video back in the day.

Now, I love BNL. Ah, BNL. They are my favorite band, but I'm not as big of a fan of the animation software Flux 88 Studios used in the video, unfortunately. But you know, BNL has always been a band that explores the latest tech. I remember their album Born on a Pirate Ship was one of the first, you know, enhanced CDs I got back in the nineties. An enhanced CD, if you didn't know, is one where, if you put it in your computer, it had like a multimedia press kit for the album with skits and photos, and in this case, for Rock Spectacle, it even had a whole fake album called Slacks filled with silly one-minute songs. The guys are getting older, and the latest batch of songs didn't do much for me, but it's good to see them still out there exploring the bleeding edge.

Well, thanks so much for joining me today as I break down the latest Stable Diffusion news in this very first episode. Now, if you want to follow up on what I'm doing with Stable Diffusion, the website is everlyheights.tv, and if you want to email me about this show, about what I'm doing, or about Stable Diffusion in general, my email address is billmeeks@everlyheights.tv. You can also follow my YouTube channel to see how I'm using Stable Diffusion to build an animated universe and teach you about the latest AI tools along the way. Okay, see you next time. Keep dreaming.
Read the stories and join the team at everlyheights.tv. Follow us on Facebook, Instagram, and Twitter at Everly Heights. Watch us build Everly Heights in Building Dreams by subscribing to our channel on YouTube. Get access to the custom Stable Diffusion models we're using to build Everly Heights, as well as the Morning Meeting production diary, by supporting us at patreon.com/everlyheights.

Stable Diffusion, which often requires nearly six gigabytes of GPU memory, triggered this mechanism for users with six gigabytes... gigabytes. I should have done the whole thing in my fakest voice.