Huski Talks Series
Thursday, December 1st, 2022
4pm-5:30pm PST / 7pm-8:30pm EST
(75 min panel discussion + 15 min Q&A from the audience)
Online
(Zoom Webinar format)
As AI-generated content gains popularity, immediate questions about its relationship to copyright pop up. From directly copying human work to autonomously creating whimsical content, generative AI seems capable of producing digital assets that land anywhere on that spectrum. This points to a "Wild West" for copyright enforcement in the foreseeable future, because it is so hard to draw the line between direct copying and derivative use in AI-generated content. In this roundtable, we will pick the brains of leaders across AI, venture capital, and the legal industry to examine both AI's capabilities and the tools available to make generative content easy and safe to use.
Many generative AI initiatives are under development as we speak, including assistance in scientific research, art creation, entertainment, and more. An unprecedented amount of investment is chasing the field, but some key aspects and risk factors are still playing out and remain very hard to predict.
Even among engineers working at the frontier, the profound implications of strong foundation models, such as ChatGPT or Stable Diffusion, for society remain unclear, or go entirely unnoticed by many.
Generative AI models do not copy and stitch together pieces of existing art. Instead, they use machine learning to learn concepts – like what a human nose is – to produce novel works.
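To make that point concrete, here is a minimal sketch, not code from any panelist, of how a text-to-image model is typically invoked: the model synthesizes a brand-new image from a text prompt rather than retrieving or collaging its training images. It assumes the open-source Hugging Face diffusers library, a CUDA GPU, and the publicly released runwayml/stable-diffusion-v1-5 checkpoint.

    # Minimal sketch: text-to-image generation with Stable Diffusion via the
    # Hugging Face diffusers library (assumed environment: Python, a CUDA GPU,
    # and the publicly released v1.5 checkpoint).
    import torch
    from diffusers import StableDiffusionPipeline

    pipe = StableDiffusionPipeline.from_pretrained(
        "runwayml/stable-diffusion-v1-5", torch_dtype=torch.float16
    )
    pipe = pipe.to("cuda")

    # The pipeline samples new pixels conditioned on the prompt; it does not
    # look up or stitch together images from its training set.
    prompt = "a watercolor painting of a lighthouse at dawn"
    image = pipe(prompt).images[0]
    image.save("lighthouse.png")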
Copyright law was created as a government incentive to promote creation, not for creators to jealously gatekeep their works.
Will generative AI ever replace other artistic mediums? An argument can be made that humans derive other pleasures from the creative process, which makes "but the AI can do it better" a moot point.
How we should protect human-assisted, AI-generated content is still unclear. The law may come from the courts first, before any legislation.
Copyright and brand protection face new challenges in the era of generative AI, and the majority of people haven't thought much about them yet.
Dan Jeffries is the Managing Director of the AI Infrastructure Alliance and CIO at Stability AI. He's also an author, engineer, futurist, and pro blogger, and he's given talks all over the world on AI and cryptographic platforms. With more than 50K followers on Medium and a rapidly growing following on Substack, his articles have been read by more than 5 million people worldwide.
LinkedIn: https://www.linkedin.com/in/danjeffries/
Substack: https://danieljeffries.substack.com/
Medium: https://medium.com/@dan.jeffries
Twitter: https://twitter.com/Dan_Jeffries1
Tim is a partner at Menlo Ventures focused on early-stage investments that speak to his passion for the next-generation cloud, new data stack, and AI/ML.
Before joining Menlo, Tim was CTO of Splunk and oversaw the company’s shift from on-prem software to a cloud-first organization.
Prior to Splunk, Tim spent over a decade in various technical roles at Yahoo!, Sun Microsystems, and several startups.
As a proud nerd, he enjoys coding, building computers and gaming with his kids.
Tim frequently advises entrepreneurs, startups and universities, and serves as Vice President of the advisory board for the M.S. in Business Analytics program at Leavey School of Business at Santa Clara University. Tim holds an M.S. from Carnegie Mellon University and a B.S. from the University of California, Davis.
Michael focuses his practice on domestic and international trademark and copyright law, as well as brand management and counseling, domain name disputes, social media enforcement and counseling, scam activities, and unfair competition law.
His practice spans the globe, representing clients in clearance, counseling, prosecution, enforcement, and licensing matters and in assisting clients in protecting their intellectual property rights through litigation and other means, including in federal court, the TTAB, and through domain name dispute resolution procedures.
Mariano is an experienced and highly creative concept artist with 20+ years in the film and game industries. He has strong experience in film costume and art departments as well as creature and visual FX departments, creating some of today's most iconic characters and concepts: X-Men, Mr. Freeze, the Matrix sequels, Wing Commander, Troy, Thor: Ragnarok, Tron 3, D&D, and many others.
He is accustomed to working closely with art directors and film directors to achieve new, complex yet buildable, groundbreaking characters and environments. Deadline-driven, methodical, detail-oriented, self-motivated team player.
IMDB: http://www.imdb.com/name/nm0224893/
ArtStation: https://www.artstation.com/marianodiaz
Hao (Henry) Du is co-founder and CEO of Huski.ai, a Silicon Valley-based startup that uses AI to help IP professionals and brand owners register, manage, and protect their brands and other digital assets.
Mr. Du is an engineer and serial entrepreneur who has worked at companies spanning AI chip development and software, autonomous driving, oil & gas, and the automotive industry. Mr. Du obtained his Ph.D. from the University of Michigan and his B.S. from Zhejiang University. He enjoys hiking with family and friends and recently adopted a cat named "Tofu".
The US Copyright Office originally approved an artist's comic-book-style story, "Zarya of the Dawn," which used AI technology to assist in creating the illustrations. Upon a second review, the Copyright Office rescinded the registration due to the author's use of Midjourney in creating the illustrations. Well, the author's attorney filed a response to that refusal today. This case will give creators who use AI insight into the copyrightability of illustrations created with the assistance of AI tools such as Midjourney. I believe that this author's use of Midjourney should not take away her rights as an author. I further believe this is distinguishable from the "Paradise" AI image, which was wholly created by AI. What are your thoughts on this?
SD is very good at replicating art styles; it can replicate styles like Leonardo da Vinci's, even though da Vinci has only about 20 surviving paintings.
So, theoretically, if someone wanted to train an AI on a different style, would they be able to do so with as few as 20 images?
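For context on the question above: lightweight fine-tuning techniques such as textual inversion and DreamBooth are commonly reported to capture a new style or subject from a handful of reference images, often fewer than 20, by learning a small embedding rather than retraining the whole model. The sketch below is illustrative only; it loads a publicly shared example embedding ("sd-concepts-library/cat-toy") where a user would load whatever they trained on their own images, and it assumes a recent version of the diffusers library.

    # Illustrative sketch: applying a concept learned via textual inversion.
    # A user who trained an embedding on ~20 images of a style would load
    # their own result in place of the public example used here.
    import torch
    from diffusers import StableDiffusionPipeline

    pipe = StableDiffusionPipeline.from_pretrained(
        "runwayml/stable-diffusion-v1-5", torch_dtype=torch.float16
    ).to("cuda")

    # Publicly shared example embedding; the placeholder token <cat-toy>
    # now refers to the learned concept inside prompts.
    pipe.load_textual_inversion("sd-concepts-library/cat-toy")

    image = pipe("a seaside village in the style of <cat-toy>").images[0]
    image.save("styled.png")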
Thank you for your time; I love the software you and your team have created!
Could an AI model be taught NOT to generate infringing content? For example, could you show the model copyrighted content (for example, the characters from SpongeBob) and train the model not to produce artwork that includes those characters?
One or more of the AI programs allow someone to have artwork created “in the style of” one or more particular artists - does the artist have any cause of action?
Elaborating on the above, if someone typed in “Spongebob and Mr. Krabs drinking at a bar,” would it be beneficial if the model wouldn’t generate that because it would be infringing?
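Purely as an illustration of the idea in the SpongeBob questions above (and not something any panelist proposed), the most naive approach is a prompt-level filter that refuses to generate when a prompt names a protected character. The blocklist and function below are hypothetical; real systems would need far more than keyword matching, since styles and visual likenesses can infringe without the character ever being named.

    # Hypothetical illustration only: a naive prompt filter that refuses
    # generation when a protected character is named. It is a keyword check,
    # not a real infringement detector.
    BLOCKED_CHARACTERS = {"spongebob", "mr. krabs"}  # hypothetical blocklist

    def is_prompt_allowed(prompt: str) -> bool:
        """Return False if the prompt mentions any blocked character name."""
        lowered = prompt.lower()
        return not any(name in lowered for name in BLOCKED_CHARACTERS)

    print(is_prompt_allowed("Spongebob and Mr. Krabs drinking at a bar"))  # False
    print(is_prompt_allowed("two cartoon sea creatures at a diner"))       # True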
Even if an artist can’t legally allege copyright infringement, should they still be credited for their original style being the inspiration for a work created by AI?
One of the issues: because AI works may be so easily generated, should the law restrict protection of the works themselves and instead focus on protecting the AI technology, e.g. through patents?
At this point we are training models on images. When we reach human neuronal synapse learning (synapse models), we will face the same questions we are asking now: will a company own the "ideas" or dreams that the synapses form? Daniel explained it clearly: you would not ask Hemingway how to write a book. Copyrighting a "collective human experience" (dreams, or images drawn by an artist) should be addressed to a specific finished product with at least 50% intervention to further modify and "own" that property.
On the flip side, does the panel think it is fair for someone to exactly copy an AI-generated work, since it was not created by a human?
Next time, please don't use your AirPods mic. It has lag, and your mic keeps cutting out when you talk. Or just use your laptop's mic.
What will happen when students start using Large Language Models (LLMs) to write their term papers? Do they get to claim that the papers they’re turning in are original because they prompted the AI?
AI makes cheating WAY easier. There was no way to write a whole paper in a few minutes without directly copying….
Question for Daniel: Artists are generally less than 3% of any population, so by sheer logic, wouldn't less than 3% of the training data be artistic? People against AI seem to assume that most of the training data comes from artists, but that seems unlikely. What are your thoughts?
To follow on Daniel's argument, I think society's tendency to fear generative AI comes from the fact that we tend to automatically assume that AI will produce better content... But is the speed or ease of a production process what determines the quality and originality of the final output?
PS: Thanks for this great debate
Follow us on LinkedIn for future events.