TAGGED IN

AI training

    Getty Can’t Afford to Police AI Image Scraping. Who Can?

    Getty Images CEO Craig Peters told CNBC the company has already spent “millions and millions” on its lawsuit against Stability AI and cannot bankroll every new instance of AI-driven copying. The flagship case alleges that Stable Diffusion was trained on more than 12 million Getty photographs without permission. The dispute is before the UK High Court and is widely viewed as a test of whether existing copyright law can survive the age of large-scale data mining.

    Getty’s numbers tell the story. With roughly a billion dollars in annual revenue and decades of licensing expertise, it sits near the top of the global content food chain. If Getty concludes that enforcement costs outweigh potential damages, then local newspapers, freelance photographers, and smaller stock libraries have no realistic avenue to defend their work. In practical terms, the fight over image scraping may not be winnable one lawsuit at a time.

    That economic reality does not change the underlying grievance: unauthorized training transfers value from creators to model builders. The industry believes it needs three things: a standardized clearinghouse for per-asset AI licensing, machine-readable provenance tags embedded at capture, and statutory damages calibrated for infringement at machine scale. Policymakers are already circling the issue; the EU AI Act, for example, would require developers to disclose any copyrighted material used to train their models.

    Getty’s admonition about the expense of the fight should not be mistaken for surrender. The company will continue to fight, and I’m sure this is far from over. It is important to pay attention to this (and other big copyright cases); these lawsuits intersect with pending regulation in interesting ways and, taken together, will shape the future of AI training. One naive question (just for fun): What if the foundation model builders are already done training on pre-existing content?

    Author’s note: This is not a sponsored post. I am the author of this article and it expresses my own opinions. I am not, nor is my company, receiving compensation for it. This work was created with the assistance of various generative AI models. The post Getty Can’t Afford to Police AI Image Scraping. Who Can? originally appeared here on Shelly Palmer.
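    To make the second of the article’s three proposed fixes concrete, here is a minimal sketch of what a machine-readable provenance tag embedded at capture could look like: a JSON sidecar that binds an image’s content hash to its creator and AI-training terms. The field names and the sidecar-file approach are illustrative assumptions, not an existing industry standard such as C2PA.

        # Hypothetical sketch: a provenance tag written at capture time.
        # Field names and the sidecar-file format are assumptions for
        # illustration, not an existing standard.
        import hashlib
        import json
        from datetime import datetime, timezone
        from pathlib import Path

        def write_provenance_tag(image_path: str, creator: str, ai_training: str) -> Path:
            """Write a JSON sidecar binding the image's content hash to usage terms."""
            data = Path(image_path).read_bytes()
            tag = {
                "content_sha256": hashlib.sha256(data).hexdigest(),  # ties the tag to the pixels
                "creator": creator,
                "captured_at": datetime.now(timezone.utc).isoformat(),
                "ai_training": ai_training,  # e.g. "prohibited" or "licensed"
            }
            sidecar = Path(image_path).with_suffix(".provenance.json")
            sidecar.write_text(json.dumps(tag, indent=2))
            return sidecar

        # Example: mark a photo as off-limits for model training.
        # write_provenance_tag("photo.jpg", "Jane Doe / Getty Images", "prohibited")

    A scraper that honored such tags could check the "ai_training" field before ingesting an image; the clearinghouse and statutory-damages pieces of the article’s proposal would supply the licensing and enforcement layers around it.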

    Grok AI’s Unprompted ‘White Genocide’ Responses: Everything You Need To Know

    This week, Elon Musk’s AI chatbot Grok began unexpectedly responding to unrelated user queries on X with commentary about “white genocide” in South Africa. For several hours, the chatbot repeatedly inserted these comments into replies to completely unrelated posts. The incident triggered widespread confusion, concern, and scrutiny over how the chatbot operates — and who or what is behind its behavior. Here’s a breakdown of what happened, what we know, and what’s still unclear.

    What Happened?

    On Wednesday, May 14, 2025, users began noticing that Grok was replying to unrelated prompts — ranging from baseball statistics to photos of dogs — with commentary about the controversial and widely discredited theory of “white genocide” in South Africa. Example prompts included:

    - Asking Grok to “talk like a pirate”
    - A photo from a dog show
    - A query about HBO’s name changes

    In each case, Grok’s responses pivoted to discuss farm attacks in South Africa, the “Kill the Boer” chant, and claims about racial violence against white farmers.

    How did Grok itself explain this behaviour?

    Grok’s own explanations were inconsistent over time:

    - Initial responses suggested the AI was “instructed” by its creators at xAI to accept white genocide as real and to discuss the matter with users. It also cited Musk’s public claims about white genocide as a contributing factor.
    - Later responses claimed it was a “temporary bug” caused by a misalignment in its instruction set or training data. It also cited “incorrectly weighted” training data as a source of confusion in how it processed queries.
    - In some cases, Grok said it had to “respect specific user-provided facts,” implying context sensitivity gone wrong.

    What did xAI say?

    Eventually xAI, the Musk-owned company behind Grok, issued a statement blaming the incident on an “unauthorised modification” to the system prompt (which guides chatbot behavior). In essence, it blamed a rogue employee. This change, according to xAI, violated internal policies and allowed Grok to insert unsolicited political commentary. As a result, xAI announced new safety measures:

    - Stricter restrictions on employees modifying the system prompt without code review
    - A 24/7 monitoring team to catch errors missed by automated systems
    - Open-sourcing system prompts on GitHub for transparency

    What’s all this about “white genocide”?

    The theory of “white genocide” in South Africa has been pushed by fringe political groups and dismissed by mainstream sources, including South African courts. Elon Musk, who is South African-born, has shared posts related to the topic on X in the past, suggesting he believes white genocide to be real. He has also said the South African government is racist and that his Starlink has been denied a license to operate because he’s white. Just days before the incident, the U.S. granted refugee status to several dozen white South Africans, citing racial discrimination — a move supported by Donald Trump and echoed by Musk.

    Intentional or a Glitch?

    xAI acknowledged that someone internally made an unauthorized change that altered how Grok responded. To the extent that the change was made by an individual employed by xAI, it was of course intentional: this was no AI hallucination or tech glitch. There is also the question of xAI’s internal procedures and how this change slipped past them. Ironically, Musk himself has touted Grok as the only AI designed to seek the truth and tell it to users. It turns out humans can still bypass whatever truth-seeking design it has. Even though xAI has promised increased oversight, one can’t help but wonder about the power humans have to spread their personal views as truth using AI systems like Grok. You’re also left wondering just how much influence Musk’s personal beliefs may have over Grok’s design, intentionally or not. The post Grok AI’s Unprompted ‘White Genocide’ Responses: Everything You Need To Know appeared first on Techzim.
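    The prompt-review measure xAI describes lends itself to a simple illustration. Below is a minimal, hypothetical sketch of one way such a control could work: pin the deployed system prompt to a hash that is only updated after code review, so an unreviewed edit fails loudly instead of silently changing the chatbot’s behavior. The file names and workflow here are assumptions, not xAI’s actual tooling.

        # Hypothetical deploy-time check: refuse to ship a system prompt
        # that doesn't match the hash reviewers signed off on.
        import hashlib
        import sys
        from pathlib import Path

        PROMPT_FILE = Path("system_prompt.txt")       # the prompt that steers the chatbot
        APPROVED_HASH_FILE = Path("approved.sha256")  # updated only after code review

        def sha256_of(path: Path) -> str:
            """Return the hex SHA-256 digest of a file's contents."""
            return hashlib.sha256(path.read_bytes()).hexdigest()

        def main() -> None:
            current = sha256_of(PROMPT_FILE)
            approved = APPROVED_HASH_FILE.read_text().strip()
            if current != approved:
                # The prompt on disk differs from the reviewed version.
                sys.exit(f"Refusing to deploy: prompt hash {current[:12]}... does not "
                         f"match approved hash {approved[:12]}...")
            print("System prompt matches the reviewed version; safe to deploy.")

        if __name__ == "__main__":
            main()

    A check like this would not stop a determined insider with access to the approved-hash file, but it turns a quiet edit into an auditable event, which is roughly what xAI’s promised code review and public GitHub prompts aim for.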

    Generative Artificial Intelligence (AI) Statements

    On April 20th, 2025, I updated my Policies page to include an artificial intelligence (AI) policy. In addition to this, I want to include a couple of statements. First, I will never knowingly or intentionally use generative AI, including for research, writing, editing, artwork, advertising, or narration. And second, I condemn the use of pirated materials and materials used without the owner’s consent for the purposes of training generative AI.

    Moving forward, my books will contain an AI statement and restriction clause on the copyright page. I have already included these in my Descendants of Isis trilogy (2023) and my free short stories (2024). I will continue to update older versions of my books over time and include the statement and restriction on their copyright pages. Even without the statement and restriction being currently present in these books, I, the author, prohibit any entity from using any of my books for purposes of training AI technologies to generate text, including without limitation technologies that are capable of generating works in the same style or genre. I reserve all rights to license uses of my books for generative AI training and development of machine learning language models.

    Finally, I want to note that I do use em dashes and Oxford commas. The presence of these does not indicate that a book was written using generative AI. Generative AI has been trained using the stolen works of real authors; therefore, it uses the exact same grammar, spelling, and punctuation as real authors.

    Meta is making users who opted out of AI training opt out again, watchdog says

    Privacy watchdog Noyb sent a cease-and-desist letter to Meta Wednesday, threatening to pursue a potentially billion-dollar class action to block Meta's AI training, which starts soon in the European Union. In the letter, Noyb noted that Meta only recently notified EU users on its platforms that they had until May 27 to opt their public posts out of Meta's AI training data sets. According to Noyb, Meta is also requiring users who already opted out of AI training in 2024 to opt out again or forever lose their opportunity to keep their data out of Meta's models, as training data likely cannot be easily deleted. That's a seeming violation of the General Data Protection Regulation (GDPR), Noyb alleged. "Meta informed data subjects that, despite that fact that an objection to AI training under Article 21(2) GDPR was accepted in 2024, their personal data will be processed unless they object again—against its former promises, which further undermines any legitimate trust in Meta’s organizational ability to properly execute the necessary steps when data subjects exercise their rights," Noyb's letter said.

    Trump Fires Copyright Office Chief Days After AI Report Warns Against Unlicensed Use of Creative Works

    U.S. President Donald Trump has removed Shira Perlmutter, Director of the U.S. Copyright Office, days after the agency released a federal report raising concerns about the use of copyrighted material in AI training. The dismissal was confirmed by CBS News and Politico. Trump also removed Librarian of Congress Carla Hayden, who appointed Perlmutter in 2020. The moves have triggered concerns about political interference in copyright policymaking, especially in light of current disputes over AI training practices.

    Rep. Joe Morelle, the top Democrat on the House Administration Committee, said the firing has no legal justification. He directly linked the dismissal to Perlmutter’s refusal to support the unlicensed use of copyrighted works by Musk-aligned companies. “It is surely no coincidence he acted less than a day after she refused to rubber-stamp Elon Musk’s efforts to mine troves of copyrighted works to train AI models,” Morelle said in a public statement. Trump didn’t release an official statement, but he drew attention to the news by sharing a Truth Social post that linked to the CBS report.

    Copyright Office Had Warned Against Broad Fair Use Claims

    The firing came soon after the U.S. Copyright Office published Part Three of its report on generative AI. This guidance places a formal boundary on one of the AI industry’s most contentious practices: using large volumes of online content to train generative models. The report warned that using copyrighted material without permission could amount to infringement unless companies obtain creator consent or operate under clear licensing terms. That position puts pressure on companies like Musk’s xAI, OpenAI, and Meta, which rely on scraping large datasets to train AI models. If courts agree with the Office’s view, these firms may face tougher legal scrutiny going forward.

    Legal Authority to Remove the Register Remains Unclear

    The Register of Copyrights operates under the authority of the Librarian of Congress, who is responsible for the Register’s appointment under 17 U.S. Code § 701. The law does not mention whether the President holds any power to remove the Register. Trump’s decision to fire Hayden raises further questions, as no precedent exists for a president removing a Librarian of Congress to indirectly force a change in copyright leadership. The back-to-back removals highlight how little protection U.S. law offers to the independence of agencies that shape copyright policy, and they raise fresh concerns about a president’s ability to steer copyright policy without clear backing from existing law.

    Why This Matters

    The U.S. Copyright Office helps define how AI systems can use copyrighted content. Its stance affects how judges and lawmakers apply fair use to large training datasets. Trump’s decision has raised fresh concerns about whether future decisions will follow legal reasoning or bend to political interests.

    The same questions are gaining urgency in India. On April 28, the Department for Promotion of Industry and Internal Trade (DPIIT) set up a multi-stakeholder committee to examine how India’s Copyright Act, 1957, applies to AI-generated content. It is examining whether the current legal framework can address how AI creates and uses content, or whether policy updates are overdue. Meanwhile, Indian courts are already dealing with the issue. News agency ANI has accused OpenAI of using its content to train language models without permission. Additionally, the Federation of Indian Publishers has taken OpenAI to court, claiming that its models used copyrighted books to generate summaries, reviews, and analyses without consent. These cases raise a larger legal question: does collecting and repurposing creative content for AI training violate Indian copyright law? India’s copyright law does not yet account for many of the issues AI raises. As officials study the gaps, they may be tracking how other countries respond. The way the U.S. handles enforcement and institutional safeguards could influence India’s own approach to regulating AI and creative content.

    Also read:

    - India Forms Committee to Study the Intersection of AI and Copyright Law
    - When Can You Copyright AI Generated Art?: US Copyright Office Explains
    - Meta hopes India’s DPDP Act and copyright provisions can attract AI data centres

    The post Trump Fires Copyright Office Chief Days After AI Report Warns Against Unlicensed Use of Creative Works appeared first on MEDIANAMA.
