China Limits Chipmaking Materials to US; Meta-Less than 1% of Election Misinfo Was AI Generated; US Consumer Agency Proposes Rule Blocking Data Brokers’ Sale of Sensitive Personal Data; Some Names Make ChatGPT Grind to Halt

China has established new limits on the sale to the US of gallium, germanium, antimony, and other key minerals with potential military applications, citing its national security. Theverge.com reports that the nation will also closely scrutinize exports of graphite. This all came after the US Department of Commerce introduced new rules on Monday to “further impair” China’s ability to produce semiconductors for AI and weapons systems. The rules put new limitations on the equipment and software used to manufacture semiconductors, along with high-bandwidth memory chips, and barred exports to 140 additional Chinese companies. This could be just a prelude if Trump imposes new tariffs on Chinese goods.

There was plenty of misinformation about recent elections, but according to engadget.com, Meta claims AI-generated content amounted to less than 1% of the false information on its platforms…or at least of the election-related misinformation that was caught and labeled by fact checkers. The Meta crew looked at misinformation regarding elections in the US, UK, Bangladesh, Indonesia, India, Pakistan, France, South Africa, Mexico and Brazil, as well as the EU’s Parliamentary elections. Meta’s own AI image generator blocked 590,000 requests to create images of Donald Trump, Joe Biden, Kamala Harris, JD Vance and Tim Walz in the month leading up to election day in the US.

The Consumer Financial Protection Bureau has proposed a new rule that would block data brokers from selling Americans’ personal and financial information, including Social Security numbers and phone numbers, under the Fair Credit Reporting Act. TechCrunch notes that in proposing the new rules, months after President Biden signed an executive order to curb the sale of Americans’ private data, the U.S. consumer protection agency said it aims to “rein in” data brokers, which sidestep federal law by claiming that they are not subject to the FCRA’s legal provisions. According to the CFPB, the proposed rule would treat data brokers the same as credit bureaus and background check companies, or any other company that sells data about income, credit scores, credit histories, and debt payments, all of which are already subject to the FCRA.

Some names apparently cause OpenAI’s ChatGPT to clam up. Recently, people discovered that the name ‘David Mayer’ would cause the AI to break down. Arstechnica.com reports that ‘Jonathan Zittrain,’ ‘Jonathan Turley,’ and ‘Brian Hood’ would likewise make the large language model stop in its tracks, saying ‘I am unable to produce a response’ or the like. The names are apparently filtered due to lawsuits or complaints about ChatGPT’s tendency to confabulate erroneous responses when it lacks sufficient information about a person. As we continue to learn, it will be more than a little while before AI assistants are really reliable…so count on your own noggin and check things yourself. Always keep in mind that AI models have completely fabricated legal cases, and lawyers who didn’t check the citations were fined by courts.

I’m Clark Reid and you’re ‘Technified’ for now.
