Dataset schema (column | dtype | observed range):

source | stringclasses | 2 values
author | stringlengths | 0–824
title | stringlengths | 0–475
description | stringlengths | 0–32.8k
url | stringlengths | 0–713
urlToImage | stringlengths | 0–2k
publishedAt | stringlengths | 20–20
content | stringlengths | 0–32.8k
category_nist | stringlengths | 5–160
category | stringlengths | 5–239
id | stringlengths | 6–7
subreddit | stringlengths | 3–21
score | int64 | 0–30.2k
num_comments | int64 | 0–2.27k
created_time | timestamp[ns] |
top_comments | stringlengths | 1–25.4k
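The schema above is enough to start inspecting the dump directly. Below is a minimal sketch of how the split between news articles and Reddit posts could be examined with pandas; the local file name ai_news_reddit.parquet is hypothetical, and the assumption that the second source class (not shown in this preview) marks the Reddit rows is inferred only from the subreddit/score/num_comments columns.

```python
# Minimal sketch, not an official loader for this dataset: assumes the dump has
# been exported locally as "ai_news_reddit.parquet" (hypothetical file name).
import pandas as pd

df = pd.read_parquet("ai_news_reddit.parquet")

# "source" has two classes per the schema. "news" rows fill the article columns
# (author, title, content, ...), while the other class presumably fills the
# Reddit columns (subreddit, score, num_comments, created_time, top_comments).
news = df[df["source"] == "news"]
other = df[df["source"] != "news"]

print(df["source"].value_counts())
print(news[["title", "publishedAt", "category_nist"]].head())
print(other[["subreddit", "score", "num_comments"]].describe())
```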
news
Amanda Smith
I Tried AI to Save Money on Flights. It Was a Bumpy Ride
AI sent me on an adventure as I searched for a bargain.
https://www.cnet.com/tech/services-and-software/i-tried-ai-to-save-money-on-flights-it-was-a-bumpy-ride/
https://www.cnet.com/a/i…t=675&width=1200
2024-12-02T15:00:07Z
I'm one of those people who finds hunting for flights fun. But it can quickly take a turn for the worse when I feel like I'm playing cat and mouse with Google's algorithm. Turn the VPN on. Use a private window. Switch between my MacBook and iPhone. My strategy to "trick" the algorithm, so that it doesn't hike up the cost of the flight before I book it, often falls short. Google Flights is my favorite booking platform, so I wondered if Google's AI, Gemini, could help me save money on a future flight. Could this be the most underrated travel hack for 2025? With the price of flights these days, I sure hoped so. Gemini was released in March 2023 (when it was known, briefly, as Bard) and is one of the main artificial intelligence models on the market. It's free to use if you have a Google or Workspace account with admin-enabled access to Gemini. There's also a paid version, which is $20 per month, with a two-month free trial. For me, an Australian who lives in the US, the cost of flying home and back is in the thousands. Any savings AI can find will make a world of difference to my bank account. Setting up: With a dedicated Google Flights extension, I had expected Gemini to transport me to the best deals. I followed the prompts to enable extensions, but I couldn't get it to work -- even after poking around in my Google admin console all afternoon. A CNET editor was able to use this tool as expected, so it's worth testing out to see if it works for you. Annoyed by the lack of ease, I ran a quick search for AI tools that could save me money on plane tickets, and in my "travels" I discovered Karma, an AI coupon code extension that I could easily install. I saw Expedia on its list of affiliated brands, so I used that travel platform to search for flights. I picked dates four months away for a hypothetical trip home to Australia. The cheapest return flight was $1,443. Not bad, but Karma pinged me with a coupon. I clicked "apply coupons." I copied the code and continued in Expedia, assuming it would automatically apply at the checkout page, but there was no area to apply a code. Turns out Karma coupons can't be applied to all Expedia purchases; maybe I could have found a way to use one somehow, but it wasn't working for this flight option. For me, it's strike number two. Back to Gemini, but for some advice this time. I asked Gemini how AI can help me save money on flights. The answers were to be expected, but two words piqued my interest: "secret fares." I checked the age of the two links Gemini referenced and they were from 2018, so I responded to Gemini asking what platforms have "secret fares." While Skiplagged sounds like an adventure in itself, I'm too old to fly with no check-in luggage or risk being blacklisted by airlines. When there's only one (practical) way to enter my country (by air), the potential reward wasn't worth the risk. It was interesting that Hopper wasn't mentioned in Gemini's list, although it was the source of the original "secret fares" information. It had come up in my own research, so I decided to give it a shot. Hopper's Price Prediction function is only available in the app.
I downloaded it. Once I set myself up in Hopper, I ran a quick Google Flights search and found a good option for $1,256. Hopper's cheapest option was $1,272 (for that same flight), but it said I could save up to $199 by waiting. I turned on Price Watch notifications. And waited. I checked back a day later and it hadn't changed, then the following day it had gone up. I opened the app a few days later and again it had gone up, not down. If you have more time, it could take a few weeks for Hopper to identify a lower price. As always, there's no guarantee. Strike number three. A double-check: If you're using a flight booking platform like Google Flights, Skyscanner or Expedia, these sites do have AI features built in. This might be sufficient, and you can use AI chatbots on those sites for travel advice. For confirmation, I checked with ChatGPT. It gave me tools I'd already tried out. What I did discover via ChatGPT is the new Google Flights refund feature on itineraries departing from the US. If the price drops on a booking that had a price guarantee badge, you get refunded through the Google Pay app. Pretty cool. That one feature alone is enough for me to stick with Google Flights, alongside accessing advice from AI chatbots. You can also use AI to plan out a strategy for using credit card points to save on flights. However, AI has been found to pull up outdated card bonuses and rewards information, so it's important to vet it before applying for a new credit card. The verdict: After multiple attempts to use AI to save money on flights, I unpacked my bags. While I expected Gemini's Google Flights extension to be as smooth as Google Flights itself, I couldn't get it to work, even with the features enabled in my Google account. Gemini also admits it doesn't always get it right, so be sure to check the flight details on the airline's website. Given flight tickets are often in the thousands, you don't want to risk any mistakes. In short, using AI to find flight deals still has a way to go. I did, however, find some interesting things via AI, such as the refund feature. AI is a great thought partner but not yet a booking partner.
Information Retrieval Or Search/Recommendation
Unknown
null
null
null
null
null
null
news
[email protected] (Nickolas Diaz)
Google launches 'Whisk' for fast AI-generated imagery and updates Veo 2, Imagen 3
Whisking is no longer just for food — although, you could use this AI to generate food pictures.
https://www.androidcentral.com/apps-software/google-ai-whisk-image-video-generation-updates
https://cdn.mos.cms.futu…3wNV-1200-80.jpg
2024-12-16T18:46:25Z
What you need to knowGoogle launched its latest AI experiment, Whisk, for Labs testers, which is aimed at giving users a quicker avenue to image generation.Whisk leverages Gemini and Imagen 3 and requires users to upload images to fit three areas: subject, scene, and style.Elsewhere, Google highlighted new version of Imagen 3 and Veo 2, which are rolling out today (Dec 16).Google starts the week with a new AI experiment in Labs that U.S. users can start signing up for.As detailed in a Keyword post, the newest test for Labs is called "Whisk" and goes about image generation in a new way: by letting users "prompt with images." This is an alternative to writing a detailed description, in case the words aren't flowing. Google explains that its newest model, Whisk, which leverages Gemini and Imagen 3, "extracts only a few key characteristics from your image."To get started, the company says users must provide three images for the following areas: subject, scene, and style.The AI model is said to consider everything to generate a new "unique" image, like a Whimsical Walrus or an image of a pink donut enamel pin. However, Google warns that its AI only considers things to capture the "essence" of whatever images the user has uploaded to Whisk.Google says that this design is intentional as Whisk will not replicate whatever you've uploaded to the letter. The post adds that certain aspects of your images, such as subject height, hairstyle, and skin tone might vary, too. It understands that these attributes are important to users, which is why Google says Whisk gives users a "review and edit" feature after generation.The company adds that this AI model is intended to give users a quicker experience with image generation than the typical methods. Moreover, Whisk lets users "remix" the images they've added to the creation tool for another round of ideas. After an image is created — using Gemini's model — Whisk will create a "detailed" description of what it created.Labs testers in the U.S. can sign up today (Dec 16) for Whisk and try it out.Get the latest news from Android Central, your trusted companion in the world of AndroidBy submitting your information you agree to the Terms & Conditions and Privacy Policy and are aged 16 or over.New Veo and Imagen Versions(Image credit: Google)In a separate blog post, Google detailed new versions of Veo 2 and Imagen 3. Beginning with the latter, the post says Imagen 3 has been updated to produce "brighter" and "better composed" images. Imagen 3 can reportedly follow your descriptions (prompts) better than before with richer details in its final renders.The newest version of Imagen 3 is rolling out today (Dec 16) globally in ImageFX in Google Labs.The latest updates to Imagen 3 arrive just before the year's end and several months after its (quiet) U.S. launch. Google started making its AI image generator available to users more prominently in August on its Vertex AI platforms. The company touts its safety precautions in place to avoid the creation of illegal or otherwise offensive imagery through Imagen 3.Elsewhere, Veo 2 is receiving an update that widens the AI video generator's horizons. Google states this new version of Veo 2 lets users create "incredibly high-quality" videos for a "wide range of subjects and styles." 
Veo 2 can reportedly reach resolutions of up to 4K, and it can supposedly understand more detailed descriptions of what you want. The company gives the example of a user telling Veo 2 to utilize an "18mm lens" and states the AI will understand how to replicate a wide-angle shot. Moreover, its understanding of expressions, movements, and more helps facilitate a more engrossing AI-generated video experience. Google also states that Veo 2's latest updates help it hallucinate details "less frequently." Today (Dec 16), Veo 2's update is rolling out in Google Labs' VideoFX, and the company is slowly expanding the number of test participants. If you're interested, you can sign up on the waitlist. Google then teased its plans to bring Veo 2 to YouTube Shorts and "other products" next year. Plans for YouTube incorporation are something we heard about earlier this year.
Content Creation/Content Synthesis/Image Analysis
Arts, Design, Entertainment, Sports, and Media
null
null
null
null
null
null
news
Imad Khan
Gemini 2.0 and Project Astra Make Google's AI Your Know-It-All Assistant
Not to be outdone by OpenAI, Google's bringing some major AI advancements to a select group of users.
https://www.cnet.com/tech/services-and-software/gemini-2-0-and-project-astra-make-googles-ai-your-know-it-all-assistant/
https://www.cnet.com/a/i…t=675&width=1200
2024-12-11T15:30:03Z
Google is entering its AI agents era with the introduction of Gemini 2.0 -- the company's next-generation AI chatbot -- and a limited release of Project Astra, a computer vision-assisted AI agent that can see and analyze the world around you, the company said in a press release on Wednesday. Project Astra, which was shown off at Google I/O earlier this year, is a major leap in Google's AI research from its DeepMind team in London. As the video from earlier this year demonstrated, Project Astra, which can work through your phone's camera or through camera-equipped glasses, can see and analyze the world around you and give answers about anything it recognizes. This includes being able to ask your glasses where a bus is headed, what the code to your apartment complex is, or where you left your book. Google says its latest advancements with Astra include better dialogue and conversational ability in multiple languages, deeper integration with Google Lens and Maps, up to 10 minutes of memory, and better latency for faster responses. Project Astra will first land with people in its trusted tester program. A time frame for when it might go public wasn't given. Project Astra's ocular capabilities will certainly raise privacy concerns. Google said it's working with its Responsibility and Safety Committee, the company's internal review group, to flag potential risks. This includes flags to prevent users from unintentionally sharing sensitive information with agents, and controls so that users can delete sessions. Google's latest wave of announcements comes as major AI advancements have slowed and Wall Street investment has softened. At the same time, OpenAI, creator of ChatGPT, has been releasing newer, more advanced models and raising billions of dollars in investments. It's a race between Google and OpenAI, which is heavily backed by Microsoft, to see which Big Tech giant will lead in AI. Some analysts believe that AI development will be a winner-take-all race, with the best tech leading the market. Microsoft has already spent $19 billion, and DeepMind CEO Demis Hassabis says Google will spend $100 billion on AI. The AI market is expected to be valued at $1.8 trillion by 2030, according to a report by Grand View Research. While Project Astra certainly ignites imaginations of what's possible with this all-seeing AI tech, Gemini is also getting a big update. Gemini 2.0 will have advanced reasoning capabilities with better responses across the board, from general queries to coding questions and even math, according to Google. The company says Gemini 2.0 also works faster than previous versions. An early experimental version of Gemini 2.0 will be given to developers before going out to the wider public. Luckily, starting Wednesday, fans can play with the chat version of Gemini 2.0 Flash via the Gemini app on their phones. Gemini 2.0 Flash is a lighter version of Gemini 2.0. Google says Gemini 2.0 will expand to more Google products early next year. Along with news on Project Astra and Gemini 2.0, Google also unveiled Project Mariner, a prototype Chrome extension that can help with more complex tasks. Currently limited to Chrome for trusted testers, it can analyze text, images, graphs, and other web elements at the pixel level and use that information to complete complex tasks. Google's still working on Mariner and admits that the tech isn't always accurate and is slow to complete tasks. Google's engineers have also been working on ways to have AI help with your gaming.
In partnership with Supercell, creator of Clash of Clans and Brawl Stars, Google is working on an AI that can answer questions about the games you're playing, like what you need to do to beat a boss. Google didn't explain how its AI will be able to deliver this information. Is it based on information provided by the game developer? Or is Google sucking up information from gaming guides published online? Google also announced Jules, another AI agent the company is working on to help with coding. Jules integrates into a GitHub workflow. For more on AI, check out OpenAI's Sora Turbo announcement and Google's new quantum AI chip.
Digital Assistance/Content Synthesis/Content Creation
Unknown
null
null
null
null
null
null
news
[email protected] (Nickolas Diaz)
Google gifts Pixels a December feature drop with Gemini and dual-screen updates
Google detailed its major December 2024 Pixel feature drop.
https://www.androidcentral.com/phones/google-pixel-december-2024-feature-drop-announced
https://cdn.mos.cms.futu…HREQ-1200-80.jpg
2024-12-05T17:00:00Z
What you need to knowGoogle detailed the rollout of its major Pixel December feature drop.Its AI Gemini is picking up several updates like a Spotify extension, calling/texting abilities, and "Saved info with Gemini."The Pixel 9 series receives help from Gemini Nano for screened calls, updates to Pixel Screenshots, and the Fold series grabs a "Dual-Screen" update.These updates are hitting Pixel devices today (Dec 5).Google's latest Pixel feature drop might help stymie the lingering chill of December with loads of new features.Today (Dec 5), Google detailed the rollout of its major December 2024 Pixel feature drop that will start hitting devices. Starting the update train are a few updates for Gemini. According to a press release, Google is spreading Gemini across even more of its ecosystem. The AI model's new Spotify extension is rolling out to more Pixels alongside the AI's ability to call or text your contacts, per your instruction.The company highlights the added ability for Gemini to alter your timers and alarms, and even open your camera app. These updates are hitting Pixel 6 devices and newer.Elsewhere, Google rolls out "saved info with Gemini." This feature is said to let users instruct Gemini to remember specific details about them or their preferences. The idea is to help users gain more "relevant and helpful" responses from the AI, as they won't have to repeat the details. Google states this is rolling out to all Android phones and Gemini on desktop.Gemini Live picks up support for even more languages in the places where it's available.Google adds that its phones will soon grab specific updates, beginning with the Pixel 9 series' new Call Screen. The post states Gemini Nano is entering to suggest "easy-to-tap" replies on the call screen. Of course, this will affect those screening a call as you can use the AI as a buffer between you and them or to reply to a delivery's request.What's more, Google says users can now peek at the conversation between a caller and the AI on their device during "automatic call screening."Get the latest news from Android Central, your trusted companion in the world of AndroidBy submitting your information you agree to the Terms & Conditions and Privacy Policy and are aged 16 or over.The Gemini Nano-powered update is rolling out for the entire Pixel 9 series (including the 9 Pro Fold) in the U.S. while the automatic call screening hits Pixel 6 and newer phones.The Pixel Screenshots app is receiving the December gift of Circle to Search compatibility. Google states users circling or highlighting an object for Search can now save a snapshot of that to its Screenshots app. Moreover, the app will now automatically organize your screenshots based on information like calendar dates and more.This is rolling out to the Pixel 9 series for English language users.Updates for screenshots continue as Google details its new functionality with the Wallet app. The Pixel 9 series users can soon take screenshots of cards or tickets can add them into Google Wallet. Support includes driver's licenses, boarding passes, "and more." Additionally, Gboard will display recommendations like movies, music, and more based on your screenshots.These suggestions will appear in "relevant" apps. 
However, Google states that users can turn this function off (or on) under the "Show suggestions from your screenshots in other apps" option in the Screenshots app settings.(Image credit: Google)Other updates for the Pixel 9 series includes the ability to "capture and share" Ultra HDR photos on Instagram. This is rolling out to Pixel 9 devices globally.Pixel Studio stickers have expanded to be useable in Gboard. Also, the Pixel Recorder app is picking up a "clear voice" feature. Clear Voice is said to "reduce noise and distractions automatically." Pixel 9 series users must enable this feature in the app's settings.Google's foldables are receiving a few updates such as "Dual Screen in Portrait mode." Available on the original Fold and the 9 Pro Fold, users on both sides of the camera can see what a photo will look like before it's snapped. Also, it might be hard to get your little one to look at the camera, but that's where "Made you look" comes into play.Google is bringing this feature to the first Pixel Fold, meaning parents can place a cute and funny animation on their phone to set up the perfect photo. On the other hand, the Pixel 9 Pro Fold is grabbing a new character: Joy, from Disney's Inside Out.More Pixel joy(Image credit: Google)"Expressive captions" is a feature that Pixels ranging from the 6 series to the latest 9 series can use. Google states its AI will "automatically capture the intensity and emotion" of a person speaking on a live stream or phone call. The AI will suggest what your reaction could be, like gasping at a shocking moment or cheering for a score.These captions are also "compatible with other Android devices."Snapchat is receiving a dedicated Photo Picker for users. Google states this should help users find what they're looking for in their folders, favorites, and images stored in the Cloud.The company detailed a security feature entering beta called "Identity Check." Google states that when users enter a "new location," their Pixel 6 or newer device will ask for facial recognition or your fingerprint before they can alter any "sensitive information."Smaller updates include "Simple View," which produces a larger system-wide font and increases your Pixel's touch sensitivity. Quick Share is faster and "Now Playing" will show the album art when viewing its history.
Personalization/Digital Assistance
Unknown
null
null
null
null
null
null
news
Hugh Langley
Google worried its Gemini Workspace product lagged rivals like Microsoft and OpenAI in key metrics, leaked documents show
The Gemini for Google Workspace AI tools struggled against rivals Microsoft and OpenAI in brand familiarity, internal documents showed.
https://www.businessinsider.com/google-workspace-ai-tools-trail-rivals-key-metrics-leaked-docs-2024-12
https://i.insider.com/67582edf52dd0818d1a62833?width=1200&format=jpeg
2024-12-10T22:17:45Z
Aparna Pappu, former head of Google Workspace, onstage at Google I/O 2023 (Image credit: Google). Google found its AI Workspace product lagged rivals, internal documents show. A study earlier this year found the tool trailed Microsoft and Apple in brand familiarity and usage. Google hopes Workspace is one way it can turn AI into profit. As Google pours untold amounts of cash into AI, it's banking on products such as Gemini for Google Workspace to turn that investment into revenue. An internal presentation reveals the company worried that Gemini lagged behind its rivals across key metrics. Gemini for Google Workspace puts Google's AI features into a handful of the company's productivity tools, such as Gmail, Docs, and Google Meet. Users can have the AI model rewrite an email, whip up a presentation, or summarize documents filled with dense information. Google, which charges customers extra for these add-ons, claims the features will save users time and improve the quality of their work. Gemini for Google Workspace trailed all its key rivals, including Microsoft, OpenAI, and even Apple, when it came to brand familiarity and usage, according to an internal market research presentation reviewed by Business Insider. The data tracked Gemini's brand strength during the first half of 2024 and included figures on what percentage of audiences use and pay for Gemini for Google Workspace in certain segments. One document seen by BI said that Workspace's Gemini tools were "far behind the competition" but that a strong Q4 could help the company in 2025. In a written statement, a spokesperson said the data came from a study tracking brand awareness during the brand transition from "Duet AI" to "Gemini" earlier this year and called the data "old and obsolete." "In the time since, we've brought Gemini for Workspace to millions more customers and made significant, double-digit gains across key categories, including familiarity, future consideration, and usage. We're very pleased with our momentum and are encouraged by all the great feedback we are getting from our users," the spokesperson added. The internal data tracked Gemini's brand strength across commercial, consumer, and executive groups. In the US commercial group, Gemini scored lower than Microsoft Copilot and ChatGPT across four categories: familiarity, consideration, usage, and paid usage, one slide showed. Paid usage was measured at 22%, 16 points lower than Copilot and ChatGPT. Data for the UK in the commercial group also showed Gemini mostly behind its rivals, although it scored slightly higher than Copilot in paid usage.
In Brazil and India, Gemini for Workspace fared better than Copilot across most categories but still fell below ChatGPT, the data showed."Gemini trails both Copilot and ChatGPT in established markets," the document said, adding that it "rises above Copilot across the funnel" in Brazil and India.In another part of Google's internal presentation that focused on brand familiarity, Google's Gemini for Workspace came in last place in consumer, commercial, and executive categories, trailing ChatGPT, Copilot, Meta AI, and Apple AI.Familiarity was particularly low for the US consumer category, with Gemini for Workspace scoring just 45%, while Copilot scored 49%, ChatGPT and Apple both scored 80%, and Meta scored 82%.'We have the same problem as Microsoft'Microsoft's Copilot, which does similar tasks like summarizing emails and meetings, likewise struggles to live up to the hype, with some dissatisfied customers and employees who said the company has oversold the current capabilities of the product, BI recently reported."We have the same problem as Microsoft," said a Google employee directly familiar with the Gemini for Workspace strategy. "Just with less market share." The person asked to remain anonymous because they were not permitted to speak to the press.Google's data showed Apple and Meta's AI products have much bigger market recognition, which could benefit those companies as they roll out business products that compete with Google's.Internally, the Workspace group has recently undergone a reshuffle. The head of Google Workspace, Aparna Pappu, announced internally in October that she was stepping down, BI previously reported. Bob Frati, vice president of Workspace sales, also left the company earlier this year. Jerry Dischler, a former ads exec who moved to the Cloud organization earlier this year, now leads the Workspace group.Are you a current or former Google employee? Got more insight to share? You can reach the reporter Hugh Langley via the encrypted messaging app Signal (+1-628-228-1836) or email ([email protected]).Read the original article on Business Insider
Content Creation/Content Synthesis
Unknown
null
null
null
null
null
null
news
Rhett Jones
Elon Musk’s Grok Chatbot Is Free for All X Users Now
There are limits for free users, though.
https://gizmodo.com/elon-musks-grok-chatbot-is-free-for-all-x-users-now-2000535451
https://gizmodo.com/app/…s-2185934057.jpg
2024-12-06T19:40:21Z
X has made its Grok chatbot free to all users. Grok is developed by xAI, Elon Musk's AI company that was spun up as a response to OpenAI. The news doesn't come as a surprise, as some users noticed weeks ago that X had begun offering free access to the LLM chatbot. Musk has proclaimed Grok to be the most free speech-forward chatbot after repeatedly attacking other chatbots like OpenAI's ChatGPT for being too woke by limiting discussion of sensitive topics. Besides discussing topics more freely, Grok also includes a text-to-image generation feature that will generate more explicit images, if you ask it to. Because Musk owns both X and xAI, Grok can access the entire corpus of X content to provide real-time access to somewhat accurate information. Results from Grok can be embedded into X posts. Grok has been limited to paying X subscribers thus far, but that's a relatively small number of people, and Musk probably wants more people to give Grok a try as ChatGPT continues to race ahead. Developing large language models is incredibly expensive, and xAI has spent billions scaling up a supercomputer in Memphis, with plans to house at least 1 million GPUs and outpace the likes of OpenAI and Google's Gemini. X provides by far the most revenue to xAI, but Grok does have a developer API now. Musk is currently locked in a heated race with OpenAI most directly, which he co-founded only to leave several years later over disputes about the direction of the company. He is currently suing OpenAI over its pivot from a non-profit, open-source company into a commercial enterprise with profit intent, as well as its requirement that investors in OpenAI not invest in competitors. (A chat with Grok about whether Grok is open source. © Gizmodo) Of course, Musk's X is a for-profit company and does not allow other companies to use content from the social media network for training. He has a point that he invested $44 million to start OpenAI with the intention that it remain a non-profit company. But OpenAI has released emails that show Musk urged the company to raise $1 billion and start being less open over time, essentially saying that if it didn't become a for-profit entity, it wouldn't be able to raise enough money from investors to compete with the likes of Google DeepMind. Suing OpenAI to force it to remain a non-profit company could be self-serving and benefit Musk's own companies, including Tesla, which is developing its own AI-based products that are not open source. xAI is also important to Musk because he has used shares in the company to repay investors who lost their shirts on his acquisition of Twitter. He has granted shares in the new company, now valued at $50 billion, to those investors, and some of them have invested further capital in the new company. Although chatbots are still plagued with hallucinations and other problems, OpenAI has said that ChatGPT recently surpassed 300 million weekly active users. Musk will likely need to expand access to Grok beyond X if he wants to catch up.
Digital Assistance/Content Synthesis
Unknown
null
null
null
null
null
null
news
EditorDavid
Elon Musk's xAI Plans Massive Expansion of AI Supercomputer in Memphis
An anonymous reader shared this report from Reuters:Elon Musk's artificial intelligence startup xAI plans to expand its Memphis, Tennessee, supercomputer to house at least one million graphics processing units (GPUs), the Greater Memphis Chamber said on Wednesday, as xAI races to compete against rivals like OpenAI. The move represents a massive expansion for the supercomputer called Colossus, which currently has 100,000 GPUs to train xAI's chatbot called Grok. As part of the expansion, Nvidia, which supplies the GPUs, and Dell and Super Micro, which have assembled the server racks for the computer, will establish operations in Memphis, the chamber said in a statement. The Greater Memphis chamber (an economic development organization) called it "the largest capital investment in the region's history," even saying that xAI "is setting the stage for Memphis to become the global epicenter of artificial intelligence." ("To facilitate this massive undertaking, the Greater Memphis Chamber established an xAI Special Operations Team... This team provides round-the-clock concierge service to the company.") Reuters calls the supercomputer "a critical component of advancing Musk's AI efforts, as the billionaire has deepened his rivalry against OpenAI..." And the Greater Memphis chamber describes the expansion by Nvidia/Dell/Super Micro as "further solidifying the city's position as the 'Digital Delta'... Memphis has provided the power and velocity necessary for not just xAI to grow and thrive, but making way for other companies as well."Read more of this story at Slashdot.
https://slashdot.org/story/24/12/05/0246248/elon-musks-xai-plans-massive-expansion-of-ai-supercomputer-in-memphis
https://a.fsdn.com/sd/topics/ai_64.png
2024-12-05T12:34:00Z
Elon Musk's artificial intelligence startup xAI plans to expand its Memphis, Tennessee, supercomputer to house at least one million graphics processing units (GPUs), the Greater Memphis Chamber said on Wednesday, as xAI races to compete against rivals like OpenAI. The move represents a massive expansion for the supercomputer called Colossus, which currently has 100,000 GPUs to train xAI's chatbot called Grok. As part of the expansion, Nvidia, which supplies the GPUs, and Dell and Super Micro, which have assembled the server racks for the computer, will establish operations in Memphis, the chamber said in a statement.Reuters calls the supercomputer "a critical component of advancing Musk's AI efforts, as the billionaire has deepened his rivalry against OpenAI..." And the Greater Memphis chamber describes the expansion by Nvidia/Dell/Super Micro as "further solidifying the city's position as the 'Digital Delta'... Memphis has provided the power and velocity necessary for not just xAI to grow and thrive, but making way for other companies as well."
Unknown
Unknown
null
null
null
null
null
null
news
Joe Rossignol
What to Expect From iOS 19: All the Rumors So Far
iOS 19 is not expected to be announced until June 2025, but information about the software update is already starting to surface.Below, we recap the biggest iOS 19 rumors so far.More ChatGPT-Like SiriiOS 19 will introduce a "more conversational Siri" powered by "more advanced large language models," according to Bloomberg's Mark Gurman. He said this upgrade will make Siri more like OpenAI's ChatGPT, allowing the assistant to "handle more sophisticated requests."Apple will likely unveil the revamped Siri when it unveils iOS 19 at WWDC 2025 next June, but Gurman said it will not be available until as early as spring 2026. That suggests the feature will be released as part of an iOS 19.4 update.In the meantime, iOS 18.2 added ChatGPT integration to Siri, and Gurman expects an integration with Google's Gemini to follow in a later update. And with iOS 18.4, Siri will gain on-screen awareness, deeper per-app controls, and a few other enhancements, but it will still not offer fully-fledged ChatGPT-like conversational abilities directly.Delayed FeaturesGurman revealed that some other features have been "postponed" until iOS 19.4."I'm told that a larger-than-usual number of features scheduled for iOS 19 (beyond the new Siri) are already postponed until spring 2026 (when iOS 19.4 debuts)," he said.He did not provide any specific details about the features that are pushed back.Gurman attributed the delay to the gradual rollout of Apple Intelligence features between iOS 18.1 and iOS 18.4, which is preventing some Apple software engineers from moving on to iOS 19 development. In line with this information, an unnamed source within Apple told French website iPhoneSoft.fr that they are still working on iOS 18 development.Compatible iPhonesAccording to French website iPhoneSoft.fr, iOS 19 will be compatible with any iPhone that is capable of running iOS 18, which would mean the following models: iPhone 16 iPhone 16 Plus iPhone 16 Pro iPhone 16 Pro Max iPhone 15 iPhone 15 Plus iPhone 15 Pro iPhone 15 Pro Max iPhone 14 iPhone 14 Plus iPhone 14 Pro iPhone 14 Pro Max iPhone 13 iPhone 13 mini iPhone 13 Pro iPhone 13 Pro Max iPhone 12 iPhone 12 mini iPhone 12 Pro iPhone 12 Pro Max iPhone 11 iPhone 11 Pro iPhone 11 Pro Max iPhone XS iPhone XS Max iPhone XR iPhone SE (2nd generation or later)The oldest devices in that list are the iPhone XS, iPhone XS, and iPhone XR, all of which were released in September 2018.Of course, some iOS 19 features will be unavailable on older iPhone models.If this report is accurate, it would be the second year in a row in which Apple did not drop support for any iPhone models with its major annual iOS release. iOS 18 is compatible with the same iPhone models as iOS 17 is.iOS 17 dropped support for the iPhone 8, iPhone 8 Plus, and iPhone X.TimingThe first iOS 19 and iPadOS 19 betas should be released during WWDC 2025 next June, and the updates are likely to be released in September.Related Roundup: iOS 19This article, "What to Expect From iOS 19: All the Rumors So Far" first appeared on MacRumors.comDiscuss this article in our forums
https://www.macrumors.com/2024/12/23/what-to-expect-from-ios-19-so-far/
https://images.macrumors…Feature-Mock.jpg
2024-12-23T16:38:00Z
iOS 19 is not expected to be announced until June 2025, but information about the software update is already starting to surface. Below, we recap the biggest iOS 19 rumors so far. More ChatGPT-Like Siri: iOS 19 will introduce a "more conversational Siri" powered by "more advanced large language models," according to Bloomberg's Mark Gurman. He said this upgrade will make Siri more like OpenAI's ChatGPT, allowing the assistant to "handle more sophisticated requests." Apple will likely unveil the revamped Siri when it unveils iOS 19 at WWDC 2025 next June, but Gurman said it will not be available until as early as spring 2026. That suggests the feature will be released as part of an iOS 19.4 update. In the meantime, iOS 18.2 added ChatGPT integration to Siri, and Gurman expects an integration with Google's Gemini to follow in a later update. And with iOS 18.4, Siri will gain on-screen awareness, deeper per-app controls, and a few other enhancements, but it will still not offer fully fledged ChatGPT-like conversational abilities directly. Delayed Features: Gurman revealed that some other features have been "postponed" until iOS 19.4. "I'm told that a larger-than-usual number of features scheduled for iOS 19 (beyond the new Siri) are already postponed until spring 2026 (when iOS 19.4 debuts)," he said. He did not provide any specific details about the features that are pushed back. Gurman attributed the delay to the gradual rollout of Apple Intelligence features between iOS 18.1 and iOS 18.4, which is preventing some Apple software engineers from moving on to iOS 19 development. In line with this information, an unnamed source within Apple told French website iPhoneSoft.fr that they are still working on iOS 18 development. Compatible iPhones: According to French website iPhoneSoft.fr, iOS 19 will be compatible with any iPhone that is capable of running iOS 18, which would mean the following models: iPhone 16, iPhone 16 Plus, iPhone 16 Pro, iPhone 16 Pro Max, iPhone 15, iPhone 15 Plus, iPhone 15 Pro, iPhone 15 Pro Max, iPhone 14, iPhone 14 Plus, iPhone 14 Pro, iPhone 14 Pro Max, iPhone 13, iPhone 13 mini, iPhone 13 Pro, iPhone 13 Pro Max, iPhone 12, iPhone 12 mini, iPhone 12 Pro, iPhone 12 Pro Max, iPhone 11, iPhone 11 Pro, iPhone 11 Pro Max, iPhone XS, iPhone XS Max, iPhone XR, and iPhone SE (2nd generation or later). The oldest devices in that list are the iPhone XS, iPhone XS Max, and iPhone XR, all of which were released in 2018. Of course, some iOS 19 features will be unavailable on older iPhone models. If this report is accurate, it would be the second year in a row in which Apple did not drop support for any iPhone models with its major annual iOS release. iOS 18 is compatible with the same iPhone models as iOS 17 is. iOS 17 dropped support for the iPhone 8, iPhone 8 Plus, and iPhone X. Timing: The first iOS 19 and iPadOS 19 betas should be released during WWDC 2025 next June, and the updates are likely to be released in September.
Digital Assistance/Content Synthesis
Unknown
null
null
null
null
null
null
news
Rajesh Pandey
Google Nest smart speakers are starting to get this Gemini-powered upgrade
Your Nest speaker or display will soon sound different
https://www.androidpolice.com/google-nest-smart-speakers-are-starting-get-gemini-powered-upgrade/
https://static1.anpoimag…io-bookstack.JPG
2024-12-03T04:59:13Z
Key Takeaways Google seems to be slowly rolling out a more conversational Gemini-powered Google Assistant voice on Nest speakers. Some users are reporting the new Gemini-powered voice is active for certain commands. The new voice should allow Nest users to ask follow-up questions in a natural language. Google has been working hard to make Gemini a full-fledged Assistant replacement on Android. It is also working on integrating Gemini into all its services, like Gmail and Google Drive. In August this year, Google also detailed that it plans to use Gemini AI models to improve your interaction with smart home devices. This includes rolling out new voices for Google Assistant on Nest speakers that "feel more conversational." Four months later, some Nest users report that the new voice is now live for them.RelatedJust in time for the 2024 World Chess ChampionshipSeveral Nest owners on Reddit report Assistant is talking to them in a new voice for certain commands. Apparently, the device will switch between the old and new Gemini-powered voice depending on the command asked. It uses the more conversational Gemini voice for more complex queries, while it sticks to the old voice for simpler questions.One user claims that the "Orange" voice under Google Assistant Settings > Assistant Voice and Sound has changed and now sounds new. If you are not hearing the new Gemini-powered voice on your Nest display or speaker, try switching to the Orange voice to see if that makes any difference. None of the natural-sounding and conversational voices in Google Gemini on Android is named "Orange." So, it seems Google might not stick to the same voice naming scheme across devices.It appears Google is still experimenting with the new voices and has not rolled them out fully for Nest devices. Apart from a more natural conversation flow, the new Gemini-powered voice on Nest speakers and displays should understand you better. It will also enable you to ask follow-up questions without providing context.Gemini is much smarter than Google Assistant and can easily understand conversations in natural language. This makes it ideal for use on Nest speakers and displays, as you can have a natural conversation and get more information about the topic you want.Last month, Google rolled out the fall update for Google Home, bringing Gemini AI to Nest cameras and enabling AI-powered search. It also made creating Google Home routines easier through Gemini; describe what you want to do in plain language, and Gemini will do the rest for you.
Digital Assistance/Content Synthesis/Content Creation
Unknown
null
null
null
null
null
null
news
Chethan Rao
Google Docs will leverage Gemini to create formatted documents from scratch
Docs is adding a new "Help me create" tool that uses Workspace files to generate formatted documents
https://www.androidpolice.com/google-docs-gemini-help-me-create-web/
https://static1.anpoimag…-docs-hero-1.jpg
2024-12-07T18:59:51Z
Key Takeaways Google Docs has launched a new 'Help me create' feature on the web powered by Google Gemini. It lets users generate fully formatted documents by attaching Workspace files from their Drive account. This differs from the similarly named 'Help me write' feature in Google Docs, which cannot use Workspace files. Google Gemini is among the most recognizable names in the AI business today. In addition to offering a powerful chatbot with multiple capabilities, Gemini's power is also well utilized within most Google Workspace apps, including Google Docs. This document editor already supports "Help me write," which lets you refine an existing draft in multiple styles or even write entire paragraphs using a simple prompt. Google is now ready to add another layer of AI-based writing to the mix with the addition of "Help me create." RelatedMy love-hate relationship with Google Docs as a notebookA Google Docs support page has just gone up detailing the functioning of Help me create, with Google explaining how it will let you create formatted documents using Workspace files on your account, which can be attached by typing @ followed by the name of the file saved on Google Drive (via 9to5Google).When available, you will see a Help me create prompt at the top of the screen, along with the rest of the items you generally find while opening a new document (Meeting notes, Cover image, Poll etc). You can also access this new Gemini-powered feature by navigating to File > New > Help me create.Although it can easily access all the Workspace files in your account, Google says Help me create will not retain the "structure or style" of the Workspace files you use in the prompt, while it cannot create inline or cover images of people, either. As for when Help me create could be handy, Google has a long list of use cases, such as dinner party menus, campaign briefs, newsletters, proposals, blog posts, project trackers, etc. Google recommends using natural language and being more conversational in the prompts to get the best results. A number of sample prompts are also mentioned on the support page, covering multiple scenarios where one may require the assistance of Help me create in Docs. Help me create is limited to the web version of Docs and is available as part of the Gemini for Google Workspace Alpha and the Workspace Labs programs. Google adds that Help me create will only be available in a "new, blank document" and that it is rolling out gradually, indicating that it won't be visible to all users immediately.
Content Creation/Process Automation
Unknown
null
null
null
null
null
null
news
[email protected] (Shruti Shekar)
2025 could be very different for Google and Samsung — here's why
Google and Samsung had very different years in 2024, while one succeeded and the other struggled. 2025 will look very different for both companies in a very positive way.
https://www.androidcentral.com/phones/2025-could-be-very-different-for-google-and-samsung-heres-why
https://cdn.mos.cms.futu…9cwG-1200-80.jpg
2024-12-23T12:00:10Z
Google and Samsung had very different years in 2024. Google soared to the top, managing to have one of its largest lineup of devices launch in the fall and a successful year with expanding Gemini. On the other hand, Samsung had a bumpy road with a successful S24 and A series launch and getting on track with Galaxy AI, but it struggled with its series 6 foldable lineup.It isn’t unheard of for companies to have a good or a bad year; that’s how the tech space works. There are ebbs and flows to success, and it doesn’t mean that a company is permanently bad or will always be seen as good.I spoke to three industry research analysts who told me about why either company did well and areas of improvement for 2025.Google needs to keep innovating(Image credit: Andrew Myrick / Android Central)As I said earlier, Google had a great year. The company became a strong competitor to OpenAI’s ChatGPT by launching Gemini in more countries and, by the end of the year, launched Gemini 2.0 Experimental Advance for Gemini Advance subscribers.It successfully rolled out stable Android 15 and even launched the Android 16 developer preview much earlier than usual. In the fall, Google launched its Pixel 9 series along with other products, and in the time I’ve been covering technology, it was the biggest launch of products I’ve ever seen Google do.However, Neil Shah, vice president of research at Counterpoint Research, says it’s hard to tell whether the launch of Gemini AI on its Pixel 9 series of phones has helped with sales.“[The] Gemini AI effect has yet to be seen fully to boost Google Pixel sales, however, it is definitely one of the key hooks in coming years for Google Pixel to bring flagship Gen AI experiences on the Pixel hardware,” he notes.Get the latest news from Android Central, your trusted companion in the world of AndroidBy submitting your information you agree to the Terms & Conditions and Privacy Policy and are aged 16 or over.I would agree; I don’t think we’re at a point yet where people are buying a phone because of its AI capabilities. I don’t think there are enough features or average enough use case scenarios where people are using AI daily, and I believe people don’t know enough about AI.That being said, one of my favorite features, and in my opinion, the best feature Google came out with this year, was Circle to Search. It is hands down the most innovative feature and one that I actively use on a daily basis. But apart from that, I really haven’t been using many of Gemini’s features on a day-to-day basis.Jitesh Ubrani, research manager for IDC's Worldwide Mobile Device Trackers, agrees and adds that Google needs to focus on innovating Gemini for more general features.“Google’s constant iteration and use of user feedback along with improvements in distribution and pricing along with increased marketing spend have helped the company gain share in the past few years. Having differentiating features such as great photography, call screening, Magic Eraser, and other AI features have helped along the way,” he says.Ubrani adds that in 2025, he’d also like to see Google expand Gemini across all platforms and devices, not just smartphones. This is something that I hope Google prioritizes next year.(Image credit: Google)This year, Google also launched one of the biggest suites of products that I’ve ever seen a company do in a long time. More importantly, it launched the Pixel 9 series with Gemini integrated into it.I think the Pixel 9 series is one of Google's best phone launches. 
While I had no issues with the phone, there can always be improvements, according to Anshel Sag, principal analyst for Moor Insights & Strategy.“Google needs to improve the Pixel battery life, by moving to advanced nodes and using the best processors. The same goes for Wi-Fi connectivity and 5G, I was really disappointed how much of a downgrade the Pixel was on connectivity from my S24 Ultra,” he said.And Google is likely planning to address this issue with rumors indicating that the Pixel 10 lineup will have the Tensor G5. Google has also allegedly finished its design and sent it along for manufacturing at TSMC, ending the era of Samsung-produced designs. TSMC tends to manufacture better chipsets than Samsung. For reference, Qualcomm used Samsung to produce the Snapdragon 8 Gen 1, a processor that was infamous for overheating and heavy throttling.So, for all we know, the issue surrounding a slower battery might be resolved, making the Pixel 10 lineup an even more premium device.I know Google will have a smooth 2025, but I just hope that when it comes to AI, it is really able to push the average consumer and help them understand how to use AI safely and efficiently daily.2025 will be the rise of success for Samsung (Image credit: Nicholas Sutrich / Android Central)It is widely known that Samsung bears the hat of having the best Android phones and is constantly the leader in the Android space. However, this past year, the company faced several challenges despite starting the year strong by launching its S24 series of phones.But while the S24 was a great series, it didn’t really help improve the downward trajectory of Samsung’s sales. In April 2024, Samsung initiated a six-day workweek for executives to create "a sense of crisis" among the company's leadership team.Where Samsung began to really struggle was after launching its Samsung Galaxy Z Flip 6 and Z Fold 6, which in many tech circles was more of an iterative update to what it launched last year. The phones weren’t super exciting and weren’t as innovative as they have been in previous years.Samsung noticed this to the point where, for the first time, Samsung Electronics' DS Division, Jeon Young-hyun, issued a rare apology for the company's failure to innovate. He acknowledged that outsiders are viewing the company's current situation as a "crisis" and vowed to make changes and improvements that will benefit investors, customers, and employees alike.It came to a point where other companies were even mocking the release of the Z Fold 6 and how thick it was, like Honor engraving a tiny "world's smallest apology" on the hinge of a special-edition Magic V3.(Image credit: Honor)IDC also confirmed that Samsung had lost smartphone market share as of Q2 2024 compared to the same quarter in 2023. The research firm also stated that while Samsung moved over 53 million smartphone units in the second quarter, its market share dipped to 18.4 percent, down from 20% in the same period last year. This comes at a time when other China-based brands like Xiaomi and Vivo have been innovating and expanding their market share.Ubrani says, “Samsung has rested on their laurels for far too long, and it’s starting to catch up to them. Most Chinese smartphone brands have outengineered and out innovated them in recent years when it comes to things like foldables, battery life, gaming performance, cameras, etc.For the longest time, Samsung has focused on competition with Apple when globally they’re competing with many more brands. 
To be successful, Samsung needs to realize that there's more than just Apple. Samsung should leverage their scale to iterate faster and continue to invest in marketing like they used to do during the pre-pandemic era." Shah also notes that the launch of the S and A series helped keep Samsung afloat, but if it wants to get its numbers back up, it needs to focus on innovating its foldables. "The foldables performance has been disappointing, which [offset] the good traction for S and A series this year," he said. "Samsung will have to protect its foldable leadership position as players such as Google, OnePlus and others are coming up with better form factors, designs and camera performance." I think Samsung is fine; it is not the type of company to fail, and it knows when to fix itself; case in point, the public apology. However, I do think that Samsung needs to stay focused on itself and not constantly look at what other companies are doing. Yes, to stay in the game, you need to see what others are doing, but looking too much will distract you. Sag agrees, adding that Samsung needs to stop looking at what Apple is doing to innovate because its customers don't care about Apple. "Samsung needs to lean into what makes Android fun and exciting; people are not really switching between Android and iOS (Razr+ excepted). Samsung needs to figure out how to execute Galaxy AI in a more meaningful manner and get back its reputation as the carrier of the Android banner," he said. I would wholeheartedly agree that Samsung needs to remind its users why it's the best innovator and why it was the first to introduce foldables, and excite its users again. I've been covering technology for many years, and 2024 has been a fascinating year for both Google and Samsung. I expect a lot from both companies in 2025, and I can't wait to see how they innovate more.
Content Synthesis/Prediction/Decision Making
Business and Financial Operations/Computer and Mathematical
null
null
null
null
null
null
news
null
How does Google's Gemini 2.0 stack up against OpenAI?
null
https://consent.yahoo.com/v2/collectConsent?sessionId=1_cc-session_14ce4285-651f-4a14-bdf6-b6ac2c1a4cac
null
2024-12-12T00:33:00Z
If you click 'Accept all', we and our partners, including 237 who are part of the IAB Transparency & Consent Framework, will also store and/or access information on a device (in other words, use cookies) and use precise geolocation data and other personal data such as IP address and browsing and search data, for personalised advertising and content, advertising and content measurement, and audience research and services development.If you do not want us and our partners to use cookies and personal data for these additional purposes, click 'Reject all'.If you would like to customise your choices, click 'Manage privacy settings'.You can change your choices at any time by clicking on the 'Privacy & cookie settings' or 'Privacy dashboard' links on our sites and apps. Find out more about how we use your personal data in our privacy policy and cookie policy.
Personalization/Content Synthesis
Management/Business and Financial Operations/Arts, Design, Entertainment, Sports, and Media
null
null
null
null
null
null
news
Karandeep Singh Oberoi
Google will go full throttle with Gemini in 2025
Google is betting big on Gemini, with plans to 'turbo charge' the app in the first half of 2025
https://www.androidpolice.com/google-gemini-2025-outlook/
https://static1.anpoimag…2a-ap24-hero.jpg
2024-12-29T17:09:51Z
Summary Google's main objective for Gemini in 2025 is 'scaling Gemini on the consumer side,' aiming to make it the 16th Google product to reach 500 million monthly users. CEO Sundar Pichai emphasized the urgency of Gemini's development due to increased competition in the AI market, stating the need to 'move faster as a company.' DeepMind co-founder Demis Hassabis revealed that Gemini products will "evolve massively" with numerous updates expected in the first half of 2025. Google's 2024 was dominated by Gemini-related announcements. All the way back in February, the tech giant rebranded the AI chatbot from Bard to Gemini, followed by the release of several new and advanced models to further improve the chatbot's capabilities.A standalone Gemini application landed on iOS back in November, finally bringing access to the chatbot's conversational Gemini Live assistant to more users, with Google Chrome on desktop likely next in line. RelatedGLIC = Gemini Live in ChromeGoogle's 2025 outlook for Gemini points towards a continuation of this trajectory with "Scaling Gemini on the consumer side" being Google's primary focus, as indicated by CEO Sundar Pichai in a recent strategy meeting with employees (via CNBC).The CEO emphasized the urgency of quickly developing, stating that the stakes are high due to increased competition in the AI field. "I think its really important we internalize the urgency of this moment, and need to move faster as a company," said Pichai. "These are disruptive moments. In 2025, we need to be relentlessly focused on unlocking the benefits of this technology and solve real user problems."The tech giant currently has 15 apps that have hit the five-hundred million monthly user mark, with Gemini being positioned as the 16th Google app/service to reach the milestone. "Scaling Gemini on the consumer side will be our biggest focus next year," said Pichai.RelatedYour Pixel's free subscription might come in handyGoogle is already seeing a surge in the number of users opting for the Gemini mobile app since its launch in February, with 2025 likely to accelerate the trend. Google DeepMind's co-founder Demis Hassabis added that "the products themselves are going to evolve massively over the next year or two" with a trove of Gemini updates coming in the first half of 2025. Hassabis said that in response to an employee question about OpenAI's ChatGPT becoming synonymous with AI.Hassabis also added that DeepMind's experimental AI prototype, Project Astra, will be updated in the first half of 2025, with no plans to charge $200 a month "like other companies" (OpenAI) for its functionality.RelatedGoogle's new vision for AI assistants
Digital Assistance/Content Synthesis/Personalization
Business and Financial Operations/Management
null
null
null
null
null
null
news
msmash
Microsoft AI Chief Says Conversational AI Will Replace Web Browsers
Microsoft AI CEO Mustafa Suleyman predicts conversational AI will become the primary way people interact with technology,replacing traditional web browsers and search engines within the next few years. In an interview with The Verge, Suleyman, who oversees Microsoft's consumer AI products including Bing and Copilot, called current search interfaces "completely broken" and "a total pain," arguing that voice-based AI interactions will prove "100 times easier" for users. He said: The UI that you experience is going to be automagically produced by an LLM in three or five years, and that is going to be the default. And they'll be representing the brands, businesses, influencers, celebrities, academics, activists, and organizations, just as each one of those stakeholders in society ended up getting a podcast, getting a website, writing a blog, maybe building an app, or using the telephone back in the day. The technological revolution produces a new interface, which completely shuffles the way that things are distributed. And some organizations adapt really fast and they jump on board and it kind of transforms their businesses and their organizations, and some don't. There will be an adjustment. We'll look back by 2030 and be like, "Oh, that really was the kind of moment when there was this true inflection point because these conversational AIs really are the primary way that we have these interactions." And so, you're absolutely right. A brand and a business are going to use that AI to talk to your personal companion AI because I don't really like doing that kind of shopping. And some people do, and they'll do that kind of direct-to-consumer browsing experience. Many people don't like it, and it's actually super frustrating, hard, and slow. And so, increasingly you'll come to work with your personal AI companion to go and be that interface, to go and negotiate, find great opportunities, and adapt them to your specific context. That'll just be a much more efficient protocol because AIs can talk to AIs in super real-time. And by the way, let's not fool ourselves. We already have this on the open web today. We have behind-the-scenes, real-time negotiation between buyers and sellers of ad space, or between search ranking algorithms. So, there's already that kind of marketplace of AIs. It's just not explicitly manifested in language. It's operating in vector space.Read more of this story at Slashdot.
https://slashdot.org/story/24/12/09/1548200/microsoft-ai-chief-says-conversational-ai-will-replace-web-browsers
https://a.fsdn.com/sd/topics/ai_64.png
2024-12-09T16:00:00Z
The UI that you experience is going to be automagically produced by an LLM in three or five years, and that is going to be the default. And they'll be representing the brands, businesses, influencers, celebrities, academics, activists, and organizations, just as each one of those stakeholders in society ended up getting a podcast, getting a website, writing a blog, maybe building an app, or using the telephone back in the day.The technological revolution produces a new interface, which completely shuffles the way that things are distributed. And some organizations adapt really fast and they jump on board and it kind of transforms their businesses and their organizations, and some don't. There will be an adjustment. We'll look back by 2030 and be like, "Oh, that really was the kind of moment when there was this true inflection point because these conversational AIs really are the primary way that we have these interactions." And so, you're absolutely right. A brand and a business are going to use that AI to talk to your personal companion AI because I don't really like doing that kind of shopping. And some people do, and they'll do that kind of direct-to-consumer browsing experience. Many people don't like it, and it's actually super frustrating, hard, and slow.And so, increasingly you'll come to work with your personal AI companion to go and be that interface, to go and negotiate, find great opportunities, and adapt them to your specific context. That'll just be a much more efficient protocol because AIs can talk to AIs in super real-time. And by the way, let's not fool ourselves. We already have this on the open web today. We have behind-the-scenes, real-time negotiation between buyers and sellers of ad space, or between search ranking algorithms. So, there's already that kind of marketplace of AIs. It's just not explicitly manifested in language. It's operating in vector space.
Digital Assistance/Information Retrieval Or Search/Content Synthesis/Prediction
Business and Financial Operations/Management
null
null
null
null
null
null
news
[email protected] (Kevin Okemwa)
Elon Musk's free-for-all Grok-2 AI turns the billionaire into a pariah, reportedly spreading misinformation with narcissistic tendencies
X's Grok-2 is now accessible and free for all users. Some have lauded the tool's ultra-realistic photo generation capabilities, while others have gotten the tool to make fun of the company CEO, Elon Musk.
https://www.windowscentral.com/software-apps/elon-musks-free-for-all-grok-2-ai-turns-the-billionaire-into-a-pariah-reportedly-spreading-misinformation-with-narcissistic-tendencies
https://cdn.mos.cms.futu…5tZn-1200-80.jpg
2024-12-18T17:02:09Z
Earlier this year, billionaire and X owner Elon Musk indicated that xAI was gearing up to transform Grok AI into "the most powerful AI by December." The billionaire highlighted that the company was in the process of training the AI tool with “the most powerful AI training cluster in the world,” dubbed Memphis Supercluster. The training cluster features 100,000 liquid-cooled NVIDIA H100 graphics processing units (GPUs) connected with a single RDMA fabric. However, it was established that xAI was leveraging more than a powerful cluster under the hood to train Grok. X quietly shipped a new update with a new setting that allowed the company to train its chatbot using users' data without authorization, as it was enabled by default. While it's possible to disable the setting, the company limited the change to the web version, making it difficult for mobile users to disable it. This opened the platform to scrutiny by the Irish Data Protection Commission (DPC), risking a 4% fine on its global annual turnover if it failed to establish a legal basis for using users' data to train its chatbot without their consent. At the same time, reports indicated that the controversial AI tool was being used to spread misinformation at the height of the US presidential election. Amid the controversy and data privacy concerns, many X users took to social media to laud Grok as the "most fun AI in the world" and "the most based and uncensored model of its class yet," perhaps in comparison to other AI tools like Microsoft's Image Creator, which has seemingly been lobotomized due to heavy censorship. For context, Grok AI was previously limited to premium subscribers. However, Elon Musk recently released Grok-2 to all X users. The new model is reminiscent of the original, though the company claims it's faster and more accurate. Since launch, users have flooded social media with posts sharing different use cases. Some have lauded its "ultra-realistic" image generation capabilities, citing that it's "far less restrictive when it comes to censorship." One post from December 11, 2024 claimed: "Grok 2.0 is without a doubt the best AI model for creating images of people. These are almost indistinguishable from reality. These are leaps and bounds better than other models." (pic.twitter.com/5vuAlXQnsE) Mixed reactions over Grok-2: Interestingly, shortly after X announced that Grok-2 was free for all users, some began to highlight instances of the chatbot's performance being stunted. Users attributed the degraded performance and "painfully slow" speeds to the platform's free access to all users. A recent report by Fast Company cites instances where "users have gotten Grok-2 to generate countless faulty, hypocritical, or otherwise polarizing statements." Instances of an AI-powered chatbot spreading misinformation or generating trademarked content aren't a new phenomenon. Earlier this year, explicit AI-generated images of pop star Taylor Swift surfaced online, sparking raging debates and conflict surrounding the use of artificial intelligence to create images of people without consent. Microsoft CEO Satya Nadella indicated the incident was "alarming and terrible." 
While it remains uncertain whether the explicit sexual images were generated using Microsoft Designer, Microsoft shipped a new update to the platform which reinforced better measures and policies to prevent the tool from veering off the guardrails.The outlet's report further revealed that Grok-2's responses can be manipulated to meet the desired output. The chatbot's responses can be shared without context, making the output appear as original answers.A Taylor Swift fan attempted to get the chatbot to indicate what “TTPD” stands for. The AI tool blurted out “The Toilet Paper Department.” This is hilarious because the user was referring to Taylor Swift's latest album, The Tortured Poets Department in this context.I attempted to replicate similar results, but the results were different. The first recommendation was indeed Taylor Swift's latest album. Grok AI provided the following explanation for its deductions:"The context in which TTPD is used will determine which of these meanings applies. However, in recent popular culture, especially among Taylor Swift's fanbase or on social media, it most commonly refers to her album."It also listed Texas Tech Police Department and Third Party Property Damage Liability as possible meanings from the acronym.Elsewhere, users attempted to get Grok AI to describe the controversial billionaire and the answers were a tad surprising. The answer to the prompt "One word that most users on Twitter use to describe Elon Musk" was "idiot." In another instance, a user asked the chatbot why the Tesla CEO was such an a****le.The chatbot pointed to reports of his irascible and unreasonable workplace management style. It also highlighted several instances where Musk had been perceived as a narcissist and lacked empathy.I asked Grok why Elon Musk is such an asshole, here is the response: pic.twitter.com/dk1dGqT5tlDecember 15, 2024Perhaps more interestingly, Grok-2 corroborated previous reports alluding that Elon Musk contributed to the spread of misinformation during the just-concluded US Presidential elections. It also cited instances of Musk spreading misinformation about immigration, health, natural disasters, and more.
Content Creation/Image Analysis
Arts, Design, Entertainment, Sports, and Media
null
null
null
null
null
null
news
Matt G. Southern
Google Announces Search Updates Powered By Gemini 2.0 via @sejournal, @MattGSouthern
Google announces Search improvements coming in 2025, powered by Gemini 2.0, expanding AI Overviews to handle complex queries.The post Google Announces Search Updates Powered By Gemini 2.0 appeared first on Search Engine Journal.
https://www.searchenginejournal.com/google-announces-search-updates-powered-by-gemini-2-0/535103/
https://www.searchengine…-115356-304.jpeg
2024-12-11T16:56:33Z
Google has announced a series of updates to Search for 2025, powered by Gemini 2.0, the company’s latest AI model. Updates to AI Overviews: One of the most notable updates is the enhancement of AI Overviews. CEO Sundar Pichai notes: “Our AI Overviews now reach 1 billion people, enabling them to ask entirely new types of questions — quickly becoming one of our most popular Search features ever.” With Gemini 2.0, AI Overviews will soon handle complex topics and multi-step questions, including advanced math, multimodal queries, and coding. Pichai explained: “We’re bringing the advanced reasoning capabilities of Gemini 2.0 to AI Overviews to tackle more complex topics and multi-step questions, including advanced math equations, multimodal queries and coding.” Google is testing these updates and plans to roll out the improved AI Overviews in early 2025, with plans to expand to more countries and languages within the next year. Gemini 2.0: Gemini 2.0, mainly the Gemini 2.0 Flash model, is key to the recent Search updates. As described by Google DeepMind’s leadership: “2.0 Flash even outperforms 1.5 Pro on key benchmarks, at twice the speed.” This model improves performance and can handle different types of inputs and outputs. The announcement states: “In addition to supporting multimodal inputs like images, video and audio, 2.0 Flash now supports multimodal output like natively generated images mixed with text and steerable text-to-speech (TTS) multilingual audio.” Additionally, Gemini 2.0 Flash can use tools like Google Search and run code to access user-defined functions from other sources. New possibilities for Search: Google is developing new features for Search, including Project Mariner, which aims to improve user interaction with agents in web browsers. The company describes it as “… an early research prototype built with Gemini 2.0 that explores the future of human-agent interaction, starting with your browser.” Looking ahead: Integrating Gemini 2.0 into Google Search could be a key step in improving users’ experience with AI Overviews. The success of these updates will depend on how well Google implements them while maintaining safety and responsibility. As the updates roll out, we will see how users respond and whether these changes enhance the search experience.
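To make the "multimodal queries" idea concrete, here is a minimal sketch of sending text plus an image to a Gemini model with the google-generativeai Python SDK. The model identifier, the placeholder API key, and the local image path are assumptions made for illustration, not details from the announcement.

```python
# Minimal sketch: a multimodal (text + image) request to a Gemini model via the
# google-generativeai SDK. Model name, API key, and file path are illustrative only.
import google.generativeai as genai
from PIL import Image

genai.configure(api_key="YOUR_API_KEY")                 # hypothetical placeholder
model = genai.GenerativeModel("gemini-2.0-flash-exp")   # assumed model identifier

image = Image.open("math_problem.png")                  # hypothetical local image
response = model.generate_content([
    "Explain, step by step, how to solve the equation shown in this image.",
    image,
])
print(response.text)
```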
Content Synthesis/Information Retrieval Or Search/Prediction
Unknown
null
null
null
null
null
null
news
Mark Lohmeyer
Trillium TPU Is GA
Trillium, Google’s sixth-generation Tensor Processing Unit (TPU) is now GA, delivering enhanced performance and cost-effectiveness for AI workloads.
https://cloud.google.com/blog/products/compute/trillium-tpu-is-ga
https://storage.googleap…ax-2500x2500.jpg
2024-12-11T15:44:58Z
The rise of large-scale AI models capable of processing diverse modalities like text and images presents a unique infrastructural challenge. These models require immense computational power and specialized hardware to efficiently handle training, fine-tuning, and inference. Over a decade ago, Google began developing custom AI accelerators, Tensor Processing Units (TPUs), to address the growing demands of AI workloads, paving the way for multimodal AI. Earlier this year, we announced Trillium, our sixth-generation and most performant TPU to date. Today, it is generally available for Google Cloud customers. We used Trillium TPUs to train the new Gemini 2.0, Google's most capable AI model yet, and now enterprises and startups alike can take advantage of the same powerful, efficient, and sustainable infrastructure. Trillium TPU is a key component of Google Cloud's AI Hypercomputer, a groundbreaking supercomputer architecture that employs an integrated system of performance-optimized hardware, open software, leading ML frameworks, and flexible consumption models. As part of the general availability of Trillium TPUs, we are also making key enhancements to AI Hypercomputer's open software layer, including optimizations to the XLA compiler and popular frameworks such as JAX, PyTorch and TensorFlow to achieve leading price-performance at scale across AI training, tuning, and serving. Additionally, features such as host-offloading using the massive host DRAM (complementing the High Bandwidth Memory, or HBM) deliver next-level efficiency. AI Hypercomputer enables you to extract maximum value from an unprecedented deployment of over 100,000 Trillium chips per Jupiter network fabric with 13 Petabits/sec of bisectional bandwidth, capable of scaling a single distributed training job to hundreds of thousands of accelerators. Already, customers like AI21 Labs are using Trillium to deliver meaningful AI solutions to their customers faster: "At AI21, we constantly strive to enhance the performance and efficiency of our Mamba and Jamba language models. As long-time users of TPUs since v4, we're incredibly impressed with the capabilities of Google Cloud's Trillium. The advancements in scale, speed, and cost-efficiency are significant. We believe Trillium will be essential in accelerating the development of our next generation of sophisticated language models, enabling us to deliver even more powerful and accessible AI solutions to our customers." - Barak Lenz, CTO, AI21 Labs. Here are some of the key improvements that Trillium delivers over the prior generation: over 4x improvement in training performance; up to 3x increase in inference throughput; a 67% increase in energy efficiency; an impressive 4.7x increase in peak compute performance per chip; double the High Bandwidth Memory (HBM) capacity; double the Interchip Interconnect (ICI) bandwidth; 100K Trillium chips in a single Jupiter network fabric; and up to 2.5x improvement in training performance per dollar and up to 1.4x improvement in inference performance per dollar. These enhancements enable Trillium to excel across a wide range of AI workloads, including scaling AI training workloads; training LLMs, including dense and Mixture of Experts (MoE) models; inference performance and collection scheduling; embedding-intensive models; and delivering training and inference price-performance. Let's take a look at how Trillium performs for each of these workloads. Scaling AI training workloads: Training large models like Gemini 2.0 requires massive amounts of data and computation.
Trillium's near-linear scaling capabilities allow these models to be trained significantly faster by distributing the workload effectively and efficiently across a multitude of Trillium hosts that are connected through a high-speed interchip interconnect within a 256-chip pod and our state-of-the-art Jupiter data center networking. This is made possible by TPU multislice, full-stack technology for large-scale training, and is further optimized by Titanium, a system of dynamic data-center-wide offloads that range from host adapters to the network fabric. Trillium achieves 99% scaling efficiency with a deployment of 12 pods consisting of 3,072 chips, and demonstrates 94% scaling efficiency across 24 pods with 6,144 chips when pre-training gpt3-175b, even when operating across a data-center network.
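The scaling-efficiency figures quoted above follow the usual definition: achieved throughput divided by the throughput you would get if performance scaled perfectly linearly from a smaller baseline. The sketch below shows that arithmetic; the 256-chip baseline and the throughput numbers are invented purely to reproduce the 99% and 94% figures, not taken from the post.

```python
# Weak-scaling efficiency: achieved throughput vs. ideal linear extrapolation from
# a baseline configuration. Throughput numbers are invented for illustration only.
def scaling_efficiency(base_chips, base_throughput, chips, throughput):
    ideal = base_throughput * (chips / base_chips)   # perfect linear scaling
    return throughput / ideal

base_chips, base_tp = 256, 1.0                       # one pod, normalized throughput
print(round(scaling_efficiency(base_chips, base_tp, 3072, 11.88), 2))  # 0.99
print(round(scaling_efficiency(base_chips, base_tp, 6144, 22.56), 2))  # 0.94
```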
Unknown
Unknown
null
null
null
null
null
null
news
Chandraveer Mathur
Help Me Create for Google Docs is going live now
Google Docs is officially rolling out Help Me Create to eligible users
https://www.androidpolice.com/help-me-create-google-docs-live-now/
https://static1.anpoimag…-ap24-hero-1.jpg
2024-12-10T22:28:34Z
Key Takeaways: Google Workspace AI tools now include Help Me Create for Docs, starting with a blank slate. Unlike other AI tools, Help Me Create does not require ongoing work to start, and it can pull in data and stylize text. Help Me Create is gradually rolling out to paid Workspace users by January 9, 2025, only for pageless documents in English. Google has several AI tools available to use across popular Workspace apps like Docs, Keep, and Gmail, available to both free-tier and paying subscribers. Most of the tools you might find, such as Help Me Write and Help Me Organize, are powered by Gemini AI. We recently spotted another such AI tool in the works for Docs, and it is rolling out to users already, under the name Help Me Create. All of Google's AI tools introduced for Workspace thus far share one thing: they are useful once your work is already underway, and aren't particularly helpful for getting started. Even Help Me Write, which drafts responses to emails or gives you a jumping-off point for your own text, needs an email to respond to, or a detailed prompt to get started. Google's latest creation, Help Me Create, starts with a clean slate. You just supply a prompt detailing what you wish to achieve in the Docs file. Google's announcement for this feature explains several use cases ranging from new product launch marketing plans to party planning checklists with key milestones, timelines, and initiatives. The end result is more detailed than the average Gemini response, with a long body of text peppered with cover imagery, in-line images, and appropriate stylization for the text. It can also pull data from other documents you store in Google Drive. You can name them preceded by an "@" in the prompt field. To get started, just use the Help Me Create shortcut in a new document or use the option under File > New > Help me create. Add-ons are necessary: Interestingly, this isn't the first time we have seen Help Me Create. It was sighted in a support document earlier this week, along with a similar feature in the works for Keep that helps you prep grocery shopping lists easily before attempting a recipe. At the time, we had most of the details about Help Me Create's functionality, but not the rollout timelines. Google says the feature will need a Gemini Business, Enterprise, Education, Education Premium, or Google One AI Premium add-on for the Workspace account. So, only paying customers can access Help Me Create. Moreover, the feature only works with pageless documents in English, and it cannot include web results like the average Gemini chatbot results can. Eligible users will see the feature rolling out now, but it is a gradual rollout that Google expects to complete by January 9, 2025.
Content Creation/Content Synthesis
Management/Business and Financial Operations
null
null
null
null
null
null
news
[email protected] (Brady Snyder)
Google's CEO blasts Microsoft's AI efforts, says 'they're using someone else's models'
Sundar Pichai is extremely confident in Google's AI models, and didn't hold back opinions about one of the company's biggest competitors.
https://www.androidcentral.com/phones/google/googles-ceo-blasts-microsofts-ai-efforts-says-theyre-using-someone-elses-models
https://cdn.mos.cms.futu…ucac-1200-80.jpg
2024-12-11T21:44:09Z
What you need to knowGoogle CEO Sundar Pichai disputed claims that top AI developers are struggling to innovate on next-generation models.Pichai had strong words for a key competitor, Microsoft, challenging the Redmond firm to a side-by-side test "any day, any time."The executive also criticized Microsoft's partnership with OpenAI, saying "they're using someone else's models."Google released its next-generation set of artificial intelligence models, Gemini 2.0, today alongside glimpses of longer-term AI projects. But even before the company's ambitious announcements, Google's leader expressed confidence in its AI developments. At the now-concluded Dealbook Summit, hosted by the New York Times, Google CEO Sundar Pichai had strong words for his company's competitors, and separately, those critical of AI's future (via Windows Central).Pichai, who has led Google through a challenging period that included massive layoffs and a shift toward AI, is still confident there is more progress to be made."When you start out quickly scaling up, you can throw more compute and you can make a lot of progress, but you definitely are going to need deeper breakthroughs as we go to the next stage," Pichai said at the Dealbook Summit. "So you can perceive it as there's a wall, or there's some small barriers."The chief executive appears to be in agreement with other industry leaders, such as OpenAI's CEO Sam Altman, that AI models aren't hitting a so-called wall. In other words, some skeptics believe that AI development will grind to a halt following significant advancements in 2023 and 2024."I don't subscribe to the wall notion," Pichai said.However, that doesn't mean the top boss at Google doesn't expect there to be a slowdown as AI development progresses. "I think the progress is going to get harder when I look at '25," the Google CEO explained. "The low-hanging fruit is gone. The hill is steeper."Google is competing with a few notable competitors as it searches for AI breakthroughs, including OpenAI, Anthropic, and Microsoft. Pichai called out one competitor, Microsoft, specifically when Andrew Ross Sorkin revisited Microsoft CEO Satya Nadella's negative comments toward Google from earlier this year.Get the latest news from Android Central, your trusted companion in the world of AndroidBy submitting your information you agree to the Terms & Conditions and Privacy Policy and are aged 16 or over."I would love to do a side-by-side comparison of Microsoft's own models and our models any day, any time," Pichai quipped. "They're using someone else's models."Pichai is referring to Microsoft's multi-billion-dollar investment in OpenAI, which gives it access to the newest GPT models that power Microsoft Copilot. Google, meanwhile, uses its in-house Gemini models for its AI services.Now, the chief executives at both Google and Microsoft have thrown a bit of shade at one another in 2024. As the AI race heats up, this friendly feud will be yet another storyline to watch.
Unknown
Management
null
null
null
null
null
null
news
Chris Thomas
New Google policy instructs Gemini's fact-checkers to act outside their expertise
Days after Google Deepmind touted its work toward factual accuracy, reports indicate its evaluation policies could be doing the opposite
https://www.androidpolice.com/google-deepmind-gemini-contracted-evaluation-expertise-policy-change/
https://static1.anpoimag…-gemini-gems.jpg
2024-12-19T16:49:13Z
Summary: Google employs contract research agencies to evaluate Gemini response accuracy. GlobalLogic contractors evaluating Gemini prompts are no longer allowed to skip individual interactions based on lack of expertise. Concerns exist over Google's reliance on fact-checkers without relevant knowledge, potentially impacting AI development goals. Google DeepMind, the team responsible for developing and maintaining the conglomerate's AI models, employs various techniques to evaluate and improve Gemini's output. One such method, Gemini 2.0's recently announced FACTS Grounding benchmark, leverages responses from other advanced LLMs to determine if Gemini's answers actually relate to a question, answer the question, and answer the question correctly. Another method calls on human contractors from Hitachi-owned GlobalLogic to evaluate Gemini prompt responses and rate them for correctness. Until recently, contractors could skip individual prompts that fell significantly outside their areas of expertise. Now, Google has mandated that contractors can no longer skip prompts, forcing them to determine accuracy in subjects they might know nothing about (reporting by TechCrunch). Hands-on LLM error-checking gone awry: Previously, GlobalLogic contractors could skip individual prompts they weren't comfortable answering due to lack of background knowledge, with guidelines stating, "If you do not have critical expertise (e.g. coding, math) to rate this prompt, please skip this task." According to sources that remain anonymous due to non-disclosure agreements, the new directive handed down from Google states, "You should not skip prompts that require specialized domain knowledge." Accompanying the new policy is an instruction to "rate the parts of the prompt you understand," and make a note that it falls outside the reviewer's knowledge base. The option to skip certain prompts due to lack of relevant expertise has been eliminated, with contractors now only allowed to bypass individual interactions due to non-existent prompts or responses, or the presence of harmful content the contractor isn't authorized to evaluate. What we know about GlobalLogic AI evaluation: A considerable, fluctuating number of open positions related to AI fact-checking exist on employment platforms like Upworthy and Indeed, offering anywhere from $14 per hour and up to evaluate AI performance. Various recruiters have reached out to jobseekers, apparently on behalf of GlobalLogic, in search of workers to fill potential contract-to-hire positions. Many social media users report the company's obfuscated interview process and lengthy, "stressful" onboarding process, while confirming Google as the GlobalLogic client. Some social media users purporting to currently work on the project have verified the claims of difficulties, as well as a starting pay around $21 per hour and the uncommon, but real, potential for direct hire. What low-expertise fact-checking means for Gemini (maybe nothing, and possibly nothing good): Predictably, contract, workflow, and data application details remain tightly locked down. Employing real people to evaluate individual prompt responses seems a logical choice.
Complex recruiting and hiring processes, unclear client needs and guidelines during onboarding, and inconsistent management techniques have always surrounded large-scale, outsourced contracting jobs. Nothing there raises unexpected red flags, and current (claimed) GlobalLogic contractors note that many of its workers possess high-level and technical degrees. The worry stems from Google's apparent shift away from allowing admittedly uninformed evaluators to bypass questions they can't answer. If a note indicating lack of expertise accompanies a contractor's evaluation, Google could theoretically disregard the evaluation and return the interaction to the pool for re-inspection. We have no way of knowing at present how Google treats this data. How does non-expert error-checking advance Google's AI goals? The obvious concern remains that the new directive implies Google's decreasing reliance on educated experts, or even confident, self-aware autodidacts. TechCrunch, which originally received the leaked claims, noted that one contractor explained, "I thought the point of skipping was to increase accuracy by giving it to someone better." Perhaps Google is simply streamlining its data collection process, and fully intends to discard, ignore, or clarify potentially inaccurate evaluations. Or maybe it's decided that Gemini fact-checking and further development for accuracy and anti-hallucinations don't necessarily require relevant background expertise when evaluating whether an LLM's answers make any sense.
Decision Making/Detection and Monitoring
Unknown
null
null
null
null
null
null
news
Alistair Barr
OpenAI launched its best new AI model in September. It already has challengers, one from China and another from Google.
The rapid commoditization of AI models continues, even with a groundbreaking new approach known as inference-time compute.
https://www.businessinsider.com/openai-o1-ai-model-rivals-google-deepseek-reasoning-inference-2024-12
https://i.insider.com/676586c66c531c315b898850?width=1200&format=jpeg
2024-12-20T16:57:33Z
OpenAI CEO Sam Altman.Andrew Caballero-Reynolds/AFP/Getty ImagesOpenAI's o1 model was hailed as a breakthrough in September.By November, a Chinese AI lab had released a similar model called DeepSeek.On Thursday, Google came out with a challenger called Gemini 2.0 Flash Thinking.In September, OpenAI unveiled a radically new type of AI model called o1. In a matter of months, rivals introduced similar offerings.On Thursday, Google released Gemini 2.0 Flash Thinking, which uses reasoning techniques that look a lot like o1.Even before that, in November, a Chinese company announced DeepSeek, an AI model that breaks challenging questions down into more manageable tasks like OpenAI's o1 does.This is the latest example of a crowded AI frontier where pricey innovations are swiftly matched, making it harder to stand out."It's amazing how quickly AI model improvements get commoditized," said Rahul Sonwalkar, CEO of startup Julius AI. "Companies spend massive amounts building these new models, and within a few months they become a commodity."The proliferation of multiple AI models with similar capabilities could make it difficult to justify charging high prices to use these tools. The price of accessing AI models has indeed plunged in the past year or so.That, in turn, could raise questions about whether it's worth spending hundreds of millions of dollars, or even billions, to build the next top AI model.September is a lifetime ago in the AI industryWhen OpenAI previewed its o1 model, back in September, the product was hailed as a breakthrough. It uses a new approach called inference-time compute to answer more challenging questions.It does this by slicing queries into more digestible tasks and turning each of these stages into a new prompt that the model tackles. Each step requires running a new request, which is known as the inference stage in AI.This produces a chain of thought or chain of reasoning in which each part of the problem is answered, and the model doesn't move on to the next stage until it ultimately comes up with a full response.The model can even backtrack and check its prior steps and correct errors, or try solutions and fail before trying something else. This is akin to how humans spend longer working through complex tasks.DeepSeek risesIn a mere two months, o1 had a rival. On November 20, a Chinese AI company released DeepSeek."They were probably the first ones to reproduce o1," said Charlie Snell, an AI researcher at UC Berkeley who co-authored a Google DeepMind paper this year on inference-time compute.He's tried DeepSeek's AI model and says it performs well on complex math problems that must be solved by thinking for longer and in stages. He noted that in DeepSeek's DeepThink mode, the model shows users every step of its thought process. With o1, these intermediate steps are hidden from users. "I've asked people at OpenAI what they think of it," Snell told BI. "They say it looks like the same thing, but they don't how DeepSeek did this so fast."OpenAI didn't respond to a request for comment. Andrej Karpathy, an OpenAI cofounder, praised Google's new "Thinking" model for the same reasoning feature."The prominent and pleasant surprise here is that unlike o1 the reasoning traces of the model are shown," he wrote on X. 
"As a user I personally really like this because the reasoning itself is interesting to see and read — the models actively think through different possibilities, ideas, debate themselves, etc., it's part of the value add."A DeepSeek demoSnell shared a multi-step math problem with Business Insider, which was used to test DeepSeek for ourselves: "Find a sequence of +, -, /, * which can be applied to the numbers 7, 3, 11, 5 to get to 24, where each of the given numbers is used exactly once."BI put that prompt in DeepSeek's chat window on its website. The model responded initially by laying out the challenge ahead."Alright, so I've got this problem here: I need to use the numbers 7, 3, 11, and 5, and combine them with the operations of addition, subtraction, multiplication, and division, using each number exactly once, to get to 24," it replied. "At first glance, this seems a bit tricky, but I think with some systematic thinking, I can figure it out."It then proceeded through multiple steps over roughly 16 pages of discussion that included mathematical calculations and equations. The model sometimes got it wrong, but it spotted this and didn't give up. Instead, it swiftly moved on to another possible solution. "Almost got close there with 33 / 7 * 5 ≈ 23.57, but not quite 24. Maybe I need to try a different approach," it wrote at one point. After a few minutes, it found the correct solution, according to Snell. "You can see it try different ideas and backtrack," Snell explained in an interview on Wednesday. He highlighted this part of DeepSeek's chain of thought as particularly noteworthy."This is getting really time-consuming. Maybe I need to consider a different strategy," the AI model wrote. "Instead of combining two numbers at a time, perhaps I should look for a way to group them differently or use operations in a nested manner."Then Google appearsSnell said other companies are likely working on AI models that use the same inference-time compute approach as OpenAI."DeepSeek does this already, so I assume others are working on this," he added on Wednesday.The following day, Google released Gemini 2.0 Flash Thinking. Like DeepSeek, this new model shows users each step of its thought process while tackling problems. Google AI veteran Jeff Dean shared a demo on X that showed this new model solved a physics problem and explained its reasoning steps. "This model is trained to use thoughts to strengthen its reasoning," Dean wrote. "We see promising results when we increase inference time computation!"Read the original article on Business Insider
Unknown
Unknown
null
null
null
null
null
null
news
Ryan Christoffel
This Vision Pro clone from Samsung and Google is good news for Apple users
Samsung has a new headset coming, powered by Android XR, and it looks an awful lot like an Apple Vision Pro clone. While the existence of another Samsung product that takes inspiration from Apple might upset some users, I think it could be really great news for Vision Pro owners. more…
https://9to5mac.com/2024/12/12/vision-pro-clone-from-samsung-and-google-good-news-for-apple/
https://i0.wp.com/9to5ma…&strip=all&ssl=1
2024-12-12T17:04:47Z
Samsung has a new headset coming, powered by Android XR, and it looks an awful lot like an Apple Vision Pro clone. While the existence of another Samsung product that takes inspiration from Apple might upset some users, I think it could be really great news for Vision Pro owners. When Apple introduced the Vision Pro, many saw the technology as a leap into the future, a unique market changer. The seamless interaction method of eye and hand tracking, the focus on productivity and being a proper computer, and much more showed how Apple was thinking different about AR and VR. But now, Samsung and Google have teamed up to create something that looks very, very similar. Their product is much lighter, and will reportedly undercut Vision Pro's price (though no pricing or release date has yet been announced). My initial take after reading several hands-on impressions is that Apple has some real competition on its hands. And that's a very good thing. I'm deep in the Apple ecosystem, and it would take a lot for me to change that. I suspect if I tried Samsung's currently unnamed headset, I'd have plenty of complaints. Samsung and Android products tend to work in ways I find frustrating. But regardless, this new headset should motivate Apple to keep pushing harder with the Vision Pro and any other Vision products in development. For example, I love how Google seems to have made Gemini a core part of the Android XR operating system. Mark Gurman writes at Bloomberg: Google had been developing Android XR for several years, but took a pause to rebuild the software around AI. The company's AI features are now core to the operating system. For instance, glasses with Android XR can use external cameras to analyze a newly bought piece of furniture and then provide instructions to assemble it. On a headset, it can control actions like mapping out a vacation in Google Maps. How well all of this works remains to be seen. But integrating AI with an XR headset seems like a no-brainer. Apple should absolutely make AI a core part of visionOS in the future. But if serious competition like Samsung and Google didn't exist, I'm not sure how driven Apple would be to push the Vision Pro to its full potential. Sure, there are other VR players like Meta, but they largely seem focused on gaming. What Samsung and Google are building seems to genuinely follow the same product vision as Apple's Vision Pro. If this new product didn't exist, Apple could easily look at the competitive landscape and assume it can take its foot off the gas with Vision Pro and visionOS. But now, motivated by its long-time rivals, Apple can further prioritize making Vision Pro the best it can be. What do you think about Samsung's new headset? Do you think it's a Vision Pro clone? Why or why not? Let us know in the comments.
Digital Assistance/Process Automation
Unknown
null
null
null
null
null
null
news
EditorDavid
Bill Gates Recommends Four Books That 'Make Sense of the World'
This month Bill Gates recommended four books about making sense of the world, including The Coming Wave, by Mustafa Suleyman. Gates calls it "the book I recommend more than any other on AI — to heads of state, business leaders, and anyone else who asks — because it offers something rare: a clear-eyed view of both the extraordinary opportunities and genuine risks ahead."After helping build DeepMind from a small startup into one of the most important AI companies of the past decade, [Suleyman] went on to found Inflection AI and now leads Microsoft's AI division. But what makes this book special isn't just Mustafa's firsthand experience — it's his deep understanding of scientific history and how technological revolutions unfold. He's a serious intellectual who can draw meaningful parallels across centuries of scientific advancement. Most of the coverage of The Coming Wave has focused on what it has to say about artificial intelligence — which makes sense, given that it's one of the most important books on AI ever written. And there is probably no one as qualified as Mustafa to write it... But what sets his book apart from others is Mustafa's insight that AI is only one part of an unprecedented convergence of scientific breakthroughs. Gene editing, DNA synthesis, and other advances in biotechnology are racing forward in parallel. As the title suggests, these changes are building like a wave far out at sea — invisible to many but gathering force. Each would be game-changing on its own; together, they're poised to reshape every aspect of society... [P]rogress is already accelerating as costs plummet and computing power grows. Then there are the incentives for profit and power that are driving development. Countries compete with countries, companies compete with companies, and individuals compete for glory and leadership. These forces make technological advancement essentially unstoppable — and they also make it harder to control... How do we limit the dangers of these technologies while harnessing their benefits? This is the question at the heart of The Coming Wave, because containment is foundational to everything else. Without it, the risks of AI and biotechnology become even more acute. By solving for it first, we create the stability and trust needed to tackle everything else... [Suleyman] lays out an agenda that's appropriately ambitious for the scale of the challenge — ranging from technical solutions (like building an emergency off switch for AI systems) to sweeping institutional changes, including new global treaties, modernized regulatory frameworks, and historic cooperation among governments, companies, and scientists... In an accompanying Christmas-themed video, Gates adds that "Of all the books on AI, that's the one I recommend the most." Gates also recommends The Anxious Generation by Jonathan Haidt, saying it "made me reflect on how much of my younger years — which were often spent running around outside without parental supervision, sometimes getting into trouble — helped shape who I am today. Haidt explains how the shift from play-based childhoods to phone-based childhoods is transforming how kids develop and process emotions." (In the video Gates describes it as "kind of a scary book, but very convincing. [Haidt] writes about the rise of mental illness, and anxiety in children. He, unlike some books, actually has some prescriptions, like kids not using phones until much later, parenting style differences. 
I think it's a super-important book.") Gates goes into the book's thesis in a longer blog post: that "we're actually facing two distinct crises: digital under-parenting (giving kids unlimited and unsupervised access to devices and social media) and real-world over-parenting (protecting kids from every possible harm in the real world). The result is young people who are suffering from addiction-like behaviors — and suffering, period — while struggling to handle challenges and setbacks that are part of everyday life."[Haidt] makes a strong case for better age verification on social media platforms and delaying smartphone access until kids are older. Literally and figuratively, he argues, we also need to rebuild the infrastructure of childhood itself — from creating more engaging playgrounds that encourage reasonable risk-taking, to establishing phone-free zones in schools, to helping young people rediscover the joy of in-person interaction. Gates also recommends Engineering in Plain Sight, by Grady Hillhouse, a book which he says "encourages curiosity." ("Hillhouse takes all of the mysterious structures we see every day, from cable boxes to transformers to cell phone towers, and explains what they are and how they work. It's the kind of read that will reward your curiosity and answer questions you didn't even know you had.") And finally, Gates recommends an autobiography by 81-year-old Pulitzer Prize-winning historian/biographer/former sports journalist Doris Kearns Goodwin, who assesses the impact of President Lyndon Johnson's policies in a surprising "personal history of the 1960s."Read more of this story at Slashdot.
https://news.slashdot.org/story/24/12/16/0123227/bill-gates-recommends-four-books-that-make-sense-of-the-world
https://a.fsdn.com/sd/to…ics/books_64.png
2024-12-16T12:34:00Z
This month Bill Gates recommended four books about making sense of the world, including The Coming Wave, by Mustafa Suleyman. Gates calls it "the book I recommend more than any other on AI — to heads of state, business leaders, and anyone else who asks — because it offers something rare: a clear-eyed view of both the extraordinary opportunities and genuine risks ahead."After helping build DeepMind from a small startup into one of the most important AI companies of the past decade, [Suleyman] went on to found Inflection AI and now leads Microsoft's AI division. But what makes this book special isn't just Mustafa's firsthand experience — it's his deep understanding of scientific history and how technological revolutions unfold. He's a serious intellectual who can draw meaningful parallels across centuries of scientific advancement. Most of the coverage of The Coming Wave has focused on what it has to say about artificial intelligence — which makes sense, given that it's one of the most important books on AI ever written. And there is probably no one as qualified as Mustafa to write it... But what sets his book apart from others is Mustafa's insight that AI is only one part of an unprecedented convergence of scientific breakthroughs. Gene editing, DNA synthesis, and other advances in biotechnology are racing forward in parallel. As the title suggests, these changes are building like a wave far out at sea — invisible to many but gathering force. Each would be game-changing on its own; together, they're poised to reshape every aspect of society... [P]rogress is already accelerating as costs plummet and computing power grows. Then there are the incentives for profit and power that are driving development. Countries compete with countries, companies compete with companies, and individuals compete for glory and leadership. These forces make technological advancement essentially unstoppable — and they also make it harder to control...How do we limit the dangers of these technologies while harnessing their benefits? This is the question at the heart of The Coming Wave, because containment is foundational to everything else. Without it, the risks of AI and biotechnology become even more acute. By solving for it first, we create the stability and trust needed to tackle everything else... [Suleyman] lays out an agenda that's appropriately ambitious for the scale of the challenge — ranging from technical solutions (like building an emergency off switch for AI systems) to sweeping institutional changes, including new global treaties, modernized regulatory frameworks, and historic cooperation among governments, companies, and scientists... In an accompanying Christmas-themed video, Gates adds that "Of all the books on AI, that's the one I recommend the most." Gates also recommends The Anxious Generation by Jonathan Haidt, saying it "made me reflect on how much of my younger years — which were often spent running around outside without parental supervision, sometimes getting into trouble — helped shape who I am today. Haidt explains how the shift from play-based childhoods to phone-based childhoods is transforming how kids develop and process emotions." (In the video Gates describes it as "kind of a scary book, but very convincing. [Haidt] writes about the rise of mental illness, and anxiety in children. He, unlike some books, actually has some prescriptions, like kids not using phones until much later, parenting style differences. 
I think it's a super-important book.")Gates goes into the book's thesis in a longer blog post: that "we're actually facing two distinct crises: digital under-parenting (giving kids unlimited and unsupervised access to devices and social media) and real-world over-parenting (protecting kids from every possible harm in the real world). The result is young people who are suffering from addiction-like behaviors — and suffering, period — while struggling to handle challenges and setbacks that are part of everyday life."[Haidt] makes a strong case for better age verification on social media platforms and delaying smartphone access until kids are older. Literally and figuratively, he argues, we also need to rebuild the infrastructure of childhood itself — from creating more engaging playgrounds that encourage reasonable risk-taking, to establishing phone-free zones in schools, to helping young people rediscover the joy of in-person interaction. Gates also recommends Engineering in Plain Sight, by Grady Hillhouse, a book which he says "encourages curiosity." ("Hillhouse takes all of the mysterious structures we see every day, from cable boxes to transformers to cell phone towers, and explains what they are and how they work. It's the kind of read that will reward your curiosity and answer questions you didn't even know you had.")And finally, Gates recommends an autobiography by 81-year-old Pulitzer Prize-winning historian/biographer/former sports journalist Doris Kearns Goodwin, who assesses the impact of President Lyndon Johnson's policies in a surprising "personal history of the 1960s."
Content Synthesis
Management/Business and Financial Operations/Life, Physical, and Social Science
null
null
null
null
null
null
news
[email protected] (Nickolas Diaz)
Google's Project Astra waitlist signups begin with enhanced AR features
Google opens the gates to its trusted tester waitlist for Project Astra.
https://www.androidcentral.com/gaming/virtual-reality/google-project-astra-ar-waitlist-opens-trusted-testers
https://cdn.mos.cms.futu…YHtH-1200-80.jpg
2024-12-11T18:36:36Z
What you need to knowGoogle announced that it has opened its "trusted testers" waitlist for interested participants for Project Astra, its AR venture.The company detailed that users can engage with Project Astra via mobile (Android) and "prototype glasses."It seems there will be an experimental Astra app that lets users open their camera or share their screen for quick information on what they see.Google's "vision for the future of AR" is entering an exciting phase today (Dec 11) as it progresses its development.Per Google's DeepMind page for Project Astra, the company announced that the latest AR venture has opened its "trusted tester" waitlist. The company states those signing up today (Dec 11) can explore the "future capabilities" of its "universal AI assistant." Signing up will require users to fill out a brief form involving their name, location, age, and Android phone type.Additionally, Google will ask if you are interested in "using prototype glasses."When it comes to your phone, it seems this early test will see accepted participants downloading a Project Astra Android app. A short demo video shows that users can open the app and activate the camera within it to point at something they're interested in. Google states Astra is "multilingual," meaning users can speak to the AI model in their preferred language and receive information back.The demo shows a model asking about a string of Chinese lanterns (in French). Astra responds both vocally and in written text with (hopefully) relevant and useful information about your query.Moreover, Google adds that users can "share their screen" with Astra to gain even more assistance.(Image credit: Google)The AI model is said to leverage Google apps like Maps, Lens, and Search to fulfill a user's query. This is reportedly a part of its "enhanced understanding." A subsequent post shows that Astra is capable of "contextual understanding" for "clear, detailed answers, and it can explain the thinking and context behind" its answers. Moreover, Google has added a memory bank to Astra, meaning the AI should remember key details from your previous encounters.Get the latest news from Android Central, your trusted companion in the world of AndroidBy submitting your information you agree to the Terms & Conditions and Privacy Policy and are aged 16 or over.This seems like something similar to "Saved Info" in Gemini, which Google rolled out in mid-November.The other side of this sign-up is Google's mention of prototype AR glasses. Its announcement splash page only highlights that Astra can "integrate seamlessly with prototype glasses to see the world as you see it – for an even more immersive and helpful experience."We've been expecting AR glasses from Google ever since I/O 2024 when it showcased Project Astra. The demo showed how interactive and engaged Project Astra is (likely) designed to be. When wearing a pair of AR glasses, the device will "record" your surroundings in real time, so when you ask a question, the AI can answer it.It's worth mentioning that Project Astra is an AI routine that's built off Gemini to give it that "real person-like" experience during conversations. Google also showed that Astra can remember the location of objects, like where you last set down your AR glasses.
Digital Assistance/Content Creation/Personalization
Unknown
null
null
null
null
null
null
news
Nathaniel Wilkins
Gemini 2.0: Five major changes you need to know
Gemini 2.0 promises to do more with less human input. Find out how
https://www.androidpolice.com/gemini-2-biggest-changes/
https://static1.anpoimag…n-gemini-2-0.jpg
2024-12-18T02:07:10Z
Google's Gemini 2.0 has new features and capabilities. These include improved multimodal understanding, agentic AI, increased speed, better battery life (even for phones with excellent batteries), and broader integration with other Google solutions. Gemini 2.0 processes information differently from its predecessor and can complete more complex tasks. Integrations with Google products such as Search, Maps, and Workspace are key focus areas, although some features are still rolling out. Gemini 2.0 is accompanied by a major UI update to NotebookLM, Google's Gemini-powered AI information warehouse that leverages your research materials, links, and datasets.

5. Native image and audio processing: eliminating translation promises better responses

Unlike previous models, which required converting images and audio into text before analysis, Gemini 2.0 processes them natively. The goal is to eliminate the information loss associated with translation. Direct processing allows a richer, more nuanced understanding of the input, capturing subtleties and contextual cues that would otherwise be lost. Gemini 2.0 promises a more accurate and efficient interpretation of multimedia content by bypassing the intermediary text conversion step.

Gemini 2.0 identifies objects in an image and understands their relationships and the scene context. I tested its abilities, and the response was detailed and accurate. It even recognized the materials from which objects on my coffee table were constructed. I also ran the image through version 1.5 Pro. While it provided some of the same information, its response was less detailed. The Gemini 2.0 Flash model still refused to process an image with people.

"If Gemini 1.0 was about organizing and understanding information, Gemini 2.0 is about making it much more useful." (Sundar Pichai, Google CEO)

4. Agentic AI: Gemini 2.0 can do more with less

Agentic AI describes AI models that actively interact with the world to achieve specific goals. Gemini 2.0 powers AI agents, allowing them to execute complex, multistep tasks that require planning, decision-making, and interaction with external systems. Agentic AI may mark a turning point where AI becomes a more proactive problem-solver.

Gemini 2.0's agentic capabilities are slated to integrate with external tools like Google Search, Maps, and Lens. For example, a Gemini 2.0 AI agent could leverage Google Maps to plan a complex itinerary involving multiple destinations and modes of transportation. However, this functionality wasn't available to me in the 2.0 Flash desktop experience or from Maps. Google recently rolled out 2.0 in a pre-release version of its mobile app, which is where we expect to see some of these capabilities shine.

In its blog post, Google discusses how the new model relates to two major Google initiatives: Project Astra and Project Mariner. Project Astra focuses on agentic AI capabilities integrated with services such as Search and Maps. Project Mariner touches on automated web features such as filling out forms, booking reservations, and gathering information from multiple websites.

3. Deeper integrations across the Google ecosystem: AI goes everywhere with Gemini 2.0

Gemini 2.0 integrates deeply across Google's ecosystem of products and services. The promise is a more unified and seamless user experience. Gemini 2.0's extended integrations point toward Google's strategy of using Gemini as a common thread woven throughout Workspace. Google Search is getting deeper integration with Gemini 2.0, facilitating more conversational search experiences and leveraging AI Overviews for comprehensive answers to complex queries, as we predicted in early November. Within Google Workspace, AI-powered features driven by Gemini 2.0 are being incorporated into applications like Docs, Slides, and Meet to enhance productivity and collaboration. Android Assistant is set to receive new capabilities powered by Gemini 2.0. Your mileage may vary during the rollout process.

2. Faster responses and better battery life: Gemini 2.0 Flash doubles the speed of 1.5

The full name of the latest version is Gemini 2.0 Flash Experimental. It's been streamlined for speed and responsiveness. Gemini 2.0 Flash delivers enhanced performance while reducing latency, which positions it to better power real-time multimodal interactions.

Gemini 2.0 Flash claims notable performance improvements; Google says it's twice the speed of its predecessor. In my experimentation, responses were nearly instantaneous and markedly faster than when I fed the same queries to version 1.5 Pro. The faster response times make interactions feel natural and fluid. For audio conversations, the reduced latency could cut delays and create a more engaging and realistic experience.

Gemini 2.0 Flash might also extend battery life for AI processes on mobile devices such as the Google Pixel 9 or other smartphones. This could mean less frequent charging, something everyone can appreciate.

1. NotebookLM's reinvented UI: Gemini 2.0 is accompanied by a redesign of NotebookLM's interface and new features

NotebookLM isn't part of Gemini 2.0, but the two are different sides of the same coin. The arrival of Gemini 2.0 marks a parallel iteration in NotebookLM that goes beyond its underlying AI capabilities and into its user interface. The overhaul seeks to make it more intuitive and efficient for users to interact with their notes and documents, focusing on streamlining workflows, improving navigation, and providing a more refined visual environment.

Gemini moves fast and isn't slowing down

Gemini 2.0 has cool tricks for maximum productivity. Along with recognizing text, it also understands images and sounds. This version promises to do things for you, like using Google Search or Maps to find information or complete complex tasks. Moreover, it has a larger context window than its predecessor: Google pegs Gemini 2.0 Flash at 2 million tokens, meaning it retains and processes twice as much information as Gemini 1.5 Pro. By focusing on multimodal understanding, agentic capabilities, deeper integrations with Google apps, and performance enhancements, Google is making Gemini the foundation of its ecosystem. As mainstream AI continues to mature, 2025 will be an interesting year.
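For developers, the simplest way to poke at these multimodal claims is through the Gemini API. Below is a minimal sketch (not from the article) using the google-generativeai Python SDK to ask the experimental 2.0 Flash model about an image; the model name, API key placeholder, and file path are assumptions to verify against Google's current documentation.

import google.generativeai as genai
from PIL import Image

# Minimal sketch: send an image plus a question to Gemini 2.0 Flash (Experimental).
# "gemini-2.0-flash-exp" is an assumed model identifier; check Google's docs for the current name.
genai.configure(api_key="YOUR_API_KEY")  # placeholder, use your own key
model = genai.GenerativeModel("gemini-2.0-flash-exp")

photo = Image.open("coffee_table.jpg")  # hypothetical local image
response = model.generate_content(
    [photo, "List the objects on this table and the materials they appear to be made of."]
)
print(response.text)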
Content Synthesis/Image Analysis
Unknown
null
null
null
null
null
null
news
Irene Okpanachi
Grok AI thinks I tweet like a middle-aged man, and I'm starting to believe it
We look at how AI makes quick judgments about people based on their online posts. It's a funny-yet-serious look at our digital lives.
https://www.androidpolice.com/grok-thinks-im-older/
https://static1.anpoimag…-middle-aged.jpg
2024-12-27T18:06:11Z
Stereotyping is something we're used to as humans. We naturally make quick judgments about people based on how they look, speak, or act. These assumptions help us make sense of the world, but they can be wrong or unfair. Today, it's entered into our technology. If you search for terms like "older people..." or "why are millennials..." on your Android tablet or phone, you might see Google suggestions like "...are older people forgetful?" or "...are millennials lonely?"Artificial intelligence uses these stereotypes just as we do. I found it interesting because I've read and heard about it but only recently experienced it. Here's the experience that had me laughing and reflecting.RelatedWhat makes Grok unique is that it integrates deeply with X, accessing real-time data from the platform.Who are you when you're online? This X trend will put you in different moods There's a trend on X (formerly Twitter) where people ask Grok AI to guess how they look based on their tweets or summarize their accounts. The results have brought a mix of reactions, from confusion and amusement to thought-provoking. Among my favorites is the user who asked Grok to write a poem based on their X post. The poem was overdramatic, repetitive, and more of an epic saga than a simple summary of their content.I had to give it a shot. There were numerous cases where it would keep copying your profile picture, so I prompted it to tell me without images. It responded that I seemed like a middle-aged nerd. I was curious about how it reached that conclusion, so I pressed for more details. The follow-up was more specific, describing me as a lonely 40-year-old man who spends his days playing video games in his room. I nearly choked.The description is far from reality, as I am female and still enjoying the quarter-life crisis phase of my life. It made me wonder how AI reads into the way we communicate online. It's funny because, going by our tweets, many of us might come across as different people. It isn't every day that I get mistaken for a 40-year-old gaming recluse, but here we are.It makes you think about how we shape our digital personas without realizing it. The internet has a funny way of turning us into caricatures, and AI adds a new layer to that mix. AI and patterns in virtual expressions These machines don't think like we do, or do they? Grok is an AI chatbot developed by Elon Musk's AI company, xAI. It has a humorous and rebellious tone, inspired by "The Hitchhiker's Guide to the Galaxy." Hence, the reason for the shockingly blunt responses. It uses the Grok-1 language model, which performs better than GPT-3.5 but is not as advanced as GPT-4.Grok came out in November 2023, and the newer Grok-2 and Grok-2 Mini versions were introduced in August 2024. Initially, the AI required an $8 subscription, but it is now free to use. Like other generative models, Grok can only "see" through data. It generates responses based on patterns. It likely looks at word choice, phrasing, hashtags, and even the tone of your posts.For example, my posts include a lot of tech jargon and news, nostalgia for old gadgets, or puns surrounding niche hobbies. Using terms like "vintage," "back in the day," or making dry dad jokes might have made me sound like a middle-aged man. Talking a lot about classic video games and other subcultures, or even obscure things, could also be why I was stereotyped.Regardless, Grok didn't understand me as an individual, and the narrow judgment didn't come from nowhere. 
It relies on patterns and assumptions, often linking traits or interests to specific demographics. Societal biases shape the data it processes, such as the idea that men are more likely to engage in technological topics. In addition, other genders are underrepresented in the industry, especially in AI research. In the end, it isn't Grok's fault. The data is the culprit.RelatedAI is not here to steal our jobsThe power of perception What does it all say about us? When we take to our keyboards and make a post, we have a hand in how we're perceived. The content we share creates a personality that everyone judges. Every tweet, retweet, or hashtag is up for assessment, whether it's intentional or not. Drawing from the previous example, if you constantly talk about gaming, AI might categorize you as a gamer, even if that's only a small part of your life.Similarly, the formal, sarcastic, or casual tone you use matters. If we present only certain aspects of ourselves, we feed the algorithm and other people a linear version of our identity. While stereotyping shouldn't be celebrated, the X trend is a good reason to look inward and reflect on this. It's also a reminder of AI's limitations.When I first saw Grok's guess, I thought it was way off. However, after thinking about it more, I realized there's some truth to it. I can be analytical and old-school. Like any person, it can form opinions about you based on a small piece of who you are. The difference is in context and algorithm. Assess your internet identity AI models, including generative ones, reflect the world we created. They pick up our biases, values, and assumptions and mirror them in their output. Debiasing them is more complex than simply changing the data. It's also about rethinking how AI is trained and used. Also, AI-generated content is tricky and can hurt people. Still, it doesn't have its own intentions. When using Grok or any AI, question its conclusions objectively and ask yourself if it represents who you are. Consider it an experiment.
Detection and Monitoring/Personalization
Unknown
null
null
null
null
null
null
news
Tom Carter
Sam Altman says Elon Musk is 'clearly a bully' who likes to get in fights with rivals
Altman said he believes much of Musk's animosity is rooted in OpenAI's recent success and the fact that he now runs a direct competitor.
https://www.businessinsider.com/sam-altman-elon-musk-is-clearly-a-bully-2024-12
https://i.insider.com/67653f1c1cee543535a873ca?width=1024&format=jpeg
2024-12-20T10:52:00Z
Elon Musk (left) and Sam Altman (right).Steve Granitz, Andrew Caballero-Reynolds/Getty ImagesSam Altman isn't done firing shots at Elon Musk.The OpenAI CEO said the Tesla boss was "clearly a bully" who likes to pick fights with rivals in an interview with The Free Press.Musk is in a lengthy legal battle with OpenAI and Altman, and refiled a lawsuit against both earlier this year.Sam Altman and Elon Musk once started OpenAI together — but now their relationship is a lot more complicated.In an interview with The Free Press on Thursday, Altman said his OpenAI cofounder was "clearly a bully" and said that Musk's high-profile feud with his former company had become a "sideshow."Since stepping down from OpenAI in 2018, Musk has been highly critical of the AI startup and CEO Altman.The Tesla boss refiled a lawsuit in August, arguing he had been "deceived" into starting the company by Altman and fellow cofounder Greg Brockman.Musk has also asked a federal court to block OpenAI from transitioning into a for-profit entity, with OpenAI firing back by releasing a cache of emails showing Musk pushed for the AI startup to be for-profit while working at the company.In the interview, Altman described Musk as a "legendary entrepreneur" who did a lot to help OpenAI in its early days."He's also clearly a bully, and he's also someone who clearly likes to get into fights," added the OpenAI CEO, pointing to the billionaire's high-profile spats with Jeff Bezos and Bill Gates.Altman also said he believes much of Musk's animosity is rooted in OpenAI's recent success and the fact that he now runs a direct competitor.Musk announced xAI, his own AI startup, last year, and the company has since released several versions of its chatbot Grok."Everything we're doing, I believe Elon would be happy about if he were in control of OpenAI," said Altman."He left when he thought we were on a trajectory to certainly fail, and also when we wouldn't do something where he had total control over the company," he added.Altman's comments come as Musk prepares to occupy an increasingly prominent role in the second Trump administration. Though Musk will have an influential political position, Altman said he did not believe Musk would use his power to go after his rivals."I think there are people who will really be a jerk on Twitter who will still not abuse the system of the country," he said.OpenAI and Musk did not respond to requests for comment, sent outside normal working hours.Read the original article on Business Insider
Unknown
Management/Business and Financial Operations
null
null
null
null
null
null
news
EditorDavid
Microsoft Announces Phi-4 AI Model Optimized for Accuracy and Complex Reasoning
An anonymous reader shared this report from Computerworld:Microsoft has announced Phi-4 — a new AI model with 14 billion parameters — designed for complex reasoning tasks, including mathematics. Phi-4 excels in areas such as STEM question-answering and advanced problem-solving, surpassing similar models in performance. Phi-4, part of the Phi small language models (SLMs), is currently available on Azure AI Foundry under the Microsoft Research License Agreement and will launch on Hugging Face [this] week, the company said in a blog post. The company emphasized that Phi-4's design focuses on improving accuracy through enhanced training and data curation.... "Phi-4 outperforms comparable and even larger models on tasks like mathematical reasoning, thanks to a training process that combines synthetic datasets, curated organic data, and innovative post-training techniques," Microsoft said in its announcement. The model leverages a new training approach that integrates multi-agent prompting workflows and data-driven innovations to enhance its reasoning efficiency. The accompanying report highlights that Phi-4 balances size and performance, challenging the industry norm of prioritizing larger models... Phi-4 achieved a score of 80.4 on the MATH benchmark and has surpassed other systems in problem-solving and reasoning evaluations, according to the technical report accompanying the release. This makes it particularly appealing for domain-specific applications requiring precision, like scientific computation or advanced STEM problem-solving. Microsoft emphasized its commitment to ethical AI development, integrating advanced safety measures into Phi-4. The model benefits from Azure AI Content Safety features such as prompt shields, protected material detection, and real-time application monitoring. These features, Microsoft explained, help users address risks like adversarial prompts and data security threats during AI deployment. The company also reiterated that Azure AI Foundry, the platform hosting Phi-4, offers tools to measure and mitigate AI risks. Developers using the platform can evaluate and improve their models through built-in metrics and custom safety evaluations, Microsoft added...With Phi-4, Microsoft continues to evolve its AI offerings while promoting responsible use through robust safeguards. Industry watchers will observe how this approach shapes adoption in critical fields where reasoning and security are paramount.Read more of this story at Slashdot.
https://slashdot.org/story/24/12/16/0313207/microsoft-announces-phi-4-ai-model-optimized-for-accuracy-and-complex-reasoning
https://a.fsdn.com/sd/topics/ai_64.png
2024-12-16T05:34:00Z
Microsoft has announced Phi-4 — a new AI model with 14 billion parameters — designed for complex reasoning tasks, including mathematics. Phi-4 excels in areas such as STEM question-answering and advanced problem-solving, surpassing similar models in performance. Phi-4, part of the Phi small language models (SLMs), is currently available on Azure AI Foundry under the Microsoft Research License Agreement and will launch on Hugging Face [this] week, the company said in a blog post. The company emphasized that Phi-4's design focuses on improving accuracy through enhanced training and data curation.... "Phi-4 outperforms comparable and even larger models on tasks like mathematical reasoning, thanks to a training process that combines synthetic datasets, curated organic data, and innovative post-training techniques," Microsoft said in its announcement. The model leverages a new training approach that integrates multi-agent prompting workflows and data-driven innovations to enhance its reasoning efficiency. The accompanying report highlights that Phi-4 balances size and performance, challenging the industry norm of prioritizing larger models... Phi-4 achieved a score of 80.4 on the MATH benchmark and has surpassed other systems in problem-solving and reasoning evaluations, according to the technical report accompanying the release. This makes it particularly appealing for domain-specific applications requiring precision, like scientific computation or advanced STEM problem-solving.Microsoft emphasized its commitment to ethical AI development, integrating advanced safety measures into Phi-4. The model benefits from Azure AI Content Safety features such as prompt shields, protected material detection, and real-time application monitoring. These features, Microsoft explained, help users address risks like adversarial prompts and data security threats during AI deployment. The company also reiterated that Azure AI Foundry, the platform hosting Phi-4, offers tools to measure and mitigate AI risks. Developers using the platform can evaluate and improve their models through built-in metrics and custom safety evaluations, Microsoft added...With Phi-4, Microsoft continues to evolve its AI offerings while promoting responsible use through robust safeguards. Industry watchers will observe how this approach shapes adoption in critical fields where reasoning and security are paramount.
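Once the weights land on Hugging Face, trying the model locally should be straightforward. The following is a minimal sketch (not from Microsoft's announcement) that assumes the checkpoint is published as "microsoft/phi-4" and that a recent transformers release with chat-template support is installed.

import torch
from transformers import pipeline

# Minimal sketch: run a math-style prompt through Phi-4 via the text-generation pipeline.
# "microsoft/phi-4" is an assumed Hub ID; confirm the name once the Hugging Face release is live.
generator = pipeline(
    "text-generation",
    model="microsoft/phi-4",
    torch_dtype=torch.bfloat16,
    device_map="auto",
)

messages = [
    {"role": "user", "content": "A train travels 180 km in 2.5 hours. What is its average speed in km/h?"}
]
result = generator(messages, max_new_tokens=128)
print(result[0]["generated_text"][-1]["content"])  # the assistant's reply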
Decision Making/Prediction/Content Synthesis
Life, Physical, and Social Science/Architecture and Engineering/Education, Training, and Library
null
null
null
null
null
null
news
Abner Li
Google announces PaliGemma 2 vision-language model
After announcing Gemma 2 at I/O 2024 in May, Google today is introducing PaliGemma 2 as its latest open vision-language model (VLM). more…
http://9to5google.com/2024/12/05/paligemma-2/
https://i0.wp.com/9to5go…&strip=all&ssl=1
2024-12-05T17:30:00Z
After announcing Gemma 2 at I/O 2024 in May, Google today is introducing PaliGemma 2 as its latest open vision-language model (VLM).

The first version of PaliGemma launched in May for use cases like captioning images and short videos, understanding text in images, object detection, object segmentation, and visual question answering.

PaliGemma 2 now touts long captioning, with the ability to generate detailed, contextually relevant captions for images, going beyond simple object identification to describe actions, emotions, and the overall narrative of the scene. Available model sizes include 3B, 10B, and 28B parameters, as well as 224px, 448px, and 896px resolutions.

There's also accurate optical character recognition and understanding of the structure and content of tables in documents. Google has found PaliGemma 2 to offer leading performance in chemical formula recognition, music score recognition, spatial reasoning, and chest X-ray report generation.

Google says PaliGemma 2 is designed to be a drop-in replacement for those using the original model. Developers should benefit from immediate performance gains on most tasks without major code modifications. Another touted benefit is how easy it is to fine-tune for your specific tasks. Pre-trained models and code for PaliGemma 2 are available today on Kaggle, Hugging Face, and Ollama.
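As a rough illustration of what "drop-in" means in practice, the sketch below loads a PaliGemma 2 checkpoint with the Hugging Face transformers classes used for the original PaliGemma; the checkpoint name, prompt format, and image URL are assumptions rather than details confirmed in the announcement.

import requests
from PIL import Image
from transformers import AutoProcessor, PaliGemmaForConditionalGeneration

# Minimal sketch: caption an image with an assumed PaliGemma 2 checkpoint.
model_id = "google/paligemma2-3b-pt-448"  # assumed Hub name; check the model card and accept the license
processor = AutoProcessor.from_pretrained(model_id)
model = PaliGemmaForConditionalGeneration.from_pretrained(model_id)

image = Image.open(requests.get("https://example.com/street_scene.jpg", stream=True).raw)  # placeholder URL
prompt = "<image>caption en"  # PaliGemma-style task prompt; exact format may vary by transformers version

inputs = processor(text=prompt, images=image, return_tensors="pt")
output_ids = model.generate(**inputs, max_new_tokens=40)
print(processor.decode(output_ids[0][inputs["input_ids"].shape[-1]:], skip_special_tokens=True))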
Image Analysis/Content Creation
Unknown
null
null
null
null
null
null
news
OpenSPG
KAG – Knowledge Graph RAG Framework
KAG is a logical form-guided reasoning and retrieval framework based on OpenSPG engine and LLMs. It is used to build logical reasoning and factual Q&A solutions for professional domain knowled...
https://github.com/OpenSPG/KAG
https://opengraph.githubassets.com/463de7390a29eb59fa2d6d36cc3e94af65aed5290a5dda77be32656796620397/OpenSPG/KAG
2024-12-30T02:55:58Z
English | |KAG is a logical reasoning and Q&A framework based on the OpenSPG engine and large language models, which is used to build logical reasoning and Q&A solutions for vertical domain knowledge bases. KAG can effectively overcome the ambiguity of traditional RAG vector similarity calculation and the noise problem of GraphRAG introduced by OpenIE. KAG supports logical reasoning and multi-hop fact Q&A, etc., and is significantly better than the current SOTA method.The goal of KAG is to build a knowledge-enhanced LLM service framework in professional domains, supporting logical reasoning, factual Q&A, etc. KAG fully integrates the logical and factual characteristics of the KGs. Its core features include:Knowledge and Chunk Mutual Indexing structure to integrate more complete contextual text informationKnowledge alignment using conceptual semantic reasoning to alleviate the noise problem caused by OpenIESchema-constrained knowledge construction to support the representation and construction of domain expert knowledgeLogical form-guided hybrid reasoning and retrieval to support logical reasoning and multi-hop reasoning Q&A Star our repository to stay up-to-date with exciting new features and improvements! Get instant notifications for new releases! In the context of private knowledge bases, unstructured data, structured information, and business expert experience often coexist. KAG references the DIKW hierarchy to upgrade SPG to a version that is friendly to LLMs.For unstructured data such as news, events, logs, and books, as well as structured data like transactions, statistics, and approvals, along with business experience and domain knowledge rules, KAG employs techniques such as layout analysis, knowledge extraction, property normalization, and semantic alignment to integrate raw business data and expert rules into a unified business knowledge graph.This makes it compatible with schema-free information extraction and schema-constrained expertise construction on the same knowledge type (e. 
g., entity type, event type), and supports the cross-index representation between the graph structure and the original text block. This mutual index representation helps build an inverted index on top of the graph structure and promotes the unified representation and reasoning of logical forms.

KAG proposes a logical-form-guided hybrid solving and inference engine. The engine includes three types of operators: planning, reasoning, and retrieval, which transform natural language problems into problem-solving processes that combine language and notation. In this process, each step can use different operators, such as exact match retrieval, text retrieval, numerical calculation, or semantic reasoning, so as to realize the integration of four different problem-solving processes: retrieval, knowledge graph reasoning, language reasoning, and numerical calculation.

Release notes:
2024.11.21: support for Word document upload, model invocation concurrency settings, user experience optimization, etc.
2024.10.25: KAG initial release.

Roadmap:
- Domain knowledge injection, domain schema customization, QFS task support, visual query analysis, etc.
- Logical reasoning optimization, conversational task support.
- kag-model release; KAG solutions for event-reasoning and medical knowledge graphs.
- KAG front-end open source, distributed build support, mathematical reasoning optimization.

Recommended system versions:
- macOS users: macOS Monterey 12.6 or later
- Linux users: CentOS 7 / Ubuntu 20.04 or later
- Windows users: Windows 10 LTSC 2021 or later

Software requirements:
- macOS / Linux users: Docker, Docker Compose
- Windows users: WSL 2 / Hyper-V, Docker, Docker Compose

Use the following commands to download the docker-compose.yml file and launch the services with Docker Compose:

# set the HOME environment variable (only Windows users need to execute this command)
# set HOME=%USERPROFILE%
curl -sSL https://raw.githubusercontent.com/OpenSPG/openspg/refs/heads/master/dev/release/docker-compose-west.yml -o docker-compose-west.yml
docker compose -f docker-compose-west.yml up -d

Navigate to the default URL of the KAG product with your browser: http://127.0.0.1:8887. See the Quick Start for Product Mode for a detailed introduction.

For developer mode, refer to section 3.1 to complete the installation of the engine and its dependent image, then:

macOS / Linux developers:
# Create conda env: conda create -n kag-demo python=3.10 && conda activate kag-demo
# Clone code: git clone https://github.com/OpenSPG/KAG.git
# Install KAG: cd KAG && pip install -e .

Windows developers:
# Install the official Python 3.8.10 or later, and install Git.
# Create and activate Python venv: py -m venv kag-demo && kag-demo\Scripts\activate
# Clone code: git clone https://github.com/OpenSPG/KAG.git
# Install KAG: cd KAG && pip install -e .

Please refer to the Quick Start for Developer Mode guide for a detailed introduction to the toolkit. You can then use the built-in components to reproduce the performance results on the built-in datasets and apply those components to new business scenarios.

The KAG framework includes three parts: kg-builder, kg-solver, and kag-model. This release only involves the first two; kag-model will be gradually open-sourced in the future.

kg-builder implements a knowledge representation that is friendly to large language models (LLMs). Based on the hierarchical structure of DIKW (data, information, knowledge, and wisdom), it upgrades SPG's knowledge representation ability and is compatible with information extraction without schema constraints and professional knowledge construction with schema constraints on the same knowledge type (such as entity type and event type). It also supports the mutual index representation between the graph structure and the original text block, which enables efficient retrieval in the reasoning question-and-answer stage.

kg-solver uses a logical symbol-guided hybrid solving and reasoning engine that includes three types of operators: planning, reasoning, and retrieval, to transform natural language problems into a problem-solving process that combines language and symbols. In this process, each step can use different operators, such as exact match retrieval, text retrieval, numerical calculation, or semantic reasoning, so as to realize the integration of four different problem-solving processes: retrieval, knowledge graph reasoning, language reasoning, and numerical calculation.

GitHub: https://github.com/OpenSPG/KAG
OpenSPG: https://spg.openkg.cn/
KAG introduction and applications: https://github.com/orgs/OpenSPG/discussions/52

If you use this software, please cite it as below:

@article{liang2024kag,
  title={KAG: Boosting LLMs in Professional Domains via Knowledge Augmented Generation},
  author={Liang, Lei and Sun, Mengshu and Gui, Zhengke and Zhu, Zhongshu and Jiang, Zhouyu and Zhong, Ling and Qu, Yuan and Zhao, Peilong and Bo, Zhongpu and Yang, Jin and others},
  journal={arXiv preprint arXiv:2409.13731},
  year={2024}
}

@article{yikgfabric,
  title={KGFabric: A Scalable Knowledge Graph Warehouse for Enterprise Data Interconnection},
  author={Yi, Peng and Liang, Lei and Da Zhang, Yong Chen and Zhu, Jinye and Liu, Xiangyu and Tang, Kun and Chen, Jialin and Lin, Hao and Qiu, Leijie and Zhou, Jun}
}

Apache License 2.0
Decision Making/Content Synthesis/Information Retrieval Or Search
Business and Financial Operations
null
null
null
null
null
null
news
Graham Barlow
Microsoft announced Phi-4, a new AI that’s better at math and language processing
New Phi-4 AI from Microsoft is lightweight but great at math and language processing
https://www.techradar.com/computing/artificial-intelligence/microsoft-announced-phi-4-a-new-ai-thats-better-at-math-and-language-processing
https://cdn.mos.cms.futu…AXDH-1200-80.jpg
2024-12-13T19:25:08Z
Microsoft announces new AI model Phi-4
It's available to developers and researchers now
Performs well at math tasks despite its small scale

Microsoft has announced a brand new AI model called Phi-4, which is a small language model (SLM), in contrast to the large language models (LLMs) that chatbots like ChatGPT and Copilot use. As well as being lightweight, Phi-4 excels at complex reasoning, which makes it well suited to math and language processing.

Microsoft has released a set of benchmarks showing Phi-4 outperforming even large language models like Gemini Pro 1.5 on math competition problems.

Breakthroughs in post-training

Small language models, like GPT-4o mini, Gemini 2.0 Flash, and Claude 3.5 Haiku, tend to be faster and cheaper to run than large language models. However, their performance has increased dramatically with recent versions. For Microsoft, these improvements were made possible through breakthroughs in training Phi-4 on high-quality synthetic data sets and through post-training innovations. Since the bottleneck for improving AI ability has always been the vast amount of processing power and data required for training (sometimes called the 'pre-training data wall'), AI companies have instead been looking at ways to improve post-training development to boost performance.

Phi-4 is currently available on Azure AI Foundry, a platform for developers to build generative AI applications. So, while Phi-4 is available under a Microsoft research license agreement, you can't simply start chatting with it as you would with Copilot or ChatGPT. Instead, we'll have to wait and see what people produce with it in the future.
Content Synthesis/Decision Making/Prediction
Unknown
null
null
null
null
null
null
news
Julian Horsey
Microsoft’s Phi-4 (14B) AI Model Tested Locally: Performance, Limitations and Future Potential
Microsoft’s new Phi-4, a 14-billion-parameter language model, represents a significant development in artificial intelligence, particularly in tackling complex reasoning tasks. Designed for applications such as structured data extraction, code generation, and question answering, the latest large language model from Microsoft demonstrates both notable strengths and clear limitations. In this Phi-4 (14B) review Venelin Valkov provides […]The post Microsoft’s Phi-4 (14B) AI Model Tested Locally: Performance, Limitations and Future Potential appeared first on Geeky Gadgets.
https://www.geeky-gadgets.com/microsofts-phi-4-14b-ai-model/
https://www.geeky-gadget…-limitations.jpg
2024-12-16T12:15:30Z
Microsoft's new Phi-4, a 14-billion-parameter language model, represents a significant development in artificial intelligence, particularly in tackling complex reasoning tasks. Designed for applications such as structured data extraction, code generation, and question answering, the latest language model from Microsoft demonstrates both notable strengths and clear limitations.

In this Phi-4 (14B) review, Venelin Valkov provides more insight into the strengths and weaknesses of Phi-4, based on local testing using Ollama. From its ability to generate well-formatted code to its struggles with accuracy and consistency, we'll explore what this model gets right and where it falls short. Whether you're a developer, a data analyst, or just curious about the latest in AI, this breakdown will give you a clear picture of what Phi-4 can (and can't) do right now, and what might be on the horizon for its future development.

Phi-4: A Closer Look at the Model

Phi-4 is engineered to address advanced reasoning challenges by using a combination of synthetic and real-world datasets. Its architecture includes post-training enhancements aimed at improving its performance across a variety of use cases. Benchmarks suggest that Phi-4 can outperform some larger models in specific reasoning tasks, showcasing its efficiency in targeted scenarios. However, inconsistencies observed during testing underscore that the model is still evolving and requires additional development to achieve broader applicability.

The model's design focuses on balancing computational efficiency with task-specific performance. By optimizing its architecture for reasoning tasks, Phi-4 demonstrates potential in areas where precision and structured outputs are critical. However, its limitations in handling certain complex tasks highlight the need for further refinement.

Strengths of Phi-4

Phi-4 excels in several areas, particularly in tasks requiring structured data handling and code generation. Its key strengths include:

Structured data extraction: The model is adept at extracting detailed and accurate information from complex datasets, such as purchase records or tabular data. This capability makes it a valuable tool for professionals working in data-intensive fields.

Code generation: Phi-4 performs well in generating clean, well-formatted code, including JSON structures and classification scripts. This feature is especially beneficial for developers and data analysts seeking efficient solutions for repetitive coding tasks.

These strengths position Phi-4 as a promising resource for tasks that demand precision and structured outputs, particularly in professional and technical environments.

Weaknesses and Limitations

Despite its strengths, Phi-4 exhibits several weaknesses that limit its broader applicability. These shortcomings include:

Coding challenges: While capable of generating basic code, the model struggles with more complex tasks such as sorting algorithms, often producing outputs with functional errors.

Financial data summarization: Phi-4 frequently generates inaccurate or fabricated summaries when tasked with financial data, reducing its reliability for critical applications in this domain.

Ambiguous question handling: Responses to unclear or nuanced queries are inconsistent, which diminishes its effectiveness in scenarios requiring advanced reasoning.

Table data extraction: The model's performance in extracting information from tabular data is erratic, with inaccuracies undermining its utility for structured data tasks.

Slow response times: When processing larger inputs, Phi-4 exhibits noticeable delays, making it less practical for time-sensitive applications.

These limitations highlight the areas where Phi-4 requires improvement to compete effectively with more mature models in the market.

Testing Setup and Methodology

The evaluation of Phi-4 was conducted locally using Ollama on an M3 Pro laptop, with 4-bit quantization applied to optimize performance. The testing process involved a diverse range of tasks designed to assess the model's practical capabilities, including coding challenges, tweet classification, financial data summarization, and table data extraction.

This controlled testing environment provided valuable insights into the model's strengths and weaknesses, offering a comprehensive view of its real-world performance. By focusing on practical applications, the evaluation highlighted both the potential and the limitations of Phi-4 in addressing specific use cases.

Performance Observations and Comparisons

Phi-4's performance reveals a mixed profile when compared to other language models. While it demonstrates promise in certain areas, it falls short in others. Key observations from the testing include:

Strengths: The model's ability to handle structured data extraction remains a standout feature, showcasing its potential in domains where precision is critical.

Weaknesses: Issues such as hallucinations, inaccuracies, and inconsistent reasoning performance limit its broader utility and reliability.

Comparative limitations: When compared to more recent models like LLaMA 2.5, Phi-4 lags behind in terms of overall refinement and reliability. Additionally, the absence of officially released weights from Microsoft complicates direct comparisons and limits the model's accessibility for further evaluation.

While Phi-4 demonstrates efficiency in specific tasks, its inconsistent performance and lack of polish hinder its ability to compete with more advanced models. These observations underscore the need for further updates and enhancements to unlock the model's full potential.

Future Potential and Areas for Improvement

Phi-4 represents a step forward in AI language modeling, particularly in tasks involving structured data and targeted reasoning applications. However, its current limitations, ranging from inaccuracies and hallucinations to slow response times, highlight the need for continued development. Future updates, including the release of official weights and further optimization of its architecture, could address these issues and significantly enhance its performance.

For now, Phi-4 serves as a valuable tool for exploring the evolving capabilities of AI language models. Its strengths in structured data tasks and code generation make it a promising option for specific use cases, while its weaknesses provide a roadmap for future improvements. As the field of AI continues to advance, Phi-4's development will likely play a role in shaping the next generation of language models.

Media Credit: Venelin Valkov
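For readers who want to repeat a test like this themselves, the sketch below shows one way to run a structured-extraction prompt against a locally served Phi-4 with the ollama Python client; the "phi4" model tag and the receipt text are assumptions, not details from Valkov's setup.

import ollama

# Minimal sketch: a local structured-extraction test against Phi-4 served by Ollama.
# Pull the model first (e.g., `ollama pull phi4`); the tag name is an assumption.
prompt = (
    "Extract the items and prices from this receipt as JSON:\n"
    "2x coffee $3.50 each, 1x bagel $2.75, tip $1.00"
)
response = ollama.chat(
    model="phi4",
    messages=[{"role": "user", "content": prompt}],
)
print(response["message"]["content"])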
Content Creation/Decision Making
Computer and Mathematical/Business and Financial Operations
null
null
null
null
null
null
news
Mike Loukides
Generative Logic
Alibaba’s latest model, QwQ-32B-Preview, has gained some impressive reviews for its reasoning abilities. Like OpenAI’s GPT-4 o1,1 its training has emphasized reasoning rather than just reproducing language. That seemed like something worth testing out—or at least playing around with—so when I heard that it very quickly became available in Ollama and wasn’t too large to […]
https://www.oreilly.com/radar/generative-logic/
https://www.oreilly.com/…6c18aae08863.jpg
2024-12-10T11:04:55Z
Alibabas latest model, QwQ-32B-Preview, has gained some impressive reviews for its reasoning abilities. Like OpenAIs GPT-4 o1,1 its training has emphasized reasoning rather than just reproducing language. That seemed like something worth testing outor at least playing around withso when I heard that it very quickly became available in Ollama and wasnt too large to run on a moderately well-equipped laptop, I downloaded QwQ and tried it out. I also tried a few competing models: GPT-4 o1 and Gemma-2-27B. GPT-4 o1 was the first model to claim that it had been trained specifically for reasoning. Gemma, as far as I know, makes no such claim.How do you test a reasoning model? Its not as easy as just checking facts. I thought, Maybe I could ask it to prove Fermats Last Theorem, but I quickly realized that the only response Id be able to evaluate was I cant prove this. (Which was, in fact, the response I received from all three models.) I cant prove Fermats Last Theorem either, nor do I claim to understand any of the massive proof that mathematicians have accepted. If QwQ sputtered out a few thousand tokens of proof, I could fairly confidently say that its reasoning was flawedjust because I know the actual proof is much longer, a hundred pages of dense mathematical reasoning. But thats hardly a valid test.What about Fermats Little Theorem? Thats a lot easier, but its a textbook example, so whatever the result, Id never know whether QwQ reasoned correctly or if it just managed to parrot something from its training set. There are more than a few math textbooks online, and its fair to assume that all of them are in the training data.So I tried a slightly more obscure example: Prove that if a number consists only of ones (like 11111111) and is prime, the number of digits (all 1s) must also be prime. Ive managed to prove this, but its not a problem thats famous enough to attract textbook attention. So lets go! Heres QwQs proof:>>> Can you prove that if all the digits of a prime number are 1, then the number of digits must be prime?Let’s tackle this problem step by step. So, the statement is: If all the digits of a prime number are 1, then the number of digits must be prime. Hmm, interesting. Let’s break this down.First, let’s understand what we’re dealing with. We have a prime number where all its digits are 1. For example, numbers like 11, 111, 1111, etc., but only considering those that are prime. We need to prove that the number of digits in such a prime number must itself be a prime number.Let me think about this. Let’s denote the number of digits by \( n \). So, the number is composed of \( n \) ones. Such numbers are known as repunit numbers. A repunit is a number consisting solely of ones in its decimal representation. For example, \( R_2 = 11 \), \( R_3 = 111 \), \( R_4 = 1111 \), and so on, where \(\ R_n \) denotes a repunit of length \( n \).Our goal is to show that if \( R_n \) is prime, then \( n \) must be prime.Let’s consider the contrapositive of this statement. The contrapositive of “If \( R_n \) is prime, then \( n \) is prime” is “If \( n \) is not prime, then \( R_n \) is not prime.” If we can prove the contrapositive, that will establish the original statement.and so on, until **Final Answer**\[ \boxed{\text{If } R_n \text{ is prime, then } n \text{ is prime.}} \]Ive spared you the (very) chatty proofif you want to see it, put QwQ to the text yourself. Suffice it to say that it was correct. Both GPT-4 o1 and Gemma-2-27B gave me the same proof but with less chatter. 
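(As a side note, the theorem itself is easy to sanity-check numerically, independent of any model's proof. The short script below is an illustration rather than part of the experiment described here; it assumes sympy is installed.)

from sympy import isprime

# Numerical sanity check (not a proof): if n is composite, the repunit with n ones
# should also be composite. Checks all n up to 60.
for n in range(2, 61):
    repunit = (10**n - 1) // 9  # the number 111...1 with n ones
    if not isprime(n) and isprime(repunit):
        print(f"counterexample at n = {n}")
        break
else:
    print("no counterexamples up to n = 60")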
Gemma skipped some steps that a human might consider obvious. Next, can we go a step farther and generalize?>>> does this proof hold for bases other than base 10?That’s an interesting question! Let’s explore whether this proof can be extended to bases other than base 10.In base \( b \), a repunit of length \( n \) is defined as:\[ R_n^{(b)} = \frac{b^n - 1}{b - 1} \]and so onAgain, try it yourselfand again, QwQ proved it correctly. This second proposition isnt as likely to show up in textbooks or other resources; its less of a proof than an observation that the original proof didnt make any assumptions about the base.When I asked GPT to prove the same theorem, I got a very similar (and correct) proof, stated more formally and with less color commentary. That isnt particularly surprising, since GPT has also been trained to reason. I was more surprised to see that Gemma-2-27B also gave me a correct proof. Gemma has been trained on mathematical texts but not specifically on reasoning. (Perhaps Googles marketing never thought to call this training reasoning.) Gemma omitted some of the stepssteps a regular human would probably omit as obvious but that a mathematician would write out for completeness. (Just to make sure, I asked GPT to confirm that Gemmas proof was correct. It agreed.)Have we proven that training models to reason works? Well, we cant claim to have proven anything on the basis of one successful trialor, for that matter, on the basis of an extremely large number of trials. (In case youre wondering, Gemma-2-7B, an even smaller model, failed.) But we have learned something very important. Think about the size of the models: OpenAI has said nothing about the size of GPT-4 o1, but it is rumored to have over a trillion parameters. QwQ weighs in at 32 billion parameters, and Gemma-2-27B at 27 billion. So QwQ and Gemma2 are between nearly two orders of magnitude smaller than GPT. Furthermore, GPT runs on what must be considered one of the worlds largest supercomputers. We dont know the size, but we do know that OpenAIs infrastructure is massive and includes a large percentage of the worlds high-end GPUs. QwQ and Gemma ran happily on my MacBook Pro. They made the fan spin and sucked down the battery but nothing extraordinary. Granted, GPT is serving thousands of users simultaneously, so it isnt really a fair comparison. But its important to realize that GPT isnt the only game in town and that models running locally can equal GPT on nontrivial tasks. Most people who have experimented with running models locally have come to similar conclusions, but think about what this means. If youre building an AI application, you dont have to tie yourself to OpenAI. Smaller open models can do the joband theyll shield you from OpenAIs bills (and inevitable price increases), theyll let you keep your data local, and theyll leave you in control of your destiny.What else can we learn? I have wondered how a language model can be trained for logic; my intuition said that would be a harder and more complex problem than training it for language. My intuition was wrong. I dont know how these models were trained, but I now think that producing logic successfully is, in many ways, simpler than generating language. Why? QwQs verbosity gives us a big hint: Lets consider the contrapositive of this statement. A contrapositive is simply a logical pattern: If A implies B, then not B implies not A. What other logical patterns can we think of? Syllogisms: If A implies B and B implies C, then A implies C. 
Proof by contradiction: To prove that A implies B, assume that A implies not B and show that assumption is false. Induction: Show that if A(n) implies B(n), then A(n+1) implies B(n+1); then show that A(0) implies B(0).It would be easy to grow a much longer list of patterns. There are better notations to represent these patterns, but a longer list and better representations arent important here. Whats important is to realize that these are patternsand that composing logical patterns into logical statements or proofs isnt fundamentally different from composing words (or tokens) into sentences. Is pushing patterns around the essence of logic? Thats not a fair question: Its logic if you do it correctly, illogic if you dont. The logic isnt in the patterns but in knowing how to assemble the patterns to solve problemsand the process of assembling patterns has to be the focus of training, looking at millions of examples of logical reasoning to model the way patterns are assembled into wholes. Any of these logical patterns can lead you astray if youre not careful; its easy to construct false syllogisms by starting with premises that are incorrect. I dont expect logic to cure the problem of hallucination. But I suspect that training a model in logical patterns is a better way for the model to learn logic than simply training it on words (human utterances). Thats the bet that OpenAI, Alibaba, and possibly Google are makingand they seem to be winning.Can we go further? Are there other kinds of patterns that language models could be trained on? Yes. Generative AI has proven useful for generating code but hasnt (yet) made significant inroads into software design. Could training models specifically on design patterns be a breakthrough?2 I dont know, but Id like to see someone try. A model specialized for software design would be worth having.Could we do better with generative music if we trained models on the patterns analyzed in music theory, in addition to audio? Applications like Suno are a lot of fun, but when you get down to it, theyre just repeating the clichés of common musical styles. Would it help to give Suno some knowledge of music theory, knowledge of the patterns behind music in addition to the music itself? Would language models write better poetry if they were trained on the patterns found in poetic language (rhetorical devices, figurative speech) rather than just words? One of my first experiments with generative AI was to ask GPT-3 to write a Petrarchan sonnet, which has a different structure from the more common Shakespearian sonnet. GPT-3 and its contemporaries failed. It was a long time before I found a model that could do that successfully; although most models could define a Petrarchan sonnet, they could only generate Shakespearean sonnets. That generation of models was trained only on the words, not the larger patterns.Is this a way forward for AI? I dont know, but Id like to see AI researchers try. In the meantime, though, its enough to realize that, powerful as the GPT models are, you can run small open models on a laptop or a phone that perform equally well.FootnotesI tested on the Preview, which has now been promoted to GPT-4 o1. I did not retest with the final o1, which presumably has had further training and optimization.Design patterns are generally associated with object-oriented design, but the concept is really more general. Design patterns attempt to name for solutions to problems that you see every day; naming the solution allows you to talk about it. 
That definition is applicable to any discipline, including functional programming and (of course) architecture.
Decision Making/Content Creation/Discovery
Education, Training, and Library/Life, Physical, and Social Science/Computer and Mathematical
null
null
null
null
null
null
news
null
A Replacement for Bert
We’re on a journey to advance and democratize artificial intelligence through open source and open science.
https://huggingface.co/blog/modernbert
https://huggingface.co/b…rt/thumbnail.png
2024-12-19T16:53:58Z
TL;DR

This blog post introduces ModernBERT, a family of state-of-the-art encoder-only models representing improvements over older-generation encoders across the board, with an 8192 sequence length, better downstream performance, and much faster processing. ModernBERT is available as a slot-in replacement for any BERT-like model, with both a base (139M params) and large (395M params) model size.

How to use these models with transformers

ModernBERT will be included in v4.48.0 of transformers. Until then, it requires installing transformers from main:

pip install git+https://github.com/huggingface/transformers.git

Since ModernBERT is a Masked Language Model (MLM), you can use the fill-mask pipeline or load it via AutoModelForMaskedLM. To use ModernBERT for downstream tasks like classification, retrieval, or QA, fine-tune it following standard BERT fine-tuning recipes. If your GPU supports it, we recommend using ModernBERT with Flash Attention 2 to reach the highest efficiency. To do so, install Flash Attention as follows, then use the model as normal:

pip install flash-attn

Using AutoModelForMaskedLM:

from transformers import AutoTokenizer, AutoModelForMaskedLM

model_id = "answerdotai/ModernBERT-base"
tokenizer = AutoTokenizer.from_pretrained(model_id)
model = AutoModelForMaskedLM.from_pretrained(model_id)

text = "The capital of France is [MASK]."
inputs = tokenizer(text, return_tensors="pt")
outputs = model(**inputs)

# To get predictions for the mask:
masked_index = inputs["input_ids"][0].tolist().index(tokenizer.mask_token_id)
predicted_token_id = outputs.logits[0, masked_index].argmax(axis=-1)
predicted_token = tokenizer.decode(predicted_token_id)
print("Predicted token:", predicted_token)
# Predicted token: Paris

Using a pipeline:

import torch
from transformers import pipeline
from pprint import pprint

pipe = pipeline(
    "fill-mask",
    model="answerdotai/ModernBERT-base",
    torch_dtype=torch.bfloat16,
)

input_text = "He walked to the [MASK]."
results = pipe(input_text)
pprint(results)

Note: ModernBERT does not use token type IDs, unlike some earlier BERT models. Most downstream usage is identical to standard BERT models on the Hugging Face Hub, except you can omit the token_type_ids parameter.

Introduction

BERT was released in 2018 (millennia ago in AI-years!) and yet it's still widely used today: in fact, it's currently the second most downloaded model on the Hugging Face Hub, with more than 68 million monthly downloads, second only to another encoder model fine-tuned for retrieval. That's because its encoder-only architecture makes it ideal for the kinds of real-world problems that come up every day, like retrieval (such as for RAG), classification (such as content moderation), and entity extraction (such as for privacy and regulatory compliance).

Finally, 6 years later, we have a replacement! Today, we at Answer.AI and LightOn (and friends!) are releasing ModernBERT. ModernBERT is a new model series that is a Pareto improvement over BERT and its younger siblings across both speed and accuracy.
This model takes dozens of advances from recent years of work on large language models (LLMs), and applies them to a BERT-style model, including updates to the architecture and the training process.We expect to see ModernBERT become the new standard in the numerous applications where encoder-only models are now deployed, such as in RAG pipelines (Retrieval Augmented Generation) and recommendation systems.In addition to being faster and more accurate, ModernBERT also increases context length to 8k tokens (compared to just 512 for most encoders), and is the first encoder-only model that includes a large amount of code in its training data. These features open up new application areas that were previously inaccessible through open models, such as large-scale code search, new IDE features, and new types of retrieval pipelines based on full document retrieval rather than small chunks.But in order to explain just what we did, lets first take a step back and look at where weve come from.Decoder-only modelsThe recent high-profile advances in LLMs have been in models like GPT, Llama, and Claude. These are decoder-only models, or generative models. Their ability to generate human-like content has enabled astonishing new GenAI application areas like generated art and interactive chat. These striking applications have attracted major investment, funded booming research, and led to rapid technical advances. What weve done, essentially, is port these advances back to an encoder-only model.Why? Because many practical applications need a model thats lean and mean! And it doesnt need to be a generative model.More bluntly, decoder-only models are too big, slow, private, and expensive for many jobs. Consider that the original GPT-1 was a 117 million parameter model. The Llama 3.1 model, by contrast, has 405 billion parameters, and its technical report describes a data synthesis and curation recipe that is too complex and expensive for most corporations to reproduce. So to use such a model, like ChatGPT, you pay in cents and wait in seconds to get an API reply back from heavyweight servers outside of your control.Of course, the open-ended capabilities of these giant generative models mean that you can, in a pinch, press them into service for non-generative or discriminative tasks, such as classification. This is because you can describe a classification task in plain English and ... just ask the model to classify. But while this workflow is great for prototyping, you dont want to pay prototype prices once youre in mass production.The popular buzz around GenAI has obscured the role of encoder-only models. These are the workhorses of practical language processing, the models that are actually being used for such workloads right now in many scientific and commercial applications.Encoder-only modelsThe output of an encoder-only model is a list of numerical values (an embedding vector). You might say that instead of answering with text, an encoder model literally encodes its answer into this compressed, numerical form. That vector is a compressed representation of the model's input, which is why encoder-only models are sometimes referred to as representational models. While decoder-only models (like a GPT) can do the work of an encoder-only model (like a BERT), they are hamstrung by a key constraint: since they are generative models, they are mathematically not allowed to peek at later tokens. They can only ever look backwards. 
This is in contrast to encoder-only models, which are trained so each token can look forwards and backwards (bi-directionally). They are built for this, and it makes them very efficient at what they do.Basically, a frontier model like OpenAI's O1 is like a Ferrari SF-23. Its an obvious triumph of engineering, designed to win races, and thats why we talk about it. But it takes a special pit crew just to change the tires and you cant buy one for yourself. In contrast, a BERT model is like a Honda Civic. Its also an engineering triumph, but more subtly, since it is engineered to be affordable, fuel-efficient, reliable, and extremely useful. And thats why theyre absolutely everywhere.You can see this by looking at it a number of ways.Supporting generative models: One way to understand the prevalence of representational models (encoder-only) is to note how frequently they are used in concert with a decoder-only model to make a system which is safe and efficient.The obvious example is RAG. Instead of relying on the LLMs knowledge trained into the models parameters, the system uses a document store to furnish the LLM with information relevant to the query. But of course this only defers the problem. If the LLM doesnt know which documents are relevant to the query, then the system will need some other process to select those documents? Its going to need a model which is fast and cheap enough that it can be used to encode the large quantities of information needed to make the LLM useful. That model is often a BERT-like encoder-only model.Another example is supervision architectures, where a cheap classifier might be used to ensure that generated text does not violate content safety requirements.In short, whenever you see a decoder-only model in deployment, theres a reasonable chance an encoder-only model is also part of the system. But the converse is not true.Encoder-based systems: Before there was GPT, there were content recommendations in social media and in platforms like Netflix. There was ad targeting in those venues, in search, and elsewhere. There was content classification for spam detection, abuse detection, etc.. These systems were not built on generative models, but on representational models like encoder-only models. And all these systems are still out there and still running at enormous scale. Imagine how many ads are targeted per second around the world!Downloads: On HuggingFace, RoBERTa, one of the leading BERT-based models, has more downloads than the 10 most popular LLMs on HuggingFace combined. In fact, currently, encoder-only models add up to over a billion downloads per month, nearly three times more than decoder-only models with their 397 million monthly downloads. In fact, the `fill-mask` model category, composed of encoder base models such as ModernBERT, ready to be fine-tuned for other downstream applications, is the most downloaded model category overall.Inference costs: What the above suggests, is that on an inference-per-inference basis, there are many times more inferences performed per year on encoder-only models than on decoder-only or generative models. An interesting example is FineWeb-Edu, where model-based quality filtering had to be performed over 15 trillion tokens. The FineWeb-Edu team chose to generate annotations with a decoder-only model, Llama-3-70b-Instruct, and perform the bulk of the filtering with a fine-tuned BERT-based model. This filtering took 6,000 H100 hours, which, at HuggingFace Inference Points pricing of $10/hour, comes to a total of $60,000. 
On the other hand, feeding 15 trillion tokens to popular decoder-only models, even with the lowest-cost option of using Google's Gemini Flash and its low inference cost of $0.075/million tokens, would cost over one million dollars! Performance Overview: Here's a snapshot of the accuracy of ModernBERT and other models across a range of tasks, as measured by standard academic benchmarks. As you can see, ModernBERT is the only model which is a top scorer across every category, which makes it the one model you can use for all your encoder-based tasks. If you've ever done an NLP competition on Kaggle, then you'll know that DeBERTaV3 has been the choice of champions for years. But no longer: not only is ModernBERT the first base-size model to beat DeBERTaV3 on GLUE, it also uses less than 1/5th of DeBERTa's memory. And of course, ModernBERT is fast. It's twice as fast as DeBERTa, and in fact up to 4x faster in the more common situation where inputs are of mixed length. Its long-context inference is nearly 3 times faster than other high-quality models such as NomicBERT and GTE-en-MLM. ModernBERT's context length of 8,192 tokens is over 16x larger than most existing encoders. This is critical, for instance, in RAG pipelines, where a small context often makes chunks too small for semantic understanding. ModernBERT is also the state-of-the-art long-context retriever with ColBERT, and is 9 percentage points above the other long-context models. Even more impressive: this very quickly trained model, simply tuned to compare to other backbones, outperforms even widely-used retrieval models on long-context tasks! For code retrieval, ModernBERT is unique. There's nothing to really compare it to, since there's never been an encoder model like this trained on a large amount of code data before. For instance, on the StackOverflow-QA dataset (SQA), which is a hybrid dataset mixing both code and natural language, ModernBERT's specialized code understanding and long-context capabilities make it the only backbone to score over 80 on this task. This means whole new applications are likely to be built on this capability. For instance, imagine an AI-connected IDE which had an entire enterprise codebase indexed with ModernBERT embeddings, providing fast long-context retrieval of the relevant code across all repositories. Or a code chat service that could describe how an application feature spanning dozens of separate projects works. Compared to the mainstream models, ModernBERT performs better across nearly all of the three broad task categories of retrieval, natural language understanding, and code retrieval. Whilst it slightly lags DeBERTaV3 in one area (natural language understanding), it is many times faster. Please note that ModernBERT, like any other base model, can only do masked word prediction out-of-the-box. To be able to perform other tasks, the base model should be fine-tuned as done in these boilerplates. Compared to the specialized models, ModernBERT is comparable or superior in most tasks. In addition, ModernBERT is faster than most models across most tasks, and can handle inputs up to 8,192 tokens, 16x longer than the mainstream models. Efficiency: Here's the memory (max batch size, BS) and inference (in thousands of tokens per second) efficiency results on an NVIDIA RTX 4090 for ModernBERT and other encoder models. The first thing you might notice is that we're analysing the efficiency on an affordable consumer GPU, rather than the latest unobtainable hyped hardware.
First and foremost, ModernBERT is focused on practicality, not hype. As part of this focus, it also means we've made sure ModernBERT works well for real-world applications, rather than just benchmarks. Models of this kind are normally tested at just the one exact size they're best at: their maximum context length. That's what the fixed column in the table shows. But input sizes vary in the real world, so that's the performance we worked hard to optimise: the variable column. As you can see, for variable-length inputs, ModernBERT is much faster than all other models. For long-context inputs, which we believe will be the basis for the most valuable and important future applications, ModernBERT is 2-3x faster than the next fastest model. And, on the practicality dimension again: ModernBERT doesn't require the additional heavy xformers dependency, but instead only requires the now commonplace Flash Attention as a dependency. Furthermore, thanks to ModernBERT's efficiency, it can use a larger batch size than nearly any other model, and can be used effectively on smaller and cheaper GPUs. The efficiency of the base size, in particular, may enable new applications that run directly in browsers, on phones, and so forth. Why is ModernBERT, well, Modern? Now, we've made our case for why we should give some more love to encoder models. As trusted, under-appreciated workhorses, they've had surprisingly few updates since 2018's BERT! Even more surprising: since RoBERTa, there has been no encoder providing overall improvements without tradeoffs (fancily known as Pareto improvements): DeBERTaV3 had better GLUE and classification performance, but sacrificed both efficiency and retrieval. Other models, such as ALBERT, or newer ones, like GTE-en-MLM, all improved over the original BERT and RoBERTa in some ways but regressed in others. However, since the duo's original release, we've learned an enormous amount about how to build better language models. If you've used LLMs at all, you're very well aware of it: while they're rare in the encoder world, Pareto improvements are constant in decoder-land, where models constantly become better at everything. And as we've all learned by now: model improvements are only partially magic, and mostly engineering. The goal of the (hopefully aptly named) ModernBERT project was thus fairly simple: bring this modern engineering to encoder models. We did so in three core ways: a modernized transformer architecture, particular attention to efficiency, and modern data scales and sources. Meet the New Transformer, Same as the Old Transformer: The Transformer architecture has become dominant, and is used by the vast majority of models nowadays. However, it's important to remember that there isn't one but many Transformers. The main thing they share in common is their deep belief that attention is indeed all you need, and, as such, they build various improvements centered around the attention mechanism. ModernBERT takes huge inspiration from the Transformer++ (as coined by Mamba), first used by the Llama2 family of models. We replace older BERT-like building blocks with their improved equivalents. Namely, we: replace the old positional encoding with "rotary positional embeddings" (RoPE), which makes the model much better at understanding where words are in relation to each other, and allows us to scale to longer sequence lengths; switch out the old MLP layers for GeGLU layers, improving on the original BERT's GeLU activation function;
streamline the architecture by removing unnecessary bias terms, letting us spend our parameter budget more effectively; and add an extra normalization layer after embeddings, which helps stabilize training. Upgrading a Honda Civic for the Race Track: We've covered this already: encoders are no Ferraris, and ModernBERT is no exception. However, that doesn't mean it can't be fast. When you get on the highway, you generally don't go and trade in your car for a race car, but rather hope that your everyday reliable ride can comfortably hit the speed limit. In fact, for all the application cases we mentioned above, speed is essential. Encoders are very popular in uses where they either have to process tons of data, allowing even tiny speed increments to add up very quickly, or where latency is very important, as is the case in RAG. In a lot of situations, encoders are even run on CPU, where efficiency is even more important if we want results in a reasonable amount of time. As with most things in research, we build while standing on the shoulders of giants, and heavily leverage Flash Attention 2's speed improvements. Our efficiency improvements rely on three key components: Alternating Attention, to improve processing efficiency; Unpadding and Sequence Packing, to reduce computational waste; and Hardware-Aware Model Design, to maximise hardware utilization. Global and Local Attention: One of ModernBERT's most impactful features is alternating attention, rather than full global attention. In technical terms, this means that our attention mechanism only attends to the full input every 3 layers (global attention), while all other layers use a sliding window where every token only attends to the 128 tokens nearest to itself (local attention). As attention's computational complexity balloons up with every additional token, this means ModernBERT can process long input sequences considerably faster than any other model. Conceptually, the reason this works is pretty simple: picture yourself reading a book. For every sentence you read, do you need to be fully aware of the entire plot to understand most of it (full global attention)? Or is awareness of the current chapter enough (local attention), as long as you occasionally think back on its significance to the main plot (global attention)? In the vast majority of cases, it's the latter. Unpadding and Sequence Packing: Another core mechanism contributing to ModernBERT's efficiency is its use of unpadding and sequence packing. In order to be able to process multiple sequences within the same batch, encoder models require them to be the same length, so they can perform parallel computation. Traditionally, we've relied on padding to achieve this: figure out which sentence is the longest, and add meaningless tokens (padding tokens) to fill up every other sequence. While padding solves the problem, it doesn't do so elegantly: a lot of compute ends up being spent and wasted on padding tokens, which do not contribute any semantic information. (Figure: comparing padding with sequence packing. Sequence packing (unpadding) avoids wasting compute on padding tokens and has more consistent non-padding token counts per batch. Samples are still processed individually through careful masking.) Unpadding solves this issue: rather than keeping these padding tokens, we remove them all and concatenate the remaining real tokens into mini-batches with a batch size of one, avoiding all unnecessary computations.
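To make the unpadding idea concrete, here is a minimal sketch with made-up token ids; it illustrates the concept only and is not ModernBERT's actual implementation:

import torch

# Toy batch: 3 sequences padded to length 6 (0 = padding token id).
input_ids = torch.tensor([
    [101, 7592, 2088, 102, 0, 0],
    [101, 2023, 102, 0, 0, 0],
    [101, 2003, 2009, 2204, 102, 0],
])
attention_mask = input_ids != 0

# Unpadding: keep only the real tokens, concatenated into one long sequence.
flat_ids = input_ids[attention_mask]  # shape (12,) instead of (3, 6)

# Cumulative sequence lengths tell a variable-length attention kernel where each
# sample starts and ends, so samples never attend to each other.
seq_lens = attention_mask.sum(dim=1)
cu_seqlens = torch.cat([torch.zeros(1, dtype=torch.long), seq_lens.cumsum(0)])
print(flat_ids.shape, cu_seqlens)  # 12 real tokens instead of 18 padded slots; tensor([0, 4, 7, 12])

Variable-length attention kernels, such as those in Flash Attention, consume exactly this kind of flattened-tokens-plus-offsets representation, which is what lets the padding compute be skipped entirely.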
If you're using Flash Attention, our implementation of unpadding is even faster than previous methods, which heavily relied on unpadding and repadding sequences as they went through the model: we go one step further by introducing our own implementation of unpadding, relying heavily on recent developments in Flash Attention's RoPE support. This allows ModernBERT to only have to unpad once, and optionally repad sequences after processing, resulting in a 10-20% speedup over previous methods. To speed up pre-training even further, unpadding is in good company within our model, as we use it in conjunction with sequence packing. Sequence packing here is a logical next step: as we're concatenating inputs into a single sequence, and GPUs are very good at parallelisation, we want to maximise the computational efficiency we can squeeze out of a single forward model pass. To do so, we use a greedy algorithm to group individual sequences into concatenated ones that are as close to the model's maximum input length as possible. Paying Attention to Hardware: Finally, the third facet of ModernBERT's efficiency is hardware design. We attempted to balance two insights that have been highlighted by previous research. Deep & Narrow vs Wide & Shallow: research shows that deeper models with narrower layers often perform better than shallow models with fewer, wider layers. However, this is a double-edged sword: the deeper the model, the less parallelizable it becomes, and thus the slower it runs at identical parameter counts. Hardware Efficiency: model dimensions need to align well with GPU hardware for maximum performance, and different target GPUs result in different constraints. Sadly, there is no magic recipe to make a model run similarly well on a wide range of GPUs, but there is an excellent cookbook: The Case for Co-Designing Model Architectures with Hardware, in which the ways to optimize a model architecture for a given GPU are carefully laid out. We came up with a heuristic to extend their method to a basket of GPUs, while respecting a given set of constraints. Logically, the first step is to define said constraints, in our case: defining our target GPUs as common inference ones (RTX 3090/4090, A10, T4, L4); roughly defining our target model sizes at 130-to-150 million parameters for ModernBERT-Base and 350-to-420 million for ModernBERT-Large; requiring the final embedding sizes to match the original BERT's dimensions, 768 for base and 1024 for large, to maximize backwards compatibility; and setting performance constraints which are common across the basket of GPUs. Afterwards, we experimented with multiple model designs via a constrained grid search, varying both layer counts and layer width. Once we'd identified the shapes that appeared to be the most efficient ones, we confirmed that our heuristics matched real-world GPU performance, and settled on the final model designs. Training: def data(): return [text, bad_text, math, code] (picture this exact scene, but replace "Developers" with "Data"). Another big aspect in which encoders have been trailing behind is training data. This is often understood to mean solely training data scale, but this is not actually the case: previous encoders, such as DeBERTaV3, were trained for long enough that they might have even breached the trillion-token scale! The issue, rather, has been training data diversity: many of the older models train on limited corpora, generally consisting of Wikipedia and Wikibooks.
These data mixtures are very noticeably single text modality: they contain nothing but high-quality natural text. In contrast, ModernBERT is trained on data from a variety of English sources, including web documents, code, and scientific articles. It is trained on 2 trillion tokens, of which most are unique, rather than the standard 20-to-40 repetitions common in previous encoders. The impact of this is immediately noticeable: out of all the existing open source encoders, ModernBERT is in a class of its own on programming-related tasks. We're particularly interested in what downstream uses this will lead to, in terms of improving programming assistants. Process: We stick to the original BERT's training recipe, with some slight upgrades inspired by subsequent work: we remove the Next-Sentence Prediction objective, since it has since been shown to add overhead for no clear gains, and increase the masking rate from 15% to 30%. Both models are trained with a three-phase process. First, we train on 1.7T tokens at a sequence length of 1024. We then adopt a long-context adaptation phase, training on 250B tokens at a sequence length of 8192, while keeping the total tokens seen per batch more or less consistent by lowering the batch size. Finally, we perform annealing on 50 billion tokens sampled differently, following the long-context extension ideal mix highlighted by ProLong. Training in three phases is our way of ensuring our model is good across the board, which is reflected in its results: it is competitive on long-context tasks, at no cost to its ability to process short context. But it has another benefit: for the first two phases, we train using a constant learning rate once the warmup phase is complete, and only perform learning rate decay on the final 50 billion tokens, following the Trapezoidal (or Warmup-Stable-Decay) learning rate schedule. And what's more: we will release every single intermediate checkpoint from these stable phases, inspired by Pythia. Our main reason for doing so was supporting future research and applications: anyone is free to restart training from any of our pre-decay checkpoints, and perform annealing on domain-appropriate data for their intended use! The tricks, it's all about the tricks! If you've made it this far into this announcement, you're probably used to this: of course, we use tricks to make things quicker here too. To be precise, we have two main tricks. Let's start with the first one, which is pretty common: since the initial training steps are updating random weights, we adopt batch-size warmup: we start with a smaller batch size so the same number of tokens update the model weights more often, then gradually increase the batch size to the final training size. This significantly speeds up the initial phase of model training, where the model learns its most basic understanding of language. The second trick is far more uncommon: weight initialization via tiling for the larger model size, inspired by Microsoft's Phi family of models. This one's based on the following realization: why initialize ModernBERT-large's weights with random numbers when we have a perfectly good (if we dare say so ourselves) set of ModernBERT-base weights just sitting there? And indeed, it turns out that tiling ModernBERT-base's weights across ModernBERT-large works better than initializing from random weights.
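To make the tiling trick concrete, here is a toy sketch (invented shapes and helper function; not the actual ModernBERT initialization code) in which a smaller weight matrix is repeated to fill a larger one:

import torch

def tile_weights(small: torch.Tensor, rows: int, cols: int) -> torch.Tensor:
    # Repeat the smaller matrix enough times to cover the target shape, then crop.
    r_rep = -(-rows // small.shape[0])  # ceiling division
    c_rep = -(-cols // small.shape[1])
    return small.repeat(r_rep, c_rep)[:rows, :cols].clone()

base_proj = torch.randn(768, 768)           # stand-in for a ModernBERT-base projection matrix
large_proj = tile_weights(base_proj, 1024, 1024)
print(large_proj.shape)                     # torch.Size([1024, 1024])

The real recipe may place or scale the copied blocks differently, but the core idea is the same: start the larger model from weights that already encode something useful, rather than from noise.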
It also has the added benefit of stacking nicely with batch size warmup for even faster initial training. Conclusion: In this blog post we introduced the ModernBERT models, a new state-of-the-art family of small and efficient encoder-only models, finally giving BERT a much-needed do-over. ModernBERT demonstrates that encoder-only models can be improved by modern methods. They continue to offer very strong performance on some tasks, providing an extremely attractive size/performance ratio. More than anything, we're really looking forward to seeing what creative ways to use these models the community will come up with! To encourage this, we're opening a call for demos until January 10th, 2025: the 5 best ones will get added to this post in a showcase section and win a $100 (or local currency equivalent) Amazon gift card, as well as a 6-month HuggingFace Pro subscription! If you need a hint to get started, here's a demo we thought about: a code similarity HF space! And remember, this is an encoder model, so all the coolest downstream applications will likely require some sort of fine-tuning (on real or perhaps decoder-model synthetic data?). Thankfully, there are lots of cool frameworks out there to support fine-tuning encoders: Transformers itself for various tasks, including classification; GliNER for zero-shot Named Entity Recognition; or Sentence-Transformers for retrieval and similarity tasks! Links: LightOn sponsored the compute for this project on Orange Business Cloud Avenue.
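As a quick illustration of the out-of-the-box masked-word prediction mentioned above, here is a hedged sketch using the Transformers pipeline API; the model id below is assumed to be the published base checkpoint, so substitute whatever id the release actually uses:

from transformers import pipeline

# Assumed checkpoint id and mask token ([MASK]); adjust if the released model differs.
fill_mask = pipeline("fill-mask", model="answerdotai/ModernBERT-base")

for pred in fill_mask("Paris is the [MASK] of France."):
    print(f"{pred['token_str']:>12}  {pred['score']:.3f}")

Anything beyond fill-in-the-blank (classification, retrieval, NER) requires fine-tuning the base model first, as the post notes.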
Unknown
Unknown
null
null
null
null
null
null
news
Anna Tong and Krystal Hu
Exclusive-Microsoft works to add non-OpenAI models into 365 Copilot products, sources say
Microsoft has been working on adding internal and third-party artificial intelligence models to power its flagship AI product Microsoft 365 Copilot, in a bid...
https://finance.yahoo.com/news/exclusive-microsoft-works-add-non-185227175.html
https://media.zenfs.com/en/reuters-finance.com/5fdac7e3cea367f98a3017f26f27c96f
2024-12-23T18:52:27Z
By Anna Tong and Krystal Hu. Microsoft has been working on adding internal and third-party artificial intelligence models to power its flagship AI product Microsoft 365 Copilot, in a bid to diversify from the current underlying technology from OpenAI and reduce costs, sources familiar with the effort told Reuters. It is the latest effort by Microsoft, which is a major backer of OpenAI, to lessen its dependence on the AI startup - a departure from recent years when Microsoft touted its early access to OpenAI's models. When Microsoft announced 365 Copilot in March 2023, a major selling point was that it used OpenAI's GPT-4 model. Microsoft is also seeking to reduce 365 Copilot's reliance on OpenAI due to concerns about cost and speed for enterprise users, according to the sources, who requested anonymity to discuss private matters. A Microsoft spokesperson said OpenAI continues as the company's partner on frontier models, a term for the most advanced AI models available. The original agreement between the two companies allows the software giant to customize OpenAI's models. "We incorporate various models from OpenAI and Microsoft depending on the product and experience," Microsoft said in a statement. OpenAI declined to comment. In addition to training its own smaller models including the latest Phi-4, Microsoft is also working to customize other open-weight models to make 365 Copilot faster and more efficient, the sources added. The goal is to make it less expensive for Microsoft to run 365 Copilot, and potentially pass along those savings to the end customer, one of the sources said. Microsoft's leaders, including Chief Executive Officer Satya Nadella, are tracking the efforts closely, the same source added. The move mirrors those of other Microsoft business units which have changed the ways in which they use OpenAI models. GitHub, which Microsoft acquired in 2018, added models from Anthropic and Google in October as alternatives to OpenAI's GPT-4o. Its consumer chatbot Copilot, revamped in October, is now powered by in-house models as well as OpenAI models. Microsoft 365 Copilot, an AI assistant built into Microsoft's suite of enterprise software including Word and PowerPoint, is still trying to prove its return on investment to enterprises. Microsoft has not shared specific sales data on the number of licenses sold, and there have been concerns about pricing and utility. A survey of 152 information technology companies showed the vast majority of them had not progressed their 365 Copilot initiatives past the pilot stage, research firm Gartner said in August. Still, analysts at BNP Paribas Exane said they have seen an acceleration in adoption, and expect Microsoft to sell 365 Copilot to more than 10 million paid users this year. Microsoft also said in a November blog post that 70% of Fortune 500 companies are using 365 Copilot. (Reporting by Anna Tong in San Francisco and Krystal Hu in Toronto; Additional reporting by Jeffrey Dastin in San Francisco; Editing by Matthew Lewis)
Process Automation/Decision Making
Management/Business and Financial Operations
null
null
null
null
null
null
news
Julian Horsey
DeepSeek v3 The First Open AI Model to Rival OpenAI and Anthropic
DeepSeek v3 is an open-weight AI model that stands as a direct competitor to proprietary systems like OpenAI’s ChatGPT and Anthropic’s Claude AI models. By combining advanced technical innovations, cost-efficient training, and impressive performance benchmarks, it represents a significant milestone in the evolution of open source AI. This overview by Prompt Engineering explores its defining features, […]The post DeepSeek v3 The First Open AI Model to Rival OpenAI and Anthropic appeared first on Geeky Gadgets.
https://www.geeky-gadgets.com/deepseek-v3-ai/
https://www.geeky-gadget…e-comparison.jpg
2024-12-30T09:02:54Z
DeepSeek v3 is an open-weight AI model that stands as a direct competitor to proprietary systems like OpenAI's ChatGPT and Anthropic's Claude AI models. By combining advanced technical innovations, cost-efficient training, and impressive performance benchmarks, it represents a significant milestone in the evolution of open source AI. This overview by Prompt Engineering explores its defining features, technical advancements, performance metrics, and challenges, offering a comprehensive view of its role in the AI landscape. DeepSeek v3 isn't just another AI model; it's a fantastic option for those who value accessibility, collaboration, and innovation. With its impressive performance, cost-efficient training, and open-weight design, it's proving that open source AI can stand toe-to-toe with the giants. Whether you're a developer looking for a flexible tool, a researcher eager to push boundaries, or simply someone curious about the future of AI, DeepSeek v3 offers a glimpse into what's possible when technology is made for everyone. What Sets DeepSeek v3 Apart? DeepSeek v3 is an open-weight AI model boasting over 600 billion parameters, with 37 billion specifically optimized for production use. Trained on 14.8 trillion high-quality tokens, the model was developed at a cost of $5.6 million over 57 days using a constrained GPU cluster. This scale of training enables it to rival, and in some cases outperform, proprietary models like GPT-4 and Claude 3.5 in specific benchmarks. The open-weight nature of DeepSeek v3 is one of its most defining attributes. Unlike closed proprietary systems, it allows developers and researchers to access, modify, and build upon the model. This openness fosters collaboration and innovation, making it a valuable tool for advancing AI research and practical applications. By providing unrestricted access, DeepSeek v3 enables you to explore new possibilities, whether in academic research, software development, or enterprise solutions. Performance and Capabilities: DeepSeek v3 delivers exceptional performance across a wide range of tasks, showcasing its versatility and efficiency. Its key capabilities include: Reasoning: the model excels at solving complex logical and analytical problems, often outperforming competitors in nuanced scenarios such as ethical reasoning and abstract problem-solving. Coding: it demonstrates a strong ability to generate intricate web applications, debug errors, and handle advanced programming tasks, making it a valuable tool for software developers. Speed: with a token generation speed of 90 tokens per second, it is four times faster than its predecessor, allowing seamless real-time applications. Context Handling: its large context window of up to 128,000 tokens allows it to manage extensive conversations or tasks without losing coherence, making it ideal for complex workflows. These features position DeepSeek v3 as a strong contender in the AI space, particularly for applications requiring speed, precision, and adaptability. Its ability to handle diverse tasks with efficiency makes it a versatile tool for both research and practical use. Technical Innovations Driving DeepSeek v3: DeepSeek v3 incorporates several advanced technical features that distinguish it from other models in the market.
These innovations not only enhance its performance but also contribute to its cost-efficiency. FP8 Precision: the use of 8-bit floating-point precision during training reduces computational overhead while maintaining high performance. This approach demonstrates how large-scale models can be trained more efficiently without compromising quality. Load Balancing: advanced load-balancing strategies optimize the use of computational resources, improving both training and inference efficiency. This ensures consistent performance even under constrained hardware conditions. Multi-Token Prediction: this technique enhances token generation speed and accuracy, contributing to the model's overall efficiency and usability in real-time applications. These technical advancements highlight the innovative approach behind DeepSeek v3, setting a benchmark for future AI development practices. By addressing the challenges of scalability and cost, it paves the way for more accessible and efficient AI solutions. Testing and Benchmark Results: Independent benchmarks reveal that DeepSeek v3 performs on par with or surpasses proprietary models in several critical domains. Its performance highlights include: Reasoning: the model excels in ethical reasoning and modified logic problems, demonstrating a nuanced understanding of complex scenarios. However, it occasionally defaults to classical interpretations, which may require further refinement. Coding: its ability to handle advanced programming tasks, such as debugging and generating complex code structures, makes it a valuable asset for developers seeking efficiency and accuracy. These results underscore the model's potential for applications requiring nuanced decision-making, problem-solving, and technical expertise. Its ability to deliver consistent results across diverse tasks makes it a reliable choice for both research and industry use. Accessibility and Practical Applications: DeepSeek v3 is designed with accessibility and flexibility in mind, offering multiple ways for users to interact with and deploy the model. Key features include: APIs and Web Interface: a user-friendly interface and robust APIs make it easy to integrate the model into various workflows, whether for research, development, or enterprise applications. Local Deployment: the model supports deployment on large clusters, providing organizations with the flexibility to tailor its use to their specific infrastructure and requirements. Customization: its open-weight design allows users to adapt and modify the model to suit unique needs, enabling innovation and experimentation across different domains. This accessibility makes DeepSeek v3 an attractive option for developers, researchers, and organizations looking to explore new use cases or enhance existing systems. Its flexibility ensures that it can be seamlessly integrated into a wide range of applications. Challenges and Ethical Considerations: While DeepSeek v3 offers numerous advantages, it also faces several challenges that must be addressed to ensure its reliability and ethical use. Misalignment: the model occasionally identifies itself as GPT-4, likely due to training data contamination.
This raises concerns about the quality and provenance of the data used during development. Censorship: like other large language models, it avoids certain sensitive topics, which may limit its utility in specific contexts or applications. Ethical Concerns: the use of training data from proprietary models introduces potential legal and ethical questions, emphasizing the need for greater transparency in AI development practices. Addressing these challenges will be critical for ensuring the long-term success and ethical deployment of DeepSeek v3. By prioritizing transparency and accountability, developers can build trust and confidence in the model's capabilities. The Role of DeepSeek v3 in the Future of AI: The release of DeepSeek v3 represents a significant step forward for open-weight AI models. Its success highlights the potential of open source innovation to challenge the dominance of proprietary systems, offering accessible and cost-efficient alternatives. As the AI landscape continues to evolve, models like DeepSeek v3 will play a crucial role in driving advancements in accessibility, collaboration, and technical innovation. By combining innovative features with an open-weight design, DeepSeek v3 sets a new standard for what open source AI can achieve. Its ability to deliver high performance at a fraction of the cost of proprietary models makes it a compelling choice for developers, researchers, and organizations worldwide. As challenges are addressed and the model continues to evolve, its impact on the AI ecosystem is likely to grow, shaping the future of artificial intelligence for years to come. Media Credit: Prompt Engineering. Filed Under: AI, Technology News, Top News. Latest Geeky Gadgets Deals. Disclosure: Some of our articles include affiliate links. If you buy something through one of these links, Geeky Gadgets may earn an affiliate commission. Learn about our Disclosure Policy.
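For readers who want to experiment with the hosted model, here is a hedged sketch of calling an OpenAI-compatible chat endpoint; the base URL and model name below are assumptions drawn from the provider's public documentation at the time of writing, so check the current docs before relying on them:

from openai import OpenAI  # pip install openai

# Assumed endpoint and model identifier; verify against DeepSeek's current API documentation.
client = OpenAI(api_key="YOUR_DEEPSEEK_API_KEY", base_url="https://api.deepseek.com")

response = client.chat.completions.create(
    model="deepseek-chat",  # assumed to route to DeepSeek v3
    messages=[
        {"role": "system", "content": "You are a concise coding assistant."},
        {"role": "user", "content": "Write a one-line Python list comprehension that squares the numbers 1 to 10."},
    ],
    temperature=0.2,
)
print(response.choices[0].message.content)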
Unknown
Unknown
null
null
null
null
null
null
news
danzeeeman
The Great American Pastime – Gen AI sportscasting
Created by Dan Moore, The Great American Pastime combines Generative AI with sports storytelling to broadcast an endless series of fictional baseball games on The Infinite Baseball Radio Network. This project relies on advanced Gen AI for game simulation and sportscasting, combined with a visually engaging web interface and media streaming architecture. Category: WebAppTags: #artificial / #broadcast / #fiction / #generative / #Generative Ai / #gpt-4 / #LiquidSoap / #machine learning / #member submission / #narrative / #network / #radio / #simulation / #Sound / #storytelling / #stream / #Vue.js People: Dan Moore CreativeApplications.Net (CAN) is a community supported website. If you enjoy content on CAN, please support us by Becoming a Member. Thank you!
https://www.creativeapplications.net/webapp/the-great-american-pastime/
https://www.creativeappl…-at-14.49.23.jpg
2024-12-05T15:31:00Z
Created by Dan Moore, The Great American Pastime combines Generative AI with sports storytelling to broadcast an endless series of fictional baseball games on The Infinite Baseball Radio Network. This project relies on advanced Gen AI for game simulation and sportscasting, combined with a visually engaging web interface and media streaming architecture. By inviting listeners into a familiar yet subtly artificial version of America's favorite pastime, the work provokes reflection on humanity's increasingly tangled relationship with technology, nostalgia, and the narratives that shape the American cultural identity. The system includes a number of components, starting with a game simulation engine, an AI sportscaster module, audio streaming and media handling, and a frontend user interface. The project is a collaboration between Dan Moore (Machine Learning + Creative Direction), Tim Nolan (Branding), and EMMA (Development), with design by Antidote. The Game Simulation Engine, powered by a fine-tuned Phi-2 small language model, generates structured play-by-play data for each game. Language Model: Phi-2 was trained on ~350,000 at-bats of structured data from two professional baseball seasons, over two weeks on NVIDIA L40 GPUs. It outputs highly detailed simulations, including every pitch, scores, base runner positions, and a text description of the outcome. Generated Data: pitch-by-pitch details (type, speed, and result of each pitch, e.g., strike, ball, foul); at-bat outcomes (results such as strikeouts, singles, or home runs); and game state updates (tracking base runners, outs, and score). State Machine: a simple state machine manages the overall game context. It tracks the score, outs, top or bottom of the inning, and team turns; ensures logical transitions between plays and innings; and provides error correction for when the SLM skips a beat. The AI Sportscaster Module transforms structured game data into engaging audio commentary using Generative AI. Two AI agents, the sportscasters Jack Johnston and Pete Franks, are brought to life by encoding the structured data into prompts for GPT-4o-mini. GPT-4o-mini generates sportscaster dialogue based on play-by-play data and tailors commentary to reflect game intensity, player statistics, and situational context. Two XTTS-v2 speech diffusion models perform text-to-speech, creating lifelike, professional-sounding sportscaster voices. The audio stream is handled by a LiquidSoap streaming server and an AzureCast streaming server. The LiquidSoap server handles playing back the game commentary and mixes in AI-generated crowd noise. The LiquidSoap server also provides real-time metadata to support the frontend web interface. The AzureCast streaming server handles broadcasting the main MP3 stream and HLS stream. The Frontend System, designed by Antidote and developed by the EMMA cooperative, provides a visually rich and user-friendly experience. The front end is built with Vue.js for a responsive and dynamic UI. PixiJS was utilized for the real-time animations and game visualizations. The system is driven by metadata provided by the LiquidSoap streaming server. The live scoreboard syncs with the audio stream to provide the current score, inning progression, and player details.
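To illustrate the commentary-generation step described above, here is a hedged sketch of how structured play-by-play data might be encoded into a prompt for a chat model; the field names, persona text, and client call are illustrative and are not the project's actual code:

import json
from openai import OpenAI

client = OpenAI()  # assumes OPENAI_API_KEY is set in the environment

play = {  # invented example of structured output from the game-simulation model
    "inning": "top 3", "score": {"home": 2, "away": 1},
    "batter": "J. Alvarez", "pitcher": "M. Carter",
    "pitch": {"type": "slider", "speed_mph": 86, "result": "swinging strike"},
    "count": {"balls": 1, "strikes": 2}, "outs": 1, "runners": ["2B"],
}

messages = [
    {"role": "system", "content": "You are Jack Johnston, a warm, old-school baseball radio announcer."},
    {"role": "user", "content": "Call this play in two short sentences:\n" + json.dumps(play)},
]

reply = client.chat.completions.create(model="gpt-4o-mini", messages=messages, temperature=0.8)
print(reply.choices[0].message.content)

The generated text would then be handed to the text-to-speech stage and mixed into the radio stream.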
An embedded player streams audio commentary directly from the LiquidSoap server. Game Simulation: Phi-2 generates structured at-bat data, updated by the state machine. Commentary Generation: GPT-4o-mini creates dynamic, play-by-play commentary, and TTS converts the text to lifelike audio. Streaming: LiquidSoap encodes and broadcasts the audio stream, embedding metadata for frontend synchronization. Frontend Rendering: Vue.js and PixiJS render real-time stats, animations, and scoreboards, driven by metadata APIs. Scalability and Redundancy: LiquidSoap and AI components run on cloud infrastructure, scaling to accommodate listener demand, while HLS streaming ensures adaptability across devices and network conditions. The Great American Pastime combines Phi-2's precise game simulation, GPT-4o-mini's engaging commentary, LiquidSoap's seamless media streaming, and Antidote and EMMA's cutting-edge design and development expertise. Together, these components deliver a fully immersive experience that reimagines sports broadcasting through AI and modern web technologies. Project Page | Dan Moore. Please support this project by donating here. Support CreativeApplications.Net: Since 2008, CreativeApplications.Net (CAN) has been a driving force, tirelessly influencing and shaping the conversations surrounding technology, society, and critical making. With a community of 1600+ members and a vast library boasting over 3,500 meticulously curated projects, experiments, texts, and reviews, CAN stands resolute as an unparalleled resource for students, educators, practitioners, curators, and cultural producers. However, as we navigate the stark reality of independent publishing, we need your help! Please embark on this journey with us, extending your support by joining our community as a Member and contributing to our shared mission of propelling CAN forward. Posted in Member Submissions, WebApp. Tagged #artificial, #broadcast, #fiction, #generative, #Generative Ai, #gpt-4, #LiquidSoap, #machine learning, #member submission, #narrative, #network, #radio, #simulation, #Sound, #storytelling, #stream, #Vue.js. People: Dan Moore
Content Creation/Content Synthesis
Arts, Design, Entertainment, Sports, and Media
null
null
null
null
null
null
news
João Carrasqueira
Copilot+ is finally coming to Intel and AMD Windows 11 PCs as Snapdragon exclusivity ends
Copilot+ is finally breaking free of the shackles of Qualcomm with the latest Windows 11 beta
https://www.xda-developers.com/copilot-is-finally-coming-to-intel-and-amd-pcs-as-snapdragon-exclusivity-ends/
https://static1.xdaimage…pilot-plus-1.jpg
2024-12-06T19:04:18Z
After months of being exclusive to Qualcomm's Snapdragon X series of processors, the Copilot+ features on Windows 11 are now starting to become available to Intel and AMD-powered PCs. The latest Windows 11 build in the Dev channel brings all the existing Copilot+ experiences to laptops powered by Intel Core Ultra Series 2 processors and AMD Ryzen AI 300 HX chips. This latest release also comes with a new way to use Click To Do, which can be accessed anywhere while using your PC. It's been a long time coming: Copilot+ comes to Intel and AMD. It had been some time since Microsoft promised Copilot+ features would come to PCs with Intel and AMD processors by the end of the year, and today, it's finally happened. Initially, availability was planned for some time in November, but it was going to be after Microsoft made Recall available for Snapdragon PCs, which was planned for October at the time. Instead, Recall was only made available to Insiders with Snapdragon X PCs in late November, so the Intel and AMD launch was delayed as well. The availability of Copilot+ brings all the existing features to laptops with Intel Core Ultra Series 2 (Lunar Lake) and AMD Ryzen AI 300 HX (Strix Point) processors. All of these models were released from July 2024 onwards, meaning you still need to have a very recent computer in order to enjoy these features. Any computer launched before July will not be compatible with Copilot+ since they don't have an NPU with at least 40 TOPS of performance. In addition to Recall, which lets you go back to anything you were doing in the past and search your activity history with natural language, Copilot+ brings many other experiences to these PCs. This includes: advanced Windows Studio Effects, with new filters and a more aggressive eye contact feature; Cocreator in Paint, which allows you to doodle an image and ask Paint to convert it into a more complex image using a variety of styles; Image Creator and Restyle Image in Photos, allowing you to create completely new images or change the look of an existing image using only the power available locally on your device. A few features aren't mentioned in Microsoft's announcement, however. These include live captions with real-time translation, Super Resolution in the Photos app, and Generative fill in the Paint app. It's unclear if these were overlooked for the announcement or if they may only be rolled out later. New features for Click To Do: The last major feature that's part of Copilot+ is Click To Do, which is also becoming available to Intel and AMD PCs, following its recent release for Snapdragon. However, Click To Do is also getting a worthy upgrade of its own for all Windows Insiders in the Dev channel today, whether you have a Snapdragon, Intel, or AMD Copilot+ PC. With the latest update, you can now launch Click To Do from basically anywhere while using your PC, with a few different methods available. You can hold the Windows key and right-click the screen, press Windows key + Q, use the Snipping Tool, or search for "click to do" in the search bar. Either way, you no longer have to open Recall first and use Click To Do on a saved snapshot; you can just use it in real time whenever you need it. What's more, Click To Do now offers a couple of new text actions, those being the ability to summarize or rewrite a piece of text. This is the first use of the Phi Silica small language model Microsoft announced earlier this year, which runs entirely on device.
However, these new text actions are currently exclusive to Snapdragon PCs, with support for Intel and AMD PCs coming soon. All of this is rolling out with build 26120.2510 of Windows 11, which is now available in the Dev channel. The update also includes the updated design language for Windows Hello and other minor improvements. If you're already enrolled in the channel, you'll get the update automatically sooner rather than later. Related: Microsoft's Windows Insider Program can be confusing if you're new, so here's everything you need to know for testing the Windows 11 beta.
Content Synthesis/Content Creation
Unknown
null
null
null
null
null
null
news
Maulik Suchak
Natural Language Processing (NLP) for Voice-Controlled Frontend Applications: Architectures, Advancements, and Future Direction
Voice-controlled frontend applications have gained immense traction due to the rising popularity of smart devices, virtual assistants, and hands-free interfaces. Natural Language Processing (NLP) lies at the heart of these systems, enabling human-like understanding and generation of speech. This white paper presents an in-depth examination of NLP methodologies for voice-controlled frontend applications, reviewing the state-of-the-art in speech recognition, natural language understanding, and generation techniques, as well as their architectural integration into modern web frontends. It also discusses relevant use cases, technical challenges, ethical considerations, and emerging directions such as multimodal interaction and zero-shot learning. By synthesizing recent research, best practices, and open challenges, this paper aims to guide developers, researchers, and industry professionals in leveraging NLP for inclusive, responsive, and efficient voice-controlled frontend applications.IntroductionThe shift from traditional graphical interfaces to more natural, intuitive methods of human-computer interaction has accelerated over the past decade. Voice-controlled frontend applications — encompassing virtual assistants, voice-enabled search, and smart home interfaces — are at the forefront of this transformation. These applications promise hands-free, eyes-free interaction, dramatically expanding accessibility for users with disabilities and delivering more streamlined user experiences in scenarios where visual attention is limited (e.g., driving, cooking).
https://dzone.com/articles/natural-language-processing-nlp-for-voice-controll
https://dz2cdn1.dzone.co…097273-thumb.jpg
2024-12-23T21:00:00Z
Voice-controlled frontend applications have gained immense traction due to the rising popularity of smart devices, virtual assistants, and hands-free interfaces. Natural Language Processing (NLP) lies at the heart of these systems, enabling human-like understanding and generation of speech. This white paper presents an in-depth examination of NLP methodologies for voice-controlled frontend applications, reviewing the state-of-the-art in speech recognition, natural language understanding, and generation techniques, as well as their architectural integration into modern web frontends. It also discusses relevant use cases, technical challenges, ethical considerations, and emerging directions such as multimodal interaction and zero-shot learning. By synthesizing recent research, best practices, and open challenges, this paper aims to guide developers, researchers, and industry professionals in leveraging NLP for inclusive, responsive, and efficient voice-controlled frontend applications. Introduction: The shift from traditional graphical interfaces to more natural, intuitive methods of human-computer interaction has accelerated over the past decade. Voice-controlled frontend applications (encompassing virtual assistants, voice-enabled search, and smart home interfaces) are at the forefront of this transformation. These applications promise hands-free, eyes-free interaction, dramatically expanding accessibility for users with disabilities and delivering more streamlined user experiences in scenarios where visual attention is limited (e.g., driving, cooking). At the core of these voice-controlled systems lies Natural Language Processing (NLP), a multidisciplinary field combining linguistics, computer science, and artificial intelligence. NLP enables machines to interpret, understand, and generate human language. When integrated into frontend applications, NLP powers speech recognition, semantic understanding, and context-aware response generation, all crucial to building interfaces that feel human-like and intuitive. This paper provides a comprehensive analysis of NLP's role in voice-controlled front-end architectures. We explore foundational components, such as Automatic Speech Recognition (ASR), Natural Language Understanding (NLU), Natural Language Generation (NLG), and Text-to-Speech (TTS) synthesis. Beyond these fundamentals, we delve into advanced topics like large pre-trained language models, edge computing, and multilingual support. We discuss practical applications, such as accessibility tools, smart home controls, e-commerce platforms, and gaming interfaces. Furthermore, the paper highlights current challenges (such as scalability, bias in NLP models, and privacy) and surveys emerging research directions, including emotion recognition and zero-shot learning. By synthesizing existing literature, case studies, and best practices, we aim to offer a roadmap for the future development and deployment of NLP-based voice-controlled frontends. Key Components of Voice-Controlled Frontend Applications. Speech Recognition: The first step in any voice-controlled system is converting spoken language into text. Automatic Speech Recognition (ASR) models leverage deep learning architectures like Recurrent Neural Networks (RNNs), Long Short-Term Memory (LSTM) networks, and, more recently, transformer-based architectures.
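To ground this, here is a minimal example of transformer-based ASR using the Hugging Face pipeline API; the checkpoint is one common public wav2vec 2.0 model and the audio path is a placeholder, so treat it as a sketch rather than a production recipe:

from transformers import pipeline

# wav2vec 2.0 checkpoint fine-tuned for English transcription; any comparable ASR model works.
asr = pipeline("automatic-speech-recognition", model="facebook/wav2vec2-base-960h")

# Placeholder path to a 16 kHz mono recording of a spoken command.
result = asr("voice_command.wav")
print(result["text"])  # e.g. "TURN ON THE KITCHEN LIGHTS"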
These models are trained on large corpora of spoken language, enabling them to accurately transcribe input speech even in noisy environments. Modern APIs (e.g., Google Speech-to-Text, Amazon Transcribe, Microsoft Azure Speech) offer robust ASR capabilities, while open-source solutions like Kaldi and Wav2Vec 2.0 (Baevski et al., 2020) enable developers to train custom models. Challenges persist in handling domain-specific jargon, diverse accents, and low-resource languages. Contextual biasing and custom language models have emerged as solutions, allowing ASR systems to dynamically adapt to application-specific vocabularies and user-specific preferences. Natural Language Understanding (NLU): NLU transforms raw text into structured semantic representations that encapsulate user intent and context. Core NLU tasks include tokenization, part-of-speech tagging, named entity recognition (NER), intent classification, and sentiment analysis. Early NLU systems relied on handcrafted rules and statistical methods, but contemporary approaches often involve deep learning models fine-tuned on large pre-trained language models (e.g., BERT, Devlin et al., 2019). NLU frameworks like Rasa, Dialogflow, and spaCy simplify development by providing tools to classify user intents and extract entities. Maintaining context over multi-turn conversations remains a challenge, as does handling ambiguous or implied user requests. Techniques such as Transformer-based contextual encoders and memory-augmented architectures help preserve conversational context over extended dialogues. Natural Language Generation (NLG): NLG focuses on producing coherent, contextually relevant responses to user queries. With the rise of large language models such as GPT-3 (Brown et al., 2020) and GPT-4, generating human-like responses has become more achievable. These models can be fine-tuned for specific domains, ensuring that generated text aligns with the brand voice, domain constraints, and user expectations. Key challenges in NLG include producing factually correct outputs, avoiding repetitive or nonsensical responses, and maintaining a consistent persona. Recent research on controlled text generation enables more predictable, factual, and stylistically consistent responses. In voice-controlled frontends, NLG quality directly impacts the user experience, influencing trust and the perceived intelligence of the system. Speech Synthesis (Text-to-Speech, TTS): TTS converts textual responses into synthetic speech. Early systems used concatenative synthesis, while modern approaches rely on neural models like Tacotron 2 (Shen et al., 2018) and WaveNet (Oord et al., 2016) to produce more natural prosody and intonation. Advances in TTS allow for customization of voice attributes (e.g., pitch, speed, timbre) and multilingual capabilities. High-quality TTS enhances user engagement, accessibility, and the overall user experience. Ongoing challenges include emotional expressiveness, quick adaptation to new voices, and maintaining naturalness in code-switched dialogues. Technical Architecture for Voice-Controlled Frontends: Voice-controlled frontends typically employ a client-server model. The client interface, implemented in JavaScript or framework-specific code, captures audio input through browser APIs (e.g., the Web Speech API) and streams it to a backend service. The backend performs ASR, NLU, and NLG, and returns synthesized speech back to the client. Frontend Integration: The frontend layer uses modern web standards and APIs to handle audio input and output.
The Web Speech API in browsers like Chrome provides basic speech recognition and synthesis, enabling rapid prototyping. However, for production systems requiring higher accuracy or domain adaptation, the front end may rely on cloud-based APIs. Libraries such as Annyang simplify common tasks like voice command mapping, while custom JavaScript code can manage UI state in response to recognized commands. Performance considerations include managing latency, ensuring smooth audio capture, and handling network issues. On weaker devices, local processing may be limited, raising the need for cloud or edge-based strategies. Backend NLP Pipelines: The backend is where the heavy lifting occurs. When a voice input is received, the backend's pipeline typically involves: ASR (transcribe audio into text); NLU (classify intent and extract entities); business logic (query databases or APIs as needed); NLG (generate a suitable response text); and TTS (convert the response text into synthetic speech). These steps can be orchestrated using microservices or serverless functions, ensuring scalability and modularity. Cloud providers like AWS, Google Cloud, and Azure offer NLP services that integrate seamlessly with web applications. Containerization (Docker) and orchestration (Kubernetes) enable scaling services based on traffic patterns. Hybrid Architectures and Edge Computing: Relying solely on cloud services can introduce latency, privacy concerns, and dependency on network connectivity. Hybrid architectures, wherein some NLP tasks run on-device while others run in the cloud, improve responsiveness and protect user data. For instance, a frontend device could locally handle wake-word detection ("Hey Siri") and basic NLU tasks, while offloading complex queries to the cloud. Edge computing frameworks allow the deployment of lightweight NLP models on smartphones or IoT devices using libraries like TensorFlow Lite. This approach reduces round-trip time and can function offline, catering to scenarios like voice commands in low-connectivity environments (e.g., remote industrial settings and rural areas). Applications of NLP in Voice-Controlled Frontends. Accessibility: Voice-controlled frontends significantly improve accessibility for users with visual impairments, motor disabilities, or cognitive challenges. Conversational interfaces reduce reliance on complex GUIs. For instance, voice-enabled navigation on news websites, educational portals, or workplace tools can empower individuals who struggle with traditional input methods. Research from the World Wide Web Consortium (W3C) and A11Y communities highlights how inclusive voice interfaces support independent living, learning, and employment. Smart Homes and IoT: Smart home adoption is accelerating, and NLP-driven voice controls are integral to this growth. Users can command lights, thermostats, and security systems through natural language instructions. Virtual assistants (Alexa, Google Assistant, Apple Siri) integrate seamlessly with third-party devices, offering a unified voice interface for a broad ecosystem. Recent research explores adaptive language models that learn user preferences over time, providing proactive suggestions and energy-saving recommendations. E-Commerce and Customer Support: Voice-enabled e-commerce platforms offer hands-free shopping experiences. Users can search for products, check order statuses, and reorder items using voice commands. Integrations with recommendation systems and NLU-driven chatbots enable personalized product suggestions and simplified checkout processes.
Studies have shown improved customer satisfaction and reduced friction in conversational commerce experiences. Voice-enabled customer support systems, integrated with NLU backends, can handle frequently asked questions, guide users through troubleshooting steps, and escalate complex issues to human agents. The result is improved operational efficiency, reduced wait times, and a more user-friendly support experience. Gaming and Entertainment: Voice control in gaming offers immersive, hands-free interactions. Players can issue commands, navigate menus, and interact with non-player characters through speech. This enhances realism and accessibility. Similarly, entertainment platforms (e.g., streaming services) allow voice navigation for selecting shows, adjusting volume, or searching content across languages. The synergy of NLP and 3D interfaces in AR/VR environments promises even more engaging and intuitive experiences. Challenges and Limitations: Despite the progress in NLP-driven voice frontends, several challenges persist. Language Diversity and Multilingual Support: Most NLP models are predominantly trained on high-resource languages (English, Mandarin, Spanish), leaving many languages and dialects underserved. Low-resource languages, characterized by limited annotated data, present difficulty for both ASR and NLU. Research on transfer learning, multilingual BERT-based models (Pires et al., 2019), and unsupervised pre-training aims to extend coverage to a wider range of languages. Solutions like building language-agnostic sentence embeddings and leveraging cross-lingual transfer techniques hold promise for truly global, inclusive voice interfaces. Contextual Understanding and Memory: Maintaining conversation context is non-trivial. Users expect the system to remember previous turns, references, and implied information. Sophisticated approaches such as Transformer models with attention mechanisms help track dialogue history. Dialogue state tracking and knowledge-grounded conversation models (Dinan et al., 2019) enable more coherent multi-turn conversations. However, achieving human-level contextual reasoning remains an open research problem. Privacy and Security: Voice data is sensitive. Continuous listening devices raise concerns about data misuse, unauthorized access, and user profiling. Developers must ensure strong encryption, consent-based data collection, and clear privacy policies. Privacy-preserving machine learning (differential privacy, federated learning) allows on-device model updates without sending raw voice data to the cloud. Regulatory frameworks like GDPR and CPRA push for transparent handling of user data. Scalability and Performance: Voice-controlled frontends must handle potentially millions of concurrent requests. Scaling NLP services cost-effectively demands efficient load balancing, caching strategies for frequently accessed data, and model optimization techniques (quantization, pruning, distillation) to accelerate inference. Techniques such as GPU acceleration, model parallelism, and distributed training help manage computational overhead. Advancements and Opportunities. Pre-Trained Language Models and Fine-Tuning: The advent of large pre-trained models like BERT, GPT-3/4, and T5 has revolutionized NLP. These models, trained on massive corpora, have strong generalization capabilities. For voice applications, fine-tuning these models for domain-specific tasks (such as specialized medical vocabularies or technical support dialogues) improves understanding and response quality.
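As a hedged sketch of such domain-specific fine-tuning, here is a minimal intent-classification setup built on a pre-trained encoder with the Hugging Face Trainer; the tiny inline dataset and label set are invented purely for illustration:

from datasets import Dataset
from transformers import (AutoTokenizer, AutoModelForSequenceClassification,
                          Trainer, TrainingArguments)

# Invented toy data covering three voice-assistant intents.
examples = {
    "text": ["turn on the lights", "play some jazz", "set a timer for ten minutes",
             "switch the lamp on", "put on my workout playlist", "timer for 5 minutes"],
    "label": [0, 1, 2, 0, 1, 2],  # 0 = lights_on, 1 = play_music, 2 = set_timer
}

tokenizer = AutoTokenizer.from_pretrained("distilbert-base-uncased")
model = AutoModelForSequenceClassification.from_pretrained("distilbert-base-uncased", num_labels=3)

dataset = Dataset.from_dict(examples).map(
    lambda batch: tokenizer(batch["text"], truncation=True, padding="max_length", max_length=32),
    batched=True,
)

trainer = Trainer(
    model=model,
    args=TrainingArguments(output_dir="intent-model", num_train_epochs=3,
                           per_device_train_batch_size=4, logging_steps=1),
    train_dataset=dataset,
)
trainer.train()

A real deployment would use a much larger labelled corpus, a held-out evaluation split, and whatever domain vocabulary the application actually requires.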
OpenAI's GPT-4, for example, can reason more accurately over complex instructions, enhancing both NLU and NLG for voice interfaces.

Edge Computing and On-Device NLP
Running NLP models directly on devices offers latency reductions, offline functionality, and improved privacy. Accelerators like Google's Coral or Apple's Neural Engine support efficient inference at the edge. Research focuses on compression and optimization techniques (mobileBERT, DistilBERT) to shrink model sizes without significantly degrading accuracy; a short on-device inference sketch appears after the healthcare case study below. This approach enables personalized voice experiences that adapt to the user's environment and context in real time.

Multimodal Interaction
Future voice interfaces will not rely solely on audio input. Combining speech with visual cues (e.g., AR overlays), haptic feedback, or gesture recognition can create richer, more intuitive interfaces. Multimodal NLP (Baltrušaitis et al., 2019) merges language understanding with vision and other sensory data, allowing systems to ground commands in the physical world. This synergy can improve disambiguation, accessibility, and situational awareness.

Personalization and User Modeling
Incorporating user-specific preferences, interaction history, and context is a key frontier. Reinforcement learning-based approaches can optimize dialogue strategies based on user feedback. Adaptive language models, trained incrementally on user data (with privacy safeguards), can refine vocabulary, style, and responses. Such personalization leads to more satisfying experiences, reduces friction, and encourages sustained engagement.

Ethical Considerations

Bias and Fairness
Large language models trained on web-scale data inherit societal biases present in the data. This leads to potential unfair treatment or exclusion of certain demographic groups. Voice-controlled systems must mitigate biases by curating training corpora, applying bias detection algorithms, and conducting thorough bias and fairness audits. Academic and industry efforts, including the Partnership on AI's fairness guidelines, aim to develop standardized benchmarks and best practices.

Transparency and Explainability
Users should understand how voice-controlled systems make decisions. Explainable NLP techniques help surface system reasoning processes, indicating which parts of a query influenced a particular response. While neural models often function as black boxes, research on attention visualization and interpretable embeddings attempts to shed light on model decisions. Regulatory bodies may require such transparency for compliance and user trust.

User Consent and Data Governance
Users must be informed about how their voice data is collected, stored, and used. Applications should provide opt-in mechanisms, allow data deletion, and offer clear privacy statements. Data governance frameworks must align with local regulations, ensure secure data handling, and minimize the risk of data breaches or unauthorized surveillance.

Case Studies

Voice Assistants in Healthcare
In healthcare settings, voice-controlled interfaces facilitate patient triage, symptom checks, and medication reminders. For example, conversational agents integrated with Electronic Health Record (EHR) systems can assist clinicians in retrieving patient data hands-free, improving workflow efficiency and reducing administrative burden.
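On-device inference, discussed earlier, is especially attractive in settings like healthcare where raw audio should ideally never leave the device. The following hedged sketch shows what minimal TensorFlow Lite inference against an already-converted intent model could look like; the model file name, input encoding, and label set are assumptions for illustration, not part of any particular product.

```python
import numpy as np
import tensorflow as tf

# Load a (hypothetical) quantized intent-classification model converted to TFLite.
interpreter = tf.lite.Interpreter(model_path="intent_classifier.tflite")
interpreter.allocate_tensors()

input_details = interpreter.get_input_details()
output_details = interpreter.get_output_details()

# The input encoding is model-specific; here we assume a fixed-length sequence
# of token ids padded with zeros (an assumption made purely for illustration).
token_ids = np.zeros(input_details[0]["shape"], dtype=input_details[0]["dtype"])

interpreter.set_tensor(input_details[0]["index"], token_ids)
interpreter.invoke()

scores = interpreter.get_tensor(output_details[0]["index"])
labels = ["smart_home", "commerce", "banking"]  # hypothetical label set
print(labels[int(np.argmax(scores))])
```

Because the interpreter runs entirely locally, this pattern keeps transcripts and audio on the device, trading some accuracy (from quantization and smaller models) for latency, offline operation, and privacy.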
Studies (Shickel et al., 2018) show that voice interfaces can enhance patient engagement and adherence to treatment plans, though privacy and data compliance (HIPAA) remain critical.

Voice Commerce
Retailers integrate voice search and ordering capabilities to reduce friction in the shopping experience. For instance, Walmart's voice-shopping feature allows users to add items to their carts by simply stating product names. Research indicates that streamlined voice interactions can improve conversion rates and user satisfaction, especially when paired with recommendation engines that leverage NLU to comprehend user preferences.

Smart Cities
Voice-controlled kiosks, public information systems, and transportation hubs can guide citizens and visitors through unfamiliar environments. Tourists might ask for restaurant recommendations, bus schedules, or directions to landmarks. Combining NLP with geospatial data and public APIs fosters intuitive, inclusive urban experiences. Pilot projects in cities like Seoul and Barcelona have explored voice-enabled access to public services, improving accessibility for non-technical populations.

Future Directions

Low-Resource Languages and Code-Switching
Developing robust NLP solutions for languages with scarce training data remains a pressing challenge. Transfer learning, multilingual embeddings, and unsupervised pre-training on unlabeled text corpora aim to bridge this gap. Code-switching, when speakers alternate between languages within a single conversation, further complicates the NLP pipeline. Research on code-switching corpora and models is critical for voice applications in linguistically diverse regions.

Emotion and Sentiment Recognition
Detecting user emotions can enable more empathetic and context-sensitive responses. Emotion recognition in speech (Schuller et al., 2018) involves analyzing prosody, pitch, and energy, while sentiment analysis on textual transcriptions provides additional cues. Emotion-aware interfaces could, for example, adjust their tone or offer calming responses in stressful situations (e.g., technical support sessions).

Real-Time Multilingual NLP
As global connectivity increases, real-time multilingual NLP could allow seamless communication between speakers of different languages. Advances in neural machine translation, combined with on-the-fly ASR and TTS, enable voice interfaces to serve as universal translators. This capability can foster cross-cultural collaboration and improve accessibility in international contexts.

Zero-Shot and Few-Shot Learning
Zero-shot learning allows models to handle tasks without direct training examples. In voice applications, zero-shot NLU could interpret novel commands or domain-specific requests without prior fine-tuning. Few-shot learning reduces the amount of annotated data needed to adapt models to new domains. These paradigms promise more agile development cycles, lowering barriers for custom voice interfaces.

Conclusion
Natural Language Processing forms the bedrock of voice-controlled frontend applications, empowering more natural, inclusive, and intuitive human-computer interactions. Advances in ASR, NLU, NLG, and TTS, combined with scalable architectures, have made it possible to deploy voice interfaces across diverse domains ranging from smart homes and healthcare to e-commerce and urban services. The journey is far from complete. Ongoing research addresses challenges in handling language diversity, maintaining conversational context, ensuring user privacy, and scaling NLP systems efficiently.
Ethical considerations, such as bias mitigation and explainability, remain paramount as these technologies become increasingly pervasive in daily life. Looking ahead, innovations in edge computing, multimodal interaction, and personalization will further enhance the capabilities and reach of voice-controlled frontends. Zero-shot learning and real-time multilingual NLP will break down language barriers, and emotion recognition will lead to more empathetic and user-centric experiences. By continuing to invest in research, responsible development, and inclusive design principles, we can realize the full potential of NLP for voice-controlled front-end applications, ultimately making digital services more accessible, natural, and empowering for everyone.

References
Baevski, A., Zhou, Y., Mohamed, A., & Auli, M. (2020). wav2vec 2.0: A Framework for Self-Supervised Learning of Speech Representations. Advances in Neural Information Processing Systems (NeurIPS).
Baltrušaitis, T., Ahuja, C., & Morency, L.-P. (2019). Multimodal Machine Learning: A Survey and Taxonomy. IEEE Transactions on Pattern Analysis and Machine Intelligence, 41(2), 423-443.
Brown, T., Mann, B., Ryder, N., et al. (2020). Language Models are Few-Shot Learners. Advances in Neural Information Processing Systems (NeurIPS).
Devlin, J., Chang, M.-W., Lee, K., & Toutanova, K. (2019). BERT: Pre-training of Deep Bidirectional Transformers for Language Understanding. Proceedings of NAACL-HLT.
Dinan, E., Roller, S., Shuster, K., et al. (2019). Wizard of Wikipedia: Knowledge-Powered Conversational Agents. International Conference on Learning Representations (ICLR).
Oord, A. v. d., Dieleman, S., Zen, H., et al. (2016). WaveNet: A Generative Model for Raw Audio. arXiv:1609.03499.
Pires, T., Schlinger, E., & Garrette, D. (2019). How Multilingual is Multilingual BERT? Proceedings of the 57th Annual Meeting of the Association for Computational Linguistics.
Schuller, B., Batliner, A., Steidl, S., & Seppi, D. (2018). Recognising Realistic Emotions and Affect in Speech: State of the Art and Lessons Learnt from the First Challenge. Speech Communication, 53(9-10), 1062-1087.
Shen, J., Pang, R., Weiss, R. J., et al. (2018). Natural TTS Synthesis by Conditioning WaveNet on Mel Spectrogram Predictions. ICASSP 2018.
Shickel, B., Tighe, P. J., Bihorac, A., & Rashidi, P. (2018). Deep EHR: A Survey of Recent Advances in Deep Learning Techniques for Electronic Health Record (EHR) Analysis. IEEE Journal of Biomedical and Health Informatics, 22(5), 1589-1604.
Vaswani, A., Shazeer, N., Parmar, N., et al. (2017). Attention Is All You Need. Advances in Neural Information Processing Systems (NeurIPS).
World Wide Web Consortium (W3C). (n.d.). Web Accessibility Initiative (WAI). [Online].
Digital Assistance/Content Synthesis
Unknown
null
null
null
null
null
null
news
Matt Milano
Microsoft Announces Phi-4 Small Language AI Model
Microsoft announced Phi-4, its latest small language AI model, saying it "excels at complex reasoning" and "offers high quality results at a small size."
https://www.webpronews.com/microsoft-announces-phi-4-small-language-ai-model/
https://www.webpronews.c…Headquarters.jpg
2024-12-16T13:00:00Z
Microsoft announced Phi-4, its latest small language AI model, saying it "excels at complex reasoning" and "offers high quality results at a small size." Despite Microsoft's deal with OpenAI, the company has been releasing its own AI models. Phi-4 is the latest, coming in at 14 billion parameters. For comparison, ChatGPT has 1 trillion parameters, while Microsoft's MAI-1 has 500 billion. Despite Phi-4's small size, Microsoft says the new model "continues to push the frontier of size vs quality." Today we are introducing Phi-4, our 14B parameter state-of-the-art small language model (SLM) that excels at complex reasoning in areas such as math, in addition to conventional language processing. Phi-4 is the latest member of our Phi family of small language models and demonstrates what's possible as we continue to probe the boundaries of SLMs. Phi-4 is currently available on Azure AI Foundry under a Microsoft Research License Agreement (MSRLA) and will be available on Hugging Face next week. Microsoft says the new model blurs the performance line with much larger models. Phi-4 outperforms comparable and larger models on math-related reasoning due to advancements throughout the processes, including the use of high-quality synthetic datasets, curation of high-quality organic data, and post-training innovations. Phi-4 continues to push the frontier of size vs quality. The company emphasizes its focus on responsible AI development, giving users access to Azure AI Content Safety features. Building AI solutions responsibly is at the core of AI development at Microsoft. We have made our robust responsible AI capabilities available to customers building with Phi models, including Phi-3.5-mini optimized for Windows Copilot+ PCs. Azure AI Foundry provides users with a robust set of capabilities to help organizations measure, mitigate, and manage AI risks across the AI development lifecycle for traditional machine learning and generative AI applications. Azure AI evaluations in AI Foundry enable developers to iteratively assess the quality and safety of models and applications using built-in and custom metrics to inform mitigations. Additionally, Phi users can use Azure AI Content Safety features such as prompt shields, protected material detection, and groundedness detection. These capabilities can be leveraged as content filters with any language model included in our model catalog, and developers can integrate these capabilities into their application easily through a single API. Once in production, developers can monitor their application for quality and safety, adversarial prompt attacks, and data integrity, making timely interventions with the help of real-time alerts. Microsoft is clearly continuing its efforts to develop AI models independent of OpenAI, and Phi-4 is the latest evidence of its efforts.
Unknown
Unknown
null
null
null
null
null
null
news
TechNode Feed
Former Microsoft and Alibaba veteran Hu Yunhua joins Zhipu AI as head of ChatGLM
Former Microsoft Research Asia (MSRA) researcher and senior technical expert at Alibaba DAMO Academy, Hu Yunhua, has joined AI unicorn Zhipu AI, according to Chinese media outlet 36Kr. Hu will lead the company’s consumer-facing AI conversation application, ChatGLM. Hu began his career at MSRA as an intern in 2003 and became a researcher in 2007. […]
http://technode.com/2024/12/26/former-microsoft-and-alibaba-veteran-hu-yunhua-joins-zhipu-ai-as-head-of-chatglm/
https://technode.com/wp-…20-下午3.50.24.png
2024-12-26T09:23:18Z
Former Microsoft Research Asia (MSRA) researcher and senior technical expert at Alibaba DAMO Academy, Hu Yunhua, has joined AI unicorn Zhipu AI, according to Chinese media outlet 36Kr. Hu will lead the company's consumer-facing AI conversation application, ChatGLM. Hu began his career at MSRA as an intern in 2003 and became a researcher in 2007. In 2012, he joined Alibaba, where he was instrumental in developing recommendation algorithms for shopping intent mining and optimization. After leaving Alibaba in 2014, he founded Smartdot, an AI company that raised four funding rounds in its first year and focused on e-commerce solutions such as intelligent customer service. ChatGLM has attracted over 25 million users and generates annual revenue of more than RMB 10 million ($1.4 million). Yet despite its early success, the platform now faces intensified competition as major tech firms enter the conversational AI market. [36Kr, in Chinese]
Digital Assistance/Content Creation
Unknown
null
null
null
null
null
null
news
Nomad76
(PR) Axelera AI Partners with Arduino for Edge AI Solutions
Axelera AI - a leading edge-inference company - and Arduino, the global leader in open-source hardware and software, today announced a strategic partnership to make high-performance AI at the edge more accessible than ever, building advanced technology solutions based on inference and an open ecosystem. This furthers Axelera AI's strategy to democratize artificial intelligence everywhere.The collaboration will combine the strengths of Axelera AI's Metis AI Platform with the powerful SOMs from the Arduino Pro range to provide customers with easy-to-use hardware and software to innovate around AI. Users will enjoy the freedom to dictate their own AI journey, thanks to tools that provide unique digital in-memory computing and RISC-V controlled dataflow technology, delivering high performance and usability at a fraction of the cost and power of other solutions available today. Read full story
https://www.techpowerup.com/330001/axelera-ai-partners-with-arduino-for-edge-ai-solutions
https://www.techpowerup.…AatKUoIBX60R.jpg
2024-12-18T15:59:08Z
Axelera AI - a leading edge-inference company - and Arduino, the global leader in open-source hardware and software, today announced a strategic partnership to make high-performance AI at the edge more accessible than ever, building advanced technology solutions based on inference and an open ecosystem. This furthers Axelera AI's strategy to democratize artificial intelligence everywhere.The collaboration will combine the strengths of Axelera AI's Metis AI Platform with the powerful SOMs from the Arduino Pro range to provide customers with easy-to-use hardware and software to innovate around AI. Users will enjoy the freedom to dictate their own AI journey, thanks to tools that provide unique digital in-memory computing and RISC-V controlled dataflow technology, delivering high performance and usability at a fraction of the cost and power of other solutions available today.Showcasing AI-Powered Industrial Monitoring at CES 2025The first outcome of this partnership will be an innovative solution that will integrate AI technology and the Portenta X8, running an offline pre-trained LLM (Phi-3) to power an AI-driven industrial monitoring system capable of processing and analyzing sensor data (temperature, humidity, air quality, CO2, etc.) from different locations and manufacturing machines, to identify trends and potential issues in real time: a demo will be showcased live at CES, January 7-10, 2025 in Las Vegas, NV at the EIC Pavillion (Tech West, Venetian Expo, Level 1, Hall G - Booth #60633 - Eureka Park).Investing in AI with an Open ApproachAs we approach 2025, investments in AI are only expected to increase. IDC recently forecasted that worldwide spending on technology to support AI strategies will reach $337 billion in the upcoming year. The estimate is likely to more than double to $749 billion by 2028 as organizations explore new and innovative use cases such as generative AI and agentic AI - an emerging trend, expected to pick up steam in 2025, that employs autonomous machine "agents" that move beyond query-and-response generative chatbots to accomplish enterprise-level tasks without human guidance. The open approach that Axelera AI and Arduino are taking to bring high-performance inference solutions to the edge will enable these and many other use cases globally.Extending the Opportunities for Open-Source InnovationArduino's open-source solutions are trusted by many Fortune 500 companies and extensively used by makers, educators, and engineers all over the world, aiming to discover what comes next in their fields of interest. This partnership allows a 33-million-strong community of creative thinkers already working with Arduino to have access to Axelera AI's innovative AI acceleration platform, to transform AI technologies and unlock new possibilities for AI that have yet to be realized."Axelera AI was founded with a vision to democratize AI by offering fast, easy-to-access AI technology that minimizes power consumption and cost," said Fabrizio Del Maffeo, CEO and Co-Founder at Axelera AI. "This is critical for companies to realize true AI innovation. Another important step to democratizing AI is building an ecosystem of collaborators all striving to achieve that same goal. 
We cannot wait to see what the Arduino community will create through this partnership."A New Vision for AI Accessibility, Based on CollaborationThe partnership is an important step toward developing an AI ecosystem of partners all aligned on the vision to reduce barriers around the most advanced and powerful technologies today. To realize this multi-billion-dollar opportunity requires skillsets from a wide range of disciplines, including builders, enablers, co-creators, and developers. This ecosystem is essential to harness the true power of AI, and to enable the AI innovation of tomorrow, unlocking AI imagination across key vertical markets such as automotive, digital healthcare, high performance computing, Industry 4.0, retail, robots & drones, visual security and more."This partnership exemplifies Arduino's mission to enable innovators with powerful, accessible tools that bridge creativity and real-world problem-solving," said Fabio Violante, CEO at Arduino. "By working with Axelera AI, we're providing developers and businesses the means to integrate advanced AI capabilities into their projects, opening the door to groundbreaking innovations across industries. It's exciting to see how this collaboration will empower our community and beyond to turn bold ideas into impactful solutions."
Detection and Monitoring/Prediction/Content Synthesis
Computer and Mathematical/Life, Physical, and Social Science/Production
null
null
null
null
null
null
news
Akash Dutta, David Delima
Microsoft Phi-4 Open-Source Small Language Model Introduced; Claimed to Outperform Gemini 1.5 Pro
Microsoft released the Phi-4 artificial intelligence (AI) model on Friday. The small language model (SLM) joins the company’s open-source Phi family of foundational models. The next generation of the AI model comes eight months after the release of Phi-3 and four months after the introduction of the Phi-3.5 series of AI models.
https://www.gadgets360.com/ai/news/microsoft-phi-4-model-launch-beats-gemini-1-5-pro-7240177
https://i.gadgets360cdn.…734088544753.jpg
2024-12-13T11:56:22Z
Microsoft on Friday released its Phi-4 artificial intelligence (AI) model. The company's latest small language model (SLM) joins its open-source Phi family of foundational models. The AI model comes eight months after the release of Phi-3 and four months after the introduction of the Phi-3.5 series of AI models. The tech giant claims that the SLM is more capable of solving complex reasoning-based queries in areas such as mathematics. Additionally, it is also said to excel in conventional language processing.So far, every Phi series has been launched with a mini variant, however, no mini model accompanied the Phi-4 model. Microsoft, in a blog post, highlighted that Phi-4 is currently available on Azure AI Foundry under a Microsoft Research Licence Agreement (MSRLA). The company plans to make it available on Hugging Face next week as well.The company also shared benchmark scores from its internal testing. Based on these, the new AI model significantly upgrades the capabilities of the older generation model. The tech giant claimed that Phi-4 outperforms Gemini Pro 1.5, a much larger model, on the math competition problems benchmark. It also released a detailed benchmark performance in a technical paper published in the online journal arXiv.On safety, Microsoft stated that the Azure AI Foundry comes with a set of capabilities to help organisations measure, mitigate, and manage AI risks across the development lifecycle for traditional machine learning and generative AI applications. Additionally, enterprise users can use Azure AI Content Safety features such as prompt shields, groundedness detection and others as a content filter.Developers can also add these safety capabilities into their applications via a single application programming interface (API). The platform can monitor applications for quality and safety, adversarial prompt attacks, and data integrity and provide developers with real-time alerts. This will be available to those Phi users who access it via Azure.Notably, smaller language models are often being trained after deployment on synthetic data, allowing them to quickly gain more knowledge and higher efficiency. However, post-training results are not always consistent in real-world use cases.
Decision Making/Content Synthesis
Computer and Mathematical/Life, Physical, and Social Science/Education, Training, and Library
null
null
null
null
null
null
news
Sead Fadilpašić
Claude AI and other systems could be vulnerable to worrying command prompt injection attacks
With prompt injections, crooks could get AI to write, compile, and run, malware.
https://www.techradar.com/pro/security/claude-ai-and-other-systems-could-be-vulnerable-to-worrying-command-prompt-injection-attacks
https://cdn.mos.cms.futu…t78k-1200-80.jpg
2024-12-10T22:46:00Z
Security researchers tricked Anthropic's Claude Computer Use to download and run malwareThey say that other AI tools could be tricked with prompt injection, tooGenAI can be tricked to write, compile, and run malware, as wellIn mid-October 2024, Anthropic released Claude Computer Use, an Artificial Intelligence (AI) model allowing Claude to control a device - and researchers have already found a way to abuse it.Cybersecurity researcher Johann Rehnberger recently described how he was able to abuse Computer Use and get the AI to download and run malware, as well as get it to communicate with its C2 infrastructure, all through prompts.While it sounds devastating, there are a few things worth mentioning here: Claude Computer Use is still in beta, and the company did leave a disclaimer saying that Computer Use might not always behave as intended: “We suggest taking precautions to isolate Claude from sensitive data and actions to avoid risks related to prompt injection.” Another thing worth noting is that this is a prompt injection attack, fairly common against AI tools."Countless ways" to abuse AIRehnberger calls his exploit ZombAIs, and says he was able to get the tool to download Sliver, a legitimate open source command-and-control (C2) framework developed by BishopFox for red teaming and penetration testing, but it is often misused by cybercriminals as malware.Threat actors use Sliver to establish persistent access to compromised systems, execute commands, and manage attacks in a similar way to other C2 frameworks like Cobalt Strike.Rehnberger also stressed that this is not the only way to abuse generative AI tools, and compromise endpoints via prompt injection.“There are countless others, like another way is to have Claude write the malware from scratch and compile it,” he said. “Yes, it can write C code, compile and run it.”Sign up to the TechRadar Pro newsletter to get all the top news, opinion, features and guidance your business needs to succeed!By submitting your information you agree to the Terms & Conditions and Privacy Policy and are aged 16 or over.“There are many other options.”In its writeup, The Hacker News added DeepSeek AI chatbot was also found vulnerable to a prompt injection attack that could allow threat actors to take over victim computers. Furthermore, Large Language Models (LLM) can output ANSI escape code, which can be used to hijack system terminals via prompt injection, in an attack dubbed Terminal DiLLMa.You might also like
Unknown
Computer and Mathematical
null
null
null
null
null
null
news
Mohita Narang
Interactive Conversations with PDFs Using Langchain
Unlock the future of document interaction with LangChain, where AI transforms PDFs into dynamic, conversational experiences.
https://www.digitalocean.com/community/tutorials/chatting-with-pdfs-using-langchain
null
2024-12-24T06:47:27Z
Gone are the days when interacting with PDFs was cumbersome and time-consuming. Users had to open the documents manually using software like Adobe Reader and read through the entire document or use essential search functions to find specific information. But now chatting with an AI assistant is simple with the integration of LangChain. Users can upload PDFs to a LangChain enabled LLM application and receive accurate answers within seconds, through a process called Optical character recognition (OCR).This benefits businesses requiring customized interaction with company policies, documents, or reports. It can even help researchers and students to identify the important parts and avoid reading the whole book or research paper.Introduction to PDF Analyser1.Document Handling and Embeddings: Load the PDF document using a suitable loader like PyPDFLoader. Clean and structure the text data (removing headers/footers, handling special characters, and segmenting text into paragraphs or sections). It could also involve tokenization (breaking text into words), stemming/lemmatization (reducing words to their root form), or stop word removal (eliminating common words like "the ‘’ or "a ‘’) at this step.2.Vector Embeddings and Retrieval: This involves creating vector representations of the text chunks extracted from the PDFs. These vectors capture the semantic meaning and relationships between words. The chatbot generates a vector for the query and compares it with the stored document vectors during user queries. Documents with the closest vector match are considered the most relevant for the user’s question. Libraries like Gensim or Faiss can be used for this approach.3.Language Generation Pipeline: Set up the language generation pipeline using AutoModelForSeq2SeqLM and AutoTokenizer.4.Create a Retriever and Integrate with LLM: Develop a custom retriever using RetrievalQA to fetch relevant documents based on queries.5.Querying the Chain: Test the system by querying the chain with questions related to the PDF content.PrerequisitesPython Environment: Install Python 3.7+ and set up a virtual environment.LangChain Library: Install LangChain (pip install langchain).Text Extractor: Install a library for PDF text extraction like PyPDF2 or pdfplumber.Vector Database: Install and configure a vector database (e.g., FAISS, Pinecone) for embeddings.Interacting with PDFs TodayNow, it’s very easy to understand the contents of PDFs. Just upload the PDF to the LLM application and ask questions about the content in the PDF. It’s the same as chatting with ChatGPT, but users can upload the PDFs directly.Customized Document HandlingBusinesses can now customize the document handling system for more precise interactions with company documents, policies, or reports in PDF format. A vast repository of PDF documents can be prepared for employees, and LLMs can be trained on it. Users can simply upload the document and ask questions in plain language, “What are the company’s sick leave policies?”, or the sales team can quickly query up-to-date technical specifications or product information from PDF catalogs or manuals.Dynamic Content RetrievalRAG (Retrieval Augmented Generation) techniques can incorporate external data in real-time. This means businesses with LLM powered applications can access the most current information from the company database using RAG techniques. This ensures that the generated responses are current and can help decision-making. Imagine a sales team asking about a product’s availability. 
To provide the latest stock status, the LLM not only retrieves information from the PDF manual but also accesses the company’s inventory database.Secure and Efficient Information ExtractionConfidentiality is very important in sectors like financial services and legal services. LLMs can maintain privacy and security by providing information from sensitive PDF documents without exposing the entire context, ensuring only authorized information is accessed.Application in Knowledge BasesAs new policies or procedures are uploaded as PDFs, an LLM can scan and extract information from PDFs and update the knowledge base by updating FAQs accordingly. LangChain has built in functional integrations with popular storage solutions like Redis.Improved Customer SatisfactionCustomers can get personalized interaction and quick access to relevant information by integrating PDF interaction chatbots. For example, a chatbot can guide customers in assembling a piece of furniture from IKEA. It can provide step-by-step instructions by referring to the PDF user manual and ensure a smooth customer shopping experience.We have tried a PDF interaction demo using Langchain below. But why use Langchain?Lanchain offers pre-built components like retrieval systems, document loaders, and LLM integration tools. LangChain components have already been tested to ensure effective working with documents and alarms. It improves the overall efficiency of the development process and reduces the risk of errors.Demo CodeThis demo used a pre-trained hugging face model, ‘flan-t5-large’. Other open-source models, like FastChatT5 3b Model and Falcon 7b Model, can also be used for this. Start the Gradient Notebook by choosing the GPU and cloning the repository. This repository did not have requirements.txt, so the dependencies were installed separately.Model and Document LoadingEmbedding_Model = "hkunlp/instructor-xl"LLM_Model = "google/flan-t5-large"from langchain_community.document_loaders import DirectoryLoaderfrom langchain_community.document_loaders import PyPDFLoaderfrom langchain.document_loaders import PyPDFLoaderloader = PyPDFLoader("/content/langchain/qna_from_pdf_file_with_citation_using_local_Model/pdf_files/TRANSFORMERS.pdf") #path to PDF documentdocuments = loader.load_and_split()‘hkunlp/instructor-xl’ is the Embedding_Model, and ‘google/flan-t5-large’ is used as LLM_Model defines pre-trained models for text embedding and language generation, respectively. This pre-trained model is taken from HuggingFace.A PyPDFLoader loads the PDF file by giving the path to the PDF document. Here, only one PDF document is loaded. Multiple PDF documents can be loaded into the folder, and a path to the folder can also be given.The load_and_split method of the loader reads and splits the PDF content into individual sections or documents for processing.Testing the Embeddings Mechanismfrom langchain_community.embeddings import HuggingFaceInstructEmbeddingsinstructor_embeddings = HuggingFaceInstructEmbeddings(model_name=Embedding_Model)text = "This is a test document."query_result = instructor_embeddings.embed_query(text)Testing the embedding generation process is common practice before integrating it into a larger system, such as a question-answering system that processes PDF documents. With the selected embedding model, an instance of HuggingFaceInstructEmbeddings is created.3. 
Language Generation Pipelineimport torchimport transformersfrom transformers import AutoTokenizer, AutoModelForSeq2SeqLMfrom transformers import pipelinefrom langchain.llms import HuggingFacePipelinetokenizer = AutoTokenizer.from_pretrained(LLM_Model)model = AutoModelForSeq2SeqLM.from_pretrained(LLM_Model, torch_dtype=torch.float32)pipe = pipeline( "text2text-generation", model=model, tokenizer=tokenizer, max_length=512, temperature=0, top_p=0.95, repetition_penalty=1.15)llm = HuggingFacePipeline(pipeline=pipe)AutoTokenizer.from_pretrained(LLM_Model)-This convert text into a format that the model can understand.AutoModelForSeq2SeqLM.from_pretrained(LLM_Model, torch_dtype=torch.float32): This line of code is likely used in a Python script employing the Transformers library for Natural Language Processing (NLP) tasks.AutoModelForSeq2SeqLM: This part of the code refers to a pre-trained model architecture specifically designed for sequence-to-sequence learning tasks. It’s used for tasks like machine translation, summarization, and text generation..from_pretrained(LLM_Model): This section loads a pre-trained LLM (Large Language Model) from the transformers library’s model hub.torch_dtype=torch.float32:  torch.float32 indicates that the model will use 32-bit floating-point precision.pipe = pipeline: Creates a text-to-text generation pipeline for generating text with the model.Parameters for the pipeline:model, tokenizer: Specify the model and tokenizer to use.max_length: Limits the maximum length of the generated text to 512 tokens.temperature (0): Controls randomness in generation (0 means deterministic).top_p (0.95): Filters potential next tokens for more likely responses.repetition_penalty (1.15): Discourages repetitive text generation.4. Create a retriever from the index and integrate it with LLMfrom langchain_core.retrievers import BaseRetrieverfrom langchain_core.callbacks import CallbackManagerForRetrieverRunfrom langchain_core.documents import Documentfrom typing import Listclass CustomRetriever(BaseRetriever): def _get_relevant_documents( self, query: str, *, run_manager: CallbackManagerForRetrieverRun ) -> List[Document]: return [Document(page_content=query)]retriever = CustomRetriever()retriever.get_relevant_documents("bar")from langchain.chains import RetrievalQAqa = RetrievalQA.from_chain_type(llm=llm, chain_type="stuff", retriever=retriever, return_source_documents=True)question = "explain Power Transformers?"This code retrieves relevant documents based on a query and generates answers to questions using those documents. It integrates the retrieved component with a QA pipeline in the LangChain framework.5. Query the Chainquestion = "Ideal transformers can be characterized as?"generated_text = qa(question)Generated_textqa(question) call in real-time or interact with the LangChain framework directly and will generate the output.For integrating this chat feature in the application, training and finetuning is required first. GPUs can fastrack that process. For real-time chats with PDF documents, using CPU will be insufficient as it will result in prohibitively long wait times for customer responses. In that case also a high-power GPU will be needed.Closing ThoughtsIntegrating RAG techniques, will streamline chatbot conversations and ensure secure and up-to-date information retrieval from PDF documents. More enhancements could be made to the PDF analysis. 
For example, leveraging OCR technology to handle scanned PDFs or handwritten documents more effectively, or surfacing source citations alongside each answer. Dive into the world of advanced PDF analysis and chatbot interactions.
Digital Assistance
Business and Financial Operations/Management
null
null
null
null
null
null
news
[email protected] (Wayne Williams)
OpenAI spent $80M to $100M training GPT-4; Chinese firm claims it trained its rival AI model for $3 million using just 2,000 GPUs
Chinese firm claims it trained rival GPT-4 model for $3 million using just 2,000 GPUs.
https://www.techradar.com/pro/openai-spent-usd80m-to-usd100m-training-gpt-4-chinese-firm-claims-it-trained-its-rival-ai-model-for-usd3-million-using-just-2-000-gpus
https://cdn.mos.cms.futu…Hsye-1200-80.jpg
2024-12-02T18:32:00Z
01.ai trained an AI model for $3 million using 2000 unnamed GPUS “Efficient engineering” allows 01.ai to compete globally, company claims 01.ai reduced inference costs to 10 cents per million tokens Tech companies in China face a number of challenges due to the American export ban, which restricts access to advanced hardware from US manufacturers.This includes cutting-edge GPUs from Nvidia, critical for training large-scale AI models, forcing Chinese firms to rely on older or less efficient alternatives, making it difficult to compete globally in the rapidly evolving AI industry.However, as we’ve seen time and again, these seemingly insurmountable challenges are increasingly being overcome through innovative solutions and Chinese ingenuity. Kai-Fu Lee, founder and CEO of 01.ai, recently revealed that his team successfully trained its high-performing model, Yi-Lightning, with a budget of just $3 million and 2,000 GPUs. In comparison, OpenAI reportedly spent $80-$100 million to train GPT-4 and is rumored to have allocated up to $1 billion for GPT-5.Making inference fast too“The thing that shocks my friends in the Silicon Valley is not just our performance, but that we trained the model with only $3 million," Lee said (via @tsarnick)."We believe in scaling law, but when you do excellent detailed engineering, it is not the case you have to spend a billion dollars to train a great model. As a company in China, first, we have limited access to GPUs due to the US regulations, and secondly, Chinese companies are not valued what the American companies are. So when we have less money and difficulty to get GPUs, I truly believe that necessity is the mother of invention."Lee explained the company’s innovations include reducing computational bottlenecks, developing multi-layer caching, and designing a specialized inference engine. These advancements, he claims, result in more efficient memory usage and optimized training processes.“When we only have 2,000 GPUs, the team has to figure out how to use it,” Kai-Fu Lee said, without disclosing the type of GPUs used. “I, as the CEO, have to figure out how to prioritize it, and then not only do we have to make training fast, we have to make inference fast... The bottom line is our inference cost is 10 cents per million tokens.”Sign up to the TechRadar Pro newsletter to get all the top news, opinion, features and guidance your business needs to succeed!By submitting your information you agree to the Terms & Conditions and Privacy Policy and are aged 16 or over.For context, that’s about 1/30th of the typical rate charged by comparable models, highlighting the efficiency of 01.ai's approach.Some people may be skeptical about the claims that you can train an AI model with limited resources and "excellent engineering", but according to UC Berkeley’s LMSIS, Yi-Lightning is ranked sixth globally in performance, suggesting that however it has done it, 01.ai has indeed found a way to be competitive with a minuscule budget and limited GPU access.You might also like
Unknown
Unknown
null
null
null
null
null
null
news
Martin Chan
Harnessing Azure OpenAI and R for Web Content Summarisation: A Practical Guide with rvest and tidyverse
IntroductionContinue reading: Harnessing Azure OpenAI and R for Web Content Summarisation: A Practical Guide with rvest and tidyverse
https://www.r-bloggers.com/2024/12/harnessing-azure-openai-and-r-for-web-content-summarisation-a-practical-guide-with-rvest-and-tidyverse/
https://raw.githubuserco…heddar-gorge.png
2024-12-27T00:00:00Z
[This article was first published on Musings on R, and kindly contributed to R-bloggers]. (You can report issue about the content on this page here)Want to share your content on R-bloggers? click here if you have a blog, or here if you don't.IntroductionIn last weeks article, we covered how you can interact with a local language model from R, using LM Studio and the Phi-3 model. Although local language models have the advantage of not incurring any costs asides from your electricity bills, they are likely to be less powerful than the more complex large language models (LLMs) that are hosted on cloud. There are many ways to interact with different LLMs from R, and one of them is through Azure OpenAI, which is the method that I will cover here. Azure OpenAI services allows you access to models from OpenAI such as GPT-4o, DALL·E, and more.In this post, we will cover:how to set up an Azure OpenAI deploymenthow to use {keyring} to avoid exposing your API key in your R codean example on creating a prompt function to connect to Azure OpenAIusing {rvest} to extract text content from websites for summarisationFor demo purposes, we will use the same 100 climbs dataset that we had used as an example last week. The goal of our exercise here is to summarise a websites content based on a URL, using only R and an Azure subscription.Why use Azure OpenAI instead of OpenAI?One clear alternative is to access the OpenAI APIs directly through OpenAIs platform. However, using Azure OpenAI offers several advantages. It allows for seamless integration with other Azure services you may already use, such as Azure Key Vault, and enables you to manage related projects under the same subscription or resource group. Additionally, while OpenAIs services are currently inaccessible in Hong Kong, Azure OpenAI remains available (at the time of writing).Setting Up Azure OpenAITo begin leveraging Azure OpenAI for web content summarization, youll first need to sign up for an account on Azure. Once you have set up an account, you will need to:Create a subscription – a billing unit to manage costs and track usageCreate a resource group for your project – for organising resourcesCreate an Azure OpenAI resourceAll of these can be accessed easily by typing into the search bar at the top of the Azure portal.When you go into Subscriptions, this is what you will see. Click Add to add a subscription:You should then create a resource group that is linked to the relevant subscription. You can read more here to decide on whether you should create new resource groups if you have other related projects on Azure:Next, go to Azure OpenAI account under Azure AI services to create an Azure OpenAI resource:To create the resource, you will need to supply information such as subscription, resource group, and details on the instance that you wish to create:For simplicity, we will select All networks for this example. Other options provide more secure methods to set up this workflow:This should be what you see when you go to Azure OpenAI. In this example, we have called our resource demo-r:2. Set up deploymentOnce you have your subscription, resource group, and Azure OpenAI resource, the next step is to go to Azure AI Foundry portal and create a new deployment From base models. We can start with gpt-4o-mini, as this is one of the cheapest models available and allows us to experiment without breaking the bank:Once you have created your deployment, use the left side bar and navigate to Shared Resources > Deployments. 
Click on your deployment, and you will find your API key and Azure Open AI Endpoint here. These are the two key bits of information that you need, and you should store these in a safe place (not in code!).3. Storing your API key in {keyring}You must never expose your API key / secret directly in your code – the worst case scenario is you accidentially pushing your key to GitHub and your API key becoming accessible to everyone. The {keyring} package in R helps with this problem by enabling you to store and retrieve your keys easily, using your operating systems credential store that is more secure than storing secrets in environment variables.Running keyring::key_set() triggers a dialogue box where you can save your API key, which you can link to a service (openai_api):library(keyring)key_set("openai_api")You can then retrieve the key by running key_get():key_get("openai_api")Once a secret is defined, the keyring persists across multiple R sessions. I would definitely recommend checking out the {keyring} documentation for more information on how it works and on other options on improving security.Scraping the source data with rvestIn this section, I will show you how to extract a source article from a URL that we can use for summarisation with our LLM.1. Accessing Example DatasetLet us start by loading the required packages in R:library(tidyverse)library(rvest)library(keyring)library(httr)library(jsonlite)You can load the Top 100 Climbs dataset and extract relevant URLs using the following code snippet:# Reading the datasettop_100_climbs_df <- read_csv( "https://raw.githubusercontent.com/martinctc/blog/refs/heads/master/datasets/top_100_climbs.csv" )urls <- top_100_climbs_df$urlurlsThis is what urls returns:> top_100_climbs_df$url [1] "https://cyclinguphill.com/cheddar-gorge/" [2] "https://cyclinguphill.com/100-climbs/weston-hill-bath/" [3] "https://cyclinguphill.com/100-climbs/crowcombe-combe/" [4] "https://cyclinguphill.com/porlock-hill-climb/" [5] "https://cyclinguphill.com/100-climbs/dunkery-beacon/" [6] "https://cyclinguphill.com/100-climbs/exmoor-forest-climb/" [7] "https://cyclinguphill.com/100-climbs/challacombe/" [8] "https://cyclinguphill.com/100-climbs/dartmeet-climb/" [9] "https://cyclinguphill.com/100-climbs/haytor-vale/" [10] "https://cyclinguphill.com/100-climbs/widecombe-hill/"For this demo, let us just use the first URL here for our summarisation task:> urls[[1]][1] "https://cyclinguphill.com/cheddar-gorge/"2. Extracting Text Using {rvest}The rvest package (which is part of tidyverse) is a powerful package for scraping web content. Previously, Ive written a short example on how to scrape reviews from Amazon using the same package.In our example, what we want to do is to extract the relevant content from the Cheddar Gorge climbs page, based on the URL. This is the code that I have used:library(rvest)# Use rvest to extract content from URLpage_content <- read_html(urls[[1]])# Parse page_content to extract the textpage_text <- page_content %>% html_elements(".entry-content") %>% html_text()Looking at the page source of the Cheddar Gorge website, I could skim that entry-content is the div class that best matches onto the real content that concerns the Cheddar Gorge climb, which is why I used html_elements(".entry-content") to identify that specific part of the page content.Here is a snippet of what is captured in page_text:[1] "\n\t\t\tCheddar Gorge is an interesting climb through the beautiful limestone gorge of Cheddar, Somerset. 
Climbing gently out of the village of Cheddar, the road starts to snake upwards at a gradient of up to 16% round some twisty corners. It is a tough start to the climb, but the remaining couple of km are a much gentler gradient, allowing you the opportunity to take it at your own pace as you climb away from the gorge and onto to the top of the moor.\n\nCheddar Gorge full hill\nLocation: Cheddar, Mendip hills, Somerset, South West.\nDistance 2.6 miles\nAverage gradient: 4-5%\nMaximum gradient: 16%\nHeight gain. 150m\n100 climbs: #1\nCheddar Gorge, Strava\nCheddar Gorge understandably features in many local cyclo-sportives and is a popular destination for many cyclists. It is also popular with tourists and rock climbers. The road can be busy especially as you leave the village of Cheddar...Now that we have our target content for summarisation stored in page_text, we can start setting up the function to interact with OpenAI.Interacting with OpenAI1. Gathering all the piecesThere are four key pieces that we need to interact with Azure OpenAI from R:Azure OpenAI keyAPI endpointSystem promptUser promptWe already have (1) and (2) identified in the earlier section:# Get the API key from the keyringaz_key <- keyring::key_get("openai_api")# Assign chat completions endpoint to variableazo_completions_ep <- paste0( az_openai_endpoint, "openai/deployments/", "gpt-4o-mini/chat/completions?api-version=2024-08-01-preview" )As for the prompts, we can use page_text as the user prompt, and for the system prompt write something similar like the below to start:# Set system promptsys_prompt <- "Please summarise the following article, identifying the top 3-5 takeaways."2. Writing a Custom FunctionNow that we have all the pieces together, we can put together the interaction function to generate the chat completions:generate_completion <- function( sys_prompt, user_prompt, api_endpoint = azo_completions_ep, api_key = az_key) {response <- POST( url = api_endpoint, add_headers( "api-key" = api_key ), body = list( messages = list( list(role = "system", content = sys_prompt), list(role = "user", content = user_prompt) ), max_tokens = 1000, temperature = 0.7, top_p = 1 ), encode = "json" )response_content <- content(response, as = "parsed", type = "application/json") response_text <- response_content$choices[[1]]$message$content# Calculate token count token_count <- (nchar(sys_prompt) + nchar(user_prompt)) / 4 # Rough estimate of tokensreturn(tibble( prompt = user_prompt, response = response_text, token_count = token_count ))}The generate_completion function above sends a request to the Azure OpenAI endpoint to generate a completion (a summary) based on the system prompt and user prompt we provided. A data frame / tibble is returned, containing a column for:prompt: the original user prompt (source text for Cheddar Gorge)response: the summary returned by the LLMtoken_count: a count of the number of tokens used in the promptYou can always adapt the function to only return the response, but I usually prefer to have metadata readily available and stored in a tidy data frame / tibble. This also makes it easier as you scale out the script to run it across a list of inputs.Note that the above function does not contain any mechanisms for handling retries (for when there are rate limits, or server errors) and for handling unsuccessful responses (anything that isnt HTTP 200). 
This could make it difficult to debug when there is a problem with say the format of the HTTP request, or when trying to scale this out to summarising multiple texts. For reference, here are some common HTTP responses you can get:Status CodeDescription200OK201Created204No Content301Moved Permanently302Found400Bad Request401Unauthorized403Forbidden404Not Found429Too Many Requests500Internal Server Error502Bad Gateway503Service UnavailableTo include a retry mechanism and response handling, this is how generate_completion() can be enhanced:generate_completion <- function( sys_prompt, user_prompt, api_endpoint = azo_completions_ep, api_key = az_key, max_retries = 5) {retries <- 0 while (retries < max_retries) { response <- POST( url = api_endpoint, add_headers( "api-key" = api_key ), body = list( messages = list( list(role = "system", content = sys_prompt), list(role = "user", content = user_prompt) ), max_tokens = 1000, temperature = 0.7, top_p = 1 ), encode = "json" )if (status_code(response) == 200) { response_content <- content(response, as = "parsed", type = "application/json") response_text <- response_content$choices[[1]]$message$content# Calculate token count token_count <- (nchar(sys_prompt) + nchar(user_prompt)) / 4 # Rough estimate of tokensreturn(tibble( prompt = user_prompt, response = response_text, token_count = token_count ))} else if (status_code(response) == 429) { # Handle 429 error by waiting and retrying retry_after <- as.numeric(headers(response)$`retry-after`) if (is.na(retry_after)) { retry_after <- 2 ^ retries # Exponential backoff } Sys.sleep(retry_after) retries <- retries + 1 } else { warning(paste("Error in API request:", status_code(response))) retries <- retries + 1 Sys.sleep(2 ^ retries) # Exponential backoff for other errors } }stop("Max retries exceeded")}To generate the summary completion, you can run this:output_df <- generate_completion( sys_prompt = sys_prompt, user_prompt = page_text, max_retries = 1 )output_dfwhich returns the following tibble:> output_df# A tibble: 1 × 3 prompt response token_count <chr> <chr> <dbl>1 "\n\t\t\tCheddar Gorge is an interesting climb through t "**Summ 1004.Im sure you are keen to see what our LLM has summarised! Here is whats returned from output_df$response:Summary of Cheddar Gorge Climb Article:Cheddar Gorge is a notable cycling climb located in Somerset, England, characterized by its picturesque limestone landscape and challenging gradients. The climb features a steep initial section followed by a gentler ascent, making it a popular choice for cyclists and tourists alike.Top Takeaways:Climbing Details: The climb spans 2.6 miles with an average gradient of 4-5% and a maximum gradient of 16%, resulting in a height gain of 150 meters. The steepest section is located at the base, near the car parks.Popularity: Cheddar Gorge is frequently included in local cyclo-sportives and is a favored spot for both cyclists and rock climbers. It also attracts tourists, making the road busy, especially near the village.Historical Significance: The gorge hosted the 2007 National Hill Climb Championship, showcasing competitive cycling talent and providing a challenging venue for participants.Scenic Experience: The climb offers stunning views and a twisting descent, but caution is advised due to sharp corners that can lead to high speeds.Local Attractions: Cheddar Gorge is part of the Mendip hills, which offers other climbing opportunities like Burrington Combe, enhancing the areas appeal for cycling enthusiasts.Pretty cool, right? 
You can also easily scale this out using purrr, running generate_completion() through all the content extracted from each URL in urls.For those wanting to test out a simpler example of interacting with Azure OpenAI without using rvest, you can also try this mini snippet:generate_completion( sys_prompt = "Describe the following animal in terms of its taxonomic rank, habitat, and diet.", user_prompt = "yellow spotted lizard", max_retries = 1)Where the response givesThe yellow spotted lizard, scientifically known as Eulimastoma gracile, belongs to the following taxonomic ranks:Domain: EukaryaKingdom: AnimaliaPhylum: ChordataClass: ReptiliaOrder: SquamataFamily: TeiidaeGenus: EulimastomaSpecies: Eulimastoma gracileHabitat: The yellow spotted lizard is typically found in arid and semi-arid regions, including deserts and scrublands. It prefers areas with loose soil and ample cover, such as rocks, vegetation, and debris, which provide shelter and hunting grounds.Diet: This lizard is primarily insectivorous, feeding on a variety of insects and other small invertebrates. It may also consume small vertebrates and plant matter occasionally, depending on the availability of food sources in its habitat.ConclusionIn this blog, we explored the steps to summarize web content utilizing Azure OpenAIs capabilities, interacting with the gpt-4o-mini model from R. I would love to hear from the comments below if you have found any interesting applications for this, and if you have any suggestions / thoughts on the code!
Content Synthesis
Computer and Mathematical/Business and Financial Operations
null
null
null
null
null
null
news
Affan Mir
Is CyberArk Software (CYBR) Set to Lead in Securing AI for Businesses in 2025?
We recently compiled a list of the 11 Trending AI Stocks on Latest News and Ratings. In this article, we are going to take a look at where CyberArk Software ...
https://finance.yahoo.com/news/cyberark-software-cybr-set-lead-082826228.html
https://s.yimg.com/ny/api/res/1.2/7_TgtyP9_6jm1jRhv1KATg--/YXBwaWQ9aGlnaGxhbmRlcjt3PTEyMDA7aD02NzM-/https://media.zenfs.com/en/insidermonkey.com/4f34f369480291694e653d8ff87d92b9
2024-12-26T08:28:26Z
We recently compiled a list of the 11 Trending AI Stocks on Latest News and Ratings. In this article, we are going to take a look at where CyberArk Software Ltd. (NASDAQ:CYBR) stands against the other trending AI stocks.The rise of Chinese open-source AI models highlights their growing influence in the global AI landscape, challenging traditional proprietary systems. Their accessibility and competitive capabilities signal a shift in the balance of AI innovation. On December 17, CNBC's Arjun Kharpal reported that Chinese AI models are advancing rapidly, with several matching or exceeding the performance of U.S. models despite chip restrictions imposed by Washington.Open-source models, referred to as "open weight," are key to this growth. These models, such as Deepseek and Alibaba's QN, are highly popular on platforms like Hugging Face, allowing developers worldwide to freely access and build applications without strict licensing. The strategy enables Chinese companies to distribute their technology widely and challenge proprietary systems from firms like OpenAI, which rely on closed ecosystems to monetize their models. The growing adoption and strong capabilities of Chinese large language models present a significant challenge to U.S. dominance in the AI sector.READ ALSO: 12 AI News and Ratings Investors Are Watching Right Now and Jim Cramer's Latest Lightning Round: 8 Stocks in Focus.While China is making significant moves in AI, US-based companies are also making strides. OpenAI concluded its 12-day "shipmas" event by unveiling o3, a new reasoning model and successor to o1, along with o3-mini, a smaller version tailored for specific tasks. CEO Sam Altman highlighted the advanced capabilities, including exceptional performance in coding, mathematics, and scientific problem-solving.The o3 model achieved a groundbreaking 87.5% on the ARC-AGI benchmark, prompting plans to develop a more challenging evaluation. The score shows the model's ability to perform well on tasks that require learning and adapting to new challenges. ARC-AGI is a benchmark used to evaluate how well an AI can acquire new skills beyond its training data. This high score suggests that o3 is advancing toward more general intelligence capabilities. With a strong focus on safety testing and future enhancements, o3 represents a major step forward for U.S.-based AI development.OpenAI's decision to skip naming the model o2 is linked to trademark concerns with a UK telecom company. While o3 is not yet broadly available, safety researchers can preview o3-mini now, with general availability expected in early 2025.For this article, we selected AI stocks by going through news articles, stock analysis, and press releases. We listed the stocks in ascending order of their hedge fund sentiment taken from Insider Monkeys database of 900 hedge funds.Why are we interested in the stocks that hedge funds pile into? The reason is simple: our research has shown that we can outperform the market by imitating the top stock picks of the best hedge funds. Our quarterly newsletters strategy selects 14 small-cap and large-cap stocks every quarter and has returned 275% since May 2014, beating its benchmark by 150 percentage points (see more details here).Top 20 Most Valuable Data Companies in the WorldA data center with a repetetive design of computer servers, showing the companies' efficient and secure IT infrastructure.Number of Hedge Fund Holders: 51CyberArk Software Ltd. 
(NASDAQ:CYBR) develops AI-driven identity security solutions that protect access, manage credentials, and provide secure authentication across various industries and environments.J.P. Morgan has identified CyberArk (NASDAQ:CYBR) as one of its "top picks" in security software for 2025, citing strong demand, favorable expectations, and potential for better-than-expected performance. J.P. Morgan emphasized focusing on companies with strong enterprise budget exposure and those expanding into adjacent markets. Additionally, as digital identity management becomes a top priority for businesses, securing AI is also highlighted as a key theme for 2025, with growing competition expected in this area. The firm said:"We think that solutions to securing AI currently exist in a highly fragmented and nascent market, and that sustained investment from bigger players in the security space will likely increase competition and available solutions.Overall, CYBR ranks 5th on our list of trending AI stocks on latest news and ratings. While we acknowledge the potential of CYBR as an investment, our conviction lies in the belief that AI stocks hold greater promise for delivering higher returns and doing so within a shorter time frame. If you are looking for an AI stock that is more promising than CYBR but that trades at less than 5 times its earnings, check out our report about the cheapest AI stock.READ NEXT: 8 Best Wide Moat Stocks to Buy Now and30 Most Important AI Stocks According to BlackRock.Disclosure: None. This article is originally published at Insider Monkey.
Decision Making/Detection and Monitoring/Process Automation
Computer and Mathematical
null
null
null
null
null
null
news
Stephen Turner
Repost: Bluesky conversation analysis with local and frontier LLMs with R/Tidyverse
Reposted from the original at https://blog.stephenturner.us/p/bluesky-analysis-claude-llama-tidyverse.Summary, translation, and sentiment analysis of Bluesky posts on a topic using atrrr, tidyverse/ellmer, and mlverse/mall with Claude and ot...Continue reading: Repost: Bluesky conversation analysis with local and frontier LLMs with R/Tidyverse
https://www.r-bloggers.com/2024/12/repost-bluesky-conversation-analysis-with-local-and-frontier-llms-with-r-tidyverse/
https://substackcdn.com/…10f7_700x500.png
2024-12-30T15:59:00Z
[This article was first published on Getting Genetics Done, and kindly contributed to R-bloggers]. Reposted from the original at https://blog.stephenturner.us/p/bluesky-analysis-claude-llama-tidyverse. Summary, translation, and sentiment analysis of Bluesky posts on a topic using atrrr, tidyverse/ellmer, and mlverse/mall with Claude and other open models, and gistr to put the summary on GitHub.

Background

Bluesky, atrrr, local LLMs

I've written a few posts lately about Bluesky: first, Bluesky for Science, about Bluesky as a home for Science Twitter expats after the mass eXodus, and another on using the atrrr package to expand your Bluesky network. I've also spent some time looking at R packages that provide an interface to Ollama. I wrote about this generically last summer, and quickly followed that up with a post about an R package that uses Llama 3.1 for summarizing bioRxiv and medRxiv preprints.

Using frontier models in R

In those posts above I'm using a local LLM (some variant of the llama model). Meta claims that the newest llama3.3-70B model achieves performance on par with GPT-4 and the much larger llama3.1-405B. It's good (really good for an open-ish model), but I don't agree with this claim (and neither does the Chatbot Arena leaderboard).

Enter ellmer (ellmer.tidyverse.org), a recent addition to the tidyverse that provides interfaces to frontier models like OpenAI's ChatGPT, Anthropic's Claude, Google's Gemini, and others (along with local models via Ollama). I bought a few bucks' worth of Claude API credits to see how Claude would compare to some of these open source models for a simple summarization task.

The task

Bluesky for science has really taken off since X's new management has let the place degrade into the sewer that it is. It's such a lively place, and if I go a day or two without scrolling through my feeds I'll miss a lot of what's happening in the R community and other interest groups. I subscribe to the AI News newsletter, where every day I get an AI-summarized digest of what's happening in the AI world across Twitter, Reddit, various Discords, and other forums (fora?). I was curious if I could do something similar with the #Rstats hashtag on Bluesky.

The setup

Here's the basic setup. Here's the code on GitHub.

1. Use the atrrr package to retrieve 1000 Bluesky posts tagged with #Rstats. I typically see about 100 posts per day, so 1000 is enough to capture everything in the past week.
2. Use dplyr to do some cleaning: limit posts to just the previous 7 days, create a link to the post URL from the at:// URI, arrange descending by the most liked posts having at least 5 likes, extract the post text to feed to the LLM, and extract a bullet point list of post text for the top 10 posts to include in the summary.
3. Use the ellmer package to summarize these posts using Claude 3.5 Sonnet and various open models through Ollama. Run three iterations of each, and choose the best result out of the three (subjective):
   - Claude 3.5 Sonnet
   - Llama3.3
   - Gemma2
   - Mistral
4. Use ROpenSci's gistr package to programmatically post the resulting markdown as a GitHub gist.

The system prompt I'm using for each of these is:

What are the top posts on #rstats Bluesky last week? I will provide you with data showing the author, post URL, and post text from the top 100 most liked posts tagged with #Rstats. Remember, this is bluesky, and posts are called posts, NOT tweets.
Make sure you call them posts, not tweets. I want you to return a summary of the top posts, in markdown. Who posted about what? And why is it important? Give examples. Give me a list of topics, a list of common hashtags, and author names. Also talk about the general overall tone or sentiment. You can link to a specific post by using the URL provided, where the link text is the author’s name, like ‘[author](url) posted ‘. Do not include any preample or postscript text or followup question suggestions. Here are the posts.Here’s the code on GitHubResultsNote: I ran the code linked above to generate these results for the week of December 18-24, 2024. Youll get different results running the same code today.The result from Claude 3.5 Sonnet was far and above the best summary, and it followed my system prompt instructions better than any of the open source models. After checking my usage, this query cost about $0.01-$0.02 cents.Claude 3.5 SonnetResults from Claude 3.5 Sonnet: SummaryProduced with claude-3-5-sonnet-20241022Here’s a summary of the top #rstats posts on Bluesky from the past week:Key Announcements and News:Posit (formerly RStudio) joined the Open Source Pledge, committing $2000/year per developer to support open source softwareThe RStudio IDE and Posit Workbench received a major update with Positron Pro supportuseR 2025 will be held at Duke University in August 2024Several notable package releases including rms 6.9-0, {chopin} for spatial analysis, and {rayrender} v0.38.0Popular Topics and Themes:Package Development & UpdatesData VisualizationTutorials & Learning ResourcesStatistical MethodsMobile DevelopmentGenerative ArtNotable Tutorial Content:Jarrett Byrnes shared a tutorial on Bayesian SEM using brmsDanielle Navarro wrote about regular expressions in RMultiple posts about the easystats ecosystem for Bayesian analysisInnovation & New Tools:Colin announced Rlinguo, a mobile app for running R on phonesNew developments in web-based R applications using webR and ShinyliveVarious updates to visualization packages and toolsCommunity Engagement:Active TidyTuesday participation analyzing D&D spell dataSignificant discussion around generative art and visualizationStrong focus on teaching and learning resourcesThe overall tone of the R community on Bluesky is overwhelmingly positive and collaborative, with a mix of technical discussion, creative applications, and mutual support. There’s a strong emphasis on sharing knowledge and resources, particularly around data visualization and statistical methods.Common Hashtags: #rstats #dataviz #tidytuesday #pydata #statistics #dataBS #rtistry #bayesian #shiny #ggplot2Notable Authors:Hadley WickhamDanielle NavarroFrank HarrellTyler Morgan-WallSharon MachlisColin FayNicola RennieThe community appears to be particularly excited about new developments in mobile and web-based R applications, as well as advances in statistical computing and visualization tools. There’s also a strong strain of creativity running through the posts, with many sharing artistic applications of R programming.Top postsTop 10 posts:Hadley Wickham: Very proud to announce that @posit.co has joined the opensourcepledge.com. We’re committing to spending $2000 / developer / year to support open source software that we use (but don’t develop): posit.co/blog/posit-p… #rstats #pydataJarrett Byrnes: OK, here is a very rough draft of a tutorial for #Bayesian #SEM using #brms for #rstats. It needs work, polish, has a lot of questions in it, and I need to add a references section. 
But, I think a lot of folk will find this useful, so…. jebyrnes.github.io/bayesian_sem… (use issues for comments!)Danielle Navarro: Some words on regular expressions in #rstats. In which I finally wrap my head around the fact that there are at least three different regex engines in widespread usage in R, they support different features, and they all hate me blog.djnavarro.net/posts/2024-1…Colin : #RStats I’m so, so thrilled to finally share the release of Rlinguo, a mobile app that runs R This is a fully, native mobile app that uses #webR in the backend to run R functions. Available on the App Store (iPhone) & Play Store (Android) rtask.thinkr.fr/introducing-…Erik Reinbergs: These bayes tutorials are the first ones i’ve actually understood. Looking forward to the rest of the being finished. Go @easystats.bsky.social teams! easystats.github.io/bayestestR/i… #rstatsDave H: Just pushed a Christmas update to {ggblanket}. I decided to support colour blending. It uses {ggblend} under the hood (thanks @mjskay.com), which uses graphics features developed by Paul Murrell. Give {ggblanket} and {ggblend} a star, if you find them useful. Oh, and have a merry Christmas #rstatsOwen Phillips: I created a thing that puts every player’s box score stats from the night before in one interactive table. Ive been using it every morning to quickly check who had a good/bad game instead of looking at 10 different box scores updates nightly #rstats #gt #quarto thef5.quarto.pub/boy/Hadley Wickham: I discovered that the original plyr website still exists: had.co.nz/plyr/. It’s hard to imagine that this used to be a better than average package website #rstatsFrank Harrell: 1/2 Major release 6.9-0 of #RStats rms package now available on CRAN w/complete re-write of the binary/ordinal logistic regression function lrm. lrm began in 1980 as SAS PROC LOGIST so it was time for a re-do for iterative calculations. Details are at fharrell.com/post/mleJoe Chou: As far as Im concerned, #webr and #shinylive is indistinguishable from magic. I cant believe that since first installing #quarto-webr a few hours ago till now, Ive gotten a relatively complicated #shinyapp exported, deployed, and running serverless. This is magic. #rstatsOpen-source models: Llama3.3, Gemma2, MistralIm not showing the other results here the links go to a Markdown file as a GitHub gist showing results from running the same prompt with Llama3.3, Gemma2, and Mistral. Theyre very different! Gemma just provided a very high-level overview that could have come from any collection of posts about R, and mistral just listed out individual summaries of each post instead of summarizing the whole collection. Llama3.3 might have been a bit better, but overall, it seems that the open source models arent following my system prompt as well as Claude 3.5 Sonnet is.Subscribe nowBonus: sentiment, summary, and translationWhile the ellmer package demonstrated above provides a straighforward way to prompt local or frontier models with text, the mall package in the mlverse (mlverse.github.io/mall) provides functions for running LLM predictions against a data frame. Its API is similar in the R and Python packages. If I have the data frame created above with the post text in d$text, I can run a sentiment analysis, create a 5 word summary, and translate into Spanish with the code below. 
New columns are added to the data, prefixed by a dot.d |> llm_sentiment(text) |> llm_summarize(text, max_words = 5) |> llm_translate(text, "spanish")Heres what the top 10 most liked posts from the past week are, summarized in 5 words, translated into Spanish:Hadley Wickham (posit joins opensourcepledge): Estamos muy orgullosos de anunciar que @posit.co ha unido a opensourcepledge.com. Estamos comprometidos a gastar $2000 / desarrollador / año para apoyar software de código abierto que usamos (pero no desarrollamos): posit.co/blog/posit-p… #rstats #pydataJarrett Byrnes (tutorial on bayesian sem rstats): Este es un tutorial muy desgloseado para el uso de SEM bayesiano utilizando brms en RStats. Necesita ser revisado, pulido y tener muchas preguntas resueltas, pero creo que muchos personas lo encontrarán útil, así que… jebyrnes.github.io/bayesian_sem… (usen los problemas para hacer comentarios!)Danielle Navarro (three r regex engines exist): Hay al menos tres diferentes motores de expresiones regulares en uso amplio en R, que apoyan diferentes características y todos me odian.Colin (mobile app for r programming): Estoy extremadamente emocionado de compartir finalmente el lanzamiento de Rlinguo, una aplicación móvil que ejecuta R.Erik Reinbergs (bayes tutorials finally make sense): Estos tutoriales de Bayes son los primeros que he entendido. Estoy emocionado de ver el resto terminado. ¡Buena suerte a todos en @easystats.bsky.social!Dave H (pushed christmas update to ggblanket): Acabo de lanzar una actualización de Navidad para {ggblanket}. Decidí apoyar la mezcla de colores. Utiliza {ggblend} detrás del escenario (gracias @mjskay.com), que utiliza características gráficas desarrolladas por Paul Murrell. Dale un estrella a {ggblanket} y {ggblend}, si los encuentras útiles. ¡Que tengas un feliz Navidad #rstatsOwen Phillips (interactive box score table): Crea una cosa que agrega todos los estadísticas de puntuación del tablero de cada jugador de la noche anterior en una tabla interactiva. He estado utilizandola cada mañana para revisar rápidamente quién tuvo un buen/buen juego en lugar de mirar las actualizaciones de diez diferentes partidos nocturnos #rstats #gt #quarto thef5.quarto.pub/boyHadley Wickham (original plyr website still exists): Descubrí que la página web original de plyr aún existe: had.co.nz/plyr/. Es difícil imaginar que esto era una página de paquete mejor que promedio. #rstatsFrank Harrell (rms package version released): La versión 6.9-0 del paquete #RStats rms ahora está disponible en CRAN con una completa revisión de la función de regresión logística binomial lrm, que comenzó en 1980 como PROC LOGIST en SAS y requería una reescritura para cálculos iterativos.Joe Chou (webr and shiny are magic): En mi opinión, #webr y #shinylive son indistinguibles del magia. No puedo creer que desde que instale #quarto-webr unas horas atrás hasta ahora, he logrado exportar, desplegar y correr una aplicación #shinyapp relativamente complicada serverless. Esto es magia. #rstatsHeres a count of the sentiment classification for each post.I was curious what the negative sentiment posts were. The most liked negative sentiment post was Danielle Navarros blog post on different regex engines in R and the difficulty using them consistently. Others had a degree cynicism that the model picked up on.“Some words on regular expressions in #rstats. 
In which I finally wrap my head around the fact that there are at least three different regex engines in widespread usage in R, they support different features, and they all hate me blog.djnavarro.net/posts/2024-1…““Found a bug in my code and after fixing it, it stopped working. Wonderful Christmas gift :3 #rstats”“Turning a 10-minute #rstats project into a two-hour one because I refuse to break my pipe and save an intermediate object to memory.”Subscribe nowGetting Genetics Done by Stephen Turner is licensed under a Creative Commons Attribution (CC BY) License.
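For readers wanting to reproduce the sentiment tally and the gist-posting step described above, here is a minimal sketch, not taken from the original post. It assumes the d data frame built earlier, that mall adds a dot-prefixed .sentiment column as described, that a model has already been configured for mall (e.g. via llm_use()), and that the weekly summary markdown lives in a character string summary_md; the file name is illustrative.

library(dplyr)
library(mall)
library(gistr)

# Tally the sentiment labels that mall assigned to each post
d |>
  llm_sentiment(text) |>
  count(.sentiment, sort = TRUE)

# Write the summary to a file and post it as a GitHub gist
writeLines(summary_md, "rstats-bluesky-summary.md")
gist_create(
  files = "rstats-bluesky-summary.md",
  description = "Top #rstats Bluesky posts this week",
  public = TRUE
)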
Content Synthesis
Unknown
null
null
null
null
null
null
news
Manasi Sharma
Advancing Explainable Natural Language Generation (NLG): Techniques, Challenges, and Applications
Natural language generation (NLG) lies at the core of applications ranging from conversational agents to content creation. Despite its advances, NLG systems often operate as "black boxes," leaving developers and users uncertain about their decision-making processes. Explainable AI (XAI) bridges this gap by making NLG models more interpretable and controllable. This article explores practical techniques and tools for enhancing the transparency of NLG systems, offering detailed code snippets and step-by-step explanations to guide developers in understanding and improving model behavior. Topics include attention visualization, controllable generation, feature attribution, and integrating explainability into workflows. By focusing on real-world examples, this article serves as an educational guide for building more interpretable NLG systems.
https://dzone.com/articles/advancing-explainable-natural-language-generation
https://dz2cdn1.dzone.co…092217-thumb.jpg
2024-12-10T21:00:05Z
Natural language generation (NLG) lies at the core of applications ranging from conversational agents to content creation. Despite its advances, NLG systems often operate as "black boxes," leaving developers and users uncertain about their decision-making processes. Explainable AI (XAI) bridges this gap by making NLG models more interpretable and controllable. This article explores practical techniques and tools for enhancing the transparency of NLG systems, offering detailed code snippets and step-by-step explanations to guide developers in understanding and improving model behavior. Topics include attention visualization, controllable generation, feature attribution, and integrating explainability into workflows. By focusing on real-world examples, this article serves as an educational guide for building more interpretable NLG systems.

Introduction to Explainable NLG

Natural language generation (NLG) enables machines to produce coherent and contextually appropriate text, powering applications like chatbots, document summarization, and creative writing tools. While powerful models such as GPT, BERT, and T5 have transformed NLG, their opaque nature creates challenges for debugging, accountability, and user trust. Explainable AI (XAI) provides tools and techniques to uncover how these models make decisions, making them accessible and reliable for developers and end-users. Whether you're training an NLG model or fine-tuning a pre-trained system, XAI methods can enhance your workflow by providing insights into how and why certain outputs are generated.

Techniques for Explainable NLG

1. Understanding Attention Mechanisms

Transformers, which form the backbone of most modern NLG models, rely on attention mechanisms to focus on relevant parts of the input when generating text. Understanding these attention weights can help explain why a model emphasizes certain tokens over others.

Example: Visualizing Attention in GPT-2

from transformers import GPT2Tokenizer, GPT2LMHeadModel
from bertviz import head_view

# Load GPT-2 model and tokenizer
model = GPT2LMHeadModel.from_pretrained("gpt2", output_attentions=True)
tokenizer = GPT2Tokenizer.from_pretrained("gpt2")

# Input text
text = "The role of explainability in AI is crucial for ethical decision-making."

# Tokenize input
inputs = tokenizer(text, return_tensors="pt")

# Generate attentions
outputs = model(**inputs)
attentions = outputs.attentions  # List of attention weights from all layers

# Visualize attention
head_view(attentions, tokenizer, text)

Explanation

The bertviz library provides a graphical interface for understanding how attention is distributed across input tokens. For instance, if the model generates a summary, you can analyze which words it deems most important.

2. Controllable Text Generation

Controllability allows users to guide the model's output by specifying parameters like tone, style, or structure. Models like CTRL and fine-tuned versions of GPT enable this functionality.

Example: Guiding Text Generation with Prompts

from transformers import AutoModelForCausalLM, AutoTokenizer

# Load GPT-Neo model
model_name = "EleutherAI/gpt-neo-2.7B"
model = AutoModelForCausalLM.from_pretrained(model_name)
tokenizer = AutoTokenizer.from_pretrained(model_name)

# Define a prompt for controlling output style
prompt = (
    "Write an inspiring conclusion to an academic paper: \n"
    "In conclusion, the field of Explainable AI has the potential to..."
)

# Tokenize and generate text
inputs = tokenizer(prompt, return_tensors="pt")
outputs = model.generate(inputs["input_ids"], max_length=100)

# Decode and display output
print(tokenizer.decode(outputs[0], skip_special_tokens=True))

Explanation

By structuring prompts effectively, developers can control how the model generates text. In this example, the model adapts its output to fit an academic tone.

3. Feature Attribution With SHAP

SHAP (SHapley Additive exPlanations) provides insights into which parts of the input contribute most to the generated output, helping developers debug issues like bias or irrelevance.

Example: SHAP for Explaining Generated Text

import shap
from transformers import pipeline

# Load a text generation pipeline
generator = pipeline("text-generation", model="gpt2")

# Define SHAP explainer
explainer = shap.Explainer(generator)

# Input text
prompt = "Explainable AI improves trust in automated systems by"

# Generate explanations
shap_values = explainer([prompt])

# Visualize explanations
shap.text_plot(shap_values)

Explanation

SHAP highlights the words or phrases that influence the generated text, offering a way to analyze model focus. For example, you might find that certain keywords disproportionately drive specific tones or styles.

4. Integrated Gradients for Text Attribution

Integrated Gradients quantify the contribution of each input feature (e.g., words or tokens) by integrating gradients from a baseline to the input.

Example: Integrated Gradients for a Classification Task

from captum.attr import IntegratedGradients
from transformers import AutoTokenizer, AutoModelForSequenceClassification
import torch

# Load model and tokenizer
model_name = "textattack/bert-base-uncased-imdb"
model = AutoModelForSequenceClassification.from_pretrained(model_name)
tokenizer = AutoTokenizer.from_pretrained(model_name)

# Input text
text = "Explainable AI has transformed how developers interact with machine learning models."
inputs = tokenizer(text, return_tensors="pt", truncation=True, padding=True)

# Compute Integrated Gradients
ig = IntegratedGradients(model)
attributions = ig.attribute(inputs['input_ids'], target=1)

# Visualize attributions
print("Integrated Gradients Attributions:", attributions)

Explanation

Integrated Gradients are particularly useful in classification tasks where you want to understand which words influence the decision. This can also be extended to text generation tasks for token attribution.

5. Layer-Wise Attention Analysis

Sometimes, understanding the individual layers of a transformer can provide deeper insights into the model's behavior.

Example: Extracting Attention Weights Layer by Layer

import torch
from transformers import BertTokenizer, BertModel

# Load BERT model and tokenizer
tokenizer = BertTokenizer.from_pretrained("bert-base-uncased")
model = BertModel.from_pretrained("bert-base-uncased", output_attentions=True)

# Input sentence
text = "Natural Language Generation depends heavily on transformer architectures."
inputs = tokenizer(text, return_tensors="pt")

# Forward pass with attention
outputs = model(**inputs)
attention_weights = outputs.attentions  # Attention weights for each layer

# Analyze specific layer
layer_3_attention = attention_weights[3].detach().numpy()
print("Attention weights from layer 3:", layer_3_attention)

Explanation

Layer-wise analysis enables developers to track how attention evolves as it propagates through the network. This is particularly useful for debugging or fine-tuning pre-trained models.

Integrating Explainable NLG in Workflows

Debugging Model Outputs: Explainability tools like SHAP and attention visualizations can help identify issues such as irrelevant focus or sensitivity to noise in the input.

Improving Dataset Quality: Attribution methods can reveal biases or over-reliance on specific phrases, guiding dataset augmentation or curation.

Building User Trust: By showing how models arrive at their outputs, developers can foster trust among end-users, especially in high-stakes applications like legal or medical text generation.

Ethical Considerations

Mitigating Bias: Explainability methods can expose biases in generated content, prompting developers to address these issues through improved training datasets or fairness constraints.

Preventing Misinformation: Transparency ensures that users understand the limitations of NLG systems, reducing the risk of misinterpretation or misuse.

Conclusion

Explainable NLG bridges the gap between powerful AI systems and user trust, enabling developers to debug, optimize, and refine their models with greater confidence. By incorporating techniques such as attention visualization, controllable generation, and feature attribution, we can create NLG systems that are not only effective but also interpretable and aligned with ethical standards. As this field continues to evolve, the integration of explainability will remain central to building reliable, human-centric AI.
Content Synthesis/Decision Making
Education, Training, and Library/Arts, Design, Entertainment, Sports, and Media
null
null
null
null
null
null
news
Abhi Maheshwari, Forbes Councils Member, Abhi Maheshwari, Forbes Councils Member https://www.forbes.com/councils/forbesbusinesscouncil/people/abhimaheshwari/
Can SLMs Win The Enterprise AI Battle Over LLMs?
SLMs can offer a more streamlined, cost-effective alternative, with the added benefit of being easier to fine-tune and integrate into proprietary data environments.
https://www.forbes.com/councils/forbesbusinesscouncil/2024/12/27/can-slms-win-the-enterprise-ai-battle-over-llms/
https://imageio.forbes.c…=1600&fit=bounds
2024-12-27T13:30:00Z
Abhi Maheshwari is the CEO of Aisera Inc.gettyLarge language models (LLMs) have dominated the spotlight. Growing increasingly complex, their parameter counts continue to soar. In fact, OpenAIs upcoming GPT-5 is rumored to exceed a trillion parameters and be trained on massive data sets.In contrast, SLMs are smaller versions of LLM counterparts and have significantly fewer parameters, typically ranging from a few million to a few billion. Some of the biggest tech companies are pouring vast resources into this space, with Meta introducing compact models within its Llama series, OpenAI having its o1-mini, Microsoft advancing its phi-3 family and Google pushing forward with Gemma.So, which is better for enterprise apps: SLMs or LLMs? As someone who leads an agentic AI company, weve had the chance to evaluate numerous LLMs and SLMs. From this experience, I have found that SLMs, despite their relatively small size, have been shown to deliver results that are on par or better in some instances as compared to larger models. In a world where bigger has often been equated with better, its becoming clear that small can pack just as much punch.Breaking Down LLM IssuesLLMs offer a range of impressive capabilities and are suitable for complex tasks. However, they are trained on diverse datasets, which can result in a lack of customization for specific enterprise needs, like domains. According to an article in Dataconomy, "This generality may lead to gaps in handling domain and industry-specific nuances, potentially decreasing the effectiveness of their responses."Despite their strengths, LLMs come with some notable challenges. The most prominent issue is "hallucinations"instances where the LLM generates content that appears accurate but is factually wrong or nonsensical. Another challenge is cost and time overhead. Training LLMs require significant computational resources (hundreds of GPUs/TPUs) and can take months. The average training costs for a state-of-the-art LLM like GPT-4 are in the millions.Because LLMs are trained on massive datasets, these models can be biased due to the data used to train the models, leading to poor performance in specialized domains.Maintenance is another challenge that enterprises face with LLMs. Retraining LLMs to update their knowledge is a slow, expensive process, making them impractical for environments that require rapid adaptation to new data or compliance changes. Customization for different industries requires massive fine-tuning efforts, delaying time-to-market.Ultimately, the accuracy, time, cost and lack of agility can make LLMs less attractive to enterprise applications. Instead of serving as catalysts for innovation, they often become bottlenecks, hindering the swift deployment of tailored AI solutions.The Power Of SLMsOn the other hand, SLMs are purpose-built to perform exceptionally well on specific tasks, making them more accessible for organizations with limited resources and can be more easily fine-tuned to meet specific needs.I find that SLMs can be trained in a fraction of the time and with significantly fewer resources. With lower hardware and training costs, SLMs dramatically reduce the cost of ownership compared to LLMs. This enables enterprises to deploy more AI models for diverse needs without blowing the budget. 
In fact, in my experience, SLMs cost just 1/10th of what LLMs require, offering a highly cost-effective solution for many enterprise applications.Domain-specific SLMs, built with specialized data, can help ensure greater accuracy and minimize errors, outperforming models reliant on generic data for specialized tasks. Moreover, SLMs can be seamlessly customized for individual business units, departments or industry verticals. Their smaller footprint facilitates faster and more efficient adaptation to unique operational requirements.Taking it a step further, when SLMs are combined with company-specific knowledge graphs, they can deliver richer context, allowing them to understand enterprise-specific terms and adapt to evolving data. Due to this, they can enrich queries with near-zero latency.As hinted at earlier, SLMs allow for rapid customization as they can be updated more frequently, keeping them up-to-date with changing domain needs or regulations. Their ability to adapt quickly to individual customer needs offers a balance between efficiency and precision.Companies using SLMs can respond faster to market shifts and customer needs, giving them a competitive advantage over those bogged down by the rigidity of LLMs. This cost-effective flexibility and adaptability can ultimately help reduce the total cost of ownership (TCO) while increasing ROI.LimitationsHowever, it's important to recognize how deploying small language models involves a trade-off. Due to their training on smaller datasets, SLMs possess more constrained knowledge bases compared to their Large Language Model (LLM) counterparts.Therefore, I recommend using SLMs for specific tasks like summarizing context or answer generation, content tagging, etc. These are areas where their specialized focus and efficiency offer significant benefits.One Size Does Not Fit AllEnterprises aren't simply shifting from large to small models. LLMs continue to be the gold standard for solving many complex tasks. Integrating LLMs might still be necessary for tasks requiring extensive general knowledge or deep contextual understanding. Instead, enterprises are adopting a diversified portfolio of models, allowing them to select the most appropriate one for each scenario.Choosing the right language model depends on an organizations specific needs, the complexity of the task and available resources. Organizations leveraging multiple models or looking to leverage multiple models will need an LLM gateway to connect the appropriate LLMs based on understanding, context or domain.Additionally, using the LLM gateway helps guarantee the responsible and secure use of AI with the most implementing comprehensive governance controls, including full auditability and visibility into AI consumption and performance.ConclusionI see the rise of small language models as signaling an important shift in how AI will be deployed across industries. While LLMs boast impressive capabilities, they also come with significant challenges.SLMs, on the other hand, especially when used in conjunction with LLMs, can offer a more streamlined, cost-effective option, with the added benefit of being easier to fine-tune and integrate into proprietary data environments.Ultimately, SLMs are proving that bigger isnt always better. I think they are well-positioned to become a go-to choice for many enterprises.Forbes Business Council is the foremost growth and networking organization for business owners and leaders. Do I qualify?
Unknown
Management/Business and Financial Operations
null
null
null
null
null
null
news
Sergio De Simone
EuroLLM-9B Aims to Improve State of the Art LLM Support for European Languages
EuroLLM-9B is an open-source large language model built in Europe and tailored to European languages, including all the official EU languages as well as 11 other non-official albeit commercially important languages. According to the team behind it, its performance makes it one of the best European-made LLM of this size. By Sergio De Simone
https://www.infoq.com/news/2024/12/eurollm-9b/
https://res.infoq.com/ne…35302792967.jpeg
2024-12-27T13:00:00Z
EuroLLM-9B is an open-source large language model built in Europe and tailored to European languages, including all the official EU languages as well as 11 other non-official albeit commercially important languages. According to the team behind it, its performance makes it one of the best European-made LLMs of this size.

EuroLLM-9B is the second LLM created within the EuroLLM initiative, coming a few months after the smaller EuroLLM-1.7B.

The key component in EuroLLM-9B that makes its performance stronger for European languages is the tokenizer, which was built using a vocabulary of 128,000 word pieces belonging to European languages. The model was pre-trained on approximately 4 trillion tokens using the GPU infrastructure provided by the Barcelona-based MareNostrum5 supercomputer.

In the post-training phase, the EuroLLM team used publicly available datasets to fine-tune it, making it capable of handling multi-turn conversations and behaving as an instruction-following model. One of the team's goals was to show the model's suitability for fine-tuning on specific use cases.

According to the team, the model excels at translating texts across all supported languages, a task in which it outperforms Gemma-2-9B-IT and Aya-expanse-8B.

To assess the model's performance, the team ran benchmarks both in English and in EU languages. Unsurprisingly, for European languages, EuroLLM outperforms both European models, such as Mistral-7BV, Salamandra-7B, and others, and non-European models, including Llama-3.1-8B, Qwen-2.5-7B, and others, with Gemma-2-9B achieving comparable results. For English, EuroLLM-9B shows good performance, on a par with Mistral-7B.

As expected, a 9B model cannot match the performance of a 70B model. However, the scores are very good and come remarkably close to the larger models, especially when using a beam size of 4.

The model is available on Hugging Face, where you can run it as shown in the following snippet:

from transformers import AutoModelForCausalLM, AutoTokenizer

model_id = "utter-project/EuroLLM-9B-Instruct"
tokenizer = AutoTokenizer.from_pretrained(model_id)
model = AutoModelForCausalLM.from_pretrained(model_id)

messages = [
    {
        "role": "system",
        "content": "You are EuroLLM --- an AI assistant specialized in European languages that provides safe, educational and helpful answers.",
    },
    {
        "role": "user",
        "content": "What is the capital of Portugal? How would you describe it?"
    },
]

inputs = tokenizer.apply_chat_template(messages, tokenize=True, add_generation_prompt=True, return_tensors="pt")
outputs = model.generate(inputs, max_new_tokens=1024)
print(tokenizer.decode(outputs[0], skip_special_tokens=True))

As several Reddit users point out, the need for open-source models tailored to European languages is real, since even larger models like Llama 3.3 70B may perform unsatisfactorily, not to mention the cost of fine-tuning them. The EuroLLM team is already at work on a larger version of the model to make it more competitive with larger models, but has not clarified when it could become available.
Content Synthesis/Information Retrieval Or Search
Unknown
null
null
null
null
null
null
news
null
Strategic Priorities for Data and AI Leaders in 2025
AI remains at the forefront of every business leader’s plans for 2025. Overall, 70% of businesses continue to believe AI is critical to...
https://www.databricks.com/blog/strategic-priorities-data-and-ai-leaders-2025
https://www.databricks.c…a-ai-blog-og.png
2024-12-23T17:48:01Z
AI remains at the forefront of every business leaders plans for 2025. Overall, 70% of businesses continue to believe AI is critical to their long-term success, according to a recent survey of 1,100 technologists and 28 CIOs from Economist Impact. What does that look like in practice?While interest in the technology shows no signs of cooling, companies are shifting their strategic priorities for investing in and deploying it. Here are the areas we predict data and AI leaders will focus on in 2025:Enterprise AI strategies will center on post-training and specialized AI agentsCompanies will evolve how they navigate scaling laws as they shift their focus from pre-training and bigger models to post-training techniques. Were already seeing companies build agentic AI agent systems, composed of multiple models, techniques and tools that work together to improve efficiency and outputs.Companies will leverage agentic workflows at inference to evaluate AI systems for specialized tasks, such as debugging and improving quality over time with fewer resources and data.Investing in AI agents now will help organizations take a commanding lead in their respective markets as the technology grows more powerful. But few have the proper building blocks in place. AI agents require a unified foundation, free from data silos and legacy architectures. Dael Williamson, EMEA CTO at DatabricksInfrastructure will be the biggest AI investment area as companies race to AI agentsThe Economist Impact revealed that only 22% of organizations believe their current architecture can support AI workloads without modifications. We expect to see the most resources invested in this area of enterprise data infrastructure in the coming year.In Agentic AI Systems, agents must be able to work outside the boundaries of proprietary IT environments and interact with many data sources, LLMs and other components to deliver accurate and reliable outputs. Enterprises will need an end-to-end data platform an AI database to support the governance, regulation, training and evaluation required to get AI initiatives into production.A successful AI strategy starts with a solid infrastructure. Addressing fundamental components like data unification and governance through one underlying system lets organizations focus their attention on getting use cases into the real-world, where they can actually drive value for the business. Robin Sutara, Field CDO at DatabricksCompanies will use their data advantage to gain market shareIn 2024, the discourse around enterprise AI centered around internal applications that can boost employee productivity and efficiency. But domain-specific knowledge - or data intelligence - emerges as the new focus as enterprises put customer-facing applications into production. This means that companies will race to identify use cases aligned to the areas where they have a data advantage.This is one reason why customer service is such a popular starting point. Businesses often have large amounts of data on their own clients, and can use that to power AI systems that improve the support they provide. Details on each individuals past interactions can help personalize future experiences with the company.But organizations can go even deeper. Manufacturers can use data assets stemming from digital manufacturing equipment to optimize the health of their machines. Life sciences companies can use their decades of experience in drug discovery to help train AI models that enable them to discover future treatments more quickly. 
Financial services companies can build specialized models that help clients take advantage of their deep subject matter expertise to improve their own investment portfolios.Companies can realize huge efficiency gains by automating basic tasks and generating data intelligence on command. But thats just the beginning: business leaders will also use AI to unlock new growth areas, improve customer service, and ultimately give them a competitive advantage over rivals. Arsalan Tavakoli, SVP of Field EngineeringGovernance will dominate C-suite conversationsThe conversation on AI governance has so far centered on security and regulation.Executives are now recognizing the relationship between data governance and AI accuracy and reliability. A holistic approach to governance aims to ensure responsible AI development, deployment, and usage while mitigating risks and supporting regulatory compliance.Many companies have already taken the initial step of unifying metadata for their data and AI assets in one location to eliminate redundancies and improve data integrity. As enterprises deploy more AI use cases, this will serve as a critical foundation. Governing the two together ensures that AI models are generating outputs and taking action based on high-quality data sets. This improves the overall performance of the AI system, while also reducing the operational costs involved with building and maintaining it.As more businesses embrace data intelligence, leaders need to think critically about how to balance widespread access with privacy, security and cost concerns. The right end-to-end governance framework will allow companies to more easily monitor access, usage and risk, and uncover ways to improve efficiency and cut costs, giving enterprises the confidence to invest even more in their AI strategies. Trâm Phi, General CounselUpskilling will focus on boosting AI adoptionThe human-in-the-loop approach to AI projects will be required for many years to come. The past two years have framed AI upskilling as needing to understand how these systems work and prompt engineering. But weve just scratched the surface of how todays models can be applied, and the real hurdle to unlocking new applications is around human behaviors. Thats why organizations will turn their attention to driving human adoption - through refined hiring practices, home-grown internal AI applications, and more specialized use case training.In the world were operating in now, mindset matters more than skillset. Technology is evolving rapidly, so we need to look for people with an open, creative, growth mindset and a passion for learning and trying new things. Amy Reichanadter, Chief People OfficerWhats next in data + AI2025 promises to be a pivotal year, one in which both AI and the data, infrastructure and governance surrounding it, become even more of a focus area for leaders.To hear from 1k+ data and AI leaders about the challenges and opportunities of enterprise data management and AI adoption in 2025, check out the Economist Impact report: Unlocking Enterprise AIRelated: What the worlds largest and leading companies are using for AI tooling, top use cases by industry, and more in the State of Data + AI.
Decision Making/Process Automation
Management/Computer and Mathematical
null
null
null
null
null
null
news
Shaoni Mukherjee
Running Gemma 2 on an A4000 GPU
Introducing Gemma 2 a lightweight, state-of-the-art open model derived from the same advanced research and technology that powers the renowned Gemini models. Dive in to learn more!
https://www.digitalocean.com/community/tutorials/running-gemma-2-on-a-cloud-gpu
https://www.digitalocean…ud.d49bc5f7.jpeg
2024-12-31T06:43:17Z
IntroductionArtificial intelligence (AI) can solve some of the world’s biggest problems, but only if everyone has the tools to use it. On Jun 27, 2024, Google, a leading player in AI technology, launched Gemma 2 9B and 27B—a set of lightweight, advanced AI models. These models, built with the same technology as the famous Gemini models, make AI accessible to more people, marking a significant milestone in democratizing AI.Gemma 2 comes in two sizes: 9 billion (9B) and 27 billion (27B) parameters, and comes with a context length of 8K tokens. Google claims the model performs better and is more efficient than the first Gemma models. Gemma 2 also includes necessary safety improvements. The 27B model is so powerful that it competes with models twice its size, and it can run on a single NVIDIA H100 Tensor Core GPU or TPU host, lowering its cost.This democratizes AI by reducing entry barriers and enabling advanced models like Gemma 2 for inclusive and accelerated progress in artificial intelligence.PrerequisitesHardware: A system equipped with an NVIDIA A4000 GPU with sufficient VRAM (16 GB recommended).Software: CUDA Toolkit (11.7+), cuDNN, and NVIDIA drivers compatible with the GPU.Environment: Python 3.8+, PyTorch with GPU support installed.Dependencies: Install required libraries using pip install -r requirements.txt from the Gemma 2 repository.Need for a lightweight modelLightweight AI models are essential for making advanced technology more accessible, efficient, cost-effective, and sustainable. They enable various applications. Furthermore, these models drive innovation and address diverse challenges worldwide.There are several reasons why Lightweight models are essential in various fields:Speed: Due to their reduced size and complexity, lightweight models often have faster inference times. This is crucial for real-time or near-real-time data processing applications like video analysis, autonomous vehicles, or online recommendation systems.Low Computational Requirements: Lightweight models typically require fewer computational resources (such as memory and processing power) than larger models. This makes them suitable for deployment on devices with limited capabilities, such as smartphones, IoT, or edge devices.Scalability: Lightweight models are more accessible to scale across many devices or users. This scalability is particularly advantageous for applications with a broad user base, such as mobile apps, where deploying large models might not be feasible.Cost-effectiveness: The lightweight models can reduce operational costs associated with deploying and maintaining AI systems. They consume less energy and can run on less expensive hardware, making them more accessible and economical for businesses and developers.Deployment in resource-constrained environments: In environments where internet connectivity is unreliable or bandwidth is limited, lightweight models can operate effectively without requiring continuous access to cloud services.Lightweight models like Gemma 2 are crucial because they allow more people and organizations to leverage advanced AI technology, drive innovation, and create solutions for diverse challenges, all while being mindful of costs and sustainability.Introducing Gemma 2Gemma 2 is Google’s latest iteration of open-source Large Language Models (LLMs), featuring models with 9 billion (gemma-2-9b) and 27 billion (gemma-2-27b) parameters, including instruction fine-tuned variants. 
These models were trained on extensive datasets—13 trillion tokens for the 27B version and 8 trillion tokens for the 9B version—which includes web data, English text, code, and mathematical content. With an 8,000-token context length, Gemma 2 offers enhanced performance in tasks such as language understanding and text generation, attributed to improved data curation and larger training datasets. Released under a permissive license, Gemma 2 supports redistribution, commercial use, fine-tuning, and derivative works, fostering widespread adoption and innovation in AI applications. A few of the technical enhancements include interleaving local-global attention and group-query attention. Additionally, the 2B and 9B models utilize knowledge distillation instead of next token prediction, resulting in superior performance relative to their size and competitive alternatives to models 2-3 times larger.Gemma2 performanceDuring Gemma 2’s training, strict safety protocols were maintained. This included filtering pre-training data and conducting thorough testing across various metrics to detect and address potential biases and risks.Gemma 2 Performance (Image Source)The evaluation of Gemma 2’s 9B and 27B Instruction (Image Source)Discover the power of Gemma 2Before we get started, let’s gather some information about the GPU configuration.nvidia-smiollama run gemma2You’ll be amazed at how quickly the model will be retrieved using the advanced A4000 machine! In just seconds, you’ll be ready to engage in a fantastic chat with the model.1: thought-provoking question: How does neural networks work, and explain their key components?[video missing]In the blink of an eye, the model generates the output.2: Dilemma question: Imagine you are a doctor with five patients who all need organ transplants to survive, but you don’t have any organs. One day, a healthy person with compatible organsis admitted to the hospital for a routine check-up. If you use this healthy person’s organs, you can save all five of your patients. However, this would mean sacrificing the healthy person who came for routine check-up. What should you do, and why?[video missing]3: Coding: Write a function that solves a Sudoku puzzle by filling the empty cells. The Sudoku puzzle is a 9x9 grid, and the goal is to fill the grid so that each row, each column, and each of the nine 3x3 subgrids contains the digits from 1 to 9 exactly once. Empty cells are represented by the character ‘.’.Output:-def solveSudoku(puzzle): """ Solves a Sudoku puzzle. Args: puzzle (list): A list of strings representing the Sudoku puzzle. Returns: list: A list of strings representing the solved Sudoku puzzle. """ # Iterate over the rows, columns, and subgrids. for row in range(9): for col in range(9): if puzzle[row][col] == '.': # Check if the number is already in the row, column, or subgrid. for i in range(9): if puzzle[row][i] != '.' and puzzle[i][col] != '.' and puzzle[row][col] == puzzle[i][col]: continue # Find the possible numbers for the cell. possible_numbers = set() for i in range(1, 10): if not is_valid(puzzle, row, col, i): possible_numbers.add(i) # Try each possible number. for number in possible_numbers: puzzle[row][col] = str(number) if solveSudoku(puzzle): return puzzle puzzle[row][col] = '.' # If no solution is found, return the original puzzle. 
return puzzle# Helper function to check if a number is valid for a cell.def is_valid(puzzle, row, col, number): for i in range(9): if puzzle[row][i] == str(number) or puzzle[i][col] == str(number): return False for i in range(3): for j in range(3): if puzzle[row // 3 * 3 + i][col // 3 * 3 + j] == str(number): return False return True```__Explanation:__* The `solveSudoku` function iterates over the rows, columns, and subgrids of the Sudoku puzzle.* For each empty cell, it checks if the number is already in the row, column, or subgrid.* It then finds the possible numbers for the cell and tries each one.* If a solution is found, it returns the solved puzzle. Otherwise, it returns the original puzzle.* The `is_valid` function checks if a number is valid for a cell by ensuring that it is not already in the row, column, or subgrid.__Example Usage:__```python# Example Sudoku puzzle.puzzle = [ ['5', '3', '.', '.', '7', '8', '.', '.', '.'], ['.', '7', '4', '6', '.', '5', '8', '9', '.'], ['.', '.', '8', '.', '9', '1', '.', '6', '.'], // ... Rest of the puzzle.]# Solve the Sudoku puzzle.solution = solveSudoku(puzzle)# Print the solved puzzle.print(solution)```ConclusionThe advancements showcased by SOTA model Gemma 2 is groundbreaking within the AI landscape. With configurations utilizing 9 billion and 27 billion parameters, Gemma 2 demonstrates improved performance, efficiency, and crucial safety enhancements. It can rival models twice its size and operate cost-effectively on a single NVIDIA Tensor Core GPU or TPU host, making advanced AI accessible to a wider range of developers and researchers. Gemma 2’s open-source nature, extensive training, and technical enhancements underscore its superior performance, making it a crucial development in AI technology.ReferencesGemma 2: Improving Open Language Models at a Practical SizeGemma 2 is now available to researchers and developers
Unknown
Unknown
null
null
null
null
null
null
news
Mike Loukides
Radar Trends to Watch: December 2024
It’s the end of the year for Radar! We hope all of our readers enjoy the holidays. Here’s one prediction for 2025: Is this the end of the road for improving LLM performance by scaling either the number of parameters or the training data? No one knows yet. Regardless of the answer, we expect interest […]
https://www.oreilly.com/radar/radar-trends-to-watch-december-2024/
https://www.oreilly.com/…r-1400x950-1.png
2024-12-03T11:10:28Z
Its the end of the year for Radar! We hope all of our readers enjoy the holidays. Heres one prediction for 2025:Is this the end of the road for improving LLM performance by scaling either the number of parameters or the training data? No one knows yet. Regardless of the answer, we expect interest to shift toward smaller models. Well grudgingly allow the 70B parameter model to qualify as small, but we really mean 20B or fewer parameters. These models will prove to be easier for companies developing AI-enabled applications to work with: They wont cost as much to run and theyll be simpler to fine-tune for specialized applications. Very few applications will need a fully general language model.Learn faster. Dig deeper. See farther.Artificial IntelligenceThe OpenGPT-X project has released its open large language model, Teuken-7B. This model is significant because it supports 24 European languages and is designed to be compliant with European law. It is available on HuggingFace.OLMo 2 is a newly released, fully open, small language model that comes in 7B and 13B sizes. Both versions claim the best performance in their group. NVidia has announced Fugatto, a new generative text-to-audio model that can create completely new kinds of sounds. They position it as a tool for creators.Anthropic has announced the Developer Preview of their Model Context Protocol. MCP allows Claude Desktop to communicate securely with other resources. The MCP server limits the services that are exposed to Claude, filters Claudes requests, and prevents data from being exposed over the internet.OpenScholar is an open source language model designed to support scientific research. Its significantly more accurate than GPT-4o and more economical to run. It uses RAG to access a large database of open-access scientific papers, which ensures that citations are accurate.Meta has partnered with VSParticle to create new materials from instructions generated by AI. They are focusing on nanoporous materials, which could be catalysts for breaking down CO2 into useful products.Perplexity has introduced in-app shopping: Users can search for something, then have Perplexity buy it. Its the first widely available example of an AI agent that changes the state of the physical world.Research has shown that generative AI models have their own distinctive styles, not unlike human writers. Stylistic analysis can identify the source of a text to the model that generated it.Mistral has released Pixtral Large, a 124B parameter multimodal model with benchmark performance on a par with the latest versions of other frontier models.Mozillas Common Voice project collects speech samples in languages other than Anglo-American English to help developers build voice-enabled applications using other languages and dialects. The project is open source.Mechanistic interpretability is a research area that uses AI to examine whats happening within each layer of a large language model. It provides a path toward AI interpretability: the ability to understand why an AI produces any output that it generates, and possibly to control that output.Googles Pixel phones will be able to monitor phone conversations to detect scams in real time. Processing takes place entirely on the phone. The feature is off by default and can be enabled on a per-call basis. Another new feature detects stalkerware, apps that collect data without the users consent or knowledge.The Common Corpus dataset for training large language models is now open and available on Hugging Face. 
The dataset contains over 2T tokens taken from permissibly licensed sources, and it documents the provenance of every source.OpenAIs newest model, Orion, is an improvement over GPT-4. But is it a significant improvement? Apparently not. This may be the end of the road for improving LLMs by making them larger. (And is Orion GPT-5?)FrontierMath is a new AI benchmark that is based on very tough mathematical problems. At this point, no language model scores higher than 2% (Gemini 1.5 Pro).Separating the instruments in a musical performance is tough, but its possible. Heres an AI-free masterpiece of signal processing that attempts to do so. Can we turn a performance back into sheet music?Standard Intelligence has released hertz-dev, a new model for real-time voice synthesis. It was trained purely on audio and can participate in unscripted conversations without the use of text. Microsofts Magentic-One is a generalist agentic system that is capable of performing complex tasks. Magentic-One is open source for researchers and developers. Microsoft has also released AutoGenBench, an open source tool for evaluating the performance of agentic systems.ChainForge is a new visual tool for prompt engineering. It can be used to test prompts against multiple models and evaluate the quality of the response.AI was used to de-age Tom Hanks and Robin Wright in a new film, allowing the actors to play their characters across a 60-year time span.Anthropic has released Claude 3.5 Haiku, a new version of its smallest and fastest model. The company claims that its performance on many benchmarks is superior to Claude 3 Opus, its previous leading model. Anthropic has also significantly increased the price for using Haiku.OpenAI has introduced predicted outputs. If the output to a prompt is largely known ahead of timefor example, if youre asking GPT to modify a fileyou can upload the expected result with the prompt, and GPT will make the changes necessary. Predicted outputs reduce latency; apparently they dont reduce cost. Fortunately, AI Psychiatry has nothing to do with psychoanalyzing human patients. Its a forensic tool for postmortem analysis of AI failures that allows investigators to recover the exact model that was in use when the failure occurred.SmolLM2 is a new small language model, designed for running on devices. It comes in 135M, 360M, and 1.7B parameter versions. Early reports say that its performance is impressive.vLLM is a framework for serving LLMs. It works with most of the language models on Hugging Face. Not only does it claim to be simpler, but it also claims to have significant performance and cost benefits by using a key-value store to cache input tokens.AI Flame Graphs show developers what their models are doing in detail. If youre concerned about performance or energy use, they are revolutionary.Googles Project Jarvis is reported to be the companys answer to Anthropics computer use API. Jarvis takes over a browser (presumably Chrome) to perform tasks on behalf of the user.NotebookLMs ability to generate a podcast from documents is impressive. Can other models do the same thing? NotebookLlama is an open source project that generates podcasts using the Llama models. Programmingbpftune is a utility that constantly tunes Linux system performance using observability data from BPF. It has zero configurables (no configuration) and low overhead and is smart enough to stay away from settings a system administrator has made. 
It apparently does not use AI.Kyanos is a new open source network analysis tool thats based on eBPF. Because it has access to eBPF data, it can filter packets by process or by service, and it can give precise information about packet latency.VMware Fusion and VMware Workstation are now free to all users, including commercial users. Broadcom will continue to develop the products but will cease providing troubleshooting support for users.OpenCoder is a family of language models for generating code. Its completely open source, and training data, the data pipeline, training results, and training protocols are all available in addition to the code. Its intent is to encourage further experimentation and research on code generation.Mergiraf is a tool for solving Git merge conflicts by using an understanding of common programming languages (including Java, Rust, and Go) and file formats (including JSON, HTML, XML, and YAML). The authors claim that new languages can be added easily.A proposal has been published for Safe C++, a new version of C++ that will incorporate memory safety features.DataChain is a Python library for working with structured data in the context of artificial intelligence. Its designed for building data pipelines and manipulating data at scale.NoCode GitHub? GitHub Spark allows users to create small micro-apps, or sparks, without writing any code. What may be more important than no code is no deployment; sparks are deployed on GitHubs infrastructure and accessed through the web.Using Git to backup Linuxs /etc directory is obvious, once you think of it.Ractor is an Actor framework for Rust, which means that you can program in Rust somewhat as if it were Erlang. Im impressed by the longest, most complicated Hello, World that Ive ever seen.Kubernetes is a platform for building platforms. And platforms need to serve both development and operations teams.GitHub Copilot can now use models other than GPT. Users can select Claude Sonnet or Gemini in addition to different OpenAI models. Other new features include autocode review, an upgrade assistant for Java, multifile editing, and something called Spark that sounds something like Claudes Artifacts.Is your AI-generated code secure? No. Were not likely to stop using tools like Copilot and Cursor, but we need to understand the challenge: AI models were trained on publicly available code. Most publicly available code has vulnerabilities. Those will be reflected in the AIs output.Does Java need another build tool? Mill is waiting to take over. Mill claims to be 510x faster than Maven, 24x faster than Gradle.Amphion is an open source toolkit for generating all forms of audio, including music and speech. SecurityRobotsGrasso is an AI-powered trashbot: a mobile robot made of trash. It uses Llava-v1.6-mistral-7B to understand visual input from its camera, and Mistral-7B for prompts and responses. (It doesnt understand or generate speech.)Meta has released several new projects for touch perception, a crucial element in building AI-driven robots that can interact with the real world. Digit 360 is a tactile digital fingertip, Sparsh is an encoder for tactile data, and Digit Plexus is a platform for building artificial hands.Tie two unintelligent micro robots (bristlebots) together with a short, flexible tether and they acquire the ability to solve simple problems.WebWant to run Linux in your browser? You can. WebVM is a virtual machine that runs in a browser. 
Linux in the browser may not be that interesting; it's more important as another example of Wasm's abilities.
Virtual Reality
Want to talk to Rosa Parks or Abraham Lincoln? Try ENGAGE XR, a tool that combines VR and generative AI. Whether this is actually history is an interesting question; the bus in the Rosa Parks example looks like a modern European bus, not an American bus from the 1950s.
Quantum Computing
Google's DeepMind has developed AlphaQubit, an AI system that detects errors in quantum systems. Error correction has made tremendous progress in the past year but still remains a major problem in quantum computing.
Biology
Content Synthesis/Decision Making
Life, Physical, and Social Science/Education, Training, and Library
null
null
null
null
null
null
news
Maria Deutscher
Microsoft releases Phi-4 language model trained mainly on synthetic data
Microsoft Corp. has developed a small language model that can solve certain math problems better than algorithms several times its size. The company revealed the model, Phi-4, on Thursday. The algorithm’s performance is notable mainly because of the way it was built: Microsoft trained Phi-4 mostly on synthetic, or machine-generated, data rather than web content […]The post Microsoft releases Phi-4 language model trained mainly on synthetic data appeared first on SiliconANGLE.
https://siliconangle.com/2024/12/13/microsoft-releases-phi-4-language-model-trained-mainly-synthetic-data/
https://d15shllkswkct0.c…2/microsoft2.png
2024-12-13T19:57:56Z
Microsoft Corp. has developed a small language model that can solve certain math problems better than algorithms several times its size.

The company revealed the model, Phi-4, on Thursday. The algorithm's performance is notable mainly because of the way it was built: Microsoft trained Phi-4 mostly on synthetic, or machine-generated, data rather than web content as is the usual practice. The model's math prowess hints that incorporating more synthetic files into small models' training datasets could be a way to boost their reasoning skills.

Phi-4 is the fourth iteration of an open-source language model series Microsoft introduced last year. Its architecture is nearly identical to that of its predecessor, Phi-3-medium. Both neural networks feature 14 billion parameters and can process prompts with up to 4,000 tokens, units of data that each contain a few characters.

One difference is that Phi-4 features an upgraded tokenizer. This is a component that breaks down user prompts into tokens, which makes the text easier to process. Microsoft also enhanced Phi-4's attention mechanism. This is a software component that language models use to find the most important details in a piece of text. The attention mechanism in the previous-generation Phi-3-medium could only consider up to 2,000 tokens' worth of user input, while Phi-4 can analyze 4,000.

The main innovation in Phi-4 is the way it was trained. Microsoft trained the model using no fewer than 50 synthetic datasets that collectively contained about 400 billion tokens. Its researchers created the files through a multistep process.

In the first phase, Microsoft collected content from the public web, its existing artificial intelligence training datasets and other sources. The information included, among other things, tens of millions of question-and-answer pairs. Microsoft removed questions to which it found multiple identical answers online. The reason, the company explained, is that this is often a sign a question is too simple. While at it, Microsoft removed questions that appeared too complicated because the available answers diverged significantly from one another.

The company leveraged this initial batch of files as a template from which it generated synthetic data. Microsoft's researchers used several different methods to produce the synthetic files. In one phase of the project, the researchers used an AI to rewrite information from the web into test questions. Microsoft then had the AI model generate answers. Lastly, the company instructed the algorithm to analyze its answers and improve them where possible.

In another phase of the project, Microsoft used open-source code as the starting point of the synthetic data generation process. The company entered a code snippet into an AI and asked it to generate a question to which the correct answer is the provided code snippet. This question was then incorporated into the training dataset that Microsoft used to develop Phi-4.

After creating the initial version of the dataset, Microsoft checked it for accuracy using a set of automated workflows. "We incorporate tests for validating our reasoning-heavy synthetic datasets," Phi-4's developers wrote in a research paper. "The synthetic code data is validated through execution loops and tests. For scientific datasets, the questions are extracted from scientific materials."

After it completed the training process, Microsoft evaluated Phi-4's output quality across more than a dozen benchmarks. 
The algorithm outperformed its predecessor on all but one of them, in some cases by more than 20%. Notably, Phi-4 also managed to best GPT-4o and Meta Platforms Inc.'s recently released Llama 3.3 across two benchmarks: GPQA and MATH. The former dataset comprises 448 multiple-choice questions spanning various scientific fields. MATH includes math problems. According to Microsoft, Phi-4 outperformed Llama 3.3 by more than 5% across both tests despite the fact it has a fifth as many parameters.

“Phi-4 outperforms comparable and larger models on math related reasoning due to advancements throughout the processes, including the use of high-quality synthetic datasets, curation of high-quality organic data, and post-training innovations,” Ece Kamar, managing director of Microsoft’s AI Frontiers group, wrote in a blog post.

Phi-4 is currently accessible through the company's Azure AI Foundry service. Microsoft plans to make the code available on Hugging Face next week. Photo: Microsoft
Unknown
Education, Training, and Library/Computer and Mathematical
null
null
null
null
null
null
news
Piyush Thakur
Fine Tune PaliGemma with QLoRA for Visual Question Answering
Table of Contents Fine Tune PaliGemma with QLoRA for Visual Question Answering What Is PaliGemma? What Is a Vision-Language Model? Architecture of PaliGemma How Is PaliGemma Trained? Available Model Checkpoints Use Cases of PaliGemma Why PaliGemma? Inference with PaliGemma Setup…The post Fine Tune PaliGemma with QLoRA for Visual Question Answering appeared first on PyImageSearch.
https://pyimagesearch.com/2024/12/02/fine-tune-paligemma-with-qlora-for-visual-question-answering/
https://pyimagesearch.co…ing-featured.png
2024-12-02T14:00:00Z
Table of ContentsIn this tutorial, we will learn about PaliGemma briefly and demonstrate how to fine-tune it using the QLoRA (Quantized Low-Rank Adaptation) technique to achieve improved inference results, helping us create a custom vision-language model tailored to our specific tasks. This lesson is the 1st of a 4-part series on Vision-Language Models:Fine Tune PaliGemma with QLoRA for Visual Question Answering (this tutorial)Vision-Language Model: PaliGemma for Image Description Generator and MoreDeploy Gradio Applications on Hugging Face SpacesObject Detection with PaliGemmaTo learn how to use and fine-tune PaliGemma on your own dataset, just keep reading.PaliGemma is an open Vision-Language Model (VLM) released by Google. It builds on the PaLI (Pathways Language and Image) series and the Gemma family of language models. While many VLMs are getting larger to handle more tasks, PaliGemma focuses on being smaller and more efficient.Compact Size: Only about 3B parameters, significantly smaller than many VLMs.Competitive Performance: Can be fine-tuned to match the capabilities of larger models.Adaptability: Efficient and easy to customize for various tasks.A Vision-Language Model (VLM) combines visual and textual understanding, processing images and text simultaneously to grasp their relationship. This enables VLMs to handle tasks like image captioning, visual question answering, and more.How VLMs WorkImage Processing: A component specifically trained to interpret visual data.Language Processing: Another component designed to handle textual information.Joint Understanding: The model learns how visuals and language relate, enabling it to describe images or answer questions about them.For example, a VLM can look at a picture of a dog and generate a descriptive sentence about it.As VLMs have evolved, they have become more powerful, with larger models able to handle more complex tasks. However, recent efforts also focus on creating smaller, more efficient VLMs (e.g., PaliGemma) that can perform just as well without massive computational resources.PaliGemma (architecture shown in Figure 1) is a simple input+text-in, text-out model that takes an image and text as input and generates text as output.PaliGemma’s architecture is built on three key components:1. Image EncoderPaliGemma uses the SigLIP (Sigmoid loss for Language-Image Pre-training) checkpoint, a publicly available image encoder.Specifically, it leverages the “shape optimized” Vision Transformer (ViT So400m).This model is pre-trained on a large scale using contrastive learning with a sigmoid loss, enabling it to process and understand images effectively.2. Decoder-Only Language ModelPaliGemma integrates the Gemma-2B model, a publicly available pre-trained language model for the language component.As a decoder-only model, it is designed to generate text based on both image and language inputs, allowing for tasks like image captioning or answering questions about an image.3. Linear Layer ProjectionThe output tokens from the SigLIP encoder are linearly projected into the same dimension as the Gemma-2B models vocabulary tokens.This linear projection ensures that the image and language features can be seamlessly combined and used for downstream tasks.Inspired by the Pathways Language and Image (PaLI-3) models, PaliGemma follows the same training stages. The training stages are as follows:1. 
Stage 0: Unimodal PretrainingThe unimodal components of PaliGemma are pre-trained individually to leverage established training methods.No custom unimodal pre-training is performed; publicly available checkpoints are utilized.A SigLIP image encoder, specifically the shape optimized ViT-So400m model, is smaller yet effective compared to larger models like ViT-G.The model adopts the Gemma-2B decoder-only language model, balancing size and performance.2. Stage 1: Multimodal PretrainingThis stage combines unimodal models and trains them on various vision-language tasks.The focus is on creating a base model that can be fine-tuned well for diverse tasks rather than just aligning modalities.The image encoder remains frozen initially to maintain representation quality, although it later learns from tasks that provide valuable signals.Training occurs at 224px resolution with 1 billion examples, aiming to cover a wide array of concepts and languages.3. Stage 2: Resolution IncreaseThe model from Stage 1 is valuable but limited to 224×224 pixel resolution, which is insufficient for tasks requiring higher detail.Two additional checkpoints are trained at resolutions of 448×448 and 896×896 pixels.Stage 2 emphasizes increasing resolution while retaining the task diversity from Stage 1, with fewer examples but higher information density.The text sequence length is increased to 512 tokens for tasks that demand detailed input.4. Stage 3: TransferStages 1 and 2 produce checkpoints at different resolutions rich in visual knowledge.These models need to be fine-tuned for specific tasks to be user-friendly.The model can be adapted for various inputs, including multiple images and bounding boxes.Google has released three types of PaliGemma models:Pretrained Checkpoints: These are base models trained on large datasets that can be further fine-tuned for specific downstream tasks.Mix Checkpoints: These base models have been fine-tuned on various tasks, making them easy to use with general-purpose free-text prompts.Fine-Tuned Checkpoints: These base models have been fine-tuned for specific tasks, providing targeted performance for particular applications.Keeping vision applications in mind, here are some tasks PaliGemma can be used for:Image and Short Video Captioning: Asking the model to generate descriptive captions for pictures or short videos.Visual Question Answering: Asking the model to generate relevant answers to specific questions about an image.Document Understanding: Asking the model to generate answers to specific questions about the text shown in the image.Diagram Understanding: Asking the model to generate answers to specific questions about diagrams shown in the image.Object Detection: Asking the model to identify (output bounding box coordinates) and label objects within an image.Object Segmentation: Asking the model to segment (output polygon coordinates) that bound the detected objects in an image (shown in Figure 2).Science Question Answering: Asking the model to provide specific answers to questions related to scientific content, using visual inputs like graphs or diagrams in addition to text.PaliGemmas compact architecture makes it highly effective across multiple tasks while keeping the model size manageable. Its key components (e.g., an image encoder, a language decoder, and a projection layer) allow it to handle complex tasks without requiring the massive computational power that larger models might need. 
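To make the projection layer described above more concrete, the sketch below shows how per-patch image embeddings can be linearly mapped into a language model's token-embedding space and concatenated with the embedded text tokens. The dimensions, token counts, and variable names are illustrative assumptions for this sketch, not PaliGemma's exact internals:

```python
import torch
import torch.nn as nn

# Illustrative dimensions (assumed for this sketch):
# a SigLIP-style encoder emits one embedding per image patch, and the
# projection maps each of them into the decoder's token-embedding width.
vision_dim = 1152       # assumed width of the image encoder output
text_dim = 2048         # assumed embedding width of the language decoder
num_image_tokens = 256  # assumed number of image patches/tokens

projector = nn.Linear(vision_dim, text_dim)

image_tokens = torch.randn(1, num_image_tokens, vision_dim)  # encoder output
projected = projector(image_tokens)                          # (1, 256, 2048)

# The projected image tokens can now be concatenated with the embedded text
# prompt and fed to the decoder as a single sequence.
text_embeddings = torch.randn(1, 16, text_dim)  # stand-in for an embedded prompt
decoder_input = torch.cat([projected, text_embeddings], dim=1)
print(decoder_input.shape)  # torch.Size([1, 272, 2048])
```

In PaliGemma itself, this role is played by the linear projection that aligns the SigLIP output tokens with the dimension of the Gemma-2B vocabulary tokens, as described above.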
Overall, PaliGemmas flexibility makes it a powerful tool for any task requiring a deep understanding of images and text.Enough talk. Let us look at how we can use Hugging Face Transformers, to do inference with the model.!pip install -q -U transformersFirst, we install the transformers library from Hugging Face, which allows us to use pre-trained models for inference easily.import torchimport requestsfrom PIL import Imagefrom transformers import AutoProcessor, PaliGemmaForConditionalGenerationWe import the necessary libraries.We import torch to provide support for tensor computation.We import requests to send HTTP requests for downloading images from the internet.We import Image from PIL to open, manipulate, and process images.From the transformers library, we import AutoProcessor and PaliGemmaForConditionalGeneration to work with the PaliGemma model.pretrained_model_id = "google/paligemma-3b-pt-224"processor = AutoProcessor.from_pretrained(pretrained_model_id)pretrained_model = PaliGemmaForConditionalGeneration.from_pretrained(pretrained_model_id)We define a string variable pretrained_model_id with the identifier for the specific PaliGemma pre-trained model (3B parameters, 224px resolution) from the Hugging Face Model Hub.We load the pre-trained processor associated with the specified pretrained_model_id. It is responsible for pre-processing input data (e.g., tokenizing text or resizing images) and post-processing the model’s outputs (e.g., decoding).We load the pre-trained PaliGemma model associated with the specified pretrained_model_id. It is responsible for generating text outputs based on the inputs (image + text) provided.prompt = "What is behind the cat?"image_file = "https://huggingface.co/datasets/huggingface/documentation-images/resolve/main/cat.png?download=true"raw_image = Image.open(requests.get(image_file, stream=True).raw)inputs = processor(raw_image.convert("RGB"), prompt, return_tensors="pt")output = pretrained_model.generate(**inputs, max_new_tokens=20)print(processor.decode(output[0], skip_special_tokens=True)[len(prompt):])We define the prompt as a text input.We assign the URL of an image to the variable image_file. The image is hosted on the Hugging Face dataset repository.We download the image from the URL using the requests library and open it using the PIL library (Image.open()). stream=True ensures that the file is streamed in memory rather than downloaded fully before opening.We process the image (raw_image) and the text prompt (prompt) using the AutoProcessor. We convert the image to RGB format and prepare the inputs for the model. return_tensors="pt" returns the processed data as PyTorch tensors to be compatible with the pre-trained model. We generate the models output based on the processed inputs, with a maximum of 20 new tokens in the response. It tries to answer the prompt using the image and text as inputs.We decode the generated output from the model using processor.decode, converting the tokenized output back into human-readable text. skip_special_tokens=True removes any special tokens that the model might have generated (e.g., start/end tokens). [len(prompt):] slices the response to exclude the prompt itself, printing only the generated answer.In Figure 3, you can see the image and the generated output.The generated answer is not accurate. Now, we will show the steps to fine-tune the model for improved results.Fine-tuning involves adapting a pre-trained model to a specific task or dataset by training it further on new data. 
This allows the model to specialize and improve performance on domain-specific tasks. In our case, we fine-tune the PaliGemma model on a custom visual question answering dataset, vqav2-small, a small subset of the VQA v2.0 dataset hosted by Merve on Hugging Face.The vqav2_small dataset (shown in Figure 4) consists of 21.4k examples with three features: image, question, and multiple_choice_answer. The model is trained using a combination of the image and question as input, while the multiple_choice_answer serves as the label for fine-tuning.!pip install -q -U transformers datasets peft bitsandbytesWe install the following libraries:transformers: for using pre-trained models.datasets: to load and process datasets.peft: to fine-tune large models with fewer parameters efficiently.bitsandbytes: to optimize the training of large models through quantization.from huggingface_hub import notebook_loginnotebook_login()This allows us to log in to our Hugging Face account.We import the notebook_login function from the huggingface_hub library, which authenticates our Hugging Face account in a notebook environment.We run notebook_login, which prompts us to enter a Hugging Face API token to log in to access Hugging Face models and datasets.import torchimport requestsfrom PIL import Imagefrom datasets import load_datasetfrom peft import get_peft_model, LoraConfigfrom transformers import Trainerfrom transformers import TrainingArgumentsfrom transformers import PaliGemmaProcessorfrom transformers import BitsAndBytesConfigfrom transformers import AutoProcessor, PaliGemmaForConditionalGenerationWe import several essential libraries to facilitate our tasks. First, we import a torch library for tensor computation support. Next, requests allows us to send HTTP requests for downloading images from the internet, while PIL is utilized to open, manipulate, and process these images. The load_dataset function from the datasets library enables us to load datasets for training and evaluation. Additionally, we import get_peft_model and LoraConfig from the peft library; the former helps create a model with parameter-efficient fine-tuning (PEFT) techniques, and the latter configures the Low-Rank Adaptation (LoRA) method for fine-tuning. To simplify the training and evaluation of models, we bring in the Trainer class and set the training configuration using TrainingArguments from the transformers library. The PaliGemmaProcessor is essential for preparing inputs for the PaliGemma model and managing preprocessing tasks (e.g., tokenization). For optimizing memory usage and computational efficiency during the training of large models, we utilize BitsAndBytesConfig to configure options related to quantization. Finally, the AutoProcessor automatically selects the appropriate processor based on the model, and the PaliGemmaForConditionalGeneration class is used to generate outputs based on the input data.device = "cuda"model_id = "google/paligemma-3b-pt-224"Next, we define a string variable device as cuda. 
This will enable the model to run on a GPU, speeding up the training and evaluation computational speed.We also define a string variable model_id with the identifier for the specific PaliGemma pre-trained model (3B parameters, 224px resolution) from the Hugging Face Model Hub.ds = load_dataset('merve/vqav2-small')We use the load_dataset function from the datasets library to load the dataset merve/vqav2-small, hosted by Merve on Hugging Face.ds_dataframe = ds['validation'].to_pandas()Here, we use the to_pandas() method to convert the dataset to a dataframe (shown in Figure 5) to inspect a few of the rows and columns.We split the dataset into training and testing sets for fine-tuning and inference.split_ds = ds["validation"].train_test_split(test_size=0.05)train_ds = split_ds["train"]test_ds = split_ds["test"]We use the train_test_split method to split the dataset into training and testing sets. We specify a test_size of 0.05, meaning that 5% of the data will be allocated for testing.We preprocess the dataset to convert it into a suitable format for model fine-tuning, which includes processing text, images, and labels.processor = PaliGemmaProcessor.from_pretrained(model_id)def collate_fn(examples): texts = [f"<image> <bos> answer {example['question']}" for example in examples] labels= [example['multiple_choice_answer'] for example in examples] images = [example["image"].convert("RGB") for example in examples] tokens = processor(text=texts, images=images, suffix=labels, return_tensors="pt", padding="longest") tokens = tokens.to(torch.bfloat16).to(device) return tokensWe initialize the PaliGemmaProcessor using a pre-trained model specified by model_id and define a custom collate_fn function to preprocess a list of examples from the dataset. This function performs several key tasks:Formats each example’s question into a text prompt with special tokens.Extracts the multiple-choice answers and stores them in a list called labels.Converts each image in the examples to RGB format, collecting them in a list named images.Next, we utilize the PaliGemmaProcessor to tokenize the text, images, and labels, returning the processed data as PyTorch tensors with padding to the longest sequence. For efficiency, we convert the processed tensors to the bfloat16 data type and move them to the specified device, which is the GPU in this case.Finally, the function returns the processed tokens, making them ready for input into the model.Since our dataset closely resembles the ones PaliGemma was initially trained on, there’s no need to fine-tune the image encoder or the multimodal projector. We will focus on fine-tuning only the text decoder.model = PaliGemmaForConditionalGeneration.from_pretrained(model_id, torch_dtype=torch.bfloat16).to(device)for param in model.vision_tower.parameters(): param.requires_grad = Falsefor param in model.multi_modal_projector.parameters(): param.requires_grad = FalseFirst, we load the PaliGemma model using the model ID (model_id) provided. We convert the models parameters to the bfloat16 data type for more efficient memory use and computation. 
We also transfer the model to the device (GPU here).Next, we loop through all the models vision_tower parameters and set requires_grad = False to freeze the parameters, preventing them from being updated during fine-tuning.Finally, we loop through all the models multi_modal_projector parameters and set requires_grad = False to freeze the parameters, preventing them from being updated during fine-tuning.Now, we set up the configuration to use the Quantized Low-Rank Adapters (QLoRA) technique for model optimization and fine-tuning.bnb_config = BitsAndBytesConfig( load_in_4bit=True, bnb_4bit_quant_type="nf4", bnb_4bit_compute_dtype=torch.bfloat16)lora_config = LoraConfig( r=8, target_modules=["q_proj", "o_proj", "k_proj", "v_proj", "gate_proj", "up_proj", "down_proj"], task_type="CAUSAL_LM",)load_in_4bit=True: enables 4-bit quantization to reduce memory consumption.bnb_4bit_quant_type="nf4": specifies the quantization type, here using "nf4" for more efficient weight representation.bnb_4bit_compute_dtype=torch.bfloat16: sets the computation to use bfloat16 to improve computational efficiency during training.r=8: sets the rank (number of dimensions) for low-rank adaptation.target_modules=["q_proj", "o_proj", "k_proj", "v_proj", "gate_proj", "up_proj", "down_proj"]: specifies which modules to fine-tune within the models language tower (e.g., query, key, and value projections).task_type="CAUSAL_LM": defines the task type for the fine-tuning, in this case, causal language modeling.model = PaliGemmaForConditionalGeneration.from_pretrained(model_id, quantization_config=bnb_config, device_map={"":0})model = get_peft_model(model, lora_config)model.print_trainable_parameters() # trainable params: 11,298,816 || all params: 2,934,765,296 || trainable%: 0.3850First, we load the pre-trained PaliGemma model with the specified model ID. We apply the bnb_config for 4-bit quantization to optimize memory usage. We use device_map={"":0} to map the model to the default device (typically GPU 0).Next, we use get_peft_model to apply the LoRA (Low-Rank Adaptation) configuration to fine-tune the model efficiently by updating only specific parts of the model (e.g., language tower).Finally, for verification, we print the total number of trainable parameters (11,298,816) out of the total model parameters (2,934,765,296), showing that only 0.385% of the model is being fine-tuned for efficiency.We define the training arguments for fine-tuning a model using the Hugging Face Trainer API, including configurations for the number of epochs, batch size, learning rate, optimizer, logging, and saving checkpoints.args=TrainingArguments( num_train_epochs=2, per_device_train_batch_size=4, gradient_accumulation_steps=4, warmup_steps=2, learning_rate=2e-5, weight_decay=1e-6, adam_beta2=0.999, logging_steps=100, optim="adamw_hf", save_strategy="steps", save_steps=1000, save_total_limit=1, output_dir="finetuned_paligemma_vqav2_small", bf16=True, dataloader_pin_memory=False report_to=["tensorboard"])We set up the training configuration by specifying key parameters such as training for 2 epochs, a batch size of 4 per device, and accumulating gradients over 4 steps to simulate a larger batch size. To ensure stability during early training, we include 2 warmup_steps and use a learning_rate of 2e-5 with a small weight_decay of 1e-6 for regularization. The adamw_hf optimizer is configured with an adam_beta2 value of 0.999, and metrics are logged every 100 steps using logging_steps. 
We also save model checkpoints every 1,000 steps (as specified by save_steps), limiting the number of saved checkpoints to 1 (using save_total_limit). The checkpoints are stored in the finetuned_paligemma_vqav2_small directory, as defined in output_dir.Additionally, we enable bfloat16 precision (bf16=True) to reduce memory usage and disable data loader memory pinning (dataloader_pin_memory=False), which can help decrease memory overhead in specific environments.Finally, we set report_to=["tensorboard"] to log the training progress.We initialize a Hugging Face Trainer with the specified model, training dataset, data collation function, and training arguments, then start the fine-tuning process.trainer = Trainer( model=model, train_dataset=train_ds, data_collator=collate_fn, args=args)trainer.train()We create an instance of the Hugging Face Trainer class with the following parameters:The pre-trained model to be fine-tuned.The dataset (train_ds) and data collator (collate_fn) to handle batching and preprocessing.Training configurations from args to specify details (e.g., batch size and learning rate).Finally, we start the training process (shown in Figure 6) using trainer.train().The training progress is also visualized on TensorBoard here.We upload our fine-tuned model, which includes the safetensors file and model configurations, to the Hub so that it can be used later for inference. The fine-tuned model can be found here: Hugging Face Model Hub.trainer.push_to_hub()We use the push_to_hub() method of the Trainer class to accomplish this.Now, we can use our uploaded fine-tuned PaliGemma model to run inference on examples from our test split dataset.pretrained_model_id = "google/paligemma-3b-pt-224"finetuned_model_id = "pyimagesearch/finetuned_paligemma_vqav2_small"processor = AutoProcessor.from_pretrained(pretrained_model_id)model = PaliGemmaForConditionalGeneration.from_pretrained(finetuned_model_id)We first initialize the processor with pretrained_model_id to handle input preprocessing. Next, we load the model using finetuned_model_id, which points to our fine-tuned version of PaliGemma.prompt = "What is behind the cat?"image_file = "https://huggingface.co/datasets/huggingface/documentation-images/resolve/main/cat.png?download=true"raw_image = Image.open(requests.get(image_file, stream=True).raw)inputs = processor(raw_image.convert("RGB"), prompt, return_tensors="pt")output = model.generate(**inputs, max_new_tokens=20)print(processor.decode(output[0], skip_special_tokens=True)[len(prompt):])Here, we have the same inference code as before. We define the text prompt and download the image using requests, then open it with the PIL library (Image.open()). After that, we preprocess the image and prompt using the processor, generate a response from the model, and finally decode the output to get the answer.When we ran inference with the pre-trained model, the output generated was antique (see Figure 3), which wasn’t entirely accurate. 
However, after fine-tuning, the model generated a gramophone (shown in Figure 7), which is a more accurate result.

Now, let's run inference on some examples from the test split dataset and observe the results. From this point, the inference logic remains the same, with the only difference being that the images are now sourced from the test split dataset, and the input text prompt is "Describe the image."

```python
prompt = "Describe the image."
image_file = test_ds[9]["image"]
inputs = processor(image_file, prompt, return_tensors="pt")
output = model.generate(**inputs, max_new_tokens=20)
print(processor.decode(output[0], skip_special_tokens=True)[len(prompt):])
```

The generated output from the fine-tuned model (Figure 8) is: skateboarding ramp.

```python
prompt = "Describe the image."
image_file = test_ds[99]["image"]
inputs = processor(image_file, prompt, return_tensors="pt")
output = model.generate(**inputs, max_new_tokens=20)
print(processor.decode(output[0], skip_special_tokens=True)[len(prompt):])
```

The generated output from the fine-tuned model (Figure 9) is: dollhouse kitchen with table and chairs, stove, and food on table.

```python
prompt = "Describe the image."
image_file = test_ds[69]["image"]
inputs = processor(image_file, prompt, return_tensors="pt")
output = model.generate(**inputs, max_new_tokens=20)
print(processor.decode(output[0], skip_special_tokens=True)[len(prompt):])
```

The generated output from the fine-tuned model (Figure 10) is: train wheel and track.

In this tutorial, we covered the following key points:

Overview of PaliGemma: We explored PaliGemma, a vision-language model (VLM), discussing its architecture, training methods, and available model checkpoints. We highlighted its use cases and benefits, emphasizing its efficiency and performance.
Initial Inference Results: We observed that the generated output from the pre-trained model lacked accuracy.
Fine-Tuning with QLoRA: We demonstrated how to fine-tune the pre-trained model using the QLoRA technique to enhance its performance.
Improved Inference Results: After fine-tuning, we ran inference with the updated model and noted significant improvements in the accuracy and relevance of the generated outputs.

Thakur, P. "Fine Tune PaliGemma with QLoRA for Visual Question Answering," PyImageSearch, P. Chugh, S. Huot, K. Kidriavsteva, and R. Raha, eds., 2024, https://pyimg.co/1qgi9

@incollection{Thakur_2024_PaliGemma,
  author = {Piyush Thakur},
  title = {{Fine Tune PaliGemma with QLoRA for Visual Question Answering}},
  booktitle = {PyImageSearch},
  editor = {Puneet Chugh and Susan Huot and Kseniia Kidriavsteva and Ritwik Raha},
  year = {2024},
  url = {https://pyimg.co/1qgi9},
}
Content Creation/Content Synthesis/Decision Making
Unknown
null
null
null
null
null
null
news
Justine Calma
Meta turns to nuclear energy for its AI ambitions
Meta says it wants to bring more nuclear energy to electricity grids to power its data centers.
https://www.theverge.com/2024/12/4/24313011/meta-ai-data-center-nuclear-energy
https://cdn.vox-cdn.com/…_CVIRGINIA_C.jpg
2024-12-04T16:54:51Z
Meta turns to nuclear energy for its AI ambitions / It's the latest tech company to jump on the nuclear energy bandwagon.

Meta is turning to nuclear energy to power its AI ambitions with the release of a request for proposals to partner with nuclear energy developers. It's the latest announcement in a string of recent deals Big Tech companies have made to secure nuclear energy for their data centers. Developing new AI tools is an energy-intensive endeavor that risks derailing Silicon Valley's sustainability goals unless it can find less polluting sources of electricity. Meta now joins Amazon, Microsoft, and Google in efforts to get more nuclear reactors up and running.

That's much easier said than done. The first all-new nuclear reactor to be built in the US in decades started running in 2023, seven years overdue and $17 billion over budget. Developers are now designing next-generation technology called small modular reactors (SMRs) that are supposed to make it easier to build and site a project, ostensibly cutting down costs. Those advanced reactors aren't expected to become commercially viable until the 2030s.

Meta says it's interested in both SMRs and larger reactors, and is searching for partners who will ultimately permit, design, engineer, finance, construct, and operate these power plants. Its goal is to add 1-4 gigawatts of new nuclear generation capacity in the US by the early 2030s. For context, 54 nuclear power plants across the nation currently have a combined capacity of roughly 97GW and generate about 19 percent of the US electricity mix.

After decades of aging reactors shutting down, the nuclear landscape is starting to change as companies look for ways to generate electricity without producing the carbon emissions causing climate change. Nuclear power plants have increasingly been seen as a carbon pollution-free source of electricity that can fill in for solar and wind farms when the sun sets and gales weaken.

"We believe nuclear energy will play a pivotal role in the transition to a cleaner, more reliable, and diversified electric grid," Meta's announcement says. It's not alone. Amazon bought a nuclear-powered data center campus in March, and then inked a few more deals in October to support the development of SMRs. Google announced an agreement in October to eventually purchase electricity from SMRs to be built between 2030 and 2035. Microsoft signed a power purchase agreement in September to restart a shuttered reactor at Three Mile Island.

Under the Biden administration, the US crafted a roadmap to triple nuclear energy capacity by 2050. The Inflation Reduction Act signed into law in 2022 provides crucial investment and tax incentives to help make that happen. And while President-elect Donald Trump is expected to try to dismantle much of Joe Biden's clean energy legacy, nuclear energy happens to have relatively strong bipartisan support, and Trump has indicated that he supports it. Given the long lead times to construct a new plant, and since advanced technologies will still have to prove that they can work at scale, all these splashy nuclear deals are unlikely to help the US meet its short-term climate goals. President Joe Biden committed the US to cutting greenhouse gas emissions roughly in half from peak levels by 2030 under the Paris agreement, which Trump now says he'll renege on. 
And beyond building new reactors, the US still faces conundrums when it comes to responsibly securing uranium for fuel and figuring out where to safely store radioactive waste.
Unknown
Management/Business and Financial Operations
null
null
null
null
null
null
news
Andrew R. Chow
How AI Is Making Buildings More Energy-Efficient
A new study estimates AI could help buildings reduce their energy consumption and emissions by at least 8%.
https://time.com/7201501/ai-buildings-energy-efficiency/
https://api.time.com/wp-…200&h=628&crop=1
2024-12-11T18:37:47Z
Heating and lighting buildings requires a vast amount of energy: 18% of all global energy consumption, according to the International Energy Agency. Contributing to the problem is the fact that many buildings HVAC systems are outdated and slow to respond to weather changes, which can lead to severe energy waste. Some scientists and technologists are hoping that AI can solve that problem. At the moment, much attention has been drawn to the energy-intensive nature of AI itself: Microsoft, for instance, acknowledged that its AI development has imperiled their climate goals. But some experts argue that AI can also be part of the solution by helping make large buildings more energy-efficient. One 2024 study estimates that AI could help buildings reduce their energy consumption and carbon emissions by at least 8%. And early efforts to modernize HVAC systems with AI have shown encouraging results. To date, we mostly use AI for our convenience, or for work, says Nan Zhou, a co-author of the study and senior scientist at the Lawrence Berkeley National Laboratory. But I think AI has so much more potential in making buildings more efficient and low-carbon. AI in Downtown ManhattanOne example of AI in action is 45 Broadway, a 32-story office building in downtown Manhattan built in 1983. For years, the buildings temperature ran on basic thermostats, which could result in inefficiencies or energy waste, says Avi Schron, the executive vice president at Cammebys International, which owns the building. There was no advance thought to it, no logic, no connectivity to what the weather was going to be, Schron says. In 2019, New York City enacted Local Law 97, which set strict mandates for the greenhouse emissions of office buildings. To comply, Schron commissioned an AI system from the startup BrainBox AI, which takes live readings from sensors on buildingsincluding temperature, humidity, sun angle, wind speed, and occupancy patternsand then makes real-time decisions about how those buildings temperature should be modulated.Sam Ramadori, the CEO of BrainBox AI, says that large buildings typically have thousands of pieces of HVAC equipment, all of which have to work in tandem. With his companys technology, he says: I know the future, and so every five minutes, I send back thousands of instructions to every little pump, fan, motor and damper throughout the building to address that future using less energy and making it more comfortable. For instance, the AI system at 45 Broadway begins gradually warming the building if it forecasts a cold front arriving in a couple hours. If perimeter heat sensors notice that the sun has started beaming down on one side of the building, it will close heat valves in those areas. After 11 months of using BrainBox AI, Cammeby has reported that the building reduced its HVAC-related energy consumption by 15.8%, saving over $42,000 and mitigating 37 metric tons of carbon dioxide equivalent. Schron says tenants are more comfortable because the HVAC responds proactively to temperature changes, and that installation was simple because it only required software integration. Its found money, and it helps the environment. And the best part is it was not a huge lift to install, Schron says. Read More: How AI Is Fueling a Boom in Data Centers and Energy DemandBrainBoxs autonomous AI system now controls HVACs in 4,000 buildings across the world, from mom-and-pop convenience stores to Dollar Trees to airports. 
The company also created a generative AI-powered assistant called Aria, which allows building facility managers to control HVACs via text or voice. The company expects Aria to be widely available in early 2025. Scientific StudiesSeveral scientists also see the potential of efforts in this space. At the Lawrence Berkeley National Laboratory in California, Zhou and her colleagues started studying the potential impacts of AI on building efficiency several years before ChatGPT captured public attention. This year, they published a paper arguing that AI/HVAC integration could lead to a 8 to 19% decrease in both energy consumption and carbon emissionsor an even bigger decrease if paired with aggressive policy measures. AI, the paper argues, might help reduce a buildings carbon footprint at every stage of its life cycle, from design to construction to operation to maintenance. It could predict when HVAC components might fail, potentially reducing downtime and costly repairs.  Zhou also argues that AI systems in many buildings could help regional electricity grids become more resilient. Increasingly popular renewable energy sources like wind and solar often produce uneven power supplies, creating peaks and valleys. Thats where these buildings can really help by shifting or shedding energy, or responding to price signals, she says. This would help, for instance, take pressure off the grid during moments of surging demand.Other efforts around the world have also proved encouraging. In Stockholm, one company implemented AI tools into 87 HVAC systems in educational facilities, adjusting temperature and airflow every 15 minutes. These systems led to an annual reduction of 64 tons of carbon dioxide equivalent, a study found, and an 8% decrease in electricity usage. And the University of Marylands Center for Environmental Energy Engineering just published a study arguing that AI models predictive abilities could significantly reduce the power consumption of complex HVAC systems, particularly those with both indoors and outdoor units.As the globe warms, efficient cooling systems will be increasingly important. Arash Zarmehr, a building performance consultant at the engineering firm WSP, says that implementing AI is a necessary move for all designers and engineers. All engineers are aware that human controls on HVAC systems reduce efficiencies, he says. AI can help us move toward the actual decarbonization of buildings. Despite its potential, AIs usage in building efficiency faces challenges, including ensuring safety and tenant data privacy. Then theres the larger question of AIs overall impact on the environment. Some critics accuse the AI industry of touting projects like this one as a way to greenwash its vast energy usage. AI is driving a massive increase in data center electricity demand, which could double from 2022 to 2026, the International Energy Agency predicts. And this week, University of California Riverside and Caltech scientists published a study arguing that the air pollution from AI power plants and backup generators could result in 1,300 premature deaths a year in the U.S. by 2030. If you have family members with asthma or other health conditions, the air pollution from these data centers could be affecting them right now, Shaolei Ren, a co-author of the study, said in a statement. Its a public health issue we need to address urgently. Zhou acknowledges that the energy usage of AI data centers increased drastically after she and her colleagues started writing the paper. 
"To what extent it will offset the emission reduction we came up with in our paper needs future research," she says. "But without doing any research, I still think AI has much more benefits for us."
Decision Making/Process Automation
Management/Architecture and Engineering
null
null
null
null
null
null