source | author | title | description | url | urlToImage | publishedAt | content | category_nist | category | id | subreddit | score | num_comments | created_time | top_comments
---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|
news | Jason Acidre | Product-led Link Building Strategies | Links are “less important” than they were before (according to Google reps). I guess it is – if you’re mostly getting the types of links that Google no longer counts. There’s still a clear competitive advantage in acquiring the links that actually matter. The ones that do highlight your product, service, content, and/or expertise. Because […] The post Product-led Link Building Strategies appeared first on Kaiserthesage. | https://kaiserthesage.com/product-led-link-building/ | 2024-08-08T00:43:46Z | Links are less important than they were before (according to Google reps). I guess it is – if you're mostly getting the types of links that Google no longer counts. There's still a clear competitive advantage in acquiring the links that actually matter: the ones that highlight your product, service, content, and/or expertise. Because beyond helping search engines discover, index, and rank pages on your website, these links also: build topical and brand authority through contextual referencing, aiding search engines in better understanding the relevance of your product or content to specific topics; improve branded and navigational searches through brand impressions from mentions, links, and coverage; optimize for large language models (LLMs) like ChatGPT, Perplexity, and AI Overviews, which rely heavily on sourcing information from other publishers; and drive referral and second-hand search traffic, especially if the linking pages rank well for the keywords you are also targeting. Product-led link building is a strategy that leverages a brand's product or service to attract and earn links. This is often achieved by promoting a link-worthy product or process (if you're offering a service), and by publishing linkable content assets that showcase your product/service as an obvious solution. This approach focuses on campaigns with direct business impact by specifically promoting products/services, resulting in: more 
links pointing directly to commercial landing pages; increased brand impressions and potentially more branded searches; educating potential customers about product features or service offerings; and demonstrating trust and authority. Here are some of the most important link traits we try to consider, which are often achieved by product-led links. Note: these link characteristics tend to provide higher value based on the link attributes found in the recently leaked Google API documentation. Trait → related link attribute(s): (1) The target domain's homepage trust scores (can ideally be determined through the volume of branded searches for the domain) → homePageInfo; siteAuthority; PageRank. (2) Featured in top-tier pages – such as links from pages ranking highly in Google Search and getting more clicks, fresh content (recently published), or pages prioritized in the site's architecture → SourceType; indexTier; totalClicks. (3) Links from sites in the same country as the target site → AnchorsAnchorSource > localCountryCodes. (4) Links from news sites (making digital PR cool again) → encodedNewsAnchorData. (5) Relevance of the content surrounding a link → context2; fullLeftContext; fullRightContext. (6) Links from seed sites (Google uses a set of trusted seed sites such as The New York Times, etc.), or links from sites that are fewer links away from one of these trusted seed sites, are likely to carry more weight → PagerankNs or PageRank-NearestSeeds. (7) Total number of links from trusted sources to a particular URL → trustedTotal; trustedMatching; trustedExamples; trustedTarget. (8) The authors/experts associated with a document → author; isAuthor. (9) Topical relevance between the link source and the target page – based on how much the linking site is focused on one topic, as well as how far the linking page's page_embeddings deviate from the site_embedding → siteFocusScore; siteRadius; anchorMismatchDemotion; topicEmbeddingsVersionedData; webrefEntities. (10) How much the linked entity is connected/related to the other entities in the document → connectedness. Product-led 
link building strategies tend to target multiple objectives aligned with improving SERP rankings (SEO), brand positioning, and lead generation. Here are some tactics you can try. Many of our most successful link building campaigns in the past have been content-led (link earning/link baiting, resource link building, etc.). While this approach has been a staple in the industry, it often involves creating content around high-demand, high-interest topics solely for the purpose of getting links, even if they are not directly related to the products or services we are promoting. Highlighting your product as an obvious solution through your content marketing efforts, essentially serving as an extension of your product or service, appears to be the way forward, given how search is evolving with a strong focus on user intent. Create comprehensive guides that address the problems your target audience faces and demonstrate how your product or service can solve them. Another popular approach is to create high-utility content assets, such as free online tools, which can give your target audience immediate access to certain features your product/service offers. 
You can also provide and leverage freemium content such as free templates, checklists, apps, or services. Attract more links by giving potential customers the opportunity to try the product and experience the benefits firsthand before committing to a purchase. Build a link-earning flywheel by promoting these assets through outreach, enabling them to rank better for relevant informational queries and eventually attract more links organically over time. Get coverage from, or place content strategically on, sites/publications that can compete and rank well in search results for investigative or comparative-intent queries (i.e., searches like “best dropship suppliers,” “top loan apps in the Philippines,” etc.). This concept, known by various names such as Barnacle SEO, SERP Monopoly, and Second-Hand Search Traffic, has been around for over a decade. The goal is to effectively dominate the top page of SERPs by being visible in as many ranking pages as possible. Identify sites/publications that have featured your top competitors in comparison/list-type articles but haven't included your site. We use tools like Ahrefs to find these opportunities. Reach out and pitch your product/service for inclusion. Can't get included on their existing lists? Find a new angle: pitch a new list that focuses on a niche segment of your main product or service, or find other sites that haven't yet created their own list of recommended providers for your product or service category. Reinforce and communicate your brand's unique messaging through your content distribution efforts. Focus on the main problems your product or service solves in the content you publish externally (i.e., guest posts, interviews). For instance, while there are many solar power system providers, only a few offer installment plans to make their services more affordable. 
You can highlight this unique selling point in your external content by showcasing how your installment or rent-to-own plans remove financial barriers for customers, making solar power accessible to a broader audience. Contextually plug your service's/product's relevant features or use cases into the flow of your content piece so it appears as an obvious solution worth trying. Context supercharges your links, as it enables LLMs and AI-generated search results to better understand your product. Another effective way to promote your product's unique value proposition is by partnering with publishers to write reviews specifically about your product, providing them with free products or access. Genuine human interactions are sort of winning in this era of search (we're seeing a noticeably strong bias towards discussion-based content on SERPs lately). Do you have products, services, or content that people constantly share privately with their community (on platforms like Reddit, Facebook Groups, LinkedIn, YouTube)? Generate interest by becoming more visible on these platforms. Nofollow links? They are still clickable and can generate potential leads. Unlinked brand mentions? They generate branded searches (which ultimately helps with entity recognition). Working within an unflashy industry? People do search for recommendations through these platforms, even for unsexy products or businesses. The key is to genuinely help solve people's problems. Leverage internal data to create and distribute content that will demonstrate your brand's expertise, as well as the results gained from using your products/services. Collect data through: your site's audience; customer surveys; industry/peers; feedback from your customer rep team; case studies; and customer reviews or testimonials (build stories around them). In the context of a consulting or service-oriented business, expertise is likely the main product. 
After all, it is the primary value offered to clients, encompassing the knowledge, skills, and experience that solve problems, provide insights, and drive results. One straightforward approach to demonstrating expertise is to create the content that other creators in your space wish they had created. Throwback to when Google+ was still alive, lol. Easier said than done, but this really boils down to your experience in your field and how you can draw unique insights from these experiences to create content that's genuinely helpful to your audience. Other ways to acquire hard-to-replicate links through your expertise: seek out interview opportunities to promote your ideas, processes, and insights; participate in podcasts, webinars, and industry panels to share your expertise; collaborate with other experts and influencers to co-create content, expanding your reach and credibility; and publish thought-leadership pieces, case studies, and how-to guides that address common challenges and provide innovative solutions through columns or guest blogs. Positive coverage contributes to a strong brand reputation, and digital PR helps amplify your product's reach by distributing it across multiple channels. As I've mentioned earlier in the list of link traits that seem to matter these days, links generated through digital PR possess many of those attributes – from links or brand mentions in fresh content and from trusted sources, to the brand's or product's connection to the other entities within the document. Red Stag Fulfillment: increased organic traffic by over 6,000%, from 2,500+ to 162,000+ monthly organic visitors. 
The campaign focused on intent-based content marketing and product-led link building, resulting in a 320% increase in linking root domains over the entirety of the campaign. Other notable campaign highlights include: ranks #2 for the business's primary topic and service offering, “3pl” – with over 25K searches per month in the US; ranks #1 for several keyword groups with commercial intent: “ecommerce fulfillment,” “3pl warehouse,” “logistics service,” etc. Link building has always been one of the most challenging parts of SEO. And it is getting a lot harder these days, especially if you’re in a highly competitive space, given that it’s not so much about the quantity of links as about the quantity of high-quality links your site is getting. Build links that not only improve your rankings but also enhance your brand reputation and help convert customers. Need help with link building? Let’s talk. | Content Creation/Content Synthesis | Business and Financial Operations | null | null | null | null | null | null |
|
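The competitor-gap prospecting step described above (finding sites that feature your top competitors in list-type articles but have not included your site, e.g. via a backlink-tool export) boils down to a set difference over referring domains. A minimal sketch, assuming you already have the two domain lists parsed from CSV exports; the example inputs are hypothetical and this is not an actual Ahrefs API:

```python
def link_gap(competitor_domains, our_domains):
    """Referring domains that link to a competitor but not to us.

    Inputs are iterables of domain strings, e.g. parsed from a
    backlink tool's CSV export (hypothetical format, not a real API).
    """
    def normalize(domains):
        # Case- and whitespace-insensitive comparison of domain names.
        return {d.strip().lower() for d in domains if d.strip()}

    return sorted(normalize(competitor_domains) - normalize(our_domains))


# Toy example: two domains cover the competitor, one already covers us,
# so one prospect remains to pitch for inclusion.
prospects = link_gap(["Wired.com", "example-blog.net"], ["wired.com"])
print(prospects)  # ['example-blog.net']
```

The output is the outreach list: every domain in it has demonstrated interest in the product category (it already covers a competitor) but has no link to you yet.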
news | NYT News Service | Artificial intelligence gives weather forecasters a new edge | Rapid AI weather forecasts will also aid scientific discovery, said Amy McGovern, a professor of meteorology and computer science at the University of Oklahoma who directs an AI weather institute. She said weather sleuths now used AI to create thousands of subtle forecast variations that let them find unexpected factors that could drive such extreme events as tornadoes. | https://economictimes.indiatimes.com/tech/artificial-intelligence/artificial-intelligence-gives-weather-forecasters-a-new-edge/articleshow/112336158.cms | 2024-08-07T05:07:49Z | In early July, as Hurricane Beryl churned through the Caribbean, a top European weather agency predicted a range of final landfalls, warning that Mexico was most likely. The alert was based on global observations by planes, buoys and spacecraft, which room-size supercomputers then turned into forecasts.That same day, experts running artificial intelligence software on a much smaller computer predicted landfall in Texas. The forecast drew on nothing more than what the machine had learned previously about the planet's atmosphere.Four days later, on July 8, Hurricane Beryl slammed into Texas with deadly force, flooding roads, killing at least 36 people and knocking out power for millions of residents. In Houston, the violent winds sent trees slamming into homes, crushing at least two of the victims to death.The Texas prediction offers a glimpse into the emerging world of AI weather forecasting, in which a growing number of smart machines are anticipating future global weather patterns with new speed and accuracy. In this case, the experimental program was GraphCast, created in London by DeepMind, a Google company. 
It does in minutes and seconds what once took hours."This is a really exciting step," said Matthew Chantry, an AI specialist at the European Center for Medium-Range Weather Forecasts, the agency that got upstaged on its Beryl forecast. On average, he added, GraphCast and its smart cousins can outperform his agency in predicting hurricane paths. In general, superfast AI can shine at spotting dangers to come, said Christopher S. Bretherton, an emeritus professor of atmospheric sciences at the University of Washington. For treacherous heats, winds and downpours, he said, the usual warnings will be "more up-to-date than right now," saving untold lives. Rapid AI weather forecasts will also aid scientific discovery, said Amy McGovern, a professor of meteorology and computer science at the University of Oklahoma who directs an AI weather institute. She said weather sleuths now used AI to create thousands of subtle forecast variations that let them find unexpected factors that could drive such extreme events as tornadoes. "It's letting us look for fundamental processes," McGovern said. "It's a valuable tool to discover new things."Moreover, the AI models can run on desktop computers, making the technology much easier to adopt than the room-size supercomputers that now rule the world of global forecasting."It's a turning point," said Maria Molina, a research meteorologist at the University of Maryland who studies AI programs for extreme-event prediction. "You don't need a supercomputer to generate a forecast. You can do it on your laptop, which makes the science more accessible."People depend on accurate weather forecasts to make decisions about such things as how to dress, where to travel and whether to flee a violent storm.Even so, reliable weather forecasts turn out to be extraordinarily hard to achieve. The trouble is complexity. 
Astronomers can predict the paths of the solar system's planets for centuries to come because a single factor dominates their movements -- the sun and its immense gravitational pull.In contrast, the weather patterns on Earth arise from a riot of factors. The tilts, the spins, the wobbles and the day-night cycles of the planet turn the atmosphere into turbulent whorls of winds, rains, clouds, temperatures and air pressures. Worse, the atmosphere is inherently chaotic. On its own, with no external stimulus, a zone can go quickly from stable to capricious.As a result, forecasts can fail after a few days, and even after a few hours. The errors grow in step with the length of the prediction -- which today can extend for 10 days, up from three days a few decades ago. The slow improvements stem from upgrades to the global observations as well as the supercomputers that make the predictions.Not that supercomputing has grown easy. The preparations take skill and toil. Modelers build a virtual planet crisscrossed by millions of data voids and fill the empty spaces with current weather observations.Bretherton called these inputs crucial and somewhat improvisational. "You have to blend data from many sources into a guess at what the atmosphere is doing right now," he said.The knotty equations of fluid mechanics then turn the blended observations into predictions. Despite the enormous power of supercomputers, the number-crunching can take an hour or more. And as the weather changes, the forecasts must be updated.The AI approach is radically different. Instead of relying on current readings and millions of calculations, an AI agent draws on what it has learned about the cause-and-effect relationships that govern the planet's weather.In general, the advance derives from the continuing revolution in machine learning -- the branch of AI that mimics how humans learn. The method works because AI excels at pattern recognition. 
It can rapidly sort through mountains of information and spot intricacies that humans cannot discern. Doing so has led to breakthroughs in speech recognition, drug discovery, computer vision and cancer detection.In weather forecasting, AI learns about atmospheric forces by scanning repositories of real-world observations. It then identifies the subtle patterns and uses that knowledge to predict the weather, doing so with remarkable speed and accuracy.Recently, the DeepMind team that built GraphCast won Britain's top engineering prize, presented by the Royal Academy of Engineering. Sir Richard Friend, a physicist at Cambridge University who led the judging panel, praised the team for what he called "a revolutionary advance."Rémi Lam, GraphCast's lead scientist, said his team had trained the AI program on four decades of global weather observations compiled by the European forecasting center. In seconds, he said, GraphCast can produce a 10-day forecast that would take a supercomputer more than an hour.Lam said GraphCast ran best and fastest on computers designed for AI but could also work on desktops and even laptops, though more slowly.In a series of tests, Lam reported, GraphCast outperformed the best forecasting model of the European Center for Medium-Range Weather Forecasts more than 90% of the time. "If you know where a cyclone is going, that's quite important," he added. "It's important for saving lives."Lam said he and his team were computer scientists, not cyclone experts, and had not evaluated how GraphCast's predictions for Hurricane Beryl compared with other forecasts in precision.But DeepMind, he added, did conduct a study of Hurricane Lee, an Atlantic storm that in September was seen as possibly threatening New England or, farther east, Canada. 
Lam said the study found that GraphCast locked in on landfall in Nova Scotia three days before the supercomputers reached the same conclusion.Impressed by such accomplishments, the European center recently embraced GraphCast as well as AI forecasting programs made by Nvidia, Huawei and Fudan University in China. On its website, it now displays global maps of its AI testing, including the range of path forecasts that the smart machines made for Hurricane Beryl on July 4.Chantry of the European center said the institution saw the experimental technology as becoming a regular part of global weather forecasting. A new team, he added, is now building on "the great work" of the experimentalists to create an operational AI system for the agency.Its adoption, Chantry said, could happen soon. He added, however, that the AI technology as a regular tool might coexist with the center's legacy forecasting system.Bretherton, now a team leader at the Allen Institute for AI (established by Paul G. Allen, one of the founders of Microsoft), said the European center was considered the world's top weather agency because comparative tests have regularly shown its forecasts to exceed all others in accuracy. As a result, he added, its interest in AI has the world of meteorologists "looking at this and saying, 'Hey, we've got to match this.'"Weather experts say the AI systems are likely to complement the supercomputer approach because each method has its own particular strengths."All models are wrong to some extent," Molina said. The AI machines, she added, "might get the hurricane track right but what about rain, maximum winds and storm surge?"Even so, Molina noted that AI scientists were rushing to post papers that demonstrated new forecasting skills. "The revolution is continuing," she said. "It's wild."Jamie Rhome, deputy director of the National Hurricane Center in Miami, agreed on the need for multiple tools. 
He called AI "evolutionary rather than revolutionary" and predicted that humans and supercomputers would continue to play major roles. "Having a human at the table to apply situational awareness is one of the reasons we have such good accuracy," he said."With AI coming on so quickly, many people see the human role as diminishing," Rhome added. "But our forecasters are making big contributions. There's still very much a strong human role." | Prediction/Discovery | Life, Physical, and Social Science/Education, Training, and Library/Computer and Mathematical | null | null | null | null | null | null |
|
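The contrast the weather article draws (a learned model producing in seconds what supercomputers grind out in an hour) rests on one pattern: the AI learns a single time-step transition from decades of historical observations, then rolls that step forward repeatedly to build a multi-day forecast. A toy sketch of just the rollout loop; the "model" here is an illustrative stand-in function, not GraphCast's actual architecture:

```python
import numpy as np

def rollout(step_model, state, n_steps):
    """Autoregressive forecasting: feed each prediction back in as the
    next input -- the basic loop behind learned weather models."""
    trajectory = [state]
    for _ in range(n_steps):
        state = step_model(state)
        trajectory.append(state)
    return trajectory


# Stand-in "trained model": relax a temperature field toward its mean.
# (Illustrative dynamics only; a real step model is a deep network
# trained on decades of reanalysis data.)
def toy_step(field):
    return 0.9 * field + 0.1 * field.mean()


grid = np.array([[10.0, 20.0], [30.0, 40.0]])  # a tiny 2x2 "atmosphere"
forecast = rollout(toy_step, grid, n_steps=3)
print(len(forecast))  # 4: the initial state plus three forecast steps
```

Because each step is a single cheap function evaluation rather than a numerical solve of the fluid-mechanics equations, the same loop that covers 10 simulated days runs in seconds, which is the speed advantage the article describes.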
news | Paul Bois | Donald Trump in X Spaces Interview: AI Requires Double the Energy | Donald Trump talked about America's energy production during an X Spaces interview with Elon Musk focusing on competing with China on AI.The post Donald Trump in X Spaces Interview: AI Requires Double the Energy appeared first on Breitbart. | https://www.breitbart.com/politics/2024/08/12/donald-trump-in-x-spaces-interview-ai-requires-double-the-energy/ | 2024-08-13T02:11:53Z | Former President Donald Trump talked about America’s energy production during an X Spaces interview with Elon Musk on Monday, focusing on how the country can compete with China on artificial intelligence (AI). During the interview, Trump said that a Kamala Harris administration would enact harsh energy policies by pushing “Green Energy” with windmills and solar panels while hurting oil and fracking production. Later, the former president talked about artificial intelligence, which Elon Musk has been a proponent of, and how the country will not be able to compete with China on the technology if America’s energy production lowers. “So, I know you love this AI. Well, AI requires twice the energy that everything else requires. So, we’ll need to drill hard. We’ll need to double what we produce now,” he said. Trump just said AI requires double the energy – Big Tech knows that, so do the environmentalists but what do we hear? Humans, cows, chickens – we’re the problem.— Lara Logan (@laralogan) August 13, 2024 TRUMP TO ELON: “So, I know you love this AI. Well, AI requires twice the energy that everything else requires. So we’ll need to drill hard. We’ll need double what we produce now.” Honestly, going long energy names seems to be something that needs to be a part of everyone’s— amit (@amitisinvesting) August 13, 2024 According to Goldman Sachs, an AI revolution will drive energy consumption up 160 percent.“On average, a ChatGPT query needs nearly 10 times as much electricity to process as a Google search. 
In that difference lies a coming sea change in how the US, Europe, and the world at large will consume power and how much that will cost,” it noted. “For years, data centers displayed a remarkably stable appetite for power, even as their workloads mounted,” it continued. “Now, as the pace of efficiency gains in electricity use slows and the AI revolution gathers steam, Goldman Sachs Research estimates that data center power demand will grow 160% by 2030.” | Unknown | Management/Business and Financial Operations | null | null | null | null | null | null |
|
news | Fiona Keeley | EY Entrepreneur of the Year: From AI in healthcare to Ballymaloe Relish and the largest cold store operator in sub-Saharan Africa | We profile four of the eight finalists chosen in the International category for this year’s EY Entrepreneur of the Year awards | https://www.irishtimes.com/business/2024/08/23/ey-entrepreneur-of-the-year-2024-from-ai-in-healthcare-to-sustainable-foods-to-partner-programme-automation/ | 2024-08-23T04:00:00Z | Mark Gilmartin and Jonathan Larbey, T-Pro: T-Pro provides speech technology solutions for healthcare, including a deeply integrated workflow platform, various speech recognition interfaces and its latest innovation, T-Pro Copilot. Copilot uses speech recognition and generative AI to automate administrative tasks such as document creation, correspondence and diagnosis/procedure coding. T-Pro is shown to save clinicians eight to 10 hours every week. Chief operations officer Mark Gilmartin and chief executive Jonathan Larbey founded T-Pro in 2012 and have grown the business across English-speaking healthtech markets. Describe your business model and what makes your business unique: T-Pro gives our clients and partners the foundations and flexibility to implement our various state-of-the-art AI tools to help measure and improve efficiency. Ultimately, the business model is based on shared value. We focus on delivering value, whether that be cost savings, efficiency gains or softer metrics like clinician satisfaction and retention. What is your greatest business achievement to date? The T-Pro business grew organically, without any external investment, to become the leading provider in Ireland and Britain. In early 2022, we entered the Australian market via an acquisition, and we now work with over 60 per cent of healthcare institutions there. 
This was self-sourced and self-funded, which is testament to the strong team and culture that we have built. What was your back-to-the-wall moment and how did you overcome it? In the early days, we invested every penny we had in building our software platform and early AI models. We won a couple of contracts and slipped into overtrading. We had to hire more staff and pay them on time, which meant that, despite not paying ourselves, we couldn't afford to pay our Revenue liabilities. We ended up entering an instalment arrangement with the Revenue. At that stage, it would have been easy to give up and search for external capital, but we traded out of it into profitability and never looked back. What were the best and the worst pieces of advice you received when starting out? The best advice we have ever received is to move forward quickly. This is along the lines of perfect being the enemy of progress and has really stood to us. We are focused on delivering high-quality products to our clients but understand that this needs to be done through a lens of pace and innovation, where we are not afraid to try things and change them if they are not working. The worst piece of advice was not to waste time developing our own AI, as we would never be able to compete with the multinationals. To what extent does your business trade internationally and what are your future plans/ambitions? We provide a service in Ireland, Britain and Australia, where we have a physical presence. Beyond that, we operate in New Zealand, Singapore, Malaysia and the wider Apac [Asia-Pacific] region, as well as Germany, Austria and Switzerland. 
Our plans are to continue to grow market share in our core territories, to grow our total addressable market through the introduction of new products, and to expand into new markets, with the US an obvious target. How will your market look in three years and where would you like your business to be? As an AI-native business, we feel like the market is finally catching up with where we have been in terms of product and messaging for the past decade. We think there will be a requirement to fund scalable AI-based solutions in healthcare, and a wave of digital transformation will create strong industry tailwinds for us. In three years' time, we hope to be at the top table in the US market, which will be a new territory for us, as well as in other strategic global markets. What are the big disruptive forces in your industry? AI is the big disrupter, but it has been for a number of years. However, it is creaking healthcare systems, population growth, ageing populations and clinician burnout that are probably the factors that will really drive change over the next five years. Governments and healthcare funders will turn to AI to provide scalability through digital transformation, and T-Pro will be perfectly positioned to capitalise on this. Maxine and Rosaleen Hyde, Ballymaloe Foods: Maxine and Rosaleen Hyde are the daughters of Yasmin Hyde, who started Ballymaloe Foods in 1990. Today, they run Ballymaloe Foods and employ more than 45 full-time staff, producing more than 25 products for consumers in Ireland and abroad. With a commitment to sustainability, Ballymaloe Foods has achieved gold membership of Bord Bia's Origin Green programme. The company uses traditional cooking methods and natural ingredients while continuing to create new products. The core of the business is still Ballymaloe Relish, and its taste and quality have never changed in the 70 years since Myrtle Allen first cooked it. What is your greatest business achievement to date? We take pride in our Irish beetroot products. 
Faced with a declining supply of locally sourced raw materials, we sought an ingredient that could be reliably grown in Ireland, working with farmers to grow the quantity we need. Today, we are the only large producer of beetroot products exclusively using Irish-grown beetroot, ensuring quality and flavour and supporting local agriculture. What was your back-to-the-wall moment and how did you overcome it? In September 2019, we suffered a fire in our production building; a unit on the electrical board was the cause. It was coming up to Christmas and we were told initially it would be a year before we could operate again in the building. We refused to accept this and, together with our team, managed to get back up and running within four weeks, closing off the section of the building that was damaged and going from a single shift to producing 24/7 in about half of the footprint to get our Christmas stock produced and sold. What moment/deal would you cite as the game changer or turning point for the company? There were two game-changer moments for us. In the 1990s, when O'Briens sandwich bars became a nationwide chain, Ballymaloe Relish was listed on their menus. To have a national sandwich bar proud to serve our product was an incredible help to us. Secondly, securing our listing in Tesco UK gave us the confidence to see that we had the potential to expand our retail range beyond Ireland. What were the best and the worst pieces of advice you received when starting out? Worst advice: after our first year of trading, a former accountant advised us to wind up the business because the outlook wasn't promising. Best advice: rather than any specific advice, when starting out, Yasmin followed the example set by her parents in their business practices. 
They emphasised treating your staff well, making sure your product or service is the best quality and, if you believe strongly in something, having confidence in yourself even when others might doubt you.

How will your market look in three years and where would you like your business to be?
We believe sustainably produced food will be increasingly demanded by our customers as we all try to consume meals that are more environmentally favourable. We would like to be the go-to brand for our consumers to help them make tasty, nutritious meals that have minimal environmental impact.

What are you doing to disrupt, innovate and improve the products or services you offer?
Two working parents is now the norm. We understand there can be a midweek strain to make nutritious meals, so this is an area where we like to try to help our customers. Our approach to making food focuses on minimally processed sauces, made in the same way as you would at home, that can help improve the flavour of a variety of dishes while saving time and still packing a punch nutritionally.

How is the current inflationary environment impacting your business? How do you expect things to unfold?
Since the inflation surge, our ingredient and packaging costs have skyrocketed. We've absorbed a decent proportion to shield our customers from higher prices, while trying to lower costs through smarter purchasing. We stopped short of altering our product quality, though; that is something we will never do. Prices are stabilising now, improving margins slightly, but we don't expect a full return to pre-inflation levels.

Kenneth Fox, Channel Mechanics

Channel Mechanics is a partner programme automation platform. Headquartered in Galway, its focus is to improve ease of doing business for companies going to market via the channel, where 75 per cent of global sales are transacted. By automating the business of working with partners, the platform allows companies to focus on the relationships.
Founded in 2010, the company now employs more than 100 people globally. The platform, built with a modular architecture, offers a suite of functionalities specifically designed for various partner programmes, including partner performance management, partner training and enablement, partner incentives and rewards, sales promotions and programmes, and inventory operations and compensation.

What vision/light bulb moment prompted you to start up in business?
Through my experience working with multinationals, I saw the inefficiencies of each company building its own channel automation solution to go to market. The emergence of cloud technology presented the opportunity to build a multi-tenant platform that all companies could utilise.

Describe your business model and what makes your business unique
Our approach is one of land and expand. We identify a potential customer, build trust by solving their immediate channel challenges and then grow our footprint. Helping our customers create a frictionless channel by placing ease of doing business at the heart of their go-to-market model ensures competitive advantage for them and ultimately helps them win partner mindshare.

What moment/deal would you cite as the game changer or turning point for the company?
In the early years we relied on consultancy to keep the lights on as we bootstrapped the company. Making the decision to switch off a lucrative consulting business and pivot to a product-driven model was a real game changer. Since then, we have achieved growth, doubling in size every other year since we made the decision.

What were the best and the worst pieces of advice you received when starting out?
The best advice we received was to engage with Enterprise Ireland, where we were fortunate to be accepted into their High Potential Start-Up (HPSU) programme. Its support, both domestically and internationally, has been instrumental in our success.
I have great faith in people and took on all advice as, generally speaking, people are supportive and mean well.

To what extent does your business trade internationally and what are your future plans/ambitions?
Channel Mechanics operates globally and is 100 per cent export-driven. Our core markets lie in North America and Europe, where we see growth potential. We will continue to focus on these markets and have already opened offices in Britain and the US. We plan to grow the business to over $500 million (€462 million) in the next three to four years.

Describe your growth funding path
Channel Mechanics began as a self-funded start-up and achieved organic growth. In March of this year, we secured $70 million in a Series A funding round. This investment fuels our ambitious growth plans to reach over $500 million in the next three to four years. After achieving this milestone, we may pursue additional funding to further accelerate our expansion.

What are the big disruptive forces in your industry?
The recent downturn in the tech sector forced companies to re-evaluate their go-to-market strategies. Many companies turned to channel partners as a cost-effective way to expand their reach and drive revenue growth. This disrupted the market and provided the opportunity for Channel Mechanics to help these companies via our platform.

How is the current inflationary environment impacting your business? How do you expect things to unfold?
As an export-driven business, we aren't reliant on the domestic economy for sales. So, from that perspective, the impact has been minimal. However, our cost base has risen significantly, with the majority of staff based in Ireland. Thankfully, growth has been strong over the past two years, and we have more than doubled in size over that period.

Ivor Queally, QK Group

Ivor Queally is a director of QK Group, which operates across a number of different sectors, from meat processing to cold storage and warehousing.
QK Meats began operating in 2004 after signing a contract with one of South Africa's largest retailers. Initially it was processing 2,500 cattle, 7,000 lambs and retail-packing 400 tonnes per week. In 2006, the company opened its first cold store and grew rapidly, expanding to now being the single largest cold store operator in sub-Saharan Africa, with a capacity exceeding 200,000 pallets. The company's latest cold store facility is opening in August 2024 with an additional 25,000 pallets.

What vision/light bulb moment prompted you to start up in business?
There was never a light bulb moment, as I came from a very entrepreneurial family and my Dad had always been actively involved in opening and developing new businesses. So from a very early age, I always envisioned that I would work for myself or within the family business.

What was your back-to-the-wall moment and how did you overcome it?
One that stands out the most was when we commenced operations and identified that the initial management team did not have the same passion, commitment and shared purpose as we were accustomed to in Ireland. We realised that we had overcommitted to our key customer, who had expected a service level of 97 per cent plus. We were underdelivering, with a service level of less than 55 per cent. To overcome this, we worked night and day for 14-16 weeks and we parachuted in some key people from our Irish operations, who went into critical positions until we could find, train and develop a local management team. Some of the initial Irish management team still work in the business 20 years later and made South Africa their adopted home.

What were the best and the worst pieces of advice you received when starting out?
My late Dad gave me some great advice, and that was to always back the jockey and not the horse; it is people that make a business great.
Today I surround myself with great people. The other critical piece of advice I got was never to stop people from making their mistakes and trying new ways of doing things, without risking the business.

What are the big disruptive forces in your industry?
In South Africa, we are struggling with the supply of the most basic services. We are dealing with no electricity for up to 12 hours per day. We have to deal with water rationing, as Africa is more impacted by global warming and the lack of rain.

What are you doing to disrupt, innovate and improve the products or services you offer?
We are investing significantly in solar energy and the installation of battery technology as we move away from reliance on the grid and fossil fuels. Today we self-generate more than 32 per cent of all our energy requirements, and we are aspiring to achieve over 62 per cent within the next two years, with an objective to have some of our cold stores fully carbon neutral within five years. We are also installing waterless condensers to reduce our water consumption by 70 per cent.

What makes your company a good place to work?
In South Africa, the social responsibilities of an employer are much more onerous than in Ireland, as we deal with a lot more socioeconomic issues. A large percentage of our staff live with HIV/Aids and have struggled to get access to antiretroviral (ARV) drugs at government clinics. We opened a clinic on-site with our own health practitioners to allow staff not only to get access to the ARVs but also to proactively monitor their health. We also opened a school to enable our staff to get the basic education to read and write, while giving them an opportunity to work on a full-time basis and provide for their families.

What is the single most important piece of advice you would offer to a less-experienced entrepreneur?
Find a great mentor.
Find someone that you can talk to and just bounce the ball with, someone who has done it before and can relate to your problems, who has a wide network of great people who will help and guide you when you hit a wall or an issue you can't deal with alone. | Process Automation | Healthcare Practitioners and Support
news | Vikram Barhat | Mideast Microsoft, OpenAI ally mired in national security controversy | Microsoft's relationship with G42, a UAE-based AI company, has fueled national security concerns, with the Mideast caught between the U.S. and China. | https://www.cnbc.com/2024/08/25/a-controversial-mideast-partner-to-microsoft-openai-global-ambitions.html | 2024-08-25T07:02:30Z

Tech giants including Microsoft signaled in recent earnings calls that their intention is to spend aggressively on AI, but it's not just the billions of dollars being invested in U.S. AI companies like Microsoft partner and ChatGPT creator OpenAI, or the ballooning data center costs. Microsoft and its big tech peers are in a race for global AI dominance, and in a rivalry with China. As part of the broader strategic planning, Microsoft invested $1.5 billion in the United Arab Emirates' top artificial intelligence firm, G42, a deal providing Microsoft with access to vast amounts of data, the deep pockets of the ambitious rulers of the oil-rich Middle Eastern country, and a looser regulatory environment. The UAE's tech ecosystem, free from political wrangling and barriers inherent in democratic legal systems, is overseen by Emirati authorities who have a more lenient view than many Western nations of the use of anonymized citizen data to train AI models. In multiple areas of technology adoption, the UAE's autocratic, state-capitalist governance is seen as an advantage, enabling smooth and speedy mobilization of substantial resources towards tech leadership goals. G42 is central to Abu Dhabi's AI ambitions, and a partnership with Microsoft could galvanize the development of a robust AI ecosystem, which could fuel faster AI innovation.
The UAE's tax-free income can also help to attract the world's highly sought-after AI researchers. "Microsoft opened a data center there a few years ago, so clearly it has been investing in the region," said Dan Romanoff, tech analyst at Morningstar. Microsoft set up its first cloud data center in the UAE in 2019. As part of the recent deal, G42 would use Microsoft's cloud computing platform Azure as the backbone for the development and deployment of AI services it provides to all of its customers. Beyond the UAE, Microsoft and G42 plan on building out data centers in other countries, including in East African countries. These facilities to power and cool the giant AI supercomputers are fundamental to the further advance of the AI ecosystem. "The compute needed for AI is large and consumes an enormous amount of power," said Andrew Feldman, CEO and co-founder of U.S.-based Cerebras Systems, a chip-making startup with relationships in the region. Power availability, from oil to natural gas and solar, in the UAE can also help with new data centers, shifting compute away from overstrained electricity grids in other parts of the world. OpenAI CEO Sam Altman has made numerous trips to the UAE looking to build a global AI coalition and earlier this year said the region could serve as the world's "regulatory sandbox" for AI testing. The deep interest from Microsoft and OpenAI in the UAE is viewed by industry watchers as part of an effort to consolidate their AI leadership position and increase their global footprint, particularly in emerging markets. "For MSFT, this investment is more about gathering clients on its Azure infrastructure and reinforcing the notion that it is the early leader in all things AI by proliferating OpenAI usage," Romanoff said. Over the past few quarters, the UAE, particularly its glitzy capital Abu Dhabi, has grown from regional AI hub to global AI center, according to Feldman.
Leaders like Microsoft, OpenAI and others "are doing pioneering work there," he said. Feldman's Cerebras is building advanced supercomputer data centers in California, Texas and Abu Dhabi. It is also providing G42 with its AI tech to develop what could be the world's most advanced large language model for Arabic, a language spoken by approximately 400 million people, he noted. But amid rising tensions between the U.S. and China, AI has become a critical battlefield, and for all the strategic reasons the UAE deal makes sense, it has also attracted a high level of scrutiny on Capitol Hill related specifically to G42's relationship with China. The geopolitical landscape has become more complicated since Microsoft's partnership with G42 was first announced, with growing unease among U.S. intelligence officials, according to a New York Times report, regarding the potential transfer of sensitive American technologies and data to China through G42's partnerships, particularly with companies like Huawei. The Biden administration reportedly pressured the UAE to sever its tech ties with China in an effort to safeguard critical AI technologies. A statement earlier this year released by Rep. Mike Gallagher (R-WI), Chairman of the House Select Committee on the Chinese Communist Party, further confirmed that the Emirati firm, G42, had sold its stake in Chinese companies in what he referred to as a "welcome first step." "The UAE is a critical and powerful ally, one that will only become more important for regional and global stability as AI advances. Therefore, it is imperative the United States and the UAE further understand and mitigate any high-risk commercial and research relationships with PRC entities." The pressure from Capitol Hill has not let up, with a report earlier this month from Politico saying that Microsoft is now tweaking its $1.5 billion partnership with G42.
Initially, the deal involved transferring sensitive AI hardware and intellectual property, including advanced semiconductors, to G42. However, Microsoft now reportedly plans to instead lease its AI products to the AI company. Microsoft's decision to pivot its strategy reflects the intense scrutiny and geopolitical challenges that surround the deal, highlighting the complex interplay of global tech rivalry, national security, and the broader U.S.-China tensions in the AI race. Microsoft and G42 did not reply to emails requesting comment. Some experts contend that the deal still makes sense as a way to counter China. "Microsoft's investment in the UAE strengthens a key U.S. partner with advanced AI capabilities," says Cory Johnson, chief market strategist and group analyst at The Futurum Group, a leading global technology advisory, media and research firm in Austin, Texas. "This could indirectly hinder China's influence in the strategically important Middle Eastern market." China remains a formidable competitor with significant state backing. However, "to the extent Microsoft is in the UAE helping to build a regional AI hub, China is less likely to be there," Romanoff said. "Anywhere the West can have influence over China, it helps promote stability, so Microsoft investing in AI in the UAE should be viewed positively by the tech community, as well as western governments," he added. "In 10 years, you could see a bustling start-up community in the UAE where AI innovation happens," he added. Microsoft's relationships in the UAE come amid greater focus among U.S.
tech giants on the Middle East's AI potential, leading to a surge in investment and innovation in the region. Leading Silicon Valley venture capital firm Andreessen Horowitz reportedly plans to invest heavily in a $40 billion Saudi Arabian AI fund, which could make the country the world's largest investor in artificial intelligence. Amazon has also set up data centers in the UAE and is expanding its footprint in Saudi Arabia to support the growing adoption of artificial intelligence technologies. Google recently expanded its cloud services presence in the Middle East with a new Saudi Arabian HQ. But when it comes to AI development, no country can exist on its own. "The most effective results come from countries that are open and embrace the best and brightest from around the world," Johnson said. "Collaborations like Microsoft and G42 are valuable, but fostering a global AI ecosystem is essential for long-term success. We should view this as one piece on the global AI chessboard, not a checkmate for China," he said. "The race is far from over." | Content Creation/Content Synthesis | Unknown
news | Alyssa Hughes | Innovations in AI: Brain-inspired design for more capable and sustainable technology | Researchers and their collaborators are drawing inspiration from the brain to develop more sustainable AI models. Projects like CircuitNet and CPG-PE improve performance and energy efficiency by mimicking the brain's neural patterns. | https://www.microsoft.com/en-us/research/blog/innovations-in-ai-brain-inspired-design-for-more-capable-and-sustainable-technology/ | 2024-08-29T17:25:25Z

As AI research and technology development continue to advance, there is also a need to account for the energy and infrastructure resources required to manage large datasets and execute difficult computations. When we look to nature for models of efficiency, the human brain stands out, resourcefully handling complex tasks. Inspired by this, researchers at Microsoft are seeking to understand the brain's efficient processes and replicate them in AI. At Microsoft Research Asia, in collaboration with Fudan University, Shanghai Jiao Tong University, and the Okinawa Institute of Technology, three notable projects are underway. One introduces a neural network that simulates the way the brain learns and computes information; another enhances the accuracy and efficiency of predictive models for future events; and a third improves AI's proficiency in language processing and pattern prediction. These projects, highlighted in this blog post, aim not only to boost performance but also significantly reduce power consumption, paving the way for more sustainable AI technologies. Many AI applications rely on artificial neural networks, designed to mimic the brain's complex neural patterns. These networks typically replicate only one or two types of connectivity patterns.
In contrast, the brain propagates information using a variety of neural connection patterns, including feedforward excitation and inhibition, mutual inhibition, lateral inhibition, and feedback inhibition (Figure 1). These networks contain densely interconnected local areas with fewer connections between distant regions. Each neuron forms thousands of synapses to carry out specific tasks within its region, while some synapses link different functional clusters: groups of interconnected neurons that work together to perform specific functions.

Figure 1: The four neural connectivity patterns in the brain. Each circle represents a neuron, and each arrow represents a synapse.

Inspired by this biological architecture, researchers have developed CircuitNet, a neural network that replicates multiple types of connectivity patterns. CircuitNet's design features a combination of densely connected local nodes and fewer connections between distant regions, enabling enhanced signal transmission through circuit motif units (CMUs): small, recurring patterns of connections that help to process information. This structure, shown in Figure 2, supports multiple rounds of signal processing, potentially advancing how AI systems handle complex information.

Figure 2. CircuitNet's architecture: A generic neural network performs various tasks, accepts different inputs, and generates corresponding outputs (left). CMUs keep most connections local with few long-distance connections, promoting efficiency (middle). Each CMU has densely interconnected neurons to model universal circuit patterns (right).

Evaluation results are promising. CircuitNet outperformed several popular neural network architectures in function approximation, reinforcement learning, image classification, and time-series prediction.
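To make the "dense local, sparse long-range" wiring idea concrete, here is a minimal, hedged sketch of a CMU-style connectivity mask. The cluster sizes and the single-"hub" wiring rule are our own illustrative assumptions, not CircuitNet's actual architecture.

```python
# Illustrative sketch only: a block-sparse connectivity mask in the spirit of
# CMUs — dense connections inside each cluster, few connections between them.

def cmu_mask(n_units: int, unit_size: int) -> list[list[int]]:
    """Return a 0/1 adjacency mask for n_units clusters of unit_size neurons.

    Neurons in the same cluster are fully connected (dense local wiring);
    across clusters, only the first neuron of each cluster (a hypothetical
    'hub') links to the hubs of the other clusters (sparse long-range wiring).
    """
    n = n_units * unit_size
    mask = [[0] * n for _ in range(n)]
    for i in range(n):
        for j in range(n):
            same_cluster = i // unit_size == j // unit_size
            both_hubs = i % unit_size == 0 and j % unit_size == 0
            if same_cluster or both_hubs:
                mask[i][j] = 1
    return mask

mask = cmu_mask(n_units=3, unit_size=4)
local = sum(mask[1])  # a non-hub neuron connects only within its cluster
hub = sum(mask[0])    # a hub neuron also reaches the other clusters' hubs
```

Multiplying a weight matrix elementwise by such a mask is one common way to keep most parameters local while preserving a few long-range pathways.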
It also achieved comparable or better performance than other neural networks, often with fewer parameters, demonstrating its effectiveness and strong generalization capabilities across various machine learning tasks. Our next step is to test CircuitNet's performance on large-scale models with billions of parameters.

Spiking neural networks (SNNs) are emerging as a powerful type of artificial neural network, noted for their energy efficiency and potential application in fields like robotics, edge computing, and real-time processing. Unlike traditional neural networks, which process signals continuously, SNNs activate neurons only upon reaching a specific threshold, generating spikes. This approach simulates the way the brain processes information and conserves energy. However, SNNs are not strong at predicting future events based on historical data, a key function in sectors like transportation and energy. To improve SNNs' predictive capabilities, researchers have proposed an SNN framework designed to predict trends over time, such as electricity consumption or traffic patterns. This approach utilizes the efficiency of spiking neurons in processing temporal information and synchronizes time-series data (collected at regular intervals) and SNNs. Two encoding layers transform the time-series data into spike sequences, allowing the SNNs to process them and make accurate predictions, as shown in Figure 3.

Figure 3. A new framework for SNN-based time-series prediction: Time series data is encoded into spikes using a novel spike encoder (middle, bottom). The spikes are then processed by SNN models (Spike-TCN, Spike-RNN, and Spike-Transformer) for learning (top). Finally, the learned features are fed into the projection layer for prediction (bottom-right).

Tests show that this SNN approach is very effective for time-series prediction, often matching or outperforming traditional methods while significantly reducing energy consumption.
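As a rough illustration of the encoding step, the sketch below converts a time series into a 0/1 spike train with a simple delta-threshold rule. The rule, the threshold value, and the example series are our own assumptions for illustration, not the paper's actual spike encoder.

```python
# Minimal sketch: event-driven encoding of a time series into spikes.
# A spike (1) fires only when the signal has changed enough since the
# last spike — no change means no spikes, and hence almost no work.

def delta_spike_encode(series: list[float], threshold: float) -> list[int]:
    """Emit a spike whenever the accumulated change since the last spike
    reaches `threshold`; otherwise stay silent."""
    spikes, last = [], series[0]
    for x in series:
        if abs(x - last) >= threshold:
            spikes.append(1)
            last = x
        else:
            spikes.append(0)
    return spikes

load = [10.0, 10.1, 10.2, 12.5, 12.6, 9.0]  # e.g. electricity-demand samples
spikes = delta_spike_encode(load, threshold=1.0)  # -> [0, 0, 0, 1, 0, 1]
```

Because spikes are emitted only on meaningful changes, downstream spiking layers stay idle most of the time, which is where the energy savings of event-driven processing come from.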
SNNs successfully capture temporal dependencies and model time-series dynamics, offering an energy-efficient approach that closely aligns with how the brain processes information. We plan to continue exploring ways to further improve SNNs based on the way the brain processes information.

While SNNs can help models predict future events, research has shown that their reliance on spike-based communication makes it challenging to directly apply many techniques from artificial neural networks. For example, SNNs struggle to effectively process rhythmic and periodic patterns found in natural language processing and time-series analysis. In response, researchers developed a new approach for SNNs called CPG-PE, which combines two techniques:

Central pattern generators (CPGs): Neural networks in the brainstem and spinal cord that autonomously generate rhythmic patterns, controlling functions like moving, breathing, and chewing.
Positional encoding (PE): A process that helps artificial neural networks discern the order and relative positions of elements within a sequence.

By integrating these two techniques, CPG-PE helps SNNs discern the position and timing of signals, improving their ability to process time-based information. This process is shown in Figure 4.

Figure 4: Application of CPG-PE in an SNN. X, X, and X-output are spike matrices.

We evaluated CPG-PE using four real-world datasets: two covering traffic patterns, and one each for electricity consumption and solar energy. Results demonstrate that SNNs using this method significantly outperform those without positional encoding (PE), as shown in Table 1. Moreover, CPG-PE can be easily integrated into any SNN designed for sequence processing, making it adaptable to a wide range of neuromorphic chips and SNN hardware.

Table 1: Evaluation results of time-series forecasting on two benchmarks with prediction lengths 6, 24, 48, 96. Metr-la and Pems-bay are traffic-pattern datasets. The best SNN results are in bold.
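To give a flavor of rhythm-based positional information, loosely inspired by CPG-PE's idea of pairing rhythmic generators with positional encoding, the sketch below assigns each time step a binary spike pattern from oscillators with different periods. The period choices and the half-cycle firing rule are our illustrative assumptions, not the method proposed in the paper.

```python
# Hedged sketch: position information expressed entirely in the 0/1 spike
# domain, via rhythmic "oscillators" firing during half of their cycle.

def rhythmic_position_code(t: int, periods: list[int]) -> list[int]:
    """Encode time step t as 0/1 spikes: each oscillator with period p
    fires during the first half of its cycle."""
    return [1 if (t % p) < p / 2 else 0 for p in periods]

# With periods 2, 4 and 8, every step in an 8-step window gets a unique
# pattern, so a spiking layer can tell positions apart without ever
# leaving the binary spike domain.
codes = [rhythmic_position_code(t, periods=[2, 4, 8]) for t in range(8)]
```

Keeping the positional signal binary is the key constraint: unlike standard sinusoidal positional encoding, nothing here requires real-valued additions inside the spiking layers.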
The up-arrows indicate a higher score, representing better performance.

The innovations highlighted in this blog demonstrate the potential to create AI that is not only more capable but also more efficient. Looking ahead, we're excited to deepen our collaborations and continue applying insights from neuroscience to AI research, continuing our commitment to exploring ways to develop more sustainable technology. | Unknown | Unknown
news | Cody Corrall | A comprehensive list of 2024 tech layoffs | TechCrunch | A complete list of all the known layoffs in tech, from Big Tech to startups, broken down by month throughout 2024. | https://techcrunch.com/2024/08/01/tech-layoffs-2024-list/ | 2024-08-01T23:55:16Z

The tech layoff wave is still going strong in 2024. Following significant workforce reductions in 2022 and 2023, this year has already seen 60,000 job cuts across 254 companies, according to independent layoffs tracker Layoffs.fyi. Companies like Tesla, Amazon, Google, TikTok, Snap and Microsoft have conducted sizable layoffs in the first months of 2024. Smaller-sized startups have also seen a fair amount of cuts, and in some cases, have shut down operations altogether. By tracking these layoffs, we're able to understand the impact on innovation across companies large and small. We're also able to see the potential impact of businesses embracing AI and automation for jobs that had previously been considered safe. It also serves as a reminder of the human impact of layoffs and what could be at stake in regards to increased innovation. Below you'll find a comprehensive list of all the known layoffs in tech that have occurred in 2024, to be updated regularly. If you have a tip on a layoff, contact us here. If you prefer to remain anonymous, you can contact us here. Intel kicked off the month with substantial layoffs, with 15,000 employees accounting for 15% of its total staff affected by the company's cutbacks.
“Our revenues have not grown as expected and we’ve yet to fully benefit from powerful trends, like AI,” CEO Pat Gelsinger said in a memo announcing the layoffs. The e-bike startup that has raised more than $300 million from investors has also conducted five rounds of layoffs since April 2021, with TechCrunch exclusively learning that Rad Power's most recent layoffs were conducted in July, with an unknown number of Rad Power's roughly 394 employees impacted. Has discontinued livestreaming services across its dating apps, specifically Plenty of Fish and BLK, as it shifts its focus to generative AI. The move will result in a 6% reduction in its total workforce. Will cut 220 employees, representing around 17% of the game studio's total workforce. CEO Pete Parsons said the changes impact all levels of the company, including senior and executive leadership. Has reportedly eliminated roles for nearly 200 U.S. writers a month after the company partnered with ElevenLabs to quickly convert scripts into audio content using AI. Has reportedly laid off more than 200 employees across several departments. It would be the agritech company's third substantial layoff round in the past year. Announced it will eliminate roughly 8% of its workforce as the company works toward its next phase of growth. Is reportedly laying off about 20 employees, accounting for nearly 5% of its total workforce. The cuts came the day after the company announced it raised $500 million at a $5 billion valuation. Reportedly eliminated around 75 of its workers. As part of the cuts, the augmented reality startup reportedly axed its sales and marketing departments entirely. Is reportedly laying off nearly half of its employees in the U.S. as the Japan-based company struggles to compete with other e-commerce rivals like Temu. Is eliminating 50 employees, accounting for 10% of its total workforce.
Earlier this year, the cybersecurity company raised $60 million at a $1 billion valuation, making it a unicorn. Is reportedly laying off 10% of its 165-person workforce. The company develops cyber intelligence software that helps prevent online fraud. Has laid off the majority of its roughly eight-person staff as the LGBTQ+ social networking site struggles to monetize its product. Last year, the company's third, Lex raised $5.6 million in seed funding and elevated co-founder Jennifer Lewis from COO to CEO. Cut less than 15% of its 250- to 300-person workforce as part of a necessary reshuffling following a $133 million Series C funding round, TechCrunch has learned. Will lay off dozens of employees and leave the U.S. market completely following a U.S. government order that banned the sale of the company's software due to security risks. Eliminated about 300 employees in its workforce as it rolls out a broader effort to cut costs and streamline its operations. Will cut 1,800 employees, impacting 10% of its workforce. The company says more than half were cut due to low performance and aims to hire approximately the same number of employees instead of cutting costs. Plans to cut 420 jobs, 10% of its total workforce, as the company undergoes a large restructuring effort. Cut an estimated 2,200 employees, amounting to nearly 14% of its workforce, as the software company attempts to redirect its resources into key areas of product innovation. OpenText plans to cut roughly 1,200 jobs, amounting to almost 2% of its total workforce, as the information management company plans to significantly reduce its expenses by 2025. Is laying off about 250 employees in the latest in a series of job cuts after schools reopened across India following pandemic lockdowns. Is ceasing its operations after its last-resort acquisition talks with Dailyhunt collapsed. Has cut its workforce by 26 people, CEO Uma Valeti wrote in an email to staff, as the lab-grown meat industry sees a decline in VC funding.
Is eliminating 20 employees, amounting to a third of its total workforce, as the company shifts its focus to software development. RealPage will cut approximately 4% of its workforce as part of a plan to boost growth, though the company is also one of many within its field facing a consolidated lawsuit alleging they engaged in price fixing. Intends to lay off roughly 180 employees, amounting to 17% of its workforce, according to an SEC filing, in what amounts to its second recent round of layoffs. Is laying off more than 100 employees, according to a WARN filing. The news of the cuts comes after the company launched a large office expansion in Richmond, California. Is reportedly conducting layoffs in Israel as it goes through a global restructuring. Is reportedly cutting a large number of its staff after being acquired by French gaming company Voodoo. Has laid off about 30 people, accounting for 3% of its workforce, as it refocuses its business to enterprise. Terminated 158 employees, with another batch of layoffs expected to come as the company aims to reduce its workforce by 25%. Is making cuts to 10% of its workforce, impacting around 20 to 25 employees. Is laying off 375 employees, accounting for 5% of its total workforce. Will eliminate up to 85 employees based in Ireland, the company announced. Is reportedly laying off around 30 employees in Israel and will move positions to other regions to cut costs. Cut 16 employees in its supplier resource management department as it focuses on automation. Is reducing its global headcount by 23% in a major restructuring effort as the online learning platform aims to become a leaner operation. Is closing up shop and liquidating its assets. The number of employees affected is currently unknown. Is reducing its headcount by 15% as the company attempts to think in longer time frames, the company announced in a blog post. Is making more cuts, co-CEO Carey Anne Nadeau announced on LinkedIn.
The number of employees impacted is currently unknown.Will lay off its 143 employees by July 3 due to a funding loss, and will no longer be accepting new orders. The company has not shut down fully though, telling TechCrunch: We are actively exploring options for the brand but do not have anything definitive to communicate at this time.Shut down its operations and laid off its remaining employees after raising more than $50 million since its 2017 start.Is laying off 70 employees, about 30% of its workforce, three weeks after an earlier round of cuts impacted 34 employees.Is slashing around 450 jobs at its Indonesian e-commerce division, accounting for 9% of the unit. Has eliminated around 30% of its total workforce, CEO Graham Gaylor confirmed in a statement.Is reportedly conducting large cuts across the company. The total number of employees impacted is currently unknown.Has cut around 45 jobs as part of a restructuring effort.Has laid off at least 1,060 employees two weeks after the startup filed for administration.Is laying off its 1,000+ staff drivers as it embraces a gig worker model similar to that of Lyft and Uber.Has cut 30 employees a month after the Bengaluru-based startup laid off 160 people.Has confirmed layoffs of 150 jobs as it drastically scales back its expansion ambitions to focus on its markets in Norway and Sweden.Is laying off 100 workers, or 20% of its staff, in another round of cuts.Is reportedly laying off 10% of its workforce, amounting to around 30 people.Is reportedly cutting hundreds of employees working in its Azure cloud business, though the exact number of employees impacted is currently unknown.Is laying off 100 employees months after reducing its headcount by 50 workers.Is reportedly making large cuts globally across several of its Cloud teams, including teams focused on sustainability, consulting and partner engineering.Is eliminating 40 employees as part of a restructuring effort, CEO David Campbell wrote in a post on LinkedIn. 
Is shutting down its operations after laying off 60% of its staff in March in an attempt to stay afloat.Has laid off a substantial part of its workforce, TechCrunch learned. Engineering and product design departments were most impacted by the cuts at the cancer care platform startup.Is laying off 37 tech workers at FlightStats, the flight tracking startup it acquired in 2016, as it plans to consolidate its operations in India and the U.K.Is cutting 15 employees in a round of layoffs, impacting 20% of the Israeli startups total workforce.Has laid off hundreds of employees in a bid to keep the EV startup alive. One current and one laid off employee told TechCrunch exclusively that an estimated 150 people remain at the company.Is shutting down its operations and laying off the rest of its staff. The COVID-19 test company laid off half of its workforce earlier this month to cut costs.Has let go of 105 employees as the company seeks to streamline its operations, according to an email to staffers from current CEO Gary Little.Is laying off about 400 employees, roughly 6% of its workforce, as part of a restructuring ahead of the launch of its first electric SUV later this year.Will reportedly make large cuts to its global operations and marketing teams. The amount of employees impacted is currently unknown.Will reportedly cut 14% of its staff, impacting 175 employees, as the company shifts its focus from original Disney+ programming back to films.Let go of 20% of its staff as the coding startup shifts its focus to enterprise sales.Cut about 30% of its total workforce. The recruiting startup that uses AI to find candidates was last valued at over $1.2 billion in January 2022.Eliminated 6% of its staff in another round of layoffs as the fast-delivery startup attempts to become cash-flow positive by the end of 2024.Plans to lay off 106 employees, according to a WARN notice filed in Texas. MainvestHas shut down its operations. 
The number of employees affected is currently unknown.Is cutting roughly 1,000 jobs, impacting 8% of the companys headcount, CEO Chris Hyams wrote in a letter to staff.Cut around 40% of its workforce, impacting about 550 employees, sources told TechCrunch. The companys chief operating officer, Abe Ghabra, has also left the company.Will eliminate 57 positions in San Francisco, according to a WARN notice filed in California.Is eliminating 800 employees, accounting for 13% of its workforce, as part of a restructuring effort.Told The Verge it has laid off most of its staff and is no longer selling its smart home controllers and light switches as it looks for a buyer.Laid off roughly 170 workers, impacting a third of its total headcount, in an effort to cut back on annual operating costs. Closed Arkane Austin, Tango Gameworks, and more game studios as part of cuts at Bethesda. Its currently unclear how many employees will be impacted.Is eliminating 230 employees, about 49% of its workforce, in a cost cutting measure laid out in documents filed with the U.S. SEC.Is slashing its workforce by 20%. The cuts will affect around 140 employees, and the company is also cutting ties with the majority of its contract workers.Has laid off about 3% of its workforce, impacting 116 people, the company confirmed to TechCrunch in a statement. The cuts come over a year after the company eliminated about 4% of its headcount.Is laying off 15% of its workforce, affecting about 400 people, as part of a cost-cutting effort. The companys CEO Barry McCarthy is also stepping down.Has gutted its charging team in a new round of layoffs, CEO Elon Musk announced in an overnight email to executives.Has laid off staff across key teams like Flutter, Dart and Python. It is currently unclear how many employees were let go.Is laying off more employees to preserve cash, according to an internal email viewed by TechCrunch. 
The number of cuts is currently unknown.Is shutting down operations in the U.S., the U.K. and Europe, impacting at least 6,000 jobs across the closing markets.Is cutting about 180 jobs in a profitability push and has let go its chief executive Hemant Bakshi, a source familiar with the matter told TechCrunch.The space and defense startup laid off nearly 30 people, accounting for about 25% of its workforce, due to duplication of roles and functions across the company, TechCrunch exclusively reported.Is expected to cut employees in its Austin office for the second time this year.Plans to eliminate 740 employees at its Oregon headquarters this summer, according to a WARN Act notice.Is eliminating 10% of its workforce following the exit of former CEO Emad Mostaque.Is laying off workers as part of continued cost cutting measures. The number of employees affected was at the time unknown.Is reducing its total workforce by 1%. Its the second round of layoffs for the EV maker this year.Is laying off 5% of its workforce, affecting around 579 employees. The GTA 6 publisher also announced the elimination of several projects in development.Is eliminating about 20% of its 59 employees in a restructuring effort.Is cutting “more than 10%” of its global workforce, per an internal email sent by CEO Elon Musk. That could impact more than 14,000 workers worldwide, as Tesla prepares itself “for our next phase of growth” amid a challenging EV market.Is reducing its global workforce by nearly 4%, impacting up to 140 employees.Is laying off 250 employees based in Ireland as it restructures its Training and Quality team.Cut approximately 10% of its workforce, TechCrunch exclusively learned, as the company prepares for an IPO and aims to reach profitability.Has laid off 382 employees, amounting to 32% of its total workforce, TechCrunch exclusively learned. 
The background-screening platform was last valued at $5 billion in April of 2022.Reportedly laid off a sizable part of its staff in a restructuring effort. The number of employees impacted is currently unknown, but sources told Inc42 that it could be in the range of 70-100 workers.Is laying off 614 employees in California after abandoning its electric car project, according to a WARN notice.Has laid off a small number of employees as part of a company-wide focus on commercialization efforts.Shut down operations. The company, which was backed by OpenAI, employed about 100 people.Is shutting down Yummly, the recipe and cooking app it acquired in 2017.Will cut hundreds of jobs across Sales, Marketing, Global Services and its Physical Stores Technology team.Is laying off about 500 employees, accounting for 3% of its total workforce, as part of a restructuring effort.Has laid off 20% of its staff after acquiring point-of-sale platform Cuboh. The company previously laid off 100 people in 2022.Is restructuring its testing department, which is largely made up of contractors. A Nintendo spokesperson told Kotaku the changes will end some assignments but will lead to the creation of new full-time positions.Cut its global workforce by about 6,000 jobs, according to a 10-K SEC filing. The filing reveals the company cut 13,000 jobs in the last year.Has made cuts to its staff, the company confirmed to TechCrunch. A report in Fintech Business Weekly estimates that 17 people, or about 15% of the company, were impacted. Is cutting 195 roles in an effort to become more sustainable, CEO Henry Chan wrote in a blog post. The layoffs impact nearly a quarter of its staff.Reportedly eliminated 20% of its total workforce in its second restructuring effort in the past year.Conducted another round of layoffs impacting 20 employees, CEO Ham Serunjogi announced in a blog post. Has reportedly cut 16% of its staff in a strategic move to support its Textio Lift product. 
Is reportedly laying off around 25% of its workforce. According to Axios, the cuts affect roughly 80 people.Is shutting down after failing to secure new funding, TechCrunch has learned. The remote driving startup, which had cut staff last year, employed a little more than 100 people.Is reportedly slashing its marketing and communications staff. The company previously announced a strategy to replace upwards of 8,000 jobs with AI.Cut just under 40% of its staff, equating to dozens of employees, the company confirmed to TechCrunch.Laid off around 15 people earlier this year, following comments from CEO Chris Caren that the company would be able to reduce 20% of its headcount thanks to AI.Laid off 13% of its staff based in its New York office as the web3 fantasy sports platform focuses on its Paris headquarters, a source familiar with the matter told TechCrunch.Is eliminating roughly 7% of its workforce as part of organizational restructuring. The fintech unicorn last conducted layoffs in August 2022.Is cutting about 13% of its workforce, affecting 40 employees. Its the second round of layoffs for the battery startup in recent months.Is shutting down, resulting in a permanent mass layoff impacting around 150 employees.Plans to lay off 15% of its workforce and says it likely does not have enough cash on hand to survive the next 12 months.Cut 5% of its workforce, impacting 670 employees, as it moves away from the development of future licensed IP.Is letting go of about 350 employees, accounting for 30% of its workforce.Is likely cutting hundreds of employees who worked on the companys autonomous electric car project now that the effort has stopped, TechCrunch has learned.Is laying off 900 employees from its PlayStation unit, affecting 8% of the divisions workforce. 
Insomniac Games, Naughty Dog, Guerrilla and Firesprite studios will also be impacted.Will reportedly cut 1,500 roles in 2024, primarily in its Product & Technology division, accounting for more than 8% of the companys workforce.Eliminated roughly 60 employees, or 17% of its workforce. Its the financial startups third major layoff round in the past 12 months.Is laying off 10% of its salaried workforce in a bid to cut costs in an increasingly tough market for EVs.Will lay off 13% of its workforce as it works to build a financially sustainable business, CEO Phil Graves told TechCrunch exclusively.Announced it will eliminate 5% of its employees, impacting more than 4,000 people.Will lay off about 550 workers in a move designed to promote operating expense efficiency.Announced in an SEC filing that it will lay off roughly 250 employees as part of a restructuring effort.Is scaling back its investment in a number of products, TechCrunch has learned, resulting in layoffs that will affect roughly 60 employees.Is laying off 230 employees worldwide as part of the companys efforts to advance its focus on the AI-enabled workplace of the future.Is cutting 30% of its North American workforce as part of a restructuring.Is reportedly cutting jobs in its healthcare businesses One Medical and Amazon Pharmacy. The number of impacted roles is currently unknown.Announced plans to eliminate 6% of its workforce, largely impacting the companys sales and marketing divisions.Announced plans to cut 10% of its workforce, impacting roughly 500-plus employees, in an effort to reduce hierarchy.Has laid off 60 employees, or about 19% of its staff, CEO Marc Boiron announced in a blog post.Is laying off approximately 400 employees. 
The layoffs come almost exactly a year to the day after Okta announced plans to cut about 300 employees.Will lay off 95 workers in New York City, according to a filing with the New York Department of Labor.Is laying off about 6% of its global workforce, or 280 employees, the company confirmed to TechCrunch.Conducted another round of layoffs earlier this month, amounting to roughly 15% of its workforce, a source familiar with the situation told TechCrunch. Is reportedly laying off around 1,000 people in the Cash App, foundational and Square arms of Block.Has reportedly begun company-wide layoffs. While it is unclear how many people will be affected, one source told TechCrunch it was expected to be in the thousands.Has laid off 20% of its staff of about 1,000 people, TechCrunch exclusively learned. The cuts to the software startup come despite record growth in the solar industry last year.Is laying off 350 people, or one-third of its headcount, after Amazons bid to acquire the Roomba-maker shuttered. Longtime CEO Colin Angle has also stepped down.Is reportedly laying off 700 workers, or around 1% of its staff. This comes after the company had a significant reduction of 10% of its workforce in 2023.Is reportedly planning to cut around 20% of its staff in the next few weeks. The company announced similar cuts in October, when founder Ryan Petersen returned as CEO and slashed its workforce by 20%.Is laying off 1,900 employees across its gaming divisions following its acquisition of Activision Blizzard. Blizzard president Mike Ybarra announced he will also be stepping down.Is cutting about 400 jobs, 7% of its workforce, as the food delivery startup seeks to bring further improvements to its finances ahead of a planned IPO later this year.Laid off dozens of workers, according to sources familiar with the decision. 
The autonomous vehicle technology company has since confirmed that about 3% of its workforce has been laid off.Will lay off 9% of the companys workforce, affecting about 1,000 full-time employees. In a blog post, the company also plans to cut contract roles in the coming months.Announced it intends to offer voluntary buyouts or job changes to 8,000 employees amid restructuring.Laid off 20% of its staff, affecting 282 workers. In a blog post, Co-CEO Pedro Franceschi said that the company is prioritizing long-term thinking and ownership over short-term gains in our comp structure.Eliminated around 60 jobs across the U.S. in Los Angeles, New York, and Austin in addition to layoffs in international markets. The affected roles, according to NPR’s initial reporting, are largely in sales and advertising.Is cutting 90% of its employees as it shuts down its online used car marketplace and shifts resources into two business units: one focused on auto financing and the other on AI-powered analytics.Is laying off 11% of its workforce, affecting about 530 employees, as the company focuses on fewer, high-impact projects. The League of Legends maker is also sunsetting its five-year-old publishing group, Riot Forge.Is eliminating 13% of its global workforce, affecting 1,650 employees, in a restructuring effort aimed at cutting layers of management.Will eliminate 100 employees, a spokesperson confirmed to TechCrunch, as part of a restructuring effort in its creator management and operations teams.Is laying off hundreds of employees in its advertising sales team, according to a leaked memo. The cuts come a week after the company did sweeping layoffs across its hardware teams. And more layoffs will come throughout the year, as CEO Sundar Pichai told the company in a memo obtained by the Verge.Reportedly laid off a sizable number of employees January 12. 
The game developer studio was acquired by Borderlands maker Gearbox in 2022.Is going to lay off employees in 2024, TechCrunch exclusively learned, with the total impacted employees potentially reaching as high as 20% of the animation studios 1,300 person workforce. The cutbacks come as Disney looks to reduce the studios output as it struggles to achieve profitability in streaming.Is laying off 5% of its workforce, citing an increasingly challenging landscape, according to a leaked memo obtained by Business Insider.Is laying off 17% of its staff, impacting 170 people. In an internal memo obtained by the Verge, Discord CEO Jason Citron blamed the cuts on the company growing too quickly.Laid off hundreds of employees across its Google Assistant division and the team that manages Pixel, Nest and Fitbit hardware. The company confirmed to TechCrunch that Fitbit co-founders James Park and Eric Friedman are also exiting.Is laying off several hundreds of employees at Prime Video and MGM Studios, according to a memo obtained by TechCrunch. The cuts come days after the 500 layoffs at Amazons Twitch.Is reportedly laying off 500 employees, 35% of its current staff, amid a continued struggle to achieve profitability in the face of rising costs and community backlash. The pending layoffs come after hundreds more employees were laid off in 2023.Confirmed to TechCunch that layoffs, conducted in December, had impacted 14 employees, accounting for 60% to 70% of the company, according to multiple sources.Confirmed it cut 10% of its contractor workforce at the end of 2023 as it turns to AI to streamline content production and translations previously handled by humans.Will cut about 10% of corporate roles as it goes through a restructuring plan following Anushka Salinas planned resignation as operating chief and president at the end of January.Is reducing its workforce by about 25%, or 1,800 people. 
The video game engine maker went through three rounds of layoffs in 2023.Laid off two-thirds of its employees as the German startup, which built collaborative presentation software, looks to pursue a completely different path. CEO and co-founder Christian Reber also stepped down.The AI and biomedical startup reportedly cut 17% of its workforce January 8, citing shifts in the economic environment, in a LinkedIn post announcing the layoffs. Eliminated 38% of its staff January 8 as the online retail logistics company follows up after conducting layoffs in September 2023.Announced January 8 it is laying off 28% of its staff, or 154 workers, as the small modular nuclear reactor company shifts its focus to key strategic areas.Is reportedly laying off 15% of its workforce focused on computer vision for retailers.Is shutting down at the end of 2024 after a 12 year run. The design collaboration startup was once valued at nearly $2B.Is laying off nearly 20% of its workforce as it tries to maintain its battle with Nielsen over media measurement. CEO Ross McCray stepped down from the company.Is laying off roughly 15% of its staff, totaling 60 employees. The Israel-based unicorn reportedly plans to move some impacted employees into other positions at the company.Laid off its entire 200-person workforce January 2 after attempts to raise more capital failed, TechCrunch exclusively learned. The mass layoff comes just seven months after the startup acquired rival Zencity. | Unknown | Unknown | null | null | null | null | null | null |
|
news | WIRED Staff | Is AI More Sustainable if You Generate it Underwater? | AI takes a lot of energy to run, but underwater data centers might not be the answer. | https://www.wired.com/story/gadget-lab-podcast-660/ | 2024-09-26T12:00:00Z | AI data centers are so hot right now. Each time generative AI services churn through their large language models to make a chatbot answer one of your questions, it takes a great deal of processing power to sift through all that data. Doing so can use massive amounts of energy, which means the proliferation of AI is raising questions about how sustainable this tech actually is and how it affects the ecosystems around it. Some companies think they have a solution: running those data centers underwater, where they can use the surrounding seawater to cool and better control the temperature of the hard-working GPUs inside. But it turns out just plopping something into the ocean isn't always a foolproof plan for reducing its environmental impact.
This week on Gadget Lab, WIRED writers Paresh Dave and Reece Rogers join the show to talk about their reporting on underwater data centers and how the race to power AI systems is taking its toll on the environment.
Show Notes
Read Paresh and Reece's story about the plan to put an underwater data center in the San Francisco Bay. Read Reece's stories about how this is AI's hyper-consumption era and how to wade through all the AI hype. Read Lauren's story about the social network inhabited only by bots. Read Karen Hao's story in The Atlantic about how companies like Microsoft are taking water from the desert to use for cooling down AI data centers. Here's the Black Cat Substack article about the character Harper from Industry. Follow all of WIRED's AI and climate coverage.
Recommendations
Paresh recommends checking out cookbooks from your local library. Reece recommends the soundtrack of the first Twilight movie for all your Fall feels. Lauren recommends the HBO show Industry. Mike recommends Anna Weiner's profile of bicycle designer Grant Peterson in The New Yorker.
Reece Rogers can be found on social media @thiccreese. Paresh Dave is @peard33. Lauren Goode is @LaurenGoode. Michael Calore is @[email protected]. Bling the main hotline at @GadgetLab. The show is produced by Boone Ashworth (@booneashworth). Our theme music is by Solar Keys.
You can always listen to this week's podcast through the audio player on this page, but if you want to subscribe for free to get every episode, here's how:
If you're on an iPhone or iPad, open the app called Podcasts, or just tap this link. You can also download an app like Overcast or Pocket Casts, and search for Gadget Lab. If you use Android, you can find us in the Google Podcasts app just by tapping here. We're on Spotify too. And in case you really need it, here's the RSS feed. | Unknown | Unknown | null | null | null | null | null | null |
|
news | Katherine Tangalakis-Lippert | Sam Altman believes wars will be fought over AI unless there's massive spending on infrastructure. But that comes with a cost. | Sam Altman says the development of AI infrastructure is key to avoiding war over the tech. Experts say such development costs more than just money. | https://www.businessinsider.com/openai-sam-altman-infrastructure-artificial-intelligence-data-centers-2024-9 | https://i.insider.com/66f33f95ce3009a0fac93729?width=1200&format=jpeg | 2024-09-25T09:22:02Z | Leaders in the AI industry are calling for massive investments to create enough infrastructure to develop the technology. Experts warn the development will come with a high social and environmental cost.
Education Images/Universal Images Group via Getty Images
OpenAI CEO Sam Altman, in a Monday blog post, called for massive investments in AI infrastructure.
Insufficient infrastructure, Altman warned, will lead to wars being fought over the technology.
But experts say developing AI infrastructure at scale comes with big social and environmental costs.
In a Monday blog post, OpenAI CEO Sam Altman argued that massive investment in AI infrastructure is key to market dominance — and avoiding global conflict.
"If we want to put AI into the hands of as many people as possible, we need to drive down the cost of compute and make it abundant (which requires lots of energy and chips)," Altman wrote. "If we don't build enough infrastructure, AI will be a very limited resource that wars get fought over, and that becomes mostly a tool for rich people."
While promoting the development of AI infrastructure is the new focus of industry leaders like Altman, researchers on the technology's social, environmental, and economic impacts told Business Insider that the development of the data centers and hardware systems needed to achieve artificial general intelligence or superintelligence will cost far more than money.
Representatives for OpenAI did not respond to a request for comment from Business Insider.
A sharp industry focus on AI infrastructure
The new hot topic in artificial intelligence is infrastructure — investing in it, planning a massive expansion of computing power, and creating the data centers needed to develop the technology at scale.
Last week, Microsoft and BlackRock announced the launch of a $30 billion fund to "enhance American competitiveness in AI while meeting the growing need for energy infrastructure to power economic growth."
Earlier this month, the White House hosted a roundtable with AI infrastructure leaders from around the country — including Altman and executives from Microsoft, Meta, Amazon, and Anthropic, among others — to "discuss steps to ensure the United States continues to lead the world in AI" and ensure the tech's development is aligned with "national security, economic, and environmental goals."
While some experts are cautiously optimistic about AI, others pointed to pitfalls in the rapid build-up of infrastructure.
AI's unclear economic benefits
Alex de Vries, an economist and the founder of Digiconomist, a platform dedicated to exposing the unintended consequences of digital trends, told Business Insider the current conversation on AI infrastructure is focused on the development of large-scale data centers, with industry insiders trying to convince national governments to greenlight expansion — despite unclear benefits.
"It costs a whole lot of resources, while data centers don't generate that many benefits for the local economy — not too many jobs, very little other business activity because no one has to be near a data center," de Vries said. "So for now, the trade-off is really bad."
The return on investment in artificial intelligence is increasingly murky. With companies like OpenAI and others preparing to spend over $1 trillion on artificial intelligence in the coming years, a June Goldman Sachs report asked the question: "Are we spending too much for too little reward?"
Proponents of the technology argue it could eventually help save lives by revolutionizing the healthcare industry or, as the United Nations climate technology website suggests, "scaling up transformative climate solutions for mitigation and adaptation action in developing countries." However, chatbots and image generators have been the most publicly celebrated advancements in AI from the major Big Tech players — and errors created by AI-generated content have been well documented.
In recent earnings calls, executives from Microsoft and Alphabet have repeatedly fended off investors' questions about when to expect a return on their AI expenditures.
Shaolei Ren, an associate professor of electrical and computer engineering at the University of California, Riverside, told Business Insider that, despite Altman's prediction in his blog post that we could be "a few thousand days" from an AI superintelligence, he's not optimistic that we're as close as Altman says we are — and he argued OpenAI's reliance on large language models is not the right path to get there.
Rather than a single model that can do it all, Ren said he believes the industry needs to focus on smaller, more specialized models that require fewer resources and can be built to handle more precise tasks.
"I think we need to shift the way we build AI, not just rely on bigger and bigger models," Ren said. "This is not really sustainable or scalable."
An environmental disaster in the making
Both de Vries and Ren said that beyond the financial costs, ensuring AI infrastructure is developed at the scale industry players like Altman seek comes with massive environmental costs.
"If you're talking about large-scale data centers, they are going to be consuming a huge amount of power and a huge amount of water," de Vries said.
He noted that data centers consume tons of water for their cooling systems — some of which evaporates, so it cannot be reused. Electricity consumption also skyrockets, driving up demand and increasing reliance on fossil fuels despite the industry's use of renewable energy certificates (RECs).
Wind and solar companies generate RECs for each unit of energy they create and then sell them. Buyers of RECs can then say they source renewable energy, even if they're using power from nonrenewable sources.
"The environmental consequences of a big surge in power demand are typically very, very bad — way, way worse than is being acknowledged by the Big Tech companies," de Vries said.
A growing risk to the social order
That's not to mention what Cary Coglianese, a professor of law and political science and the director of an interdisciplinary program focused on research on effective regulation at the University of Pennsylvania, describes as AI's impact on "social ordering."
While Coglianese describes himself as "cautiously optimistic" about the benefits of artificial intelligence, he said even if we achieve advanced uses for the tech, the feat will come with "trade-offs."
"Any human endeavor that involves an optimization challenge can be made more efficient through artificial intelligence," Coglianese said. "Some of those uses can be good, like finding malignancies on MRI scans — a great use of AI technology — but we'll also be making much more efficient weapon systems, or automating weapon systems, or creating tools that can oppress people that can be made more, quote-unquote, 'efficient.'"
In his blog post, Altman writes that the dawn of what he calls the Intelligence Age comes "with very complex and extremely high-stakes challenges."
"It will not be an entirely positive story," Altman wrote. "But the upside is so tremendous that we owe it to ourselves, and the future, to figure out how to navigate the risks in front of us."
Altman didn't mention the current negative social impacts AI is already causing — such as deepfakes infiltrating our political system, a proliferation of AI-generated revenge porn, and workers being replaced as companies increasingly invest in the tech.
"There's a real disconnect between where the world is today socially and where it is technologically — and that's something that the technologists and the techno-optimists can often overlook," Coglianese told Business Insider. "I'm cautiously optimistic too, but we have to recognize that until we have a social ordering that's different than it is, a new technology won't be some magic elixir that will transform the globe into a utopia."
Read the original article on Business Insider | Unknown | Computer and Mathematical/Life, Physical, and Social Science | null | null | null | null | null | null |
news | Darius Rafieyan | Like digital locusts, OpenAI and Anthropic AI bots cause havoc and raise costs for websites | AI giants send bots out to crawl the web and scrape data for free. Adding insult to injury, the bots are disrupting websites and spiking cloud bills. | https://www.businessinsider.com/openai-anthropic-ai-bots-havoc-raise-cloud-costs-websites-2024-9 | https://i.insider.com/66eac673ce3009a0fac721b4?width=1200&format=jpeg | 2024-09-19T09:00:02Z | Getty Images; Alyssa Powell/BIEdd Coates' Game UI Database was crippled by traffic from an OpenAI IP address.AI companies are aggressively crawling the web, causing disruptions.Website owners see cloud bills spike due to AI botnet traffic.Edd Coates knew something was wrong. His online database was under attack.Coates is a game designer and the creator of the Game UI Database. It's a labor of love for which he spent five years cataloging more than 56,000 screenshots of video game user interfaces. If you want to know what the health bar looks like in Fallout 3 and compare that to the inventory screen in Breath of the Wild, Coates has you covered.A few weeks ago, he says, the website slowed to a crawl. It was taking 3 times as long to load pages, users were getting 502 Bad Gateway Errors, and the homepage was being reloaded 200 times a second."I assumed it was some sort of petty DDoS attack," Coates told Business Insider.But when he checked the system logs, he realized the flood of traffic was coming from a single IP address owned by OpenAI.In the race to build the world's most advanced AI, tech companies have fanned out across the web, releasing botnets like a plague of digital locusts to scour sites for anything they can use to fuel their voracious models.It's often high quality training data they're after, but also other information that may help AI models understand the world. 
The race is on to collect as much information as possible before it runs out, or the rules change on what's acceptable. One study estimated that the world's supply of usable AI training data could be depleted by 2032. The entire online corpus of recorded human experience may soon be inadequate to keep ChatGPT up to date.

A resource like the Game UI Database, where a human has already done the painstaking labor of cleaning and categorizing images, must have looked like an all-you-can-eat buffet.

Bigger cloud bills

For small website owners with limited resources, the costs of playing host to a swarm of hungry bots can present a significant burden.

"Within a space of 10 minutes we were transferring around 60 to 70 gigabytes of data," said Jay Peet, a fellow game designer who manages the servers that host Coates' database. "Based on Amazon's on-demand bandwidth pricing that would cost $850 per day."

Coates makes no money from the Game UI Database and in fact operates the site at a loss, but he worries that the actions of giant AI companies may endanger independent creators who rely on their websites to make a living.

"The fact that OpenAI's behavior has crippled my website to the point where it stopped functioning is just the cherry on top," he said.

An OpenAI spokesperson said the company's bot was querying Coates' website roughly twice per second. The representative also stressed that OpenAI was crawling the site as part of an effort to understand the web's structure. It wasn't there to scrape data.

"We make it easy for web publishers to opt out of our ecosystem and express their preferences on how their sites and content work with our products," the spokesperson added. "We've also built systems to detect and moderate site load to be courteous and considerate web participants."

Planetary problems

Joshua Gross, founder of digital product studio Planetary, told BI that he encountered a similar problem after redesigning a website for one of his clients.
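Peet's $850-a-day figure can be roughly reproduced with back-of-the-envelope arithmetic. The $0.09/GB egress rate below is an assumption (a commonly cited on-demand internet data-transfer tier; actual AWS pricing varies by region and volume), not a number from the article:

```python
# Rough check of the quoted hosting cost: ~60-70 GB every 10 minutes,
# priced at an ASSUMED egress rate of $0.09/GB.
GB_PER_10_MIN = 65            # midpoint of the 60-70 GB range
EGRESS_USD_PER_GB = 0.09      # assumed rate, not from the article

windows_per_day = 6 * 24                      # ten-minute windows in a day
gb_per_day = GB_PER_10_MIN * windows_per_day  # 9,360 GB
cost_per_day = gb_per_day * EGRESS_USD_PER_GB

print(f"{gb_per_day:,} GB/day -> ${cost_per_day:,.2f}/day")
```

That works out to roughly $842 per day, close to the $850 estimate quoted above.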
Shortly after launch, traffic jumped and the client saw their cloud computing costs double from previous months.

"An audit of traffic logs revealed a significant amount of traffic from scraping bots," Gross said. "The problem was primarily Anthropic driving an overwhelming amount of nonsense traffic," he added, referring to repeated requests all resulting in 404 errors.

Jennifer Martinez, a spokesperson for Anthropic, said the company strives to make sure its data-collection efforts are transparent and not intrusive or disruptive.

Eventually, Gross said, he was able to stem the deluge of traffic by updating the site's robots.txt code. Robots.txt is a protocol, in use since the late 1990s, that lets bot crawlers know where they can and can't go. It is widely accepted as one of the unofficial rules of the web.

Blocking AI bots

Robots.txt restrictions aimed at AI companies have skyrocketed. One study found that between April 2023 and April 2024, nearly 5% of all online data and about 25% of the highest-quality data added robots.txt restrictions for AI botnets.

The same study found that 25.9% of such restrictions were for OpenAI, compared to 13.3% for Anthropic, and 9.8% for Google. The authors also found that many data owners banned crawling in their Terms of Service, but did not have robots.txt restrictions in place. That has left them vulnerable to unwanted crawling from bots that rely solely on robots.txt.

OpenAI and Anthropic have said their bots respect robots.txt, but BI has reported instances in the recent past in which both companies have bypassed the restrictions.

Key metrics polluted

David Senecal, a principal product architect for fraud and abuse at networking giant Akamai, says his firm tracks AI training botnets managed by Google, Microsoft, OpenAI, Anthropic, and others.
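The mechanics of these robots.txt opt-outs can be sketched with Python's standard-library parser. The user-agent tokens below (GPTBot for OpenAI, ClaudeBot for Anthropic) are the ones those operators publicly document, but the rules file itself is an invented example; as the article notes, a site must name each bot individually, and compliance is voluntary:

```python
# Minimal sketch of robots.txt-based AI-bot blocking, using the stdlib
# parser. The rules block the named AI crawlers while allowing all
# other user agents.
from urllib.robotparser import RobotFileParser

RULES = """\
User-agent: GPTBot
Disallow: /

User-agent: ClaudeBot
Disallow: /

User-agent: *
Allow: /
"""

parser = RobotFileParser()
parser.parse(RULES.splitlines())

print(parser.can_fetch("GPTBot", "https://example.com/data"))        # False
print(parser.can_fetch("SomeOtherBot", "https://example.com/data"))  # True
```

Nothing in this file is enforced server-side; a bot that ignores it, or crawls under an impersonated name, sees no difference.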
He says among Akamai's users, the bots are controversial.

"Website owners are generally fine with having their data indexed by web search engines like Googlebot or Bingbot," Senecal said, "however, some do not like the idea of their data being used to train a model."

He says some users complain about increased cloud costs or stability issues from the increased traffic. Others worry the botnets present intellectual property issues or will "pollute key metrics" like conversion rates.

When an AI bot is swarming your website over and over, your traffic metrics will likely be out of whack with reality. That causes problems for sites that advertise online and need to track how effective this marketing is.

Senecal says robots.txt is still the best way to manage unwanted crawling and scraping, though it's an imperfect solution. It requires domain creators to know the specific names of every single bot they want to block, and it requires the bot operators to comply voluntarily. On top of that, Senecal says Akamai tracks various "impersonator" bots that parade as Anthropic or OpenAI web crawlers, making the task of parsing through them even harder.

In some cases, Senecal says, botnets will crawl an entire website every day just to see what's changed, a blunt approach that results in massive amounts of duplicated data.

"This way of collecting data is very wasteful," he said, "but until the mindset on data sharing changes and a more evolved and mature way to share data exists, scraping will remain the status quo."

"We are not Google"

Roberto Di Cosmo is the director of Software Heritage, a non-profit database created to "collect, preserve and share all publicly available source code for the benefit of society."

Di Cosmo says this past summer he saw an unprecedented surge in AI botnets scraping the online database, causing the website to become unresponsive for some users.
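The "polluted key metrics" problem Senecal describes is simple to illustrate: bot hits inflate the visit denominator, so a measured conversion rate drops even though human behavior hasn't changed. The numbers below are invented for illustration:

```python
# Toy illustration of bot traffic skewing a conversion-rate metric.
human_visits = 10_000
conversions = 300            # purchases, sign-ups, etc.
bot_visits = 15_000          # crawler hits mixed into the same logs

true_rate = conversions / human_visits                     # 3.0%
measured_rate = conversions / (human_visits + bot_visits)  # 1.2%

print(f"true: {true_rate:.1%}  measured: {measured_rate:.1%}")
```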
His engineers spent hours identifying and blacklisting thousands of IP addresses that were driving the traffic, diverting resources away from other important tasks.

"We are not Google, we have a limited amount of resources to run this operation," Di Cosmo said.

He's an evangelist for open access, and not, in theory, opposed to AI companies using the database to train models. Software Heritage already has a partnership with Hugging Face, which used the database to help train its AI model StarCoder2.

"Developing machine-learning models that encompass these digital commons can democratize software creation, enabling a wider audience to benefit from the digital revolution, a goal that aligns with our values," Di Cosmo said, "but it must be done in a responsible way."

Software Heritage has published a set of principles governing how and when it agrees to share its data. All models created using the database must be open-source and not "monopolized for private gain." And the creators of the underlying code must be able to opt out if they wish.

"Sometimes, these people get the data anyway," Di Cosmo said, referring to botnets that scrape hundreds of billions of web pages one by one.

Getting taken offline

"We have been taken offline a couple of times due to AI bots," said Tania Cohen, chief executive of 360Giving, a non-profit database of grants and charitable giving opportunities.

Cohen says that as a small charity with no in-house technical team, the surges in traffic have been highly disruptive. What's even more frustrating, she says, is that much of the information is easily downloadable in other ways and doesn't need to be crawled.

But hungry AI botnets scrape first, ask questions later.

"Utterly sick"

Coates says his Game UI Database is back up and running and he continues to add to it.
There are millions of people out there like Coates, obsessive about some tiny corner of the world, compelled to sink thousands of hours into a pursuit that no one else on Earth could find meaning in. It's one of the reasons to love the internet.

And it's yet another area of society buffeted by the ripple effects of the AI revolution. The server costs of a small-fry database operator may seem not worth mentioning. But Coates' story is emblematic of a bigger question: When AI comes to change the world, who bears the cost?

Coates says he maintains the database as a source of reference material for fellow game designers. He worries that generative AI, which depends on the work of human creators, will inevitably replace those very same creators.

"To find that my work is not only being stolen by a large organization, but used to hurt the very people I'm trying to help, makes me feel utterly sick," Coates said.

Read the original article on Business Insider | Unknown | Unknown | null | null | null | null | null | null |
news | Ashley Belanger | OpenAI asked US to approve energy-guzzling 5GW data centers, report says | OpenAI stokes China fears to woo US approvals for huge data centers, report says. | https://arstechnica.com/tech-policy/2024/09/openai-asked-us-to-approve-energy-guzzling-5gw-data-centers-report-says/ | 2024-09-25T17:44:22Z | OpenAI hopes to convince the White House to approve a sprawling plan that would place 5-gigawatt AI data centers in different US cities, Bloomberg reports.

The AI company's CEO, Sam Altman, supposedly pitched the plan after a recent meeting with the Biden administration where stakeholders discussed AI infrastructure needs. Bloomberg reviewed an OpenAI document outlining the plan, reporting that 5 gigawatts "is roughly the equivalent of five nuclear reactors" and warning that each data center will likely require "more energy than is used to power an entire city or about 3 million homes."

According to OpenAI, the US needs these massive data centers to expand AI capabilities domestically, protect national security, and effectively compete with China. If approved, the data centers would generate "thousands of new jobs," OpenAI's document promised, and help cement the US as an AI leader globally.

But the energy demand is so enormous that OpenAI told officials that the "US needs policies that support greater data center capacity," or else the US could fall behind other countries in AI development, the document said.

Energy executives told Bloomberg that "powering even a single 5-gigawatt data center would be a challenge," as power projects nationwide are already "facing delays due to long wait times to connect to grids, permitting delays, supply chain issues, and labor shortages."
Most likely, OpenAI's data centers wouldn't rely entirely on the grid, though, instead requiring a "mix of new wind and solar farms, battery storage and a connection to the grid," John Ketchum, CEO of NextEra Energy Inc., told Bloomberg.

That's a big problem for OpenAI, since one energy executive, Constellation Energy Corp. CEO Joe Dominguez, told Bloomberg that he's heard that OpenAI wants to build five to seven data centers. "As an engineer," Dominguez said, he doesn't think that OpenAI's plan is "feasible" and would seemingly take more time than needed to address current national security risks as US-China tensions worsen.

OpenAI may be hoping to avoid delays and cut the line if the White House approves the company's ambitious data center plan. For now, a person familiar with OpenAI's plan told Bloomberg that OpenAI is focused on launching a single data center before expanding the project to "various US cities."

Bloomberg's report comes after OpenAI's chief investor, Microsoft, announced a 20-year deal with Constellation to re-open Pennsylvania's shuttered Three Mile Island nuclear plant to provide a new energy source for data centers powering AI development and other technologies. But even if that deal is approved by regulators, the resulting energy supply that Microsoft could access (roughly 835 megawatts, or 0.835 gigawatts, of energy generation, enough to power approximately 800,000 homes) is still more than five times less than OpenAI's 5-gigawatt demand for its data centers.

Ketchum told Bloomberg that it's easier to find a US site for a 1-gigawatt data center, but locating a site for a 5-gigawatt facility would likely be a bigger challenge. Notably, Amazon recently bought a $650 million nuclear-powered data center in Pennsylvania with a 2.5-gigawatt capacity.
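The capacity comparisons in this passage reduce to simple arithmetic. The ~1 GW-per-reactor figure below is an assumption (typical output of a large US reactor), while the 5 GW and 835 MW figures come from the reporting above:

```python
# Back-of-the-envelope check of the capacity comparisons above.
DATA_CENTER_GW = 5.0          # OpenAI's per-site ask, per Bloomberg
REACTOR_GW = 1.0              # ASSUMED output of one large reactor
THREE_MILE_ISLAND_GW = 0.835  # the Microsoft/Constellation deal

reactors_equiv = DATA_CENTER_GW / REACTOR_GW       # "five nuclear reactors"
shortfall = DATA_CENTER_GW / THREE_MILE_ISLAND_GW  # ~6x the 835 MW supply

print(f"~{reactors_equiv:.0f} reactors; {shortfall:.1f}x the 835 MW deal")
```

The ~6x ratio is what the article summarizes as "more than five times less."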
At the meeting with the Biden administration, OpenAI suggested opening large-scale data centers in Wisconsin, California, Texas, and Pennsylvania, a source familiar with the matter told CNBC.

During that meeting, the Biden administration confirmed that developing large-scale AI data centers is a priority, announcing "a new Task Force on AI Datacenter Infrastructure to coordinate policy across government." OpenAI seems to be trying to get the task force's attention early on, outlining in the document that Bloomberg reviewed the national security and economic benefits its data centers could provide for the US.

In a statement to Bloomberg, OpenAI's spokesperson said that "OpenAI is actively working to strengthen AI infrastructure in the US, which we believe is critical to keeping America at the forefront of global innovation, boosting reindustrialization across the country, and making AI's benefits accessible to everyone."

Big Tech companies and AI startups will likely continue pressuring officials to approve data center expansions, as well as new kinds of nuclear reactors, as the AI explosion globally continues. Goldman Sachs estimated that "data center power demand will grow 160 percent by 2030." To ensure power supplies for its AI, according to the tech news site Freethink, Microsoft has even been training AI to draft all the documents needed for proposals to secure government approvals for nuclear plants to power AI data centers. | Content Creation/Process Automation | Management/Business and Financial Operations | null | null | null | null | null | null |
|
news | Hughey Newsome, Contributor https://www.forbes.com/sites/hugheynewsome/ | AI‒Climate Hero Or Villian? | AI continues to explode, as does the number of its use cases; AI could be used to solve the climate crisis, but its water and electricity hunger could leave its own ma... | https://www.forbes.com/sites/hugheynewsome/2024/09/08/aiclimate-hero-or-villian/ | 2024-09-08T22:30:09Z | AI continues to explode, as does the number of its use cases; AI could be used to solve the climate crisis, but its water and electricity hunger could leave its own massive impact.

At the end of August, AI juggernaut Nvidia posted revenue of $30 billion, surpassing consensus expectations of $28.7 billion. Despite this, the stock was incredibly volatile last week as some analysts bemoaned the fact that the company is projecting slowing growth moving forward along with unclear payback timelines for large investments in AI. This volatility shows how large the expectations are for AI as its usage creeps into more of our business and personal lives.

AI use cases for mitigating and adapting to climate change are part of this equation. AI's ability to process immensely large data sets to improve weather and climate predictions will help build more effective modeling.
Improved accuracy could support urban planning, deployment of infrastructure for renewable energy generation, adaptation investments, and agriculture. AI helps predict the impacts of climate change, including the speed of iceberg and glacial melt into oceans, the rate of carbon emission increase from deforestation, and the impact of plastic pollution on oceanic life. Such breakthroughs could lead to stronger arguments for carbon pricing and other regulatory approaches to support solutions.

But there are some challenges that AI presents to the climate crisis, and as AI's usage grows, their impacts grow as well. First, AI requires energy-thirsty data centers. In order for AI to work, it must learn from historic data, use the process of inference to learn from new experiences, data or observations, and then adjust historical data sets to incorporate new learnings. This process requires a lot of energy: ChatGPT queries consume as much energy as ten Google searches, and if all Google searches utilized AI, then it would amount to the electricity usage of Ireland. As AI generates more data and its usage expands, this will grow more.

This energy need could have several negative implications on the fight against climate change and the pursuit of the UN SDGs. First, the obvious is that the additional demand could lead to more greenhouse gas emissions, as all geographies continue transitioning away from fossil fuel-based electricity generation. Today, data centers, if allowed to run constantly, would consume 350 TWh of electricity, which is more than the consumption of Australia, Spain or Italy. Bloomberg provides estimates of data center consumption growing to 1,580 TWh of electricity. This expansion of electricity consumption will hinder the migration to renewables, as 1,230 TWh of additional electricity needs globally are to come from AI data centers. In comparison, in the US, 664 TWh of electricity were generated from solar and wind sources in 2023, per Climate Central.
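The 1,230 TWh growth figure in this passage is just the spread between the two consumption estimates quoted there:

```python
# The projected additional demand is the difference between today's
# always-on data-center consumption and Bloomberg's growth estimate.
CURRENT_TWH = 350       # data centers today, running constantly
PROJECTED_TWH = 1_580   # Bloomberg's estimate

additional_twh = PROJECTED_TWH - CURRENT_TWH
print(additional_twh)   # 1230
```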
Such growth in demand could gobble new capacity in renewables.

This electricity demand could have justice-related impacts. As the planet warms and adaptation needs increase, electricity is needed to cool homes, schools and cooling centers. Electricity shortages and increased cost could limit the ability of disadvantaged, poorer communities to adapt, just as the need for these adaptation tools gets larger thanks to temperature rise.

AI also has a significant impact on water usage. Quoting work from professors at the University of California at Riverside and the University of Texas at Arlington: "The global AI demand may be accountable for 4.2–6.6 billion cubic meters of water withdrawal in 2027, which is more than the total annual water withdrawal of 4–6 Denmarks or half of the United Kingdom."

Not only could AI's need for electricity and water negatively impact climate and adaptation in general, but AI has implicit problems that impact adaptation efforts. AI could be used to predict the impact of warming by census tract, zip code or even by neighborhood. But AI depends on historic data to learn. If a representative amount of historic data is not present to predict the impact of warming on disadvantaged communities, it could compound negative impacts. For example, if a municipality is planning for investment in cooling centers for adaptation purposes but uses general AI data that does not correct for poverty levels but overleverages, say, air conditioning investment rates in the general population, the municipality could be underprepared if disadvantaged communities are not represented properly in baseline data.

So where do we go from here? One thing is certain, as Wall Street reminded us this week: AI is here to stay, and the number of applications will only grow. While AI is useful in helping to mitigate and adapt to climate change, its long-term water and electricity footprint could have serious consequences on the fight against climate change.
Unless society acknowledges this risk and addresses it, it will not take a powerful AI bot to predict an even warmer climate sooner rather than later. | Prediction/Decision Making/Content Synthesis | Life, Physical, and Social Science/Architecture and Engineering/Business and Financial Operations | null | null | null | null | null | null |
|
news | Matteo Wong | Chatbots Are Saving America’s Nuclear Industry | Microsoft’s AI energy needs are bringing the infamous Three Mile Island power plant back from the dead. | https://www.theatlantic.com/technology/archive/2024/09/ai-microsoft-nuclear-three-mile-island/679988/ | 2024-09-21T10:00:00Z | When the Three Mile Island power plant in Pennsylvania was decommissioned in 2019, it heralded the symbolic end of America’s nuclear industry. In 1979, the facility was the site of the worst nuclear disaster in the nation’s history: a partial reactor meltdown that didn’t release enough radiation to cause detectable harm to people nearby, but still turned Americans against nuclear power and prompted a host of regulations that functionally killed most nuclear build-out for decades. Many existing plants stayed online, but 40 years later, Three Mile Island joined a wave of facilities that shut down because of financial hurdles and competition from cheap natural gas, closures that cast doubt over the future of nuclear power in the United States.

Now Three Mile Island is coming back, this time as part of efforts to meet the enormous electricity demands of generative AI. This morning, the plant’s owner, Constellation Energy, announced that it is reopening the facility. Microsoft, which is seeking clean energy to power its data centers, has agreed to buy power from the reopened plant for 20 years. “This was the site of the industry’s greatest failure, and now it can be a place of rebirth,” Joseph Dominguez, the CEO of Constellation, told The New York Times. Three Mile Island plans to officially reopen in 2028, after some $1.6 billion worth of refurbishing and under a new name, the Crane Clean Energy Center.

Nuclear power and chatbots might be a perfect match. The technology underlying ChatGPT, Google’s AI Overviews, and Microsoft Copilot is extraordinarily power-hungry.
These programs feed on more data, are more complex, and use more electricity-intensive hardware than traditional web algorithms. An AI-powered web search, for instance, could require five to 10 times more electricity than a traditional query.

The world is already struggling to generate enough electricity to meet the internet’s growing power demand, which AI is rapidly accelerating. Large grids and electric utilities across the U.S. are warning that AI is straining their capacity, and some of the world’s biggest data-center hubs—including Sweden, Singapore, Amsterdam, and exurban Washington, D.C.—are struggling to find power to run new constructions. The exact amount of power that AI will demand within a few years’ time is hard to predict, but it will likely be enormous: Estimates range from the equivalent of Argentina’s annual power usage to that of India.

That’s a big problem for the tech companies building these data centers, many of which have made substantial commitments to cut their emissions. Microsoft, for instance, has pledged to be “carbon negative,” or to remove more carbon from the atmosphere than it emits, by 2030. The Three Mile Island deal is part of that accounting. Instead of directly drawing power from the reopened plant, Microsoft will buy enough carbon-free nuclear energy from the facility to match the power that several of its data centers draw from the grid, a company spokesperson told me over email.

Such electricity-matching schemes, known as “power purchase agreements,” are necessary because the construction of solar, wind, and geothermal plants is not keeping pace with the demands of AI.
Even if it were, these clean electricity sources might pose a more fundamental problem for tech companies: Data centers’ new, massive power demands need to be met at all hours of the day, not just when the sun shines or the wind blows.

To fill the gap, many tech companies are turning to a readily available source of abundant, reliable electricity: burning fossil fuels. In the U.S., plans to wind down coal-fired power plants are being delayed in West Virginia, Maryland, Missouri, and elsewhere to power data centers. That Microsoft will use the refurbished Three Mile Island to offset, rather than supply, its data centers’ electricity consumption suggests that the facilities will likely continue to rely on fossil fuels for some time, too. Burning fossil fuels to power AI means the new tech boom might even threaten to delay the green-energy transition.

Still, investing in nuclear energy to match data centers’ power usage also brings new sources of clean, reliable electricity to the power grid. Splitting apart atoms provides a carbon-free way to generate tremendous amounts of electricity day and night. Bobby Hollis, Microsoft’s vice president for energy, told Bloomberg that this is a key upside to the Three Mile Island revival: “We run around the clock. They run around the clock.” Microsoft is working to build a carbon-free grid to power all of its operations, data centers included. Nuclear plants will be an important component that provides what the company has elsewhere called “firm electricity” to fill in the gaps for less steady sources of clean energy, including solar and wind.

It’s not just Microsoft that is turning to nuclear. Earlier this year, Amazon purchased a Pennsylvania data center that is entirely nuclear-powered, and the company is reportedly in talks to secure nuclear power along the East Coast from another Constellation nuclear plant.
Google, Microsoft, and several other companies have invested in or agreed to buy electricity from start-ups promising nuclear fusion—an even more powerful and cleaner form of nuclear power that remains highly experimental—as have billionaires including Sam Altman, Bill Gates, and Jeff Bezos.

Nuclear energy might not just be a good option for powering the AI boom. It might be the only clean option able to meet demand until there is a substantial build-out of solar and wind energy. A handful of other, retired reactors could come back online, and new ones may be built as well. Just yesterday, Jennifer Granholm, the secretary of energy, told my colleague Vann R. Newkirk II that building small nuclear reactors could become an important way to supply nonstop clean energy to data centers. Whether such construction will be fast and plentiful enough to satisfy the growing power demand is unclear. But it must be, for the generative-AI revolution to really take off. Before chatbots can finish remaking the internet, they might need to first reshape America’s physical infrastructure. | Unknown | Management/Business and Financial Operations | null | null | null | null | null | null |
|
news | Karen Hao | Microsoft’s Hypocrisy on AI | Can artificial intelligence really enrich fossil-fuel companies and fight climate change at the same time? The tech giant says yes. | https://www.theatlantic.com/technology/archive/2024/09/microsoft-ai-oil-contracts/679804/ | 2024-09-13T10:00:00Z | Microsoft executives have been thinking lately about the end of the world. In a white paper published late last year, Brad Smith, the company’s vice chair and president, and Melanie Nakagawa, its chief sustainability officer, described a “planetary crisis” that AI could help solve. Imagine an AI-assisted tool that helps reduce food waste, to name one example from the document, or some future technology that could “expedite decarbonization” by using AI to invent new designs for green tech.

But as Microsoft attempts to buoy its reputation as an AI leader in climate innovation, the company is also selling its AI to fossil-fuel companies. Hundreds of pages of internal documents I’ve obtained, plus interviews I’ve conducted over the past year with 15 current and former employees and executives, show that the tech giant has sought to market the technology to companies such as ExxonMobil and Chevron as a powerful tool for finding and developing new oil and gas reserves and maximizing their production—all while publicly committing to dramatically reduce emissions.

Although tech companies have long done business with the fossil-fuel industry, Microsoft’s case is notable. It demonstrates how the AI boom contributes to one of the most pressing issues facing our planet today—despite the fact that the technology is often lauded for its supposed potential to improve our world, as when Sam Altman testified to Congress that it could address issues such as “climate change and curing cancer.” These deals also show how Microsoft can use the vagaries of AI to talk out of both sides of its mouth, courting the fossil-fuel industry while asserting its environmental bona fides.
(Many of the documents I viewed have been submitted to the Securities and Exchange Commission as part of a whistleblower complaint alleging that the company has omitted from public disclosures “the serious climate and environmental harms caused by the technology it provides to the fossil fuel industry,” arguing that the information is of material and financial importance to investors. A Microsoft spokesperson said the company was unaware of the filing and had not been contacted by the SEC.)

For years, Microsoft routinely promoted its work with companies such as Schlumberger, Chevron, Halliburton, ExxonMobil, Baker Hughes, and Shell. Around 2020, the same year Microsoft made ambitious climate commitments that included a goal to reach carbon negativity by 2030, the tech firm grew quieter about such partnerships and focused on messaging about the transition to net zero. Behind the scenes, Microsoft has continued to seek business from the fossil-fuel industry; documents related to its overall pitch strategy show that it has sought energy-industry business in part by marketing the abilities to optimize and automate drilling and to maximize oil and gas production. Over the past year, it has leaned into the generative-AI rush in an effort to clinch more deals—each of which can be worth hundreds of millions of dollars. Microsoft employees have noted that the oil and gas industries could represent a market opportunity of $35 billion to $75 billion annually, according to documents I viewed.

Based on the documents, executives see these generative-AI tools—the buzziest new technology since the iPhone, and one that Microsoft has invested billions of dollars in—as a kind of secret weapon for client outreach.
During an internal conference call with more than 200 employees last September, a Microsoft energy exec named Bilal Khursheed noted that, since the company’s generative-AI investments, the energy industry was turning to Microsoft for guidance on AI in a way that had perhaps “never happened before.” “We need to maximize this opportunity. We need to lay out the pathway to adopting generative AI,” he said, according to a transcript of the meeting I viewed. One such pathway? Using generative algorithms to model oil and gas reservoirs and maximize their extraction, Hema Prapoo, Microsoft’s global lead of oil and gas business, said later in the meeting. Several documents also emphasize Microsoft’s unique relationship with OpenAI as an additional selling point for energy clients, suggesting that GPT could drive productivity separate from fossil-fuel extraction. (OpenAI did not respond to a request for comment.)

From a business perspective, of course, Microsoft’s pursuit of massive deals with fossil-fuel companies makes sense. And such partnerships do not necessarily mean that the company is contradicting its climate commitments. Microsoft executives have made the case that AI can also help fossil-fuel companies improve their environmental footprint. Indeed, both Microsoft and its energy customers defend their partnerships by arguing that their goals work in harmony, not contradiction. They told me that AI services can make oil and gas production more efficient, increasing production while reducing emissions—a refrain I saw repeated in documents as part of Microsoft’s sales pitches. In addition, some of these companies run wind farms and solar parks, which further benefit from Microsoft’s cloud technologies.
Microsoft has also touted exploratory academic research into how AI could be used to discover new materials for reducing CO2 in the atmosphere.

The idea that AI’s climate benefits will outpace its environmental costs is largely speculative, however, especially given that generative-AI tools are themselves tremendously resource-hungry. Within the next six years, the data centers required to develop and run the kinds of next-generation AI models that Microsoft is investing in may use more power than all of India. They will be cooled by millions upon millions of gallons of water. All the while, scientists agree, the world will get warmer, its climate more extreme.

Microsoft isn’t a company that exists to fight climate change, and it doesn’t have to assume responsibility for saving our planet. Yet the company is trying to convince the public that by investing in a technology that is also being used to enrich fossil-fuel companies, society will be better equipped to resolve the environmental crisis. Some of the company’s own employees described this idea to me as ridiculous. To these workers, Microsoft’s energy contracts demonstrate only the unsavory reality of how the company’s AI investments are actually used. Driving sustainability forward? Maybe. Digging up fossil fuels? As Prapoo put it in that September conference call, it’s a “game changer.”

Before Holly Alpine left Microsoft earlier this year—fed up, she said, with the company’s continued support of fossil-fuel extraction—she had spent nearly a decade there working in roles focused on energy and the environment. Most recently, she headed a program within Microsoft’s cloud operations and innovation division that invests in environmental sustainability projects in the communities that host the company’s data centers. Alpine had also co-founded a sustainability interest group within the company seven years ago that thousands of employees now belong to.
(Like the other named sources in this story, she did not provide any of the documents I reviewed.)

Members of this group initially concerned themselves with modest corporate matters, such as getting the company’s dining halls to cut down on single-use items. But their ambitions grew, partly in response to Microsoft’s own climate commitments in 2020. These were made during a moment of heightened climate activism; millions around the world, including tech workers, had just rallied to protest the lack of coordinated action to cut back carbon emissions.

Microsoft has failed to reduce its annual emissions each year since then. Its latest environmental report, released this May, shows a 29 percent increase in emissions since 2020—a change that has been driven in no small part by recent AI development, as the company explains in the report. “All of Microsoft’s public statements and publications paint a beautiful picture of the uses of AI for sustainability,” Alpine told me. “But this focus on the positives is hiding the whole story, which is much darker.”

The root issue for Alpine and other advocates is Microsoft’s unflagging support of fossil-fuel extraction. In March 2021, for example, Microsoft expanded its partnership with Schlumberger, an oil-technology company, to develop and launch an AI-enhanced service on Microsoft’s Azure platform. Azure provides cloud computing to a variety of organizations, but this product was tailor-made for the oil and gas industries, to assist in the production of fossil fuels, among other uses. The hope, according to two internal presentations I viewed, was that it would help Microsoft capture business from many of the leading fossil-fuel providers. A spokesperson for Schlumberger declined to comment on this deal.

Recent AI advances have complicated the picture, though they have not changed it.
One slide deck from January 2022 that I obtained presented an analysis of how Microsoft’s tools could allow ExxonMobil to increase its annual revenue by $1.4 billion—$600 million of which would come from maximizing so-called sustainable production, or oil drilled using less energy. (An ExxonMobil representative declined to comment.) Other documents provided details on multiple deals Chevron has signed with Microsoft to access the tech giant’s AI platform and other cloud services. An executive strategy memo from June 2023 indicated that Microsoft hoped to pitch Chevron on adopting OpenAI’s GPT-3.5 and GPT-4 to “deliver more business value.” A Chevron spokesperson told me that the company uses AI in part to “identify efficiencies in exploration and recovery and help reduce our environmental footprint.” There is the tension. On the one hand, AI may be able to help reduce drilling’s toll on the environment. On the other hand, it’s used for drilling.

[Read: Every time you post on Instagram, you’re turning on a light bulb forever]

How do these companies weigh the environmental benefits of a more efficient drilling operation against the environmental harms of being able to drill more, faster? A Shell spokesperson provided a quantifiable example of their thinking: Microsoft’s Azure AI platform allowed Shell to calculate the best settings for its equipment, driving down carbon emissions at several of its natural-gas facilities. One facility saw an estimated reduction of 340,000 metric tons of carbon dioxide per year. This seems impressive: Using estimated emissions from the EPA, this is roughly the amount of CO2 generated by 74,000 cars annually. Relative to Shell’s total emissions, however, it’s practically insignificant. According to the company’s own reporting, Shell was responsible for about 1.2 billion metric tons of emissions last year.

Within Microsoft, members of the sustainability group have repeatedly petitioned leadership to change its stance on these contracts.
Google, for example, announced in 2020 that it would not make custom AI tools for fossil-fuel extraction—couldn’t Microsoft do the same? “We’ve never advocated for cutting ties with the fossil-fuel industry,” Alpine told me. Microsoft could work with clients on their transition to clean energy, without explicitly supporting extraction, Alpine reasoned.

To help make her case, Alpine presented a memo to Smith in December 2021 that calculated the effects of the company’s oil and gas deals. She pointed, for example, to a single 2019 deal with ExxonMobil that could purportedly “expand production by as much as 50,000 oil-equivalent barrels a day by 2025,” according to a Microsoft press release. Those extra barrels would produce an estimated 6.4 million metric tons of emissions, drastically outweighing a carbon-removal pledge that Microsoft made in 2020, she wrote. (I verified her estimate with multiple independent carbon analysts. ExxonMobil declined to comment.)

Employee advocates asked company leadership to amend its “Responsible AI” principles to address the environmental consequences of the technology. The group also recommended further restrictions on fossil-fuel-extraction projects. Around this time, Microsoft instead released a new set of principles governing the company’s engagements with oil and gas customers. It was co-authored by Darryl Willis, the corporate vice president of Microsoft’s energy division (and a former BP executive who served as BP’s de facto spokesperson during the Deepwater Horizon crisis).
Unsurprisingly, it did not adopt all of the group’s suggestions.

What it did include was a stipulation that Microsoft will support fossil-fuel extraction only for companies that have “publicly committed to net zero carbon targets.” This may be cold comfort for some: A 2023 report from the Net Zero Tracker, a collaboration between nonprofits and the University of Oxford, found that such commitments from fossil-fuel companies are “largely meaningless.” Most firms claim a net-zero target that fully accounts only for their operational emissions, such as whether their offices, car fleets, or equipment are powered with green energy, while ignoring much of the emissions from the fossil fuels they produce.

When I talked with Willis about Microsoft’s energy business, he repeated over and over that “it’s complicated.” Willis explained that his team is focused on expanding energy access—“There are a billion people on the planet who don’t have access to energy,” he said—while also trying to accelerate the decarbonization of the world’s energy. I asked him how Microsoft planned to achieve the latter goal when it’s chasing contracts that help companies drill for fossil fuels. “Our plan, candidly stated, is to make sure we’re partnering with the right organizations who are leaning in and trying to accelerate and pull this [sustainability] journey forward,” he said. In other words, the company does not see its approach to selling the technology as incompatible with its sustainability goals. “AI will solve more problems than it creates,” Willis told me. “A lot of the dilemmas that we’re facing with energy will be resolved because of the relationship with generative AI.”

Hoping to understand more about the company’s perspective, I also spoke with Alex Robart, a former Microsoft employee who left in 2022 and worked with Willis to write the energy principles. He called Microsoft’s approach practical.
“Has Big Energy, incumbent energy, in a lot of ways behaved pretty badly, particularly in the past 25 to 40 years in the U.S. in particular, with regards to climate? Yeah, absolutely,” he told me. But he argued that fossil-fuel companies have to be part of the transition to cleaner alternatives and will do so only if they have financial incentives. “You need their balance sheets; you need their capital; you need their project-management expertise. We’re talking about building massive infrastructure, and building infrastructure is hard,” he said. Without that, “it’s fundamentally not going to work.”

[Read: America’s new climate delusion]

In the meantime, Microsoft has “not committed to a timeline” for phasing out work that is geared toward finding and developing new fossil-fuel reserves, a spokesperson said.

Lucas Joppa, Microsoft’s first chief environmental officer, who left the company in 2022, fears that the world will not be able to reverse the current trajectory of AI development even if the technology is shown to have a net-negative impact on sustainability. Companies are designing specialized chips and data centers just for advanced generative-AI models. Microsoft is reportedly planning a $100 billion supercomputer to support the next generations of OpenAI’s technologies; it could require as much energy annually as 4 million American homes. Abandoning all of this would be like the U.S. outlawing cars after designing its entire highway system around them.

Therein lies the crux of the problem: In this new generative-AI paradigm, uncertainty reigns over certainty, speculation dominates reality, science defers to faith. The hype around generative AI is accelerating fossil-fuel extraction while the technology consumes unprecedented amounts of energy.
As Joppa told me: “This must be the most money we’ve ever spent in the least amount of time on something we fundamentally don’t understand.” | Decision Making/Process Automation/Content Synthesis | Management/Business and Financial Operations | null | null | null | null | null | null |
|
news | Clark Gates-George | Atlas Stream Processing: A Cost-Effective Way to Integrate Kafka and MongoDB | Developers around the world use Apache Kafka and MongoDB together to build responsive, modern applications. There are two primary interfaces for integrating Kafka and MongoDB.

In this post, we’ll introduce these interfaces and highlight how Atlas Stream Processing offers an easy developer experience, cost savings, and performance advantages when using Apache Kafka in your applications. First, we will provide some background.

The Kafka Connector
For many years, MongoDB has offered the MongoDB Connector for Kafka (Kafka Connector). The Kafka Connector enables the movement of data between Apache Kafka and MongoDB, and thousands of development teams use it. While it supports simple message transformation, developers largely handle data processing with separate downstream tools.

Atlas Stream Processing
More recently, we announced Atlas Stream Processing—a native stream processing solution in MongoDB Atlas. Atlas Stream Processing is built on the document model and extends the MongoDB Query API to give developers a powerful, familiar way to connect to streams of data and perform continuous processing. The simplest stream processors act similarly to the primary Kafka Connector use case, helping developers move data from one place to another, whether from Kafka to MongoDB or vice versa. Check out an example:

// connect to kafka
var source = { $source: { name: 'kafkaEU', topic: 'facility00' } }

// write the data to an Atlas Cluster
var sink = { $merge: { name: 'myAtlasCluster', db: 'cont', coll: 'facilityTemp'} }

// assemble the pipeline and run it
var pipeline = [source, sink]
streams.createStreamProcessor('station_temps', pipeline)
streams.station_temps.start()
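One way the pipeline above could be extended is with an enrichment stage between source and sink. The sketch below assumes a $lookup stage joining each Kafka event against reference data held in an Atlas collection; the collection and field names (facilityInfo, facility_id) are illustrative assumptions, not taken from the post:

```javascript
// Same source and sink as in the example above
var source = { $source: { name: 'kafkaEU', topic: 'facility00' } }
var sink = { $merge: { name: 'myAtlasCluster', db: 'cont', coll: 'facilityTemp' } }

// Hypothetical enrichment stage: join each event against a reference
// collection keyed by facility_id (names are assumptions for illustration)
var lookup = {
  $lookup: {
    from: { connectionName: 'myAtlasCluster', db: 'cont', coll: 'facilityInfo' },
    localField: 'facility_id',
    foreignField: 'facility_id',
    as: 'facilityInfo'
  }
}

// The enrichment stage simply slots in between source and sink
var pipeline = [source, lookup, sink]
```

Registering and starting the enriched pipeline would then follow the same createStreamProcessor/start pattern shown above.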
Beyond making data movement easy, Atlas Stream Processing enables advanced stream processing use cases not possible in the Kafka Connector. One common use case is enriching your event data by using $lookup as a stage in your stream processor. In the example above, a developer can perform this enrichment by simply adding a lookup stage in the pipeline between source and sink. While the Kafka Connector can perform some single message transformations, Atlas Stream Processing both provides an easier overall experience and gives teams the ability to perform much more complex processing.

Choosing the right solution for your needs
It’s important to note that Atlas Stream Processing was built to simplify complex, continuous processing and streaming analytics rather than as a replacement for the Kafka Connector. However, even for the more basic data movement use cases referenced above, it provides a new alternative to the Kafka Connector. The decision will depend on data movement and processing needs. Three common considerations we see teams weighing to help with this choice are ease of use, performance, and cost.

Ease of use
The Kafka Connector runs on Kafka Connect. If your team already heavily uses Kafka Connect across many systems beyond MongoDB, this may be a good reason to keep it in place. However, many teams find configuring, monitoring, and maintaining connectors costly and cumbersome.

In contrast, Atlas Stream Processing is a fully managed service integrated into MongoDB Atlas. It prioritizes ease of use by leveraging the MongoDB Query API to process your event data continuously.
Atlas Stream Processing balances simplicity (no managing servers, utilizing other cloud platforms, or learning new tools) and processing power to reduce development time, decrease infrastructure and maintenance costs, and build applications more quickly.

Performance
High performance is increasingly a priority with all data infrastructure, but it’s often a must-have for use cases that rely on streams of event data (commonly from Apache Kafka) to deliver an application feature. Many of our early customers have found Atlas Stream Processing more performant than similar data movement in their Kafka Connector configurations. By connecting directly to your data in Kafka and MongoDB and acting on it as needed, Atlas Stream Processing eliminates the need for a tool in between.

Cost
Finally, managing costs is a critical consideration for all development teams. We’ve priced Atlas Stream Processing competitively when compared to the Kafka Connector.

Most hosted Kafka providers charge per task. That means each additional source and sink will generate a separate data transfer and storage cost that scales linearly as you expand. Atlas Stream Processing charges per Stream Processing Instance (SPI) worker, and each worker supports up to four stream processors. This means potential cost savings when running similar configurations to the Kafka Connector. See more details in the documentation.

Atlas Stream Processing launched just a few months ago. Developers are already using it for a wide range of use cases, like managing real-time inventories, serving contextually relevant recommendations, and optimizing yields in industrial manufacturing facilities.

We can’t wait to see what you build and hear about your experience!

Ready to get started? Log in to Atlas today. Already a Kafka Connector user? Dig into even more details and get started using our tutorial.
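The scaling difference between the two billing models described above can be sketched with a toy calculation. The four-processors-per-worker packing comes from the post; treating each data movement pipeline as one source task plus one sink task is an illustrative assumption, and no real prices are used:

```javascript
// Billable units under the two models described in the post.

// Per-task billing: assume each data movement pipeline needs one source
// task and one sink task (an illustrative assumption, not a price list)
function connectorTasks(pipelines) {
  return pipelines * 2
}

// Per-worker billing: each SPI worker supports up to four stream processors
function spiWorkers(pipelines) {
  return Math.ceil(pipelines / 4)
}

console.log(connectorTasks(8)) // 16 billable tasks
console.log(spiWorkers(8))     // 2 billed workers
```

The per-task count grows linearly with every new source and sink, while the worker count only increases once four processors are already packed onto a worker.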
| https://mongodb.com/blog/post/atlas-stream-processing-cost-effective-way-integrate-kafka-mongodb | 2024-09-09T17:30:00Z | Saving Energy, Smarter: MongoDB and Cedalo for Smart Meter Systems

The global energy landscape is undergoing a significant transformation, with energy consumption rising 2.2% in 2023, surpassing the 2010-2019 average of 1.5% per year. This increase is largely due to global developments in BRICS member countries: Brazil, Russia, India, China, and South Africa.

As renewable sources like solar power and wind energy become more prevalent (in the EU, renewables accounted for over 50% of the power mix in the first quarter of 2024), ensuring a reliable and efficient energy infrastructure is crucial. Smart meters, the cornerstone of intelligent energy networks, play a vital role in this evolution. According to IoT analyst firm Berg Insight, the penetration of smart meters is skyrocketing, with the US and Canada expected to reach nearly 90% adoption by 2027, whereas China is expected to account for as much as 70-80% of smart electricity meter demand across Asia in the next few years. This surge is indicative of a growing trend towards smarter, more sustainable energy solutions. In Central Asian countries, the Asian Development Bank is supporting the fast deployment of smart meters to save energy and improve the financial position of countries' power utilities.

This article will delve into the benefits of smart meters, the challenges associated with managing their data, and the innovative solutions offered by MongoDB and Cedalo.

The rise of smart meters
Smart meters, unlike traditional meters that require manual readings, collect and transmit real-time energy consumption data directly to energy providers.
This digital transformation offers numerous benefits, including:

Accurate Billing: Smart meters eliminate the need for estimations, ensuring that consumers are billed precisely for the energy they use.
Personalized Tariffs: Energy providers can offer tailored tariffs based on individual consumption patterns, allowing consumers to take advantage of off-peak rates, special discounts, and other cost-saving opportunities.
Enhanced Grid Management: Smart meter data enables utilities to optimize grid operations, reduce peak demand, and improve overall system efficiency.
Energy Efficiency Insights: Consumers can gain valuable insights into their energy usage patterns, identifying areas for improvement and reducing their overall consumption.

With the increasing adoption of smart meters worldwide, there is a growing need for effective data management solutions to harness the full potential of this technology.

Data challenges in smart meter adoption
Despite the numerous benefits, the widespread adoption of smart meters also presents significant data management challenges. To use smart metering, power utility companies need to deploy a core smart metering ecosystem that includes the smart meters themselves, the meter data collection network, the head-end system (HES), and the meter data management system (MDMS). Smart meters collect data from end consumers and transmit it to the data aggregator via the Local Area Network (LAN). The transmission frequency can be adjusted to 15 minutes, 30 minutes, or hourly, depending on data demand requirements. The aggregator retrieves the data and then transmits it to the head-end system. The head-end system analyzes the data and sends it to the MDMS.
The initial communications path is two-way: signals or commands can be sent directly to the meters, customer premise, or distribution device.

Figure 1: End-to-end data flow for a smart meter management system / advanced metering infrastructure (AMI 2.0)

When setting up smart meter infrastructure, power and utility companies face several significant data-related challenges:

Data interoperability: The integration and interoperability of diverse data systems pose a substantial challenge. Smart meters must be seamlessly integrated with existing utility systems and other smart grid components, often requiring extensive upgrades and standardization efforts.
Data management: The large volume of data generated by smart meters requires advanced data management and analytics capabilities. Utilities must implement robust data storage, processing, and analysis solutions to handle real-time time series data stream storage, analysis for anomaly detection, and triggering of decision-making processes.
Data privacy: Smart meters collect vast amounts of sensitive information about consumer energy usage patterns, which must be protected against breaches and unauthorized access.

Addressing these challenges is crucial for the successful deployment and operation of smart meter infrastructure.

MQTT: A cornerstone of smart meter communication
MQTT, a lightweight publish-subscribe protocol, shines in smart meter communication beyond the initial connection. It's ideal for resource-constrained devices on low-bandwidth networks, making it perfect for smart meters. While LoRaWAN or PLC handle meter-to-collector links, MQTT bridges Head-End Systems (HES) and Meter Data Management Systems (MDMS). Its efficiency, reliable delivery, and security make it well-suited for large-scale smart meter deployments.

Cedalo MQTT platform and MongoDB: A powerful combination
Cedalo, established in 2017, is a leading German software provider specializing in MQTT solutions.
Their flagship product, the Cedalo MQTT Platform, offers a comprehensive suite of features, including the Pro Mosquitto MQTT broker and Management Center. Designed to meet the demands of large enterprises, the platform delivers high availability, audit trail logging, persistent queueing, role-based access control, SSO integration, advanced security, and enhanced monitoring.

To complement the platform's capabilities, MongoDB's Time Series collections provide a robust and optimized solution for storing and analyzing smart meter data. These collections leverage a columnar storage format and compound secondary indexes to ensure efficient data ingestion, reduced disk usage, and rapid query processing. Additionally, window functions enable flexible time-based analysis, making them ideal for IoT and analytical applications.

Figure 2: MongoDB as the main database for the meter data management system, where it receives meter data via the Pro Mosquitto MQTT broker.

Let us revisit Figure 1 and leverage both the Cedalo MQTT Platform and MongoDB in our design. In Figure 2, the Head-end System (HES) can use MQTT to filter, aggregate, and convert data before storing it in MongoDB. This data flow can be established using the MongoDB Bridge plugin provided by Cedalo. Since the MQTT payload is JSON, it is ideal to store it in MongoDB as the database stores data in BSON (Binary JSON). The MongoDB Bridge plugin offers advanced features such as flexible data import settings (specifying target databases and collections, choosing authentication methods, and selecting specific topics and message fields to import) and advanced collection mapping (mapping multiple MQTT topics to one or more collections with the ability to choose specific fields for insertion). MongoDB's schema flexibility is crucial for adapting to the ever-changing structures of MQTT payloads.
Unlike traditional databases, MongoDB accommodates shifts in data format seamlessly, eliminating the constraints of rigid schema requirements. This helps with interoperability challenges faced by utility companies.

Once the data is stored in MongoDB, it can be analyzed for anomalies. Anomalies in smart meter data can be identified based on various criteria, including sudden spikes or drops in voltage, current, power, or other metrics that deviate significantly from normal patterns. Here are some common types of anomalies that we might look for in smart meter data:

Sudden spikes or drops: These include voltage, current, or power spikes or drops, such as a sudden increase or decrease in voltage beyond expected limits.
Outliers: Data points that are significantly different from the majority of the data.
Unusual patterns: Unusually high or low energy consumption compared to historical data, or inconsistent power factor readings.
Frequency anomalies: Frequency readings that deviate from the normal range.

MongoDB's robust aggregation framework can aid in anomaly detection. Both anomaly data and raw data can be stored in time series collections, which offer reduced storage footprint and improved query performance due to an automatically created clustered index on timestamp and _id. The high compression offered addresses the challenge of data management at scale. Additionally, data tiering capabilities like Atlas Online Archive can be leveraged to push cold data into cost-effective storage. MongoDB also provides built-in security controls for all your data, whether managed in a customer environment or MongoDB Atlas, a fully managed cloud service.
These security features include authentication, authorization, auditing, data encryption (including Queryable Encryption), and the ability to access your data securely with dedicated clusters deployed in a unique Virtual Private Cloud (VPC).

End-to-end solution

Figure 3: End-to-end data flow

Interested readers can clone this repository and set up their own MongoDB-based smart meter data collection and anomaly detection solution. The solution follows the pattern illustrated in Figure 3, where a smart meter simulator generates raw data and transmits it via an MQTT topic. A Mosquitto broker receives these messages and then stores them in a MongoDB collection using the MongoDB Bridge. By leveraging MongoDB change streams, an algorithm can retrieve these messages, transform them according to MDMS requirements, and perform anomaly detection. The results are stored in a time series collection using a highly compressed format.

The Cedalo MQTT Platform with MongoDB offers all the essential components for a flexible and scalable smart meter data management system, enabling a wide range of applications such as anomaly detection, outage management, and billing services. This solution empowers power distribution companies to analyze trends, implement real-time monitoring, and make informed decisions regarding their smart meter infrastructure.

We are actively working with our clients to solve IoT challenges. Take a look at our Manufacturing and Industrial IoT page for more stories. | Unknown | Computer and Mathematical/Business and Financial Operations | null | null | null | null | null | null |
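As a sketch of the anomaly criteria the Cedalo article lists (sudden voltage spikes or drops relative to normal patterns), the aggregation pipeline below flags readings that deviate sharply from a per-meter moving average using $setWindowFields. The field names, the 15-minute window, and the 10% threshold are illustrative assumptions, not from the article:

```javascript
// Compute a trailing per-meter average voltage over the last 15 minutes
var windowStage = {
  $setWindowFields: {
    partitionBy: '$metadata.meterId',
    sortBy: { timestamp: 1 },
    output: {
      avgVoltage: {
        $avg: '$voltage',
        window: { range: [-15, 0], unit: 'minute' }
      }
    }
  }
}

// Keep only readings more than 10% away from that recent average
var anomalyStage = {
  $match: {
    $expr: {
      $gt: [
        { $abs: { $subtract: ['$voltage', '$avgVoltage'] } },
        { $multiply: ['$avgVoltage', 0.1] }
      ]
    }
  }
}

var anomalyPipeline = [windowStage, anomalyStage]
// e.g. db.meterData.aggregate(anomalyPipeline) against a time series collection
```

The flagged documents could then be written to a separate time series collection, matching the article's suggestion of storing both raw and anomaly data in compressed form.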
|
news | Sebastian Weber, Forbes Councils Member, Sebastian Weber, Forbes Councils Member https://www.forbes.com/councils/forbestechcouncil/people/sebastianweber/ | How AI Is Making Energy Greener | The most important role CIOs must play today is building bridges between business and IT. | https://www.forbes.com/councils/forbestechcouncil/2024/09/13/how-ai-is-making-energy-greener/ | 2024-09-13T20:10:12Z | Sebastian Weber is Chief Information Officer of E.ON.

AI is helping communities across the globe make their production, distribution and use of energy more sustainable, a blessing for the industry as well as the planet. As the energy system is becoming more complex due to an increase in electrification and the growing integration of renewable energy, AI can help overcome these challenges.

AI-Based Improvements To Energy Systems
AI's capability to improve the way our energy systems produce and distribute energy far exceeds traditional technologies in place today. Here are some of the ways AI will improve the current energy system and actively contribute to sustainability.

Improved energy generation: Optimizing wind, solar and water-powered energy generation systems to align with weather patterns both in real time and based on historical patterns can improve output and reduce redundancies.
Improved energy distribution: Weather conditions, grid overload and necessary maintenance can all put undue stress on the power grid. AI can find solutions that keep power flowing while adjusting to different conditions. For example, AI can make it easier for people to change from using their own solar-panel generated energy to drawing from the grid during cloudy weather.
Optimized energy consumption: Using AI to identify consumer needs can uncover potential energy savings and adjust supply.
Predictive maintenance: AI will allow companies to automate tasks and use data, analytics, satellite imagery and drones rather than expensive and time-consuming boots-on-the-ground maintenance.
At E.ON, we use AI for virtual drone inspections of our power lines to enhance security and improve the maintenance process. An average of 47% of all of the potential defects in our power lines are detected by AI.
Increasing grid-planning efficiency: AI-supported grid planning and management helps to integrate volatile renewables. At E.ON, we use a centralized platform that helps us bundle regional data to enable smart and scalable network solutions. AI can run multiple simulations and work with a virtual twin of the entire grid to discover how to expand it. In the grid-building process, AI can optimize workforce engagement, materials use, etc.
Improved vegetation management: AI allows energy companies to clear space for power lines while preserving more trees and plants for local wildlife. Traditionally, linemen are sent to cut down anything and everything near power lines to reduce the chance of power interruption due to falling trees or limbs, a heavy-handed approach that leaves ugly scars in roadside forests and neighborhood greens. With AI, E.ON can monitor 10,000 kilometers of our power lines using satellite imagery and even estimate the growth rate of trees and vegetation near the lines, leading to a more environmentally friendly approach to maintenance.

How AI In The Energy System Benefits End Users
AI can be useful in the home, as well. Consumers tend to view green energy as solar panels on their roof or a charging station for their cars, but AI can take their contributions to a more carbon-neutral energy system so much further. AI in the home can optimize individual energy use including knowing the best time to charge electric vehicles and devices to put the lowest strain on the power grid and implementing temperature control that reduces CO2e while keeping the home comfortable.

Businesses are using AI optimization to use energy more efficiently, as well.
Airports use AI to optimize their energy flow in heating and cooling so that heat created from cooling large freezers in commissary areas is redirected to rooms that need heat. The same technology could be used in shopping malls, factories and other large buildings.

Change Is Here
In the past, all energy systems needed to do was keep demand and supply in balance. But now, energy system technologies, needs, populations and pressure on the grid have grown exponentially. The less-plannable integration of renewables is creating spikes in the energy grid. Thousands of new power connection requests are coming in every month. This represents opportunities for the industry but also poses challenges that can only be solved via consistent digitalization and the use of AI. In the near future, AI will master the volatile feed-in behaviors of the millions of decentralized energy systems and consumers, leading to efficiency gains and flexibility throughout the new green energy world. | Unknown | Management/Business and Financial Operations | null | null | null | null | null | null |
|
news | Brian Wang | Getting Infinite Data by 2080 to Improve SuperAI | A deep discussion of the developments with AI and the upcoming 2025 Tesla Dojo 2 and 2026 Dojo 3 chips. But AI needs compute and data. Won’t AI reach a limit as we use up human created data? What if you can use mass production of space telescopes to monitor the whole galaxy down to ... Read more | https://www.nextbigfuture.com/2024/09/getting-infinite-data-by-2080-to-improve-superai.html | 2024-09-25T15:47:51Z | A deep discussion of the developments with AI and the upcoming 2025 Tesla Dojo 2 and 2026 Dojo 3 chips. But AI needs compute and data. Won’t AI reach a limit as we use up human-created data?

What if you can use mass production of space telescopes to monitor the whole galaxy down to objects the size of 10-100 kilometers in resolution?

Sending space probes would take decades to reach the nearest star; at lightspeed it would take 100,000 years to get across the galaxy, and sending back data would take another 100,000 years.

NASA has had many studies looking at placing one-meter telescopes at the gravitational lensing areas. This allows a telescope to leverage the gravity of the sun, making it 100 billion times more effective. You would also need a solar coronagraph to block the light of the other stars, which will enable the telescopes to directly image planets and moons and details of quasars and black hole halos and other objects.

NASA was working on a one-third-scale prototype of the solar sails. Components like the sails have been built. This mission would be about 6 kilograms. A full-scale mission would be less than 50 kilograms.

Galaxy Monitoring Array
What has not been considered by NASA is production of the lightweight one-meter telescopes at scale. What kind of scale? The scale of all cars on Earth and beyond? Gravitational lensing enables looking at objects directly on the other side of the sun.
If we had one telescope for every solar system in the galaxy, then we would be able to constantly image objects in every solar system in the galaxy. We would only need to go about 20 times faster than anything we have flown to date. Solar sails that swing around the sun could achieve these speeds, though the heat resistance of current sail materials would need to be improved.

Getting out 20 times further than Pluto lets us use the sun for this 100-billion-fold improvement. A halo of telescopes around the sun would form the galaxy observation array.

SpaceX has 6,300 Starlink satellites, even before the fully reusable Starship increases launch capability by thousands of times. Mass production of satellites could go to the millions and outnumber the 5 million global cell towers. How about billions of Starlink-sized space satellites? There are 2 billion cars in the world.

Starlink-sized space telescopes could be made at scale and placed beyond the orbit of Pluto. The telescope and solar sail vehicles would be about 50 kilograms each, and slightly improved versions could get the mass down to 20 kilograms, one-hundredth the mass of a car. Building and deploying the same total mass of space telescopes as the world’s cars would yield about 200 billion telescope probes, roughly one for each solar system in the galaxy. They would act like remote probes, going out three light-days from the sun to monitor targets 4 to 100,000 light-years away and beyond. We could learn the galaxy and the universe in detail, with a flood of information for AI analysis and learning.

A fleet of 1,000 SpaceX Starships could launch 250,000 tons to Mars every two years, or 1.5 million tons to low Earth orbit. The solar sail telescope vehicles would not need to be refueled from orbit and would not need to be carried by the Starships; they would use their solar sails to fly around the sun and gain speed.
The Earth-orbit Starship fleet could be expanded to between 10,000 and 1 million vehicles by keeping production rates up. The world has 30,000 large commercial passenger planes, and those planes cost about $350 million each. The SpaceX Starship with booster can have $250,000 Raptor engines and an overall cost of about $20 million.

A 30,000-Starship fleet could lift 45 million tons to orbit every week. This would be enough lift capacity for 1 billion lightweight one-meter telescopes.

Brian Wang is a Futurist Thought Leader and a popular Science blogger with 1 million readers per month. His blog Nextbigfuture.com is ranked #1 Science News Blog. It covers many disruptive technologies and trends including Space, Robotics, Artificial Intelligence, Medicine, Anti-aging Biotechnology, and Nanotechnology. Known for identifying cutting-edge technologies, he is currently a Co-Founder of a startup and fundraiser for high-potential early-stage companies. He is the Head of Research for Allocations for deep technology investments and an Angel Investor at Space Angels. A frequent speaker at corporations, he has been a TEDx speaker, a Singularity University speaker and a guest at numerous interviews for radio and podcasts. He is open to public speaking and advising engagements. | Detection and Monitoring/Content Synthesis/Prediction | Life, Physical, and Social Science/Computer and Mathematical | null | null | null | null | null | null |
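The fleet arithmetic in the article above can be checked in a few lines. The car mass below is an assumption of mine, chosen to be consistent with the article's claim that a 20 kg telescope is "100 times less than a car"; the other figures come from the article.

```python
# Back-of-envelope check of the telescope-fleet arithmetic.
CARS_ON_EARTH = 2_000_000_000   # article: ~2 billion cars
CAR_MASS_KG = 2_000             # assumption: average car mass
TELESCOPE_MASS_KG = 20          # article: improved probe mass

telescopes = CARS_ON_EARTH * CAR_MASS_KG / TELESCOPE_MASS_KG
print(f"Telescopes at car-fleet mass: {telescopes:.0e}")  # ~2e+11, i.e. 200 billion

# Launch-capacity check: 30,000 Starships lifting 45 million tons per week,
# with each probe at the heavier 50 kg figure.
WEEKLY_LIFT_KG = 45_000_000 * 1_000
probes_per_week = WEEKLY_LIFT_KG / 50
print(f"Probes liftable per week: {probes_per_week:.1e}")  # ~9e+08, close to 1 billion
```

Both of the article's headline numbers (200 billion probes, roughly 1 billion lifted per week) fall out of the stated inputs.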
|
news | Steven Dudash, Contributor https://www.forbes.com/sites/greatspeculations/people/stevendudash/ | Powering Data Centers Is A Herculean Task | AI innovation requires more than just semiconductors. The power infrastructure to support them might be even more important, and harder to build. | https://www.forbes.com/sites/greatspeculations/2024/09/30/powering-data-centers-is-a-herculean-task/ | 2024-10-01T02:02:58Z | There are numerous ways to invest in the Artificial Intelligence (AI) boom. High-flying chip stocks like Nvidia have captured the spotlight, but there are more ways to invest in a broad wave of AI adoption than just semiconductor chips. Chips are essential, but without the infrastructure to support them they're useless. Investors should also consider the sector powering the data centers needed for AI development.

According to the Electric Power Research Institute, data centers are forecasted to consume up to 9% of US electricity generation per year by 2030, up from 4% in 2023. That doesn't sound like much, but between replacing old power generation and turning on new power arrays, the step up is significant. The reason is AI. The energy needs for AI computing are immense. To put the energy consumption into perspective, AI queries (just asking ChatGPT to clean up that email to your boss) can require approximately ten times the electricity of traditional internet searches. Training new models and running more complicated queries suck even more power.

The need for more power generation is so intense that it's actually becoming the primary obstacle for companies looking to innovate in AI.
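A rough sense of scale for the share figures quoted above can be sketched quickly. The total annual US generation figure (~4,200 TWh) is my assumption for illustration, not a number from the article.

```python
# Rough estimate of the extra US electricity implied by data centers going
# from a 4% to a 9% share of generation.
US_GENERATION_TWH = 4200  # assumed annual US electricity generation
SHARE_2023 = 0.04         # article: data centers' share in 2023
SHARE_2030 = 0.09         # article: forecast share by 2030

extra_twh = US_GENERATION_TWH * (SHARE_2030 - SHARE_2023)
print(f"Additional data-center demand by 2030: ~{extra_twh:.0f} TWh/year")
```

Under that assumption, the five-point shift corresponds to on the order of 200 TWh of additional annual demand.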
The ideas are there, and the computational chips are expensive but available; you just need to figure out how to meet the energy needs for the data center to function. As this space continues to play out, we think it's worth exploring opportunities within energy generation where investors can benefit from partnerships between both the upstream and downstream players in the AI market.

Data Centers Need Electricity, and Lots of It

To explore the first-phase beneficiaries providing key inputs for AI advancements, it's worth breaking down what data centers are used for when it comes to artificial intelligence. Data centers are the backbone of this AI boom, as they store the hardware necessary to train AI models. As modern technology companies have grown larger and larger, and the amount of equipment needed to maintain operations and cloud services has skyrocketed, the energy demand from hyperscalers has followed suit.

Hyperscalers (companies that operate cloud computing infrastructure) provide the storage and compute power necessary to train AI and are faced with the difficult task of deciding where to source the energy needed for their daily operations. Major players within this space include Amazon Web Services, Microsoft Azure, and Google Cloud, all of which are forming partnerships with local utilities and power providers to ensure their energy demand is met.

With the sweeping transition to cloud data storage, electricity demand has turned into the biggest problem facing the industry.
Hyperscalers providing infrastructure and platform services just can't get enough of it, and the world of institutional finance is taking advantage of the opportunity. As we've seen recently, BlackRock teamed up with Microsoft (MSFT) and MGX (an AI fund out of the United Arab Emirates) to announce the launch of a $30 billion AI infrastructure investment fund focusing on building out data centers and energy infrastructure. While this may seem like a massive investment initially, it pales in comparison to the total investment in AI from the top 5 hyperscalers over the next few years. For context, current projections indicate the investment levels will reach a combined $1 trillion in 2027, according to S&P Global Market Intelligence.

Straight to the Source

The way power is generated and sold to large hyperscale consumers is going to change. Whereas in the past many data centers may have tapped into local electrical grids, the future of data center electrification involves dedicated power construction and long-term power purchase agreements (PPAs).

Hyperscalers prefer PPAs over market spot pricing since they're able to guarantee stable energy prices for an extended period, typically over ten years. This level of certainty provides a hedge against unwanted price fluctuations that would affect hyperscalers' bottom lines and put upward pressure on already substantial model training costs. Perhaps more important, though, is the desirability of long-term contracts for developers and investors. PPAs from cash-flush hyperscalers give developers the ability to secure financing for rapid construction of new power generation facilities. Knowing you have a reliable counterparty allows for better profit forecasting!
Furthermore, many of the power generation projects are dedicated solely to the data center and disconnected from the broader grid; a contract stipulating a minimum rate of return is essential for a project that takes years to break even and has only a single customer.

Finally, it's worth pointing out that while renewables like wind and solar are the preferred long-term source of power for the data centers, the demand right now is high enough that other options are on the table too. Nuclear power is back in the spotlight, and natural gas turbines offer rapid deployment. Each power source comes with pros and cons: some are more flexible, and some are harder to ramp up and down with data center demand. There will be a wide variety of solutions to the power conundrum over the next several years, and many solutions that start off as off-grid or dedicated power supplies may eventually end up becoming part of the larger power system.

Opportunities in Renewables

The long-term demand for electricity is an opportunity to accelerate the buildout of clean energy solutions. The top hyperscalers have set ambitious goals to reduce their carbon footprints; renewable power is a must for them over longer time frames. Among the top names, Google is targeting to operate its data centers on carbon-free energy by 2030, and Microsoft and Amazon both hope to shift to 100% renewable energy by 2025.

The massive undertaking is already underway, as Microsoft (MSFT) and Brookfield Renewable Partners (BEP) signed the biggest-ever clean power deal this year for their data centers in the US and Europe, effectively penciling in 10.5 gigawatts of renewable energy capacity starting in 2026 and estimated to cost more than $10 billion.
Keep in mind that 10.5 gigawatts is 3 times larger than the amount of electricity consumed by all data centers in Northern Virginia, commonly known as the data center capital of the world due to its favorable state tax incentives and best-in-class access to power and internet connectivity.

With the steadfast increase in renewable energy demand due to corporate mandates, there are several ways to invest. We've already touched on the financing side, where groups like BlackRock or Brookfield are raising capital to fund the construction of new power projects. Then there's the manufacturing side: companies that make solar panels or electrical components, or companies who build wind turbines like GE Vernova (GEV). Finally, there are the utilities who want to own and operate the power generation facilities over the next several decades, companies like NextEra Energy (NEE) or Constellation Energy Corp. (CEG).

There are also cutting-edge innovations in the power space that are less proven. As an example, innovative nuclear technology, like small modular reactors, is progressing rapidly and has received extensive backing from the US Department of Energy. These new reactors, built by firms such as NuScale Power (SMR), are smaller than traditional reactors and don't have to be custom-built on-site. The process of generating electricity through nuclear fission is the same, but the benefits include reduced construction time, enhanced passive safety features, greater flexibility in deployment, and more efficient fuel usage. For broad adoption with data centers in the US, we'll need to see continued investment in generation capacity and regulatory compliance, so patience is key.

Image: the core module of China's Linglong One, the world's first commercial small modular reactor (SMR), self-developed by the China National Nuclear Corporation, was installed in Changjiang Li Autonomous County, Hainan Province, on August 10, 2023.

Far from Being Over

The rapid expansion of artificial intelligence and the ensuing energy demand are creating significant opportunities in the energy sector. As hyperscalers like Amazon, Google, and Microsoft ramp up their AI infrastructure, the need for massive amounts of electricity will drive investments in both traditional and renewable energy sources. While fossil fuels may provide immediate solutions due to existing infrastructure, the long-term focus is shifting toward renewables, such as wind, solar, and even nuclear power, to meet sustainability goals.

Well-established and innovative companies in the energy sector, especially those forming strategic partnerships with hyperscalers, can offer an alternative method to invest in the AI market besides buying chip companies or hyperscalers directly, even if the full societal benefits of the investment end up taking years to play out. The ongoing collaboration between energy and AI firms will be crucial for supporting AI adoption and addressing the challenges that lie ahead. Regardless of any short-term uncertainties, the unrelenting demand for energy remains far from being satisfied, and investors should continue to monitor the space for opportunities. | Unknown | Unknown | null | null | null | null | null | null |
|
news | Timothy Papandreou, Contributor https://www.forbes.com/sites/timothypapandreou/ | The AI Power Squeeze: A Data Center Sustainability Imperative | The exponential growth of Artificial Intelligence presents a double-edged sword for big tech companies data centers that are the AI factories of the future. | https://www.forbes.com/sites/timothypapandreou/2024/09/12/the-ai-power-squeeze-a-data-center-sustainability-imperative/ | 2024-09-13T03:49:17Z | Image: AI data centers are expanding worldwide and need enormous amounts of energy; some, like one in Sweden, run on 100% hydro power.

The exponential growth of Artificial Intelligence presents a double-edged sword for big tech companies' data centers, the AI factories of the future. While AI offers immense opportunities, it also brings a significant challenge: a surge in greenhouse gas emissions from the energy-intensive data centers powering AI workloads. AI requires immense computational power, translating into massive energy consumption by data centers.

So why are they so energy intensive all of a sudden? Traditional data center designs didn't fully factor in co-generation, heat and water load management with adjacent industries, grid optimization, and local energy storage, as most of those concepts were experimental at the time. On the building operations side, we see one issue. However, looking inside, we find another: most companies' teams lacked effective methods to predict and measure the processing capacity required in their data centers. This resulted in underutilization of their central and graphics processing units, further exacerbating the energy problem.

Even with access to renewable energy, the relentless and exponential demand would strain capped sources, struggling to keep pace with the rapid growth of AI processing needs.
The intermittent nature of renewable energy sources such as solar and wind, and the geographic constraints of hydro and geothermal power, pose a significant challenge in meeting the constant and growing energy demands of data centers.

Moreover, the current geographical distribution of data centers often limits their access to new and remote renewable energy resources. While Google, Meta, Microsoft, and Amazon try to reach their ambitious sustainability goals, they are struggling to keep up while scaling up their AI efforts. This is a massive growth phase they're all in, with a new level of compute that annual sustainability reporting will need to thoughtfully address and acknowledge.

Optimizing the AI Factory of the Future (aka Today's Data Centers)

So, how can data centers, the heart of AI production, become more sustainable? For one, the clients, designers, builders, and operators need a fresh, systems-thinking approach to understand all the factors, costs, and benefits and work this out together. In addition, here are some potentially promising solutions that can help move the needle:

Hardware and Software Efficiency: Nvidia's recent GPUs often outperform CPUs by a factor of up to 20 in terms of energy efficiency for specific AI and high-performance computing tasks. This alone can significantly reduce energy consumption without compromising performance.

Renewable Energy & Storage: Increased investment in renewable energy sources like solar, wind, geothermal, and hydro, coupled with efficient new battery storage options, can provide a cleaner and more reliable power source.
One example is Google's partnership with NV Energy to tap into geothermal energy to support its data center expansion in Las Vegas, Nevada, a growing data center region.

Small-Scale Nuclear: While controversial, some experts believe small-scale, modular, on-site nuclear power systems could offer a low-carbon, high-energy-density solution.

Grid Optimization and Heat Sharing: Data centers generate immense heat and need commensurate access to water to keep them cool. While some of this can be alleviated with load management, companies like Equinix and Schneider Electric are developing smart grids that can utilize and share this heat with surrounding communities to improve overall neighborhood-level energy efficiency.

The Irony and Opportunity: AI as the Systems Solution, Not Just the Problem

The irony is that the immense energy needs of AI data centers could be the very catalyst for solving the entire economy's energy challenges. AI-powered automation across industries can significantly reduce energy consumption in manufacturing, transportation, and even agriculture. Consider self-optimizing, energy-sharing factories, or self-driving electric vehicles connected to homes with battery walls and the local grid, all powered and load-managed by AI, with the potential to be far more energy efficient as a system and a network than their stand-alone, siloed counterparts. This shift could ultimately lead to a net benefit in energy use, even with the rise of AI data centers.

The Work Computer vs. Paper Office Paradigm

The transition from paper offices to work computers offers a historical example of technology's potential for both increased efficiency and new energy burdens.
While data centers consume significant power, they can replace the energy used in paper production, transportation, and the physical commutes associated with paper-based and manual workflows. There's a principle in economics called the law of diminishing returns, which applies to energy use as well. As technology advances, we see significant reductions in energy use per unit of output. However, further improvements become harder to achieve, and people consume more of it.

Despite this limitation, the future of energy is bright. The focus on renewable energy sources like solar and wind is making them more viable alternatives to traditional, high-emission sources. Additionally, breakthroughs in energy generation, battery storage, and even entirely new forms of clean energy could push the boundaries of efficiency even further. Finally, a focus on a "circular economy" in the tech sector, where materials are reused and recycled, can significantly reduce the environmental impact of manufacturing and disposing of electronic equipment.

A Global Challenge, a Collaborative Solution

Data center sustainability isn't just a tech company concern. Every nation will soon have its own domestic AI data center network, housing efficient data and energy management centers as part of its critical resiliency infrastructure. Collaboration and knowledge sharing are key. This includes sharing best practices, deeper industry partnerships, and joint research efforts to tackle some of these systemic challenges.

The race to develop powerful AI is on, but it can't come at the expense of environmental sustainability. Optimizing data centers and fostering collaboration are crucial steps towards building greener AI factories for the future. However, the story doesn't end there.
The very technology driving the energy surge in data centers could be the key to unlocking a cleaner, more efficient energy future for the entire global economy. | Unknown | Computer and Mathematical/Architecture and Engineering | null | null | null | null | null | null |
|
news | Glenn Gow, Contributor https://www.forbes.com/sites/glenngow/ | How CEOs Can Overcome AI Adoption Challenges: Strategies For 2025 | AI implementation comes with significant challenges that demand careful consideration. Let’s review the key ones. | https://www.forbes.com/sites/glenngow/2024/09/22/how-ceos-can-overcome-ai-adoption-challenges-strategies-for-2025/ | 2024-09-22T10:00:00Z | (This article is part of a series on Artificial Intelligence for CEOs, Board Members and Senior Executives.)

You've heard the hype. Artificial Intelligence promises to revolutionize business, but you're hesitant. As a CEO, you're right to be cautious. AI implementation comes with significant challenges that demand careful consideration. Let's review the key ones.

The Cost Conundrum

First, there's the cost. AI projects require substantial upfront investment in technology, infrastructure, and talent. You might wonder if the returns justify this expenditure. The price tag for comprehensive AI integration can run into a significant percentage of your costs, potentially straining your financial resources.

According to a recent Gartner survey, demonstrating ROI is the biggest obstacle to generative AI adoption. Forty-nine percent of those surveyed said estimating and demonstrating AI value is their biggest obstacle, 42% said it was a lack of talent, and 40% said it was a lack of confidence in AI technology.

The Talent Tug-of-War

Then there's the talent gap. The market for AI specialists is fiercely competitive, and their salaries can be eye-watering. The shortage of skilled AI professionals is a global issue, with demand far outstripping supply.
While 81% of IT professionals feel they can take on AI, only 12% have AI experience, and 70% of employees need to upgrade their skills (see Ensuring You Have the Right People for Success: Assessing Your Team).

Data Privacy

Lastly, data privacy and security concerns are paramount. AI runs on data, and you're responsible for protecting sensitive information. The reputational and financial risks from a data breach are substantial, and navigating the landscape of AI ethics and regulations adds another layer of complexity.

Overcoming the Hurdles: Strategies for Success

These concerns are valid, but they're not insurmountable. Forward-thinking companies are finding ways to overcome these hurdles and reap the benefits of an AI investment. How are these forward-thinking companies overcoming these challenges?

Think Big, Start Small

Implement a phased approach to AI adoption. Begin with focused, high-impact projects that address specific business problems. This strategy allows you to demonstrate quick wins, build internal capabilities, and gain valuable insights without committing to a massive upfront AI investment. Once these smaller projects prove successful, you can scale AI solutions and minimize risk while providing tangible evidence of AI's value.

For example, UPS adopted ORION (On-Road Integrated Optimization and Navigation) to optimize delivery routes by analyzing routes, traffic, weather, and other factors. ORION saved UPS an average of 6 to 8 miles per day per route, which cut 100 million miles per year from deliveries, resulting in $300 to $400 million saved annually. It also eliminated 100,000 metric tonnes of CO2 emissions.
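A quick sanity check on the ORION figures above: the route count and working days below are my assumptions, used only to see whether the quoted numbers cohere.

```python
# Sanity check on the UPS ORION savings figures.
MILES_SAVED_PER_ROUTE_PER_DAY = 7   # midpoint of the 6-8 mile figure
WORKING_DAYS_PER_YEAR = 250         # assumption: delivery days per year
ANNUAL_MILES_SAVED = 100_000_000    # article figure

implied_routes = ANNUAL_MILES_SAVED / (MILES_SAVED_PER_ROUTE_PER_DAY * WORKING_DAYS_PER_YEAR)
print(f"Implied delivery routes: ~{implied_routes:,.0f}")

# Cost per mile implied by the $300-400M annual savings (midpoint $350M).
implied_cost_per_mile = 350_000_000 / ANNUAL_MILES_SAVED
print(f"Implied cost per delivery mile: ${implied_cost_per_mile:.2f}")  # $3.50
```

The implied figures (tens of thousands of routes, a few dollars per delivery mile) are plausible for a fleet of UPS's size, so the quoted savings hang together.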
The initial AI project has proven so successful that UPS has continued to invest in AI by introducing UPSNav, with turn-by-turn directions, and UPS MyChoice, with personalized customer services for delivery routing and notifications.

Winning the Talent War

Tackle the talent challenge head-on: partner with universities and AI research institutions to create a pipeline of skilled professionals. Whether or not you can do that, you must invest in upskilling your existing workforce. This dual approach will help you build a team that understands both AI and your business context.

Consider creating internal AI academies or sponsoring employees for specialized training programs. In a Skillsoft survey, IT professionals reported that training improves work quality (62%), engagement (47%), and job performance (47%), and 82% of IT professionals say a lack of training is the primary reason they change jobs. By developing talent in-house, you'll not only fill skill gaps but foster loyalty and retention.

Crafting a Clear AI Roadmap

Develop a clear AI roadmap that aligns with your business goals. Identify specific use cases where AI can drive immediate impact. Set realistic timelines and ROI expectations. This strategic approach will help you prioritize AI investments and measure success effectively.

Your roadmap should include short-term wins, medium-term goals, and a long-term vision. Consider high-impact, low-effort initiatives that yield immediate returns, such as adopting AI chatbots, using AI for demand forecasting, or AI-powered process automation. Comprehensive planning will help you maintain momentum and secure ongoing support for AI initiatives.

Wayfair has been embracing AI in a strategic way. The company started with AI-supported support, giving customer service reps AI-crafted responses for customer chat and emails to reduce response time and increase customer satisfaction.
More recently, Wayfair has developed a tool that leverages AI and augmented reality to help customers visualize Wayfair products in their home.

Building Trust Through Data Governance

Proactively address data concerns. Implement robust data governance frameworks and strong security measures. Ensure ethical AI use and regulatory compliance. By taking these steps, you'll build trust with customers, employees, and stakeholders, creating a solid foundation for future AI initiatives.

Consider appointing a chief AI ethics officer or establishing an AI ethics board to oversee your AI projects. This demonstrates your commitment to responsible AI use and can help alleviate concerns from both internal and external stakeholders.

IBM has been careful to build data governance into its AI practices. For example, as part of its large language model training, IBM has been careful to identify sensitive data, consider data privacy, monitor data for security risks, watch for copyright infringement, and ensure consent when using customer data.

The Competitive Edge: Why Early AI Investment Pays Off

The benefits of early AI investment are too significant to ignore. Companies that embrace AI gain a powerful competitive edge by enhancing operational efficiency, slashing costs, and delivering personalized customer experiences that drive loyalty and revenue.

Data-Driven Decision Making

AI-driven insights will empower you to make data-backed decisions faster and more accurately. You'll uncover new opportunities, optimize processes, and stay ahead of market trends, which is essential in fast-moving industries where quick, informed decisions are crucial.

For example, Amazon uses AI decision-making to optimize costs and expedite shipping. Amazon uses AI-powered robots to pack and ship orders and to check delivery routes and traffic for faster delivery.

Operational Excellence

AI can streamline operations across your entire organization.
From supply-chain optimization to predictive maintenance, AI-powered solutions can significantly reduce costs and improve efficiency. For instance, an AI system might predict equipment failures before they occur, minimizing downtime and maintenance costs.

For example, companies like Apollo Energy Analytics are using AI-powered predictive analytics to detect anomalies in the performance of solar systems, telling solar installers when service is needed. Any type of equipment can benefit from machine learning models that can predict failure and explain why.

Customer-Centric Innovation

With AI, you can offer hyper-personalized experiences that delight customers and promote loyalty. AI-powered chatbots can provide 24/7 customer service, while recommendation engines can boost sales by suggesting products tailored to individual preferences. These innovations can set you apart in crowded markets.

According to the Porch Group, 69% of companies rated personalized customer experience as a top priority. Research also shows that 81% of customers prefer companies that offer personalized service, and 70% want customer service where the companies know about past interactions.

Sephora has embraced AI to give customers a new kind of shopping experience and promote customer loyalty. The Sephora Virtual Artist app uses AI to recommend cosmetics based on customer skin tone and preferences. The Virtual Artist app has increased Sephora's online sales by 35%.

Future-Proofing Your Business

Committing to AI now will future-proof your business against industry disruption. As AI capabilities advance, early adopters can leverage new technologies and maintain market leadership. You'll be ready to capitalize on emerging opportunities while competitors struggle to catch up (see CEOs Guide to Generative AI: Advice for 2024).

The Call to Action: Embrace AI Leadership

The time to act is now. Start by assessing your company's AI readiness. Identify key areas where AI can generate immediate value.
Develop a tailored AI strategy that addresses your specific business challenges and opportunities. Allocate resources for AI initiatives, but do so strategically. Remember, successful AI adoption is as much about nurturing a culture of innovation as it is about technology. Foster an environment that embraces continuous learning and adaptation.

Create an AI task force comprising leaders from different departments to ensure a holistic approach to implementation. This cross-functional team can help identify opportunities, address challenges, and champion AI adoption across the organization.

AI is not just another IT project. It's a fundamental shift in how business operates. By overcoming your reluctance and investing in AI today, you're not just keeping up with the competition; you're positioning your company to lead in the AI-driven future.

The question is no longer whether you should invest in AI but how quickly you can start. Your competitors are already moving. Will you lead, or will you follow? | Unknown | Management | null | null | null | null | null | null |
|
news | null | A look at the technology trends that matter most | McKinsey’s latest analysis of 15 core technology trends points to a new “influencer” role for generative AI as well as growing interest in robotics and digital trust. | https://www.mckinsey.com/capabilities/mckinsey-digital/our-insights/a-look-at-the-technology-trends-that-matter-most | 2024-09-05T00:00:00Z | Innovation and interest in the tech sector remain strong, despite market challenges and dips in investment. On this episode of The McKinsey Podcast, McKinsey technology experts Lareina Yee and Roger Roberts share findings from the McKinsey Technology Trends Outlook 2024 report. They talk with editorial director Roberta Fusaro about where innovation is exploding, interest is deepening, and investment is flowing. In our second segment, from our CEO Insights series, McKinsey partner Blair Epstein explores how successful CEOs organize their yearly communication plans, including how to manage communications through a crisis. This transcript has been edited for clarity and length. The McKinsey Podcast is cohosted by Lucia Rahilly and Roberta Fusaro.

The state of technology

Roberta Fusaro: This is the fourth year in a row that McKinsey has released its outlook on technology trends. The research covers 15 trends, including advanced connectivity, cloud and edge computing, quantum computing, generative AI [gen AI], and applied AI. What parameters did you use for this research? Roger Roberts: We look at several different dimensions. One, we look at innovation. So we try to assess how much innovation is happening in each trend. Then we look at interest, which tries to assess the volume and depth of dialogue that's happening about each of these trends in the world.
And then we look at investment: how much money is flowing into the companies and technologies that are enabling and driving the trends. Roberta Fusaro: According to the latest research, innovation and overall interest in technology remains strong, although investment in new technology did fall. Lareina, what accounts for these findings? Lareina Yee: In many ways, that's not surprising. We are in a golden age of innovation and possibility in terms of technology. The drop in investment was between 30 percent and 40 percent, which is about $570 billion. What's interesting is that, despite that drop in investment, we do see that there are pockets amongst the trends that are continuing to see a rise in investment. To no surprise, an example of this is gen AI. But what might be less expected is in areas such as robotics, as well as in climate and sustainability technology. Roberta Fusaro: The research also showed a dip in job postings for tech talent. How much of that was due to macroeconomics, and how much due to other factors? Roger Roberts: When we see the job-posting numbers, we also find those to be quite correlated with investment. Headwinds around investment turn into headwinds around talent addition. When we look at a multiyear picture, we see that talent demand remains strong against all of these trends. It's especially impressive that in a few areas, such as AI and renewable energy, the talent demand continued to hold up, even this year in the face of significant economic headwinds.

Gen AI as an influencer

Roberta Fusaro: Lareina, in regard to gen AI, what stood out in some of the research that we did this year? Lareina Yee: What is interesting is how gen AI is a catalyst. We think of it as sunshine that lights up many of the trends. Some of it is good, old-fashioned analytical AI, machine learning. We see increases in interest and investment in those arenas, and we certainly see very interesting deployments.
I've heard more about digital twins anecdotally recently than I have in the past couple of years. But also, there are impacts into other trends. The capabilities in gen AI are very relevant for robotics, which is one of the trends that we highlight this year. Roger Roberts: I would also highlight that gen AI builds on the progress in cloud computing. It also builds on technologies that enable energy-efficient data centers fed by lower-carbon sources of energy supply. It also builds upon advanced connectivity to assure that users around the world have true access to the amazing capabilities rapidly evolving in the AI domain.

Robotics and digital trust

Roberta Fusaro: Lareina, you had mentioned two newer trends that were covered in this year's report. One was robotics. The other one was digital trust and cybersecurity. Can you tell us more about why these two technology domains made the cut this year? Lareina Yee: I think you have a combination of necessity as well as inspiration. If you want to start implementing gen AI, digital trust is a day one essential. So all of the things we do to create data security, to create a trust framework, to be able to use these models in safe and responsible ways while moving fast, are very connected to digital trust and security. Roger Roberts: Robotics is obviously a form of automation that's been around for a while, decades and decades. On the other hand, what's changed more recently is the notion that robots can be designed and can adapt to more than just a few tasks. The training of robots using modeling techniques that are either borrowed from or very similar to those for large language models and even the use of large language models themselves as components of robotic systems have opened up new vistas for robots to take a particular task, learn it, improve it, and then move on to additional tasks.
And the notion that robots have become more and more general purpose has really drawn a lot more interest, innovation, and investment into the robotics sector, and we're going to see an ongoing blooming of innovation in this domain. Roberta Fusaro: Can you give us an example of a cool use case for robotics? Roger Roberts: Robots are getting better and better at food preparation. This is a skill that requires not just tactical picking and placing of items you might put on an assembly line. It also requires vision, assessment of context, understanding of changing conditions in the kitchen, and the flexibility to make many kinds of dishes on a menu that might also be constantly changing. And so the ability for robots to move into this domain has been an exciting development and certainly one that we're starting to see more and more food preparation businesses experiment with and begin to scale up. Lareina Yee: There are also broader pilots happening in the retail and consumer segment. For example, think about how you do stocking in inventory management. In the past, we've had robots and machines helping with these activities. But with the infusion of gen AI, the robot might expand into some of these analytical capabilities. Before it was a very literal set of instructions. To oversimplify, before you could say, "You move left five feet, then you go straight two feet, then you move right one foot." But with our current types of modeling and analytics, there could be more variation and judgment involved. And it's not actually human judgment, but it may appear more like that.

Electrification, renewables, and quantum

Roberta Fusaro: There are two other standout trends mentioned in the report, both of which saw growth and investment: electrification and renewables. How can businesses use electrification to stay competitive?
Roger Roberts: Electrification has been a priority, given the need to move toward more and more carbon-free energy sources. As we transform toward carbon-free generation, then that drives demand into the electric grid, which then can create opportunities. It also can create some new constraints. This means a lot of development and a lot of innovation will need to come into our electric infrastructure. To manage the grid better, we'll need technologies and tools to manage the sources and uses that will provide the shock absorbers or storage technologies that help us balance the supply and demand in the grid. All of these are important areas of innovation and investment and are attracting a lot of interest these days. Lareina Yee: One of the things that's been interesting this year, in addition to the engineering and scientific advancements, is we're seeing an increase in investments and incentives. Part of those incentives come from the government. For example, we're seeing policy initiatives in Europe that provide incentives and subsidies in some cases. But you also see private sector actors providing incentives. An example of this is that Cargill, in its agricultural program in many countries, will pay farmers who use sustainable techniques. And in electrification, you saw a huge surge in terms of Amazon and its investments in renewable energy. It's a combination of the need that Roger talks about in terms of what we need in sustainable energy consumption but also increased incentives and investments and focus from both the public sector and the private sector. Roberta Fusaro: What do leaders need to understand about what's happening in the area of renewables? Roger Roberts: This is another domain in which we've seen growth in the volume and velocity of technologies coming into the market. There is more and more deployment of current, strong, renewable tech in solar and wind.
And there are more innovations that allow you to bring those new energy sources into the grid effectively. This will continue to be an important driver of productivity on the electric-power-generation side and also an important element of our global response to the climate challenge that faces all of us. Lareina Yee: One of the trends that I've heard increased interest in that is not actually powered by AI but inspired by how that trend has evolved is quantum technology. Quantum experts debate whether that technology will be ready in the next couple of years, five to ten years, or even more, ten years out. But what we're seeing amongst many business leaders is at least an increased awareness. And why would that be? Quantum technologies are very different from gen AI, but they do have a similarity, which is that once they're actually here and deployed, they will change the way we work very quickly. For example, banks are keeping an eye on the pace of quantum technology because that has the ability to massively change cryptography, which is important for banks.

The ebb and flow of tech development

Roberta Fusaro: Awareness is one aspect, but it leads to a broader question. What are some of the challenges that companies face with scaling some of these trends that we've been talking about? Roger Roberts: When we think about all of these trends collectively, we see them moving down a path from science to engineering to scale. Sometimes the scientific breakthrough then runs into challenges when it comes to engineering scale-up. You're moving from something that's been proven in a lab to something that must be brought to a level of volume production, has to be brought down in cost, and must be brought up in consistency of quality. That can take time and investment. It can take years of engineering effort.
What we've observed recently is that if we think about the trends, like cloud computing and connectivity, collectively, these are allowing innovations to flow around the world more quickly. As a result, we see this time cycle from science to engineering to scale compressing. This is exciting because it allows for more rapid propagations of innovations and their impacts positively in the world, but it also can cause concern. It can mean that challenges from technology can hit us before we're ready. It means that a lot of work has to be done to thoughtfully consider the downstream implications of the introduction of new technologies at scale. Lareina Yee: What's also interesting is we're just seeing the cusp of some of these breakthroughs. We will continue to see more and more capabilities on some of these. For example, if we look at gen AI, we've seen an explosion in terms of the multimodal models. They were there, but we're starting to see more practical use cases of these. We're seeing changes in the context window size, which is the short-term brain of a gen AI model. We're starting to see agentic AI models, which Roger and I just published an article on. It describes the ability of these models not just to summarize but to take an action. So even within these trends, you will see increasing technology innovations and capabilities that allow us to do more things. Roberta Fusaro: Given the speed at which technology advances, the speed at which gen AI is advancing, do you have a few words for executives about how best to keep up? Lareina Yee: My very short answer would be to play and be curious. One of the most outstanding things about these technologies is that you can play with them: you can experiment with them; you can learn about them.
And so for the curious mind, it's a playground. Roger Roberts: Part of the answer here is also just making sure you've got some investment set aside, no matter the economic climate, in looking at the horizon, over the horizon, as you are navigating the challenges that face your business today. And that small investment can sometimes pay off in big ways when you spot a trend that you can take advantage of and ride before your competitors. So keep your eyes on the horizon. Don't fall victim to short-termism, no matter what the weather is around you.

Plotting out the CEO communication plan

Lucia Rahilly: Next up, McKinsey partner Blair Epstein talks about communication road maps, especially when prepping for a crisis, with managing editor of podcasts Laurel Moglen. Laurel Moglen: There are many competing priorities for CEOs. Based on your experience with clients, what percentage of time do great CEOs spend on interactions with their stakeholders? Blair Epstein: There isn't a universal answer. It really will vary across CEOs. Now that said, we do know from our research that, on average, CEOs spend about 30 percent of their time with their external stakeholders. But let me bring that variation to life. Brad Smith, when he was the CEO of Intuit, talked about how he spent 20 percent of his time with those external stakeholders. What that meant was if someone wanted to be a part of that 20 percent, they had to prove why they were a better use of his time than something else. And now, I'll contrast this with Peter Voser, the former CEO of Shell International. He spent about 50 percent of his time with external stakeholders because he felt it was critical for him to be the primary ambassador for the organization to the outside world. The amount of time CEOs should spend on stakeholder engagement depends on their priorities, where they are in their tenure, and what's going on.
However, 30 percent is a good average number to keep in mind. Laurel Moglen: Whether CEOs are investing 20 percent or 50 percent of their time in stakeholders, what approaches to that time have you seen work? Blair Epstein: First, they're disciplined about planning for the year ahead. They'll have a 12-month view of what they'll do and when: investor calls, customer visits, and conferences. Of course, it's dynamic. It will change. But that helps set the priority for the year and define some of the anchor points of their stakeholder engagement. The second thing they'll do is that they'll align stakeholder interactions with their strategic agenda, and they'll make the most of every opportunity. CEOs, especially today, are on the move quite a bit. When they visit a given place, they're going to fit in community events, interactions with government regulators and stakeholders, and employee town halls. They're making the most of every moment so that engaging the right stakeholders isn't a trade-off. It becomes an "and" with other things on their agenda. As I mentioned earlier, they're going to set a limit. Typically, we find that they set those boundaries based on a few questions: Does the stakeholder help reinforce or augment our strategy? Are there opportunities to cocreate strategy and infuse new thinking by engaging the stakeholder? Are there long-term or short-term risks in engaging or failing to engage the stakeholder and in doing it now? Laurel Moglen: CEOs must always be prepared to manage a crisis. How does that fit into a CEO's communication approach? Blair Epstein: It's a great question because the reality is, from 2010 to 2017, headlines that carried the word "crisis" alongside the names of the 100 largest companies on the Forbes Global 2000 appeared 80 percent more often than they did during the previous decade. We can attribute that to a few things. One is that there's heightened demand for business leaders in this era of stakeholder capitalism.
And we're in an increasingly complex geopolitical landscape. There's a saying: good news travels fast; bad news travels faster. I think we could say that false news travels fastest. These factors can come together to create a perfect storm, with the CEO at the center. Laurel Moglen: Would you say it's not a good idea to avoid the storms? Blair Epstein: I don't think it's even an option. This is something that comes up often in our conversations with new CEOs, who may not have found themselves engaging with stakeholders in this way prior to stepping into the role. You don't have a choice. The storm will find you. The crisis will find you. The only question is, how prepared are you for it? Laurel Moglen: What do the best CEOs do to prepare for such storms in this risk-heavy environment? Blair Epstein: There are four things that we see CEOs consistently doing. The first is that they're going to regularly stress test the company. They're looking five, ten years out to assess what challenges they may face, what scenarios they may encounter. And then they work backward from that to adjust plans and build in more resilience for the organization today. Second, they're going to have a built-in command center. We're in an always-on world, and that means that there must be capabilities embedded across the organization to quickly monitor, manage, and disseminate relevant information not only across the organization but to those key external stakeholders when the storm starts brewing. This is typically a cross-functional team, including comms, legal, risk, and executive employees, all of whom have a clear sense of direction on how to manage short- or long-term risks before and during the storm. The third piece is that CEOs have to keep a long-term perspective. They have to have an ability to be in the moment, to manage what's happening now, while also thinking about what's around the corner, the potential opportunities that could arise out of a crisis.
And lastly, and not to be underestimated, the CEOs are personally resilient. They're able to lead by example, to be a steady, guiding force, the calm captain at the helm, so that they can mobilize teams and stakeholders to navigate the roughest waters. Laurel Moglen: Is there one of these four that CEOs more commonly overlook or ignore? Blair Epstein: One of the pain points I see is that when people think about crisis management as a communications exercise, it's easy to miss the opportunity to have done the scenario testing. To have gotten yourself ready in advance so that your strategy may evolve puts you in a better position to navigate any eventual storms. | Unknown | Management/Business and Financial Operations | null | null | null | null | null | null
|
news | Kamya Pandey | UAE Tech Firm G42 Set to Launch Hindi Large Language Model ‘NANDA’ | NANDA is a 13-billion parameter model trained on approximately 2.13 trillion tokens of language datasets, including Hindi. The post UAE Tech Firm G42 Set to Launch Hindi Large Language Model ‘NANDA’ appeared first on MEDIANAMA. | https://www.medianama.com/2024/09/223-uaes-g42-launch-hindi-large-language-model-nanda/ | 2024-09-12T05:27:22Z | United Arab Emirates (UAE)-based technology group G42 announced its plans to release a Hindi large language model (LLM), ‘NANDA’, a 13-billion parameter model trained on approximately 2.13 trillion tokens of language datasets, including Hindi. G42 also has an Arabic LLM called JAIS, which consists of models trained on 590 million to 70 billion parameters. The group said that it wanted to replicate the same in other regions with underrepresented languages. The race to develop and launch LLMs for Indian languages is gaining momentum. Earlier this year, Tech Mahindra announced the launch of its Indic language LLM Project Indus. The company initially trained the LLM in Hindi and around 37 dialects, and intends to move on to cover other languages in a phased manner. Similarly, Ola has also trained its Krutrim family of LLMs in Indian languages. Under India’s AI mission, the government plans to build indigenous AI for critical sectors. The idea of India-specific LLMs was a topic of discussion even before the India AI mission at the Global Partnership on AI (GPAI) last year, with different experts having different opinions about the matter. Some, like Mitesh Khapra, assistant professor at IIT Madras, argued during GPAI that India’s focus currently should be on creating LLMs of Indian languages and content. He mentioned that companies could create LLMs for Indian use cases by using open-source models and combining them with Bhashini, a language translation platform created by India's IT Ministry.
"The Indian AI industry should focus on teaching LLMs in Indian languages, contexts, and content," he argued. However, he also mentioned that one of the challenges in creating an Indian language LLM is that Indians are comfortable with English interfaces. Thus, to encourage Indian users to engage with Indian language LLMs, AI companies will need to develop foundational speech recognition technology for these languages. Also read: | Unknown | Education, Training, and Library/Management/Business and Financial Operations | null | null | null | null | null | null
|
news | Julian Horsey | Ollama Update Adds New AI Models, Memory Management, Faster Performance & More | Ollama has released a new version with significant updates and features. This release addresses long-standing user requests and introduces new models with various capabilities. The update process is streamlined across different operating systems, ensuring ease of use without data loss. Updating to the latest version of Ollama is a breeze, no matter what operating system […]The post Ollama Update Adds New AI Models, Memory Management, Faster Performance & More appeared first on Geeky Gadgets. | https://www.geeky-gadgets.com/ollamas-latest-update-adds-new-models/ | 2024-09-25T10:41:03Z | Ollama has released a new version with significant updates and features. This release addresses long-standing user requests and introduces new models with various capabilities. The update process is streamlined across different operating systems, ensuring ease of use without data loss. Updating to the latest version of Ollama is a breeze, no matter what operating system you use. Mac users can conveniently trigger the update right from the menu bar icon. On Windows, simply access the update through the icon in the lower right corner. For those running Linux, just rerun the install script to get the latest and greatest.And if you’re a Docker aficionado, manage your volumes for models, stop the current container, delete it, pull the newest version, and spin it back up. 
The update process has been carefully streamlined to ensure maximum ease of use while safeguarding against any potential data loss.

Major New Features Boost Performance and Usability

This release introduces two standout features that significantly enhance Ollama's performance and usability:

Improved memory management: A new `ollama stop` command allows you to easily unload models from memory when they're not needed, giving you fine-grained control over resource allocation.
Performance enhancements: Docker users on Windows and Linux will especially appreciate the noticeable speed improvements, with faster startup times that boost overall efficiency.

These optimizations make it easier than ever to harness Ollama's advanced capabilities while ensuring smooth operation across diverse environments. Whether you're running intensive model training or performing real-time inference, Ollama is engineered to deliver top-notch performance.

Here is a selection of other articles from our extensive library of content you may find of interest on the subject of Ollama:

Suite of Advanced Models for Diverse Applications

Ollama's latest release introduces a powerful suite of new models tailored for a variety of specialized applications:

Solar Pro Preview: Optimized for single GPU use, this model features high intelligence and is perfect for compute-constrained environments.
Qwen 2.5: Renowned for its expansive knowledge and robust performance, Qwen 2.5 is available in multiple parameter sizes to suit your specific needs.
Bespoke-Minicheck: Need quick yes/no answers?
This model specializes in providing concise binary responses to prompts.
Mistral Small: A versatile powerhouse, Mistral Small excels at translation, summarization, sentiment analysis, and reasoning tasks.
Reader LM: Seamlessly convert HTML to markdown with this handy model, though initial performance may vary as it learns.

Whether you're working on NLP, computer vision, data analysis, or other AI domains, Ollama's diverse model lineup empowers you to tackle a wide range of challenges. Each model is carefully crafted and rigorously tested to deliver accurate, reliable results.

Empowering Users Through Feedback and Collaboration

At Ollama, we believe that the ultimate measure of our software's success lies in the hands of our users. Rather than relying solely on abstract benchmarks, we encourage you to dive in, explore the new features and models, and share your candid feedback. Your insights and experiences play a vital role in shaping the future of Ollama.

By actively collaborating with our user community, we can continue to refine and optimize Ollama to better meet your evolving needs and exceed your expectations. Together, we can push the boundaries of what's possible and unlock new frontiers in AI and machine learning.

Ollama's latest software release represents a major leap forward, with streamlined updates, enhanced performance, and a versatile collection of advanced models. Whether you're a researcher, developer, data scientist, or AI enthusiast, Ollama equips you with the tools you need to tackle complex challenges and drive innovation.

Media Credit: Matt Williams

Filed Under: AI, Top News

Latest Geeky Gadgets Deals

Disclosure: Some of our articles include affiliate links. If you buy something through one of these links, Geeky Gadgets may earn an affiliate commission. Learn about our Disclosure Policy. | Unknown | Unknown | null | null | null | null | null | null
|
news | Levi Masonde | AI Chatbot with Message History using LangChain and SQL | In this tutorial, you will learn how to create a message history and a UI for a LangChain chatbot application. | https://www.mssqltips.com/sqlservertip/8097/ai-chatbot-message-history-langchain-sql/ | 2024-10-01T04:39:43Z | By: Levi Masonde | Updated: 2024-10-01 | Comments | Related: More > Artificial Intelligence

Problem

You have started working on AI applications and went through the basics: understanding concepts and writing code that interacts with large language models (LLMs). But, in most cases, you will see tutorials that serve as introductory articles, which do not help you create well-rounded deployable applications. In this article, we look at how to add message history and a user interface for an LLM application.

Solution

Adding message history and a user interface (UI) to your LLM application can bring it to life as you get closer to completing an LLM application for deployment. In this tutorial, you will learn how to create a message history and a UI for a LangChain chatbot application. This tutorial uses concepts introduced in a previous article and further explores others, like how AI can utilize local memory.

Prerequisites

Visual Studio Code
Python
LangChain

Prompt Template

A prompt template is a set of instructions or input from a user to guide a model's response. This gives the model context for any given task or question. You can use different prompt template techniques for your model, including zero-shot or few-shot prompts. You can read more about these techniques here.

This tip will use the ChatPromptTemplate, which helps to make prompts for chat models that create a list of chat messages. These chat messages carry an additional parameter named role. The role can be of a human, AI, or system.

This is the prompt code for this article:

# Define the prompt template
prompt = ChatPromptTemplate.from_messages(
    [
        (
            "system",
            "You're an assistant who speaks in {language}. Respond in 20 words or more",
        ),
        MessagesPlaceholder(variable_name="history"),
        ("human", "{input}"),
    ]
)

As you can see from the code above, we are prompting the system role and telling it to expect input from the user.

Runnables Interface

Runnables are a unit of work that can be invoked, batched, streamed, transformed, and composed. The runnable interface makes it easy to create custom chain protocols. Here, you will create a runnable with an LLM model and the prompt template we created in the previous section.

LangChain Runnables can be invoked using the following functions:

Stream: Stream back chunks of data.
Invoke: Call the chain on an input.
Batch: Call the chain on a list of inputs.

This tutorial uses the RunnableWithMessageHistory class, which adds a message history to your chain. This class loads messages before it passes them through the Runnable and then saves the AI response after the Runnable is called. The RunnableWithMessageHistory class also has a session_id parameter to create multiple conversations:

image source: https://python.langchain.com/v0.1/docs/integrations/chat/

The get_session_history function needs to be passed to the RunnableWithMessageHistory class. This function takes the session_id as the parameter and returns a BaseChatMessageHistory object. The BaseChatMessageHistory object is responsible for loading and saving the message object. In this tutorial, you will use the SQLChatMessageHistory to return a message history object that uses SQLite as the storage memory.

Creating the Web User Interface

LangChain can be integrated with a Web Server Gateway Interface (WSGI), like Flask, to create web applications using Python.
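Before wiring this into a web server, the per-session lookup that RunnableWithMessageHistory performs can be sketched in plain Python. This is a hypothetical stand-in for illustration only: the function names mirror the tutorial's, but the in-memory dict and the fake model below are not LangChain or SQLite.

```python
# Hypothetical sketch of the session-history idea, not LangChain itself:
# look up a history list per (user_id, conversation_id) key, let the
# model see it, then append the new exchange.
from collections import defaultdict

# In-memory stand-in for the SQLite store used later in the tutorial.
_store = defaultdict(list)

def get_session_history(user_id: str, conversation_id: str) -> list:
    # Same "user--conversation" keying scheme as the tutorial's function.
    return _store[f"{user_id}--{conversation_id}"]

def invoke(model, user_id: str, conversation_id: str, text: str) -> str:
    history = get_session_history(user_id, conversation_id)
    reply = model(history, text)      # the model is shown the prior turns
    history.append(("human", text))   # persist the new exchange
    history.append(("ai", reply))
    return reply

# A fake "model" that just reports how much history it was given.
def echo_model(history, text):
    return f"seen {len(history)} prior messages"

print(invoke(echo_model, "u1", "c1", "hello"))  # seen 0 prior messages
print(invoke(echo_model, "u1", "c1", "again"))  # seen 2 prior messages
print(invoke(echo_model, "u1", "c2", "new"))    # seen 0 prior messages
```

LangChain's version swaps the dict for SQLChatMessageHistory so the history survives restarts, but the control flow is the same: fetch history by key, call the model with it, then save the new turns.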
Now, to create your Flask application, create a new file named App.py and add the following code:
# MSSQLTips Code
import os
from flask import Flask, request, jsonify, render_template
from langchain_cohere import ChatCohere
from langchain_core.runnables.history import RunnableWithMessageHistory
from langchain_community.chat_message_histories import SQLChatMessageHistory
from langchain_core.prompts import ChatPromptTemplate, MessagesPlaceholder
from langchain_core.runnables import ConfigurableFieldSpec
# Initialize Flask app
app = Flask(__name__)
# Retrieve environment variables
cohere_api_key = os.getenv('COHERE_API_KEY')
if not cohere_api_key:
    raise ValueError("COHERE_API_KEY environment variable not found")
# Initialize the ChatCohere model
model = ChatCohere(model="command-r", cohere_api_key=cohere_api_key)
# Define the prompt template
prompt = ChatPromptTemplate.from_messages(
    [
        (
            "system",
            "You're an assistant who speaks in {language}. Respond in 20 words or more",
        ),
        MessagesPlaceholder(variable_name="history"),
        ("human", "{input}"),
    ]
)
runnable = prompt | model
# Function to get session history
def get_session_history(user_id: str, conversation_id: str):
    return SQLChatMessageHistory(f"{user_id}--{conversation_id}", "sqlite:///memory.db")
with_message_history = RunnableWithMessageHistory(
    runnable,
    get_session_history,
    input_messages_key="input",
    history_messages_key="history",
    history_factory_config=[
        ConfigurableFieldSpec(
            id="user_id",
            annotation=str,
            name="User ID",
            description="Unique identifier for the user.",
            default="",
            is_shared=True,
        ),
        ConfigurableFieldSpec(
            id="conversation_id",
            annotation=str,
            name="Conversation ID",
            description="Unique identifier for the conversation.",
            default="",
            is_shared=True,
        ),
    ],
)
@app.route('/')
def index():
    return render_template('index.html')
@app.route('/chat', methods=['POST'])
def chat():
    data = request.json
    language = data.get("language", "english")
    input_text = data.get("input")
    user_id = data.get("user_id")
    conversation_id = data.get("conversation_id")
    if not input_text or not user_id or not conversation_id:
        return jsonify({"error": "Missing input, user_id, or conversation_id"}), 400
    config = {"configurable": {"user_id": user_id, "conversation_id": conversation_id}}
    response = with_message_history.invoke({"language": language, "input": input_text}, config=config)
    return jsonify({"response": response.content})
@app.route('/history/<user_id>/<conversation_id>', methods=['GET'])
def get_history(user_id, conversation_id):
    session_history = get_session_history(user_id, conversation_id)
    messages = [{"type": message.__class__.__name__, "content": message.content} for message in session_history.messages]
    return jsonify({"history": messages})
# Run the Flask app
if __name__ == '__main__':
    app.run(debug=True)
Results
This is how the application looks:
Conclusion
Creating LLM applications will only get more popular and easier with time. However, it is important to understand the concepts behind the technology. This tutorial shows how an LLM AI can have a "memory" when you engage with it using LangChain classes.
Next Steps
About the author
Levi Masonde is a developer passionate about analyzing large datasets and creating useful information from these data. He is proficient in Python, ReactJS, and Power Platform applications. He is responsible for creating applications and managing databases as well as a lifetime student of programming and enjoys learning new technologies and how to utilize and share what he learns.
This author pledges the content of this article is based on professional experience and not AI generated.
View all my tips
Article Last Updated: 2024-10-01 | Digital Assistance/Content Synthesis/Personalization | Unknown | null | null | null | null | null | null
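The load-then-save flow that RunnableWithMessageHistory provides can be sketched without LangChain at all. In this minimal sketch an in-memory dict stands in for the SQLite store and a canned string stands in for the Cohere model; the names here are illustrative, not LangChain's:

```python
# Minimal sketch of session-scoped chat history (plain Python, no LangChain,
# no SQLite). Histories are keyed by "user_id--conversation_id", loaded
# before each call and appended to afterward, mirroring the tutorial's flow.
histories = {}

def get_session_history(user_id, conversation_id):
    # Return (creating if needed) the message list for this session key.
    return histories.setdefault(f"{user_id}--{conversation_id}", [])

def chat(user_id, conversation_id, user_input):
    history = get_session_history(user_id, conversation_id)
    history.append(("human", user_input))       # save the user's message
    reply = f"You said: {user_input}"           # stand-in for the LLM call
    history.append(("ai", reply))               # save the AI response
    return reply

chat("levi", "1", "hello")
chat("levi", "1", "again")
chat("levi", "2", "new thread")
```

Because each (user_id, conversation_id) pair maps to its own history, the second conversation starts empty even for the same user, which is exactly what the history_factory_config in the tutorial's code enables.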
|
news | Jose Karlo Mari Tottoc | Jim Cramer on Nextracker Inc. (NXT): ‘I Know It’s Been A Big Disappointment For Club Members, But You’ve Got To Stick With It’ | We recently compiled a list of the Jim Cramer’s Top 12 Must-Watch Stocks. In this article, we are going to take a look at where Nextracker Inc. (NASDAQ:NXT) ... | https://finance.yahoo.com/news/jim-cramer-nextracker-inc-nxt-071314710.html/ | 2024-09-20T07:13:14Z | We recently compiled a list of Jim Cramer's Top 12 Must-Watch Stocks. In this article, we are going to take a look at where Nextracker Inc. (NASDAQ:NXT) stands against Jim Cramer's other must-watch stocks.
In a recent episode of Mad Money, Jim Cramer traveled to Dreamforce to understand the true capabilities of artificial intelligence (AI) and to separate fact from hype. After diving deep into the topic, he feels more equipped to identify which AI claims are genuine and which are simply marketing fluff. He noted that there seems to be far more misleading information than real advancements.
Cramer emphasized that we don't need expert predictions to see that interest rates are likely to drop, which is crucial for sustaining a bull market. He urged viewers not to overthink the situation and instead focus on promising opportunities, especially in the rapidly growing field of artificial intelligence.
While AI is causing excitement in the stock market, Cramer observed that its impact on the overall economy has been minimal so far. Some applications, like cost savings in corporate back offices, exist, but they aren't groundbreaking. This lack of significant progress has led many analysts to suggest that the hype around AI may be turning into a bubble.
Cramer explained that AI, particularly when paired with advanced computing from chip giants, can greatly enhance efficiency by allowing machines to perform tasks intelligently, similar to skilled humans. He pointed out that businesses strive to produce quality products, but a worker shortage, exacerbated by the COVID pandemic, has hindered their ability to connect with customers effectively. Sales associates often end up rushed and confused, which affects the customer experience.
"Let me tell you what AI really does. When coupled with accelerated computing, AI makes everything go faster. It rationalizes processes and can make machines behave like smart, good humans. Businesses want to produce good products and sell them to real people, whether for enterprises or homes; that's capitalism 101. But right now, we don't have enough workers to do it."
Elevating Customer Service Through Advanced AI Agents
Cramer highlighted that companies are pushing their employees hard for profits, creating a stressful work environment. However, after attending Marc Benioff's keynote, Cramer recognized the potential of a technology called Agent Force. This AI initiative can engage customers in a friendly manner, using personalized data to answer common questions efficiently.
Cramer believes AI is a great solution for customer service, offering clear assistance without the frustrations that often come from overwhelmed human employees. AI agents can listen, reason, and either direct customers appropriately or handle their inquiries independently.
"Companies are already squeezing as much as they can out of their employees to boost stock prices and profits for executives. Then it hit me while watching Marc Benioff's incredible keynote today. He described an initiative called Agent Force, and I realized what this technology can really do. It has time for you, acknowledges you, and is polite. It can almost always answer your questions because there are only so many that get asked regularly, and it has the data to respond to your data. It knows your preferences."
He further explained that AI enhances customer experiences by remembering preferences, helping with returns, and suggesting alternatives based on past purchases. This reduces the need for interaction with human salespeople who may be stressed or impatient. If customers remain unsatisfied, they still have the option to speak to a human, but many might prefer friendly interaction with an AI agent.
"So, what AI really is and what it does is provide us with something better than what we currently have. It's a retailer that knows all about your preferences, can help you return an item, and asks if you want something different from what you bought last time. It frees you from the hassle of a human salesperson who might be frustrated or upset. And if you're not satisfied, you can still speak to a human if you want to, but I doubt you will, because the AI agent is just so much more pleasant."
Revolutionizing Everyday Life: How AI Outperforms Humans in Healthcare, Driving, and Beyond
Cramer also predicted that in ten years, people might wonder why they relied on human doctors when AI could provide more compassionate and effective care. These AI systems could analyze vast amounts of medical data to quickly identify serious health issues, making it easier to catch dangerous patterns before they escalate.
"It's not just retail. In ten years, I think we'll wonder why we ever wasted a doctor's time when AI agents were so much better, kinder, and more empathetic. Doctors could analyze millions of test results that might have taken a decade to sort through, identifying dangerous patterns well before they become problematic, like melanoma, heart disease, or kidney cancer. It's all in the data, and only AI can compile it in an accessible way."
Additionally, Cramer noted that self-driving cars are becoming more common and are significantly safer than human drivers since they don't get distracted or impaired. He mentioned that AI can handle tasks like proofreading without making mistakes, allowing law firms to operate with fewer associates.
"We see self-driving cars everywhere. They're much safer than human drivers. They don't drink and drive, they don't even drink. An AI agent can proofread, correct, and never make a mistake. Law firms can now hire half as many associates because of this efficiency."
Finally, Cramer emphasized that AI agents generally outperform humans in most scenarios. For repetitive tasks, he believes it's preferable to engage with a courteous and efficient AI rather than a stressed human who may not want to help.
"The bottom line is that if you let the AI agent do its job, it will outperform humans in the vast majority of cases. And for repetitive tasks, trust me: you'd much rather have a polite and friendly AI agent than a harried, angry, or exhausted human who really doesn't want to deal with you."
Our Methodology
This article provides a summary of Jim Cramer's latest episode of Mad Money, where he reviewed several stocks. We picked 12 companies and ranked them based on how much they are owned by hedge funds, starting with the ones that are least owned and moving to those that are most owned.
At Insider Monkey we are obsessed with the stocks that hedge funds pile into. The reason is simple: our research has shown that we can outperform the market by imitating the top stock picks of the best hedge funds. Our quarterly newsletter's strategy selects 14 small-cap and large-cap stocks every quarter and has returned 275% since May 2014, beating its benchmark by 150 percentage points (see more details here).
An empty shelf of bifacial PV modules ready to be installed in a large-scale solar project.
Nextracker Inc. (NASDAQ:NXT)
Number of Hedge Fund Investors: 39
Jim Cramer mentioned Nextracker Inc. (NASDAQ:NXT), acknowledging that it has disappointed club members. However, he urged them to stay committed to Nextracker Inc. (NASDAQ:NXT), noting that it is currently much cheaper than First Solar Inc. (NASDAQ:FSLR).
"Let me also throw in Nextracker. I know it's been a big disappointment for club members, but you've got to stick with it; it's much cheaper than First Solar."
Nextracker Inc. (NASDAQ:NXT)'s positive outlook is based on its leadership in solar tracker technology, strong financial results, and rising demand for renewable energy. As a top provider of solar tracking systems, Nextracker Inc. (NASDAQ:NXT) improves solar panel efficiency, positioning itself to benefit from the growing need for solar energy driven by global sustainability efforts.
In its recent Q2 2024 earnings report, Nextracker Inc. (NASDAQ:NXT) reported revenues of $260 million, a significant year-over-year increase, with a gross margin of 25%, showcasing its operational efficiency. The demand for renewable energy, supported by government incentives and corporate sustainability commitments, enhances the market for Nextracker Inc. (NASDAQ:NXT)'s products, which are essential for maximizing solar installation efficiency.
Nextracker Inc. (NASDAQ:NXT) has also secured contracts with major energy developers, boosting its order backlog and revenue visibility. With a favorable industry outlook shaped by net-zero emissions goals and increasing renewable energy capacity, Nextracker Inc. (NASDAQ:NXT) is well-positioned for continued growth, making it an appealing investment in the changing energy landscape.
Overall NXT ranks 8th on our list of Jim Cramer's must-watch stocks. While we acknowledge the potential of NXT as an investment, our conviction lies in the belief that under-the-radar AI stocks hold greater promise for delivering higher returns, and doing so within a shorter timeframe. If you are looking for an AI stock that is more promising than NXT but that trades at less than 5 times its earnings, check out our report about the cheapest AI stock.
READ NEXT: $30 Trillion Opportunity: 15 Best Humanoid Robot Stocks to Buy According to Morgan Stanley and Jim Cramer Says NVIDIA Has Become A Wasteland.
Disclosure: None. This article is originally published at Insider Monkey. | Digital Assistance/Recommendation | Management/Business and Financial Operations/Healthcare Practitioners and Support/Sales and Related | null | null | null | null | null | null
|
news | Rebecca Szkutak | 13 companies from YC Demo Day 1 that are worth paying attention to | Famed Silicon Valley startup accelerator Y Combinator on Wednesday kicked off its two-day "Demo Day" event that showcases what the most recent YC batch, S24,... | https://techcrunch.com/2024/09/25/13-companies-from-yc-demo-day-1-that-are-worth-paying-attention-to/ | https://s.yimg.com/ny/api/res/1.2/84ftAgrHLUaCSl2WoyTBEw--/YXBwaWQ9aGlnaGxhbmRlcjt3PTEyMDA7aD02NDU-/https://media.zenfs.com/en/techcrunch_350/3a8ba5cfd7a0df932e7fcad60a43142d | 2024-09-25T21:52:31Z | Famed Silicon Valley startup accelerator Y Combinator on Wednesday kicked off its two-day "Demo Day" event that showcases what the most recent YC batch, S24, companies are building.
Unsurprisingly, AI companies dominated the day, with startups looking to apply the technology to problems like estate planning and settlements, Elayne; automating clinical trial data, Baseline AI; and helping companies get goods through customs, Passage.
Sectors like fintech, healthcare, and web3, which dominated YC cohorts of the past, were noticeably quieter, or completely absent, from Wednesday's presentation.
Here are the companies worth paying attention to from the first day of Demo Day. Spoiler alert: Pretty much all use AI.
What it does: Automates moving baggage at airports with robots
Why it's a fave: This seems like an ideal use case for robots, considering that collecting and moving baggage at airports is an entirely manual process, which can also be dangerous. This may also be technology that airports would actually be willing to pay for.
What it does: AI automation of clinical trial documents
Why it's a fave: I'm a fan of anything that is aiming to make clinical trials work better and run faster, considering how important they are in the process of getting new drugs and treatments to market. The company claims it can save companies $18 million in costs and lost revenue, which seems like a notable improvement.
What it does: AI-powered estate planning and settlements
Why it's a fave: As someone who has watched a family member navigate this process, I'm glad someone is building a better solution. Plus, the fact that Elayne is looking to reach consumers through their employers is a smart way to get more people thinking about this before they have to.
What it does: Automated testing for AI voice agents
Why it's a fave: There are so many startups building customer support AI systems, but do they work? I think Hamming's strategy of testing out these AI customer service bots is a needed service in this growing ecosystem.
What it does: Data centers in space
Why it's a fave: This company stood out because it seems like an extreme moonshot, and yet it's already landed customers and is launching a demonstrator satellite next year. The concept of using solar energy to power data centers may be one we might want to consider doing on Earth, too.
What it does: Helps cities optimize transit
Why it's a fave: Ontra Mobility's quest to help local governments better utilize their public transit options is a solid one. Most cities don't have the budget to expand public transit options despite population growth, so figuring out a smarter way to utilize what options they already have makes sense.
What it does: AI-assisted customs support
Why it's a fave: Considering how easy it is for consumers to get packages held up by customs, I can only imagine how complicated the importing process is for companies moving a lot of goods across the border all the time.
What it does: AI price optimization
Why it's a fave: This is a super interesting approach to ecommerce pricing. Promi's AI looks to help companies offer data-informed fluctuating discounts to customers that change based on interest and activity. This makes a lot of sense.
What it does: TurboTax for building rebates
Why it's a fave: Personally I'm a fan of any company that helps consumers or other companies unlock the government incentives they are eligible for. I like RetroFix's approach in particular because it's unlocking government money for contractors to make buildings more sustainable.
What it does: Automates government approvals for construction projects
Why it's a fave: This is the kind of application AI was made for. SchemeFlow's software helps construction companies automate technical reports, shrinking the process to minutes. Further impressive, the young company has already generated reports for more than 400 construction projects.
What it does: Synthetic datasets for vision models
Why it's a fave: There is only so much quality data available for large language models to train on, which leaves many LLM companies tempted to get data from sources they shouldn't or aren't allowed to. Help stop AI companies from illegally scraping data? Sounds like a good goal to me.
What it does: Network of in-space refueling stations
Why it's a fave: The space industry is booming; many entrepreneurs are looking to build and send satellites, rockets, and other devices up into space. Building a company that services this growing economy seems like a smart strategy.
What it does: Helps businesses become employee owned
Why it's a fave: The company's mission to help companies transition into employee owned is a novel one. Selling a company to its employees helps create wealth for the employees and generally results in a bigger payout for the seller. Sounds like a win-win. | Process Automation/Decision Making/Content Creation/Content Synthesis | Business and Financial Operations/Management | null | null | null | null | null | null
news | Alan Ohnsman, Forbes Staff, Alan Ohnsman, Forbes Staff https://www.forbes.com/sites/alanohnsman/ | Tech Firms Are Keeping Users In The Dark On AI’s Climate Costs | In this week's Current Climate newsletter: tech firms keep users in the dark on AI's carbon costs; Elon Musk's climate manifesto for Tesla vanishes; hybridizing semis. | https://www.forbes.com/sites/alanohnsman/2024/09/02/tech-firms-keep-users-in-the-dark-on-ais-climate-costs-tesla-manifesto-hybrid-trucks/ | 2024-09-02T11:00:00Z | Current Climate brings you the latest news about the business of sustainability every Monday. Sign up to get it in your inbox.
AI powerhouses like Google and OpenAI, along with multiple global tech giants and startups, are pouring billions of dollars into platforms that spit out business communications, legal documents, jokes, short stories, manipulated images and bizarre artwork in seconds. While that can be fun and maybe a time-saver, the specialized processors that do this at data centers require vast amounts of electricity and water. Exactly how much? The companies don't spell that out.
ChatGPT queries may use at least 10 times more electricity than standard Google searches. And if all Google searches used generative AI, they'd consume as much electricity as a country the size of Ireland, according to the Los Angeles Times, citing estimates from Alex de Vries, founder of Digiconomist. As for water, ChatGPT likely consumes the equivalent of a 16-ounce bottle in as little as 10 queries, the report said, citing calculations by Shaolei Ren, an associate professor of electrical and computer engineering at UC Riverside, and his colleagues.
"Even if we manage to feed AI with renewables, we have to realize those are limited in supply, so we'll be using more fossil fuels elsewhere," de Vries said. "The ultimate outcome of this is more carbon emissions."
Those big power and water requirements should be made clear to people using generative AI platforms, by including them on the sites.
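The Ireland comparison can be reproduced as back-of-envelope arithmetic. The per-query and per-day figures below are assumptions chosen for illustration (they are not stated in this newsletter), roughly in line with de Vries's published worst-case estimate:

```python
# Back-of-envelope version of the "Ireland" comparison.
# All three inputs are assumed figures for illustration only.
searches_per_day = 9e9         # assumed daily Google searches
wh_per_genai_query = 8.9       # assumed Wh per generative-AI query (worst case)
wh_per_standard_search = 0.3   # assumed Wh per standard search

# Annual electricity if every search ran through generative AI, in TWh.
annual_twh = searches_per_day * wh_per_genai_query * 365 / 1e12
# How many standard searches one generative query costs.
ratio = wh_per_genai_query / wh_per_standard_search

print(round(annual_twh, 1))  # ~29.2 TWh, roughly a country the size of Ireland
print(ratio > 10)            # consistent with "at least 10 times more"
```

Under these assumptions the total lands near 29 TWh per year, which is why the comparison to Ireland's national electricity consumption keeps appearing in coverage of de Vries's work.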
"The first step is to have more transparency," Ren said. AI developers "tend to be secretive about their energy usage and their water consumption."
The Big Read
Tesla CEO Elon Musk in the company's electric Roadster, its first model, in January 2009. (Corbis via Getty Images)
Elon Musk's 2006 Climate Manifesto For Tesla Vanishes
Elon Musk's 18-year-old manifesto, in which the tech entrepreneur laid out his vision for the electric car maker shortly after its first public event, has vanished from the company's blog page, along with all posts by Musk and Tesla executives before 2019.
The "Secret Master Plan," the company's de facto constitution, laid out Musk's perspective that Tesla's key purpose was to show how electric vehicles and solar power could help fight climate change, in part by creating more and more affordable EVs, a view that has dramatically changed with the billionaire's politics in recent years.
Musk seemed to break with his earlier thinking on the risks of relying on carbon-spewing oil and gas in his August interview with climate-skeptic Donald Trump, who he's endorsed for president. "My views on climate change and oil and gas are pretty moderate," Musk said during their discussion. "I don't think we should vilify the oil and gas industry and the people that have worked very hard in those industries to provide the necessary energy to support the economy."
It's a far cry from his positions from Tesla's early days: "The overarching purpose of Tesla Motors (and the reason I am funding the company) is to help expedite the move from a mine-and-burn hydrocarbon economy towards a solar electric economy, which I believe to be the primary, but not exclusive, sustainable solution," Musk wrote in the 2006 blog.
Read more here
Hot Topic
Accelera President Amy Davis (Cummins)
Amy Davis, president of Accelera, the cleantech arm of Cummins, on hybridizing heavy-duty trucks
Battery and hydrogen trucks are starting to come to market but have big infrastructure challenges. Do Cummins and Accelera see any other options?
A lot of people are starting to talk about hybrids as an interesting technology to address the greenhouse gas issue. We believe that there are ways to take the electrification components, like an eAxle (for electric vehicles), and put it into a hybrid configuration which gets you ready and proves the technology out. It enables you then over time to just add even more energy storage.
A hybrid semi-truck isn't a zero-emission vehicle but might boost fuel economy significantly, which lowers CO2. Is that something that you're working on?
Yes, we're really excited about it. A recent EPA ruling clarified this, giving some acknowledgment of the greenhouse gas benefits of hybrids. It's important that they accept that technology as that had not been clear previously. I think it's a realistic way to make an impact in the short term while the infrastructure is still developing. We are trying to apply these technologies in both parts of our business, Accelera and the core (Cummins) business, to come together in a way that accelerates electrification and is cost-effective for a fleet to start getting into it now.
Would this be diesel or natural gas hybrids?
I can't say exactly how this is going to show up. We're in discussions on multiple fronts. It absolutely could take that form. What we have on the core side of the business is basically a multifuel kind of engine that can be hydrogen, natural gas or diesel. So designing that into a hybrid could give us a lot of flexibility.
There are different ways to configure the hybrid in terms of how much energy storage you want on board, which affects engine sizing. There's a lot of discussion going on around how to size it. You can lower the cost by using less battery, or you can get more greenhouse gas reduction with more battery and the lower the size of the engine. That's some of the work we're doing with fleets and OEMs now to see what could work best and what people are valuing. I would say this is a couple of years out before you'll see some things on the road.
What Else We're Reading
Biden EPA rejects plastics industry's fuzzy math on recycled content labeling
Lego to replace oil in its bricks with pricier renewable plastic
UN chief issues 'SOS' for Pacific Islands worst hit by warming ocean
The case for a clean energy Marshall Plan
Scientists may have found a radical solution for making your hamburger less bad for the planet
Tesla's rivals still can't use its Superchargers
Chinese EV maker BYD eyes state incentives for Mexico plant
Trump calls wind turbines bird killers. New AI tech saves them from the blades
Earthquake risks and rising costs: The price of operating California's last nuclear plant
Judge blocks EPA from using civil rights law in pollution case
To stay relevant, a Spanish energy giant turns to waste
For More Sustainability Coverage, Click Here.
More From Forbes
Forbes: Waymo Adding A Second Robotaxi Assembly Facility As It Tops 100,000 Weekly Rides (By Alan Ohnsman)
Forbes: Caltrain's Great New Electric Trains Replace Heavy Polluters (By Brad Templeton)
Forbes: Trump Calls Wind Turbines Bird Killers. New AI Tech Saves Them From The Blades (By Carlton Reid) | Content Creation/Content Synthesis | Arts, Design, Entertainment, Sports, and Media/Business and Financial Operations | null | null | null | null | null | null
|
news | Esther Ajao | Oracle makes OCI GenAI Agents with RAG available | The offering provides RAG capabilities for customers that use Oracle databases. Use cases include legal research, finance and customer service support. | https://www.techtarget.com/searchenterpriseai/news/366610221/Oracle-makes-OCI-GenAI-Agents-with-RAG-available | 2024-09-10T20:04:00Z | Oracle on Tuesday made generally available its OCI GenAI Agents platform with retrieval-augmented generation.
OCI GenAI Agents (Oracle Cloud Infrastructure) with RAG (retrieval-augmented generation) was first introduced in January.
RAG is a framework that helps large language models (LLMs) retrieve current information better and reduce hallucinations.
OCI GenAI Agents with RAG provides Oracle customers with out-of-the-box RAG capabilities, helping them avoid manual processes like agent planning and data retrieval, according to the vendor. The platform also self-checks outputs to reduce hallucinations.
Geared for customers
While Oracle introduced OCI GenAI Agents earlier in the year, interest in AI agents, or autonomous systems that perform tasks without human intervention, is growing.
What makes OCI GenAI Agents with RAG particularly interesting is that it is geared toward Oracle customers that use Oracle databases and need to validate or complement their AI models with RAG, said Constellation Research analyst Holger Mueller.
"Oracle needs to make RAG as easy as possible for enterprises, so enterprises keep using their databases," Mueller said. "If this was hard, enterprises could move the data out of the databases, something no database vendor wants."
If OCI GenAI Agents with RAG succeeds, it will likely help Oracle do well in the arena of GenAI and databases as GenAI continues to grow rapidly, he added.
AI agents
While Oracle defines its agents as assistants that help build an application, that's not only what agents are, said Mark Beccue, an analyst with TechTarget's Enterprise Strategy Group.
"An agent could be something that does a task," Beccue said, adding that an agent could be a call center assistant working to increase customer satisfaction with more accurate responses.
OCI GenAI Agents with RAG could be useful in various other scenarios, including helping researchers find answers faster, but AI agents could do even more, he said.
"They're an advanced form, they're not a building block, they are a result," Beccue continued.
Other than OCI GenAI Agents, Oracle -- at its CloudWorld 2024 conference in Las Vegas -- introduced AI-centric Generative Development for enterprises. With the AI development environment, developers can generate applications that use AI-powered natural language interfaces and human-centric data, Oracle said.
The AI vendor also revealed new features in some of its offerings. They include:
OCI Generative AI, which helps users integrate language models into different applications, now provides access to Meta Llama 3.1 models. It also supports Oracle GenAI partner Cohere's Command R, Command R+ and Embed models. Cohere also revealed that it is partnering with the Japanese consulting firm Nomura Research Institute. Together they will launch a new financial AI platform.
OCI Speech is an offering that helps users transcribe speech to text and synthesizes speech from text with natural voices. It now has a new real-time transcription capability that includes custom vocabularies support.
Oracle Code Assist helps developers boost velocity by providing suggestions to help them build and optimize applications written in different programming languages.
The new updates and offerings show ways Oracle is focusing on various vertical industries, said Gartner research analyst Sid Nag.
"They are building industry-specific LLMs or SLMs [small language models] that are industry or domain-specific, and private, secure and run in a walled garden [private] environment," Nag said.
Esther Ajao is a TechTarget Editorial news writer and podcast host covering artificial intelligence software and systems. | Process Automation/Information Retrieval Or Search/Decision Making | Legal/Business and Financial Operations/Sales and Related | null | null | null | null | null | null
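The retrieve-then-ground loop that RAG describes can be sketched generically. This toy example uses word overlap for retrieval and a hand-built prompt; it is not Oracle's OCI GenAI Agents API, just an illustration of the framework the article defines:

```python
# Toy retrieval-augmented generation sketch (generic, not Oracle's API):
# retrieve the most relevant documents, then build a grounded prompt.
documents = [
    "Oracle OCI GenAI Agents became generally available in September 2024.",
    "RAG retrieves current documents so the model relies less on stale training data.",
    "Self-checking outputs helps reduce hallucinations.",
]

def tokens(text):
    # Crude tokenizer: lowercase and strip basic punctuation.
    return set(text.lower().replace(".", " ").replace(",", " ").split())

def retrieve(query, docs, k=2):
    # Rank documents by word overlap with the query; real systems use
    # vector embeddings instead of overlap, but the shape is the same.
    q = tokens(query)
    scored = sorted(docs, key=lambda d: len(q & tokens(d)), reverse=True)
    return scored[:k]

def build_prompt(query, docs):
    # Ground the LLM in retrieved text so answers cite current information.
    context = "\n".join(retrieve(query, docs))
    return f"Answer using only this context:\n{context}\n\nQuestion: {query}"

prompt = build_prompt("how does RAG reduce hallucinations", documents)
```

The resulting prompt would then be sent to an LLM; because the model answers from retrieved context rather than memorized training data, stale or invented answers become less likely, which is the hallucination-reduction benefit the article describes.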
|
news | Jose Karlo Mari Tottoc | Jim Cramer Explains That Larry Culp’s goal Was To Reduce Debt And Strengthen Portland General Electric Company (POR)’s Financial Position | We recently compiled a list of the Jim Cramer’s Top 12 Must-Watch Stocks. In this article, we are going to take a look at where Portland General Electric... | https://finance.yahoo.com/news/jim-cramer-explains-larry-culp-064317632.html/ | 2024-09-20T06:43:17Z | We recently compiled a list of the Jim Cramers Top 12 Must-Watch Stocks. In this article, we are going to take a look at where Portland General Electric Company (NYSE:POR) stands against Jim Cramer's other must-watch stocks.In a recent episode of Mad Money, Jim Cramer traveled to Dreamforce to understand the true capabilities of artificial intelligence (AI) and to separate fact from hype. After diving deep into the topic, he feels more equipped to identify which AI claims are genuine and which are simply marketing fluff. He noted that there seems to be far more misleading information than real advancements.Cramer emphasized that we dont need expert predictions to see that interest rates are likely to drop, which is crucial for sustaining a bull market. He urged viewers not to overthink the situation and instead focus on promising opportunities, especially in the rapidly growing field of artificial intelligence.While AI is causing excitement in the stock market, Cramer observed that its impact on the overall economy has been minimal so far. Some applications, like cost savings in corporate back offices, exist, but they arent groundbreaking. This lack of significant progress has led many analysts to suggest that the hype around AI may be turning into a bubble."Specifically, what does AI actually do? Weve heard so much about this technology and how its going to revolutionize everything. Right now, though, we know that while AI has taken the stock market by storm, it really hasnt taken the actual economy by storm. 
There are use cases, sure, back office corporate cost savingsnothing flashy. Thats why so many commentators now come on air and call the whole thing a bubble."Cramer explained that AI, particularly when paired with advanced computing from chip giants, can greatly enhance efficiency by allowing machines to perform tasks intelligently, similar to skilled humans. He pointed out that businesses strive to produce quality products, but a worker shortage, exacerbated by the COVID pandemic, has hindered their ability to connect with customers effectively. Sales associates often end up rushed and confused, which affects the customer experience."Let me tell you what AI really does. When coupled with accelerated computing, AI makes everything go faster. It rationalizes processes and can make machines behave like smart, good humans. Businesses want to produce good products and sell them to real people, whether for enterprises or homesthats capitalism 101. But right now, we don't have enough workers to do it.Elevating Customer Service Through Advanced AI AgentsCramer highlighted that companies are pushing their employees hard for profits, creating a stressful work environment. However, after attending Marc Benioff's keynote, Cramer recognized the potential of a technology called Agent Force. This AI initiative can engage customers in a friendly manner, using personalized data to answer common questions efficiently.Cramer believes AI is a great solution for customer service, offering clear assistance without the frustrations that often come from overwhelmed human employees. AI agents can listen, reason, and either direct customers appropriately or handle their inquiries independently."Companies are already squeezing as much as they can out of their employees to boost stock prices and profits for executives. Then it hit me while watching Marc Benioff's incredible keynote today. He described an initiative called Agent Force, and I realized what this technology can really do. 
It has time for you, acknowledges you, and is polite. It can almost always answer your questions because there are only so many that get asked regularly, and it has the data to respond to your data. It knows your preferences." He further explained that AI enhances customer experiences by remembering preferences, helping with returns, and suggesting alternatives based on past purchases. This reduces the need for interaction with human salespeople who may be stressed or impatient. If customers remain unsatisfied, they still have the option to speak to a human, but many might prefer friendly interaction with an AI agent. "So, what AI really is and what it does is provide us with something better than what we currently have. It's a retailer that knows all about your preferences, can help you return an item, and asks if you want something different from what you bought last time. It frees you from the hassle of a human salesperson who might be frustrated or upset. And if you're not satisfied, you can still speak to a human if you want to, but I doubt you will, because the AI agent is just so much more pleasant." Revolutionizing Everyday Life: How AI Outperforms Humans in Healthcare, Driving, and Beyond Cramer also predicted that in ten years, people might wonder why they relied on human doctors when AI could provide more compassionate and effective care. These AI systems could analyze vast amounts of medical data to quickly identify serious health issues, making it easier to catch dangerous patterns before they escalate. "It's not just retail. In ten years, I think we'll wonder why we ever wasted a doctor's time when AI agents were so much better, kinder, and more empathetic. Doctors could analyze millions of test results that might have taken a decade to sort through, identifying dangerous patterns well before they become problematic, like melanoma, heart disease, or kidney cancer.
It's all in the data, and only AI can compile it in an accessible way." Additionally, Cramer noted that self-driving cars are becoming more common and are significantly safer than human drivers since they don't get distracted or impaired. He mentioned that AI can handle tasks like proofreading without making mistakes, allowing law firms to operate with fewer associates. "We see self-driving cars everywhere. They're much safer than human drivers. They don't drink and drive, they don't even drink. An AI agent can proofread, correct, and never make a mistake. Law firms can now hire half as many associates because of this efficiency." Finally, Cramer emphasized that AI agents generally outperform humans in most scenarios. For repetitive tasks, he believes it's preferable to engage with a courteous and efficient AI rather than a stressed human who may not want to help. "The bottom line is that if you let the AI agent do its job, it will outperform humans in the vast majority of cases. And for repetitive tasks, trust me: you'd much rather have a polite and friendly AI agent than a harried, angry, or exhausted human who really doesn't want to deal with you." Our Methodology This article provides a summary of Jim Cramer's latest episode of Mad Money, where he reviewed several stocks. We picked 12 companies and ranked them based on how much they are owned by hedge funds, starting with the ones that are least owned and moving to those that are most owned. At Insider Monkey we are obsessed with the stocks that hedge funds pile into. The reason is simple: our research has shown that we can outperform the market by imitating the top stock picks of the best hedge funds.
Our quarterly newsletter's strategy selects 14 small-cap and large-cap stocks every quarter and has returned 275% since May 2014, beating its benchmark by 150 percentage points (see more details here). A wind farm with turbines rotating in unison, showing the power of renewable energy. Portland General Electric Company (NYSE:POR) Number of Hedge Fund Investors: 21 Jim Cramer highlighted Larry Culp's impressive turnaround at Portland General Electric Company (NYSE:POR). When Culp took charge, he faced struggling divisions and made the tough decision to sell valuable assets, including the biopharma business, which was sold to Danaher Corporation (NYSE:DHR) for $21 billion. Cramer explained that Culp's goal was to reduce debt and strengthen Portland General Electric Company (NYSE:POR)'s financial position, as he stated in his release. While losing that fast-growing division must have been difficult, Cramer believes that without this sale, Culp might not have been able to execute GE's highly profitable three-way breakup. "Of course, the biggest salvage job was the one Larry Culp pulled off at GE. When he took over, he had sick divisions all over the place, so he had to sell some gems, like the biopharma business, which he sold to Danaher for $21 billion in cash. That was a best-in-show business. Why did he do it?" A positive outlook for Portland General Electric Company (NYSE:POR) is based on its stable growth prospects, commitment to renewable energy, and strong financial performance. Portland General Electric Company (NYSE:POR) is transitioning to renewable energy sources, aiming to significantly reduce carbon emissions by 2040.
This positions it well in the growing clean energy market through investments in wind, solar, and energy storage projects. In its recent Q2 2024 earnings report, Portland General Electric Company (NYSE:POR) reported revenues of $642 million, a year-over-year increase due to higher customer demand and energy prices, along with an earnings per share of $0.70 that exceeded analysts' expectations. Operating in a regulated environment provides Portland General Electric Company (NYSE:POR) with stable revenue and profitability, and it has effectively resolved recent rate cases to maintain financial health while investing in infrastructure improvements. Portland General Electric Company (NYSE:POR) is also focused on enhancing customer engagement through technology and smart grid initiatives, which improve operational performance and reliability. As economic growth in the Pacific Northwest leads to increased electricity demand, Portland General Electric Company (NYSE:POR)'s proactive planning and investments in sustainable projects position it to meet future energy needs. Overall, POR ranks 12th on our list of Jim Cramer's must-watch stocks. While we acknowledge the potential of POR as an investment, our conviction lies in the belief that under-the-radar AI stocks hold greater promise for delivering higher returns, and doing so within a shorter timeframe. If you are looking for an AI stock that is more promising than POR but that trades at less than 5 times its earnings, check out our report about the cheapest AI stock. READ NEXT: $30 Trillion Opportunity: 15 Best Humanoid Robot Stocks to Buy According to Morgan Stanley and Jim Cramer Says NVIDIA Has Become A Wasteland. Disclosure: None. This article is originally published at Insider Monkey. | Digital Assistance/Personalization | Business and Financial Operations/Management | null | null | null | null | null | null |
|
news | Edd Gent | Meta Looks to Next-Gen Geothermal to Fuel Increasingly Ravenous Data Centers | Concerns about the environmental impact of AI have prompted big tech firms to explore exotic options for reducing their emissions. Now, Meta plans to try fueling its data centers with geothermal power. Today’s largest AI models consume vast amounts of electricity. This is significantly increasing energy bills for the tech firms building these models and […] | https://singularityhub.com/2024/09/09/meta-looks-to-next-gen-geothermal-to-fuel-increasingly-ravenous-data-centers/ | 2024-09-09T18:30:33Z | Concerns about the environmental impact of AI have prompted big tech firms to explore exotic options for reducing their emissions. Now, Meta plans to try fueling its data centers with geothermal power. Today's largest AI models consume vast amounts of electricity. This is significantly increasing energy bills for the tech firms building these models and making it harder for the companies to live up to ambitious pledges they've made to cut carbon emissions. As a result, these companies are on the hunt for new sources of renewable energy to meet demand without increasing their carbon footprint. Solar and wind power are inevitably the go-to choices, but given the already tight competition for access to renewable power, some tech giants are also looking to emerging technologies. That's why Meta recently announced a new partnership with Sage Geosystems. The company's technology generates carbon-free power by pumping water deep into hot underground rock formations.
Under the agreement, the startup will provide up to 150 megawatts of geothermal power to help run Meta's data centers. "Sage's technology marks a significant advancement for the clean energy sector, showcasing the ability to harness geothermal energy virtually anywhere," Meta said in a press release announcing the deal. "We're excited to partner with Sage on a first-of-its-kind project exploring the use of new, advanced geothermal energy in parts of the country where it has not been possible before." Geothermal power is an attractive option for data center operators because, unlike other renewable sources like solar and wind, it isn't intermittent. But conventional plants require access to underground reservoirs of hot water, which only occur in a few areas around the globe with high levels of volcanic activity. So-called enhanced geothermal technology removes this constraint by doing away with the need for a natural water reservoir. Piggybacking off fracking technology developed by the oil and gas industry, the approach involves pumping high-pressure water down into hot, dry rocks to create fractures that can be filled with water. The heated water is then extracted, turned into steam, and used to drive a turbine to generate electricity. This greatly expands the number of locations in which a geothermal plant can be built. The technology is still nascent, but Sage has already field-tested the approach at an abandoned gas well in Texas and told The Verge that it expects to be able to scale up the approach rapidly because it uses off-the-shelf technologies from the oil-and-gas industry. How soon the technology will make a dent in Meta's energy bill remains uncertain though. An initial 8-megawatt first phase of the project isn't expected to come online until 2027. It will then be another couple of years until it's up to the full capacity of 150 megawatts.
And crucially, the companies haven't actually signed an official power purchase agreement yet, The Verge notes. The partnership will nonetheless give a boost to a fledgling industry, and Meta isn't the only big tech player interested. Last year, Google announced that some of its Nevada data centers are being powered by an enhanced geothermal plant built by a startup called Fervo. Geothermal may face some competition though. Big tech companies are also increasingly looking to nuclear power as a potential source of reliable, carbon-free power. Microsoft, in particular, is interested in developing small modular reactors to help run its data centers. And there's still a long road ahead for enhanced geothermal power. A recent report from the Department of Energy estimated that it would take roughly $20 to $25 billion worth of investment to prove the technology and create a self-sustaining industry. That's doable by 2030, according to the report, but will require continued cost reductions and several large-scale demonstrations to build confidence. Given the tech industry's ever-increasing energy demands combined with a commitment to lower emissions, these companies could be the most promising route to making that a reality. Image Credit: Sage Geosystems | Unknown | Unknown | null | null | null | null | null | null |
|
news | Admin | OpenAI, Nvidia Executives Discuss AI Infrastructure Needs With Biden Officials | OpenAI Chief Executive Officer Sam Altman and Nvidia Corp. CEO Jensen Huang met with senior Biden administration officials and other industry leaders at the White House on Thursday to discuss how to fill the massive infrastructure needs for artificial intelligence … | https://www.insurancejournal.com/news/national/2024/09/13/792698.htm | 2024-09-13T05:16:47Z | OpenAI Chief Executive Officer Sam Altman and Nvidia Corp. CEO Jensen Huang met with senior Biden administration officials and other industry leaders at the White House on Thursday to discuss how to fill the massive infrastructure needs for artificial intelligence projects. On the tech side, attendees also included Anthropic CEO Dario Amodei, Google President Ruth Porat and Microsoft Corp. President Brad Smith, according to people familiar with the meeting, which also had representatives from the energy sector. Government officials included Commerce Secretary Gina Raimondo, National Security Advisor Jake Sullivan and Energy Secretary Jennifer Granholm, according to the people. The goal, according to a White House official, was to boost public-private partnerships around the development of AI data centers in the US. Topics included permitting, workforce, power demands and economic impacts of the facilities, people familiar with the meeting said. OpenAI, for example, plans to spend tens of billions of dollars on a domestic AI infrastructure push that spans data centers, energy capacity and transmission and semiconductor manufacturing with investment from around the globe.
Company executives have been meeting with government officials for months about a range of issues related to the initiative, including national security concerns that could be associated with foreign capital. “OpenAI believes infrastructure is destiny and that building additional infrastructure in the US is critical to the country’s industrial policy and economic future,” OpenAI said in a statement Thursday. The company highlighted the economic benefits of investing in US data center projects, including a possible 40,000 jobs across a number of US states. OpenAI pointed to similar investments by China, which aims to be a global AI leader by the end of the decade. Porat called robust US energy infrastructure crucial to ensuring US leadership in the emerging field of AI. “Today’s White House convening was an important opportunity to advance the work required to modernize and expand the capacity of America’s energy grid,” she said in a statement. Anthropic and Microsoft declined to comment. The AI-fueled surge in US data center construction coincides with a broader manufacturing boost spurred by the Chips and Science Act and the Inflation Reduction Act, the signature subsidy programs for semiconductors and clean energy enacted in 2022 under President Joe Biden. Those investments, along with data center expansion and other factors, are expected to drive electricity demand up by 15% to 20% over the next decade, according to the Energy Department.
Data centers could consume as much as 9% of US electricity generation annually by 2030, up from 4% of total load in 2023, according to a report in May by the nonprofit Electric Power Research Institute. The Biden administration has said renewables such as wind and solar, as well as battery storage and energy efficiency gains, are some of the best ways to meet growing data center energy demand because they are rapidly scalable and cost competitive. “Near-term data center driven electricity demand growth is an opportunity to accelerate the build out of clean energy solutions, improve demand flexibility, and modernize the grid while maintaining affordability,” the Energy Department said in a blog post last month. However, the agency, which is set to release an assessment of energy consumption by data centers by year's end, cautioned that projections of growth in electricity demand “continue to evolve due to developing use cases” and other factors. Photo: Photographer: Graeme Sloan/Bloomberg Topics: InsurTech, Data Driven, Artificial Intelligence | Unknown | Management/Business and Financial Operations | null | null | null | null | null | null |
|
news | Ayoola Olasupo | Expert urges use of AI to lower energy costs | Stakeholders have been urged to embrace the transformative potential of Artificial Intelligence-powered automation to tackle inefficiencies in the energy sector. In a statement released on Saturday, AI and energy expert, Faith Odum, said the AI-driven systems could optimise the use of natural gas which will curb the rising cost of living in the country. The Read More | https://punchng.com/expert-urges-use-of-ai-to-lower-energy-costs/ | 2024-09-29T00:01:34Z | Stakeholders have been urged to embrace the transformative potential of Artificial Intelligence-powered automation to tackle inefficiencies in the energy sector. In a statement released on Saturday, AI and energy expert, Faith Odum, said the AI-driven systems could optimise the use of natural gas, which will curb the rising cost of living in the country. The statement partly read, "Nigeria, as Africa's largest oil producer, faces significant energy challenges, with over 71 per cent of the population, about 140 million people, lacking consistent access to electricity." Drawing comparisons with the United States, Odum noted that AI-powered automation is already reshaping energy management in the US. AI is widely used to forecast energy demand, manage smart grids, and optimise renewable energy sources such as solar and wind. Global trends indicate that AI is increasingly being adopted to enhance energy efficiency and reduce costs. Countries like China and Germany are investing in AI to improve the storage and distribution of renewable energy, approaches that could serve as models for Nigeria as it works towards achieving universal energy access by 2030. While AI alone cannot fully resolve Nigeria's energy crisis, Faith Odum's insights underscore its potential to play a pivotal role in advancing the sector.
She urged policymakers and private sector leaders to invest in AI technologies, suggesting that such advancements could help mitigate rising living costs and stabilise energy prices in the long term. | Decision Making/Process Automation | Management/Life, Physical, and Social Science | null | null | null | null | null | null |
|
news | Debasis Saha | How Vistra Corp. (VST) is Positioning Itself for AI-Driven Electricity Demand with Nuclear Power | We recently published a list of 20 AI News and Analyst Ratings You Should Not Miss. In this article, we are going to take a look at where Vistra Corp... | https://finance.yahoo.com/news/vistra-corp-vst-positioning-itself-172445959.html/ | 2024-09-22T17:24:45Z | We recently published a list of 20 AI News and Analyst Ratings You Should Not Miss. In this article, we are going to take a look at where Vistra Corp. (NYSE:VST) stands against the other AI news and analyst ratings that you should not miss. The artificial intelligence (AI) market continues to show tremendous growth, with significant advances across sectors. According to a report by McKinsey on the AI industry, the AI revolution is driving innovation across industries, with investment in AI increasing sevenfold in recent years despite economic downturns in other tech sectors. This surge is primarily fueled by the growing demand for AI applications in data analysis, content generation, and predictive modeling. In particular, generative AI has drawn the most attention, revolutionizing industries like marketing, customer service, and product design. Moreover, high-performing companies are heavily investing in AI to gain a competitive edge. These firms, often referred to as AI high performers, allocate a significant portion of their digital budgets, over 20%, to AI technologies. They prioritize AI not only for cost reductions but also for new revenue streams. Read more about these developments by accessing 33 Most Important AI Companies You Should Pay Attention To and 20 Industrial Stocks Already Riding the AI Wave. The market for AI applications is set to expand even further, with several industry reports predicting that by 2030, AI could contribute up to $13 trillion to the global economy.
Over the course of the next decade, informed estimates by investment advisors at Goldman Sachs indicate that these AI tools could drive a 7% increase in global GDP, worth nearly $7 trillion, and lift productivity growth by 1.5 percentage points overall. Moreover, the bank expects established businesses around the world to spend nearly $1 trillion on developing AI infrastructure in the coming years. Prominent businesses have taken note of these developments. Latest reports suggest that investment titan BlackRock, in partnership with tech giants, is likely to launch a more than $30 billion fund focused on AI. The fund will invest in artificial intelligence infrastructure to build data centers and energy projects. The need for energy is a source of particular interest to the business community as AI models require substantial computational power, leading to higher energy consumption. The sheer scale of computational power required for AI workloads has also forced tech giants to build supercomputer clusters, stringing together expensive chips, cooling systems, networking tools, and other high-tech gear to crunch data. These AI data centers will likely consume a growing amount of energy as the use cases of AI expand. McKinsey estimates that by 2025, 15% to 20% of all data center workloads will be AI-driven, compared to less than 5% in 2020. Furthermore, according to a report from the International Energy Agency, AI data centers could account for as much as 13% of global electricity demand by 2030 if current growth trends continue. Tech giants are thus investing billions of dollars into expanding their AI infrastructure. Our Methodology For this article, we selected AI stocks based on the latest news and analyst ratings. These stocks are also popular among hedge funds. Why are we interested in the stocks that hedge funds pile into? The reason is simple: our research has shown that we can outperform the market by imitating the top stock picks of the best hedge funds.
Our quarterly newsletter's strategy selects 14 small-cap and large-cap stocks every quarter and has returned 275% since May 2014, beating its benchmark by 150 percentage points (see more details here). Best Clean Energy Stocks To Buy According to Billionaires. Solar panel workers installing a new farm for clean energy generation. Vistra Corp. (NYSE:VST) Number of Hedge Fund Holders: 92 Vistra Corp. (NYSE:VST) operates as an integrated retail electricity and power generation company. The firm recently announced that it would acquire an additional 15% equity interest in Vistra Vision LLC, a subsidiary of the power firm, in a deal worth $3.2 billion. The interest would be purchased from Nuveen Asset Management and Avenue Capital Management. Vistra plans to clear the purchase through five transactions spread over the space of two years. The transaction, which is not subject to any regulatory approvals, per reports, is expected to close by the end of this year. Vistra Vision owns nuclear generation facilities with a capacity of nearly 6.4 gigawatts, as well as the renewables and energy storage business and retail business of Vistra. Vistra Corp. (NYSE:VST) has been strengthening its nuclear portfolio to capitalize on AI-driven electricity demand. Hyperscalers are turning to nuclear power operators for a 24/7 source of clean and reliable electricity, and nuclear has emerged as a strong option. BMO Capital recently raised the price target on Vistra Corp. (NYSE:VST) stock to $125 from $120 and kept an Outperform rating on the shares.
In a research note, analysts at the advisory remarked that there was positivity on the company's announcement that it was acquiring the 15% minority interest in its zero-carbon subsidiary Vistra Vision, based on the attractive implied valuation, about 7.9 times on an enterprise value to expected EBITDA basis, for a premium zero-carbon subsidiary, and viewed the transaction as being consistent with the management's disciplined asset allocation strategy. Overall, VST ranks 7th on our list of AI news and analyst ratings that you should not miss. While we acknowledge the potential of VST as an investment, our conviction lies in the belief that some AI stocks hold greater promise for delivering higher returns, and doing so within a shorter timeframe. If you are looking for an AI stock that is more promising than VST but that trades at less than 5 times its earnings, check out our report about the cheapest AI stock. READ NEXT: $30 Trillion Opportunity: 15 Best Humanoid Robot Stocks to Buy According to Morgan Stanley and Jim Cramer Says NVIDIA Has Become A Wasteland. Disclosure: None. This article is originally published at Insider Monkey. | Discovery/Decision Making/Prediction | Management/Business and Financial Operations | null | null | null | null | null | null |
|
news | Akash Dutta, Siddharth Suvarna | Microsoft-Backed G42 Unveils Hindi Large Language Model Nanda for India | Microsoft-backed G42, an Abu Dhabi-based artificial intelligence (AI) technology company, unveiled a Hindi large language model (LLM) for India on Tuesday. Dubbed Nanda, the AI model is said to be trained with large volumes of datasets in Hindi, English, and Hinglish. The generative AI model was developed in collaboration with MBZUAI and Cerebras Systems. Last year, the AI firm released the Jais AI model which was trained on Arabic and English languages. | https://www.gadgets360.com/ai/news/g42-nanda-hindi-large-language-model-ai-india-microsoft-unveiled-6539762 | 2024-09-11T09:24:09Z | Microsoft-backed G42, an Abu Dhabi-based artificial intelligence (AI) technology company, unveiled a Hindi large language model (LLM) for India on Tuesday. Dubbed Nanda, the AI model is said to be trained with large volumes of datasets in Hindi, English, and Hinglish. The generative AI model was developed in collaboration with MBZUAI and Cerebras Systems. Currently, the company has not announced the use cases for the LLM. Last year, the AI firm released the Jais AI model, which was trained on Arabic and English languages. In a post on X (formerly known as Twitter), G42 India CEO Manu Kumar Jain unveiled Nanda, the Hindi language generative AI model. It was unveiled at the UAE-India Business Forum in Mumbai in the presence of the Crown Prince of Abu Dhabi Sheikh Khaled bin Mohamed bin Zayed Al Nahyan and the Union Minister of Commerce and Industry Piyush Goyal. The LLM is named after the second-highest mountain peak in India, Nanda Devi. Jain also highlighted some of the specifications of the LLM. According to the post, Nanda is a 13-billion-parameter model which was trained on approximately 2.13 trillion tokens of language datasets including Hindi, English, and Hinglish.
The CEO also added that Nanda is a bilingual model and is also proficient in Hinglish, the hybrid language that blends Romanised letters and Devanagari pronunciation. Currently, the company has not shared any release timeline for the AI model. It is not known whether Nanda will be available in the public domain or whether it will be reserved for government usage. Jain said, "We genuinely believe that #Nanda can get integrated into the fabric of India. It meets all the sovereign requirements and can help take large-scale technology initiatives to newer heights. It will offer over half a billion Hindi language speakers the opportunity to harness the potential of generative AI." Last year, the AI firm launched Jais, an open-source Arabic LLM with capabilities for Arabic natural language processing (NLP). Multiple AI models were released with up to 70 billion parameters. In April, Microsoft invested $1.5 billion (roughly Rs. 12,600 crore) in the company, becoming a major backer. The Windows maker also took a seat on G42's board.
|
news | Julian Horsey | Building the AI Future: Microsoft and BlackRock’s $100 Billion AI Investment Plan | Microsoft and BlackRock are teaming up to make a transformative impact on the future of artificial intelligence (AI). The two global giants, together with other investors, have launched the Global Artificial Intelligence Infrastructure Investment Partnership (GAIIP). This initiative aims to raise up to $100 billion in funding to build AI-powered data centers and the necessary […]The post Building the AI Future: Microsoft and BlackRock’s $100 Billion AI Investment Plan appeared first on Geeky Gadgets. | https://www.geeky-gadgets.com/microsoft-blackrock-ai-investment/ | 2024-09-19T05:33:12Z | Microsoft and BlackRock are teaming up to make a transformative impact on the future of artificial intelligence (AI). The two global giants, together with other investors, have launched the Global Artificial Intelligence Infrastructure Investment Partnership (GAIIP). This initiative aims to raise up to $100 billion in funding to build AI-powered data centers and the necessary energy infrastructure to power them. The project has an initial fundraising goal of $30 billion, which will lay the foundation for an infrastructure that will support the growing demand for AI technologies across various industries. The Partnership and Goals Microsoft, known for its leadership in software and cloud technologies, and BlackRock, the world's largest asset manager, have joined forces in what could become one of the largest investments in AI infrastructure to date. Their partnership, GAIIP, was created to address the ever-growing demand for computing power needed to run advanced AI models, such as OpenAI's ChatGPT, and other generative AI technologies. The first phase of this fundraising effort is set at $30 billion, with the total goal eventually reaching $100 billion.
These funds will be directed towards building new data centers equipped with the latest hardware, predominantly Nvidia GPUs, which are critical for running complex AI algorithms. Moreover, the capital raised will also go toward enhancing the energy infrastructure that powers these data centers, ensuring they can meet the substantial power needs of AI workloads. Why AI Infrastructure Matters AI technologies are increasingly shaping various sectors, from healthcare to finance, entertainment, and beyond. For AI models to function efficiently, they require enormous computational resources. The surge in popularity of AI-driven applications, particularly generative AI like ChatGPT, has put immense pressure on existing data centers and the energy resources they rely on. The problem lies in the bottleneck created by the high demand for Nvidia GPUs, which are essential to process AI tasks. As companies race to build more data centers to run AI models, the challenge of acquiring enough GPUs and power infrastructure has become more apparent. Microsoft and BlackRock's $100 billion initiative will be pivotal in meeting these growing demands, enabling continued innovation in AI technologies. Sustainability and Energy Infrastructure A key component of the GAIIP initiative is sustainability. Microsoft CEO Satya Nadella has emphasized the need for the partnership to not only build advanced infrastructure but also ensure that it is sustainable. AI workloads are incredibly energy-intensive, consuming vast amounts of electricity to process billions of data points and deliver outputs. To mitigate this, part of the funds raised will be invested in developing energy infrastructure that can sustainably power the AI data centers. This could include investments in renewable energy sources like wind and solar, as well as advancements in energy-efficient cooling and storage technologies.
These efforts align with Microsoft's broader environmental goals, which include becoming carbon negative by 2030. The sustainability aspect of the project is crucial, as the world grapples with the environmental impact of technology growth. By prioritizing energy-efficient data centers, Microsoft and BlackRock aim to reduce the carbon footprint of AI while enabling its continued development. Impact on Microsoft and the Tech Industry This $100 billion investment push will have a ripple effect across the technology sector, particularly in the AI space. Microsoft's Azure cloud platform, which already serves clients like OpenAI, is set to be one of the primary beneficiaries of this expanded infrastructure. As Azure grows to accommodate the increasing computational needs of AI companies, Microsoft will be better positioned to dominate the AI and cloud services market. Beyond Azure, the entire technology industry stands to benefit. Increased data center capacity means more companies will be able to access the computing power needed to develop and deploy AI-driven solutions. From AI startups to established tech firms, the expansion of infrastructure will foster innovation and growth across the board. At the same time, the initiative will likely accelerate the shift towards a more sustainable tech industry. By investing in renewable energy and energy-efficient data centers, Microsoft and BlackRock are setting an example for other tech companies to follow, aligning economic growth with environmental responsibility. In conclusion, the partnership between Microsoft and BlackRock represents a major leap forward in the AI industry's development. By mobilizing up to $100 billion in capital, they are not only addressing the immediate need for more AI data centers but also ensuring that the infrastructure powering AI is sustainable.
This bold investment strategy will have lasting impacts on AI innovation, energy consumption, and the future of technology as a whole. Source: CNBC | Unknown | Management/Business and Financial Operations | null | null | null | null | null | null |
|
news | Jaspreet Bindra | Is artificial energy worth all the money and energy it’s guzzling? | The planned revival of America’s Three Mile Island nuclear facility for Microsoft’s use has revived the question of whether the pursuit of AI dreams can justify the costs involved. The answer may lie in its promises of value generation and a faster move away from fossil fuels. | https://www.livemint.com/opinion/columns/llma-data-centres-ai-energy-consumption-clean-power-microsoft-nvidia-fossil-fuels-11727314221057.html | 2024-09-26T05:00:00Z | An incident on 28 March 1979 changed the course of energy forever, at least in the US. The Three Mile Island Nuclear Generating Station in Pennsylvania had a partial meltdown of a reactor core. No injuries were reported, but the reactor was mothballed, and it heightened public fears of nuclear power. So, it came as a bolt from the blue when Three Mile Island owner Constellation said last week that it will restart the nuclear reactor and Microsoft will buy the energy generated for 20 years. The reason: to meet its burgeoning need for clean energy for its massive artificial intelligence (AI) data centres, as the tech industry balances building giant energy-sucking AI models with its environmental, social and governance goals. The GenAI revolution sparked off by the launch of ChatGPT in 2022 has catalysed an insatiable appetite among tech firms and countries to build bigger AI models in their race towards artificial general intelligence (AGI) and global domination. AI bots are adding hundreds of millions of users and promise to reshape industries and societies. As Big Tech CEOs commit hundreds of billions of dollars to lead this era-defining technology, they also face huge obstacles in scaling up. Data availability is one, the price of Nvidia graphics processing units (GPUs) is another, but the biggest perhaps is the availability of clean and reliable power for both the training and inferencing needs of large language models. 
For example, Microsoft and OpenAI are contemplating a $100 billion AI supercomputer, and the five gigawatts it would take to power it is what's required for all of New York City today! The Electric Power Research Institute expects data centres to consume up to 9% of US electricity generation by 2030, more than double what they currently use. This translates to 50 gigawatts (GW) of additional power, a colossal amount of energy which could power 10 New Yorks. Moreover, it requires an investment of over $500 billion in data centre infrastructure alone. Most of the power generated globally is from coal and other fossil fuels, making it a top contributor to carbon-dioxide emissions and global warming. The global concern this spawned has led Big Tech firms to declare aggressive net-zero targets, which would mean they must rely on green power for almost all their needs. They have signed long-term power purchase agreements (PPAs) with wind and solar power companies, but as the Electric Power Research Institute reports, those deals typically "do not match electricity demand hour by hour with local resources" and so there is no guarantee that "all electricity-related greenhouse gas emissions are offset." Internal data released by Microsoft, Google and others supports this claim. Thus, the gold rush for clean power. Other than the Three Mile Island revival, Microsoft has made other moves. It has tied up with Brookfield Asset Management in a $10 billion deal to develop another 10.4GW of renewable energy capacity across the US and Europe. It has even started dabbling in nuclear fusion energy, with an agreement with Helion Energy and by hiring people to build small modular nuclear reactors. Google is signing PPAs with companies to procure 100 megawatts of offshore wind farm energy. The question is whether it is worth chasing the AI dream, with all this potential damage and what it will cost to undo it through clean power generation. The tech industry thinks so. 
McKinsey estimates (bit.ly/47DYYB2) that GenAI could help create $2.6-4.4 trillion in economic value for the global economy. Sam Altman believes Artificial Super Intelligence will help humanity solve big problems like climate change, longevity and nuclear fusion, thus making the investments worthwhile. Even Bill Gates has said, "The question is, will AI accelerate a more than 6% reduction [in emissions]? And the answer is: certainly." He also believes that Big Tech businesses would pay a green premium for clean energy, thus incentivising its development and deployment over fossil fuels, and so the AI rush would be worth it. Every technology has raised such existential questions, and none of them has been as consequential as AI promises to be. Another such technology was nuclear, but the spectre of Hiroshima and Nagasaki made a fearful world clamp down on it. While there has not been another nuclear attack since 1945, the flip side was that we slowed down nuclear power generation in favour of coal and gas, which, in turn, worsened global warming. There were no easy solutions then and there aren't any now. | Unknown | Management/Business and Financial Operations/Computer and Mathematical | null | null | null | null | null | null |
|
news | Fatima Farooq | Vistra Corp. (VST): Among the Worst Performing AI Stocks of Previous Week | We recently compiled a list of the 20 Worst Performing AI Stocks of Last Week. In this article, we are going to take a look at where Vistra Corp. (NYSE:VST) ... | https://finance.yahoo.com/news/vistra-corp-vst-among-worst-163611805.html/ | 2024-09-14T16:36:11Z | We recently compiled a list of the 20 Worst Performing AI Stocks of Last Week. In this article, we are going to take a look at where Vistra Corp. (NYSE:VST) stands against the other AI stocks. US Stocks in September: This September saw a sluggish start for most US stocks, and large-cap technology stocks were no exception to this trend. The main driving factors for this development include concerns over the health of the American economy resurfacing, particularly in light of the August jobs report. The report underscored the labor market's weakness in the US, which has not left investors feeling all that secure about the state of the economy. On the stock side, many investor favorites in the artificial intelligence (AI) space have been doing poorly so far in September, with losses ranging from around 4% to even over 20% for the first week of September. The primary reason for this decline seems to be that investors are just not satisfied with the growth demonstrated by major AI companies at present. While growth is definitely present, it's continuing to fall short of investor expectations, which have increased exponentially in light of the hype cycle created around AI stocks. Are We Really In An AI Bubble? The first week of September was actually the worst week for chip stocks recorded in over two years. Many investors are now beginning to wonder whether AI is worth the amount of money being poured into it, resulting in corporate spending on AI coming under greater scrutiny than ever before. 
The greater scrutiny is predominantly because of investors and analysts now thinking that many AI stocks are overhyped and overvalued and don't have the means to justify this hype and valuation - essentially, the main concern is that we're in an AI bubble that's on the brink of bursting. However, as with any high-tension market situation, there are diverging opinions as well. In his September 6 interview on CNBC's "Closing Bell Overtime," Deepwater Asset Management's managing partner, Gene Munster, emphatically stated that we are not in an AI bubble. For him, the bigger problem in the AI space is that every other company today is trying to talk about AI and say that it's working towards AI incorporation in its operations - something that's leading to a lot of noise in the market, which is drowning out the voices of companies offering real substance in this space. He thus noted that it's important for investors to be careful not to invest in just any company that says it's working with AI and instead to focus on the better, perhaps more boring, options in the market. According to Munster, the main players to keep your money in are predominantly big tech names, as these are the only companies that are poised to deliver substantial growth instead of just generating noise. However, investors are still confused about whether AI is a good place to invest in even today, which is why we've compiled a list of the worst performing AI stocks in September so far and explained whether these stocks are worth picking up or if they're just temporary beneficiaries of the hype around AI. Our Methodology: We compiled our list by screening for AI stocks that have seen declines of 10% or above in the first week of September, and then ranked the stocks based on their weekly decline as of Friday, September 6. We have also mentioned the number of hedge funds holding stakes in each stock. Why are we interested in the stocks that hedge funds pile into? 
The reason is simple: our research has shown that we can outperform the market by imitating the top stock picks of the best hedge funds. Our quarterly newsletter's strategy selects 14 small-cap and large-cap stocks every quarter and has returned 275% since May 2014, beating its benchmark by 150 percentage points. (see more details here). [Image: Solar panel workers installing a new farm for clean energy generation.] Vistra Corp. (NYSE:VST). Weekly Decline: 13.49%. Number of Hedge Fund Holders: 92. Vistra Corp. (NYSE:VST) is an energy company offering retail electricity and power generation services. It is based in Irving, Texas. The reason we've added an energy company to our list is that this specific company has the potential to benefit from the growing popularity of generative AI since rising AI use is directly related to rising energy needs. Vistra Corp. (NYSE:VST) is investing significantly in alternative energy sources, so it's in a good position to meet these rising needs. As a result, many investors are paying more attention to this stock. This March, Vistra Corp. (NYSE:VST) also completed an acquisition of Energy Harbor. Through this acquisition, the company has increased its presence as a player in the nuclear energy space. So those investors looking for hidden AI stocks may benefit from considering Vistra Corp. (NYSE:VST) for their portfolios. With its nuclear portfolio, Vistra Corp. (NYSE:VST) is well-positioned to capitalize on AI-driven electricity demand as major hyperscalers pen contracts with nuclear energy providers for low-cost alternative fuels. Vistra Corp. (NYSE:VST) was seen in the portfolios of 92 hedge funds in the second quarter, with a total stake value of $4.03 billion. Overall VST ranks 9th on our list of the worst performing AI stocks of last week. 
While we acknowledge the potential of VST as an investment, we believe that AI stocks hold promise for delivering high returns and doing so within a shorter timeframe. If you are looking for an AI stock that is more promising than VST but that trades at less than 5 times its earnings, check out our report about the cheapest AI stock. READ NEXT: $30 Trillion Opportunity: 15 Best Humanoid Robot Stocks to Buy According to Morgan Stanley and Jim Cramer Says NVIDIA Has Become A Wasteland. Disclosure: None. This article is originally published at Insider Monkey. | Decision Making/Content Synthesis/Prediction | Business and Financial Operations | null | null | null | null | null | null |
|
news | Investing.com | Iveda Launches IvedaESS: A Portable, Plug-and-Play Security Solution for the Power of IvedaAI On the Go | Iveda Launches IvedaESS: A Portable, Plug-and-Play Security Solution for the Power of IvedaAI On the Go | https://www.investing.com/news/press-releases/iveda-launches-ivedaessa-portable-plugandplay-security-solution-for-the-power-of-ivedaai-on-the-go-93CH-3643383 | 2024-10-01T16:08:04Z | MESA, Ariz.--(BUSINESS WIRE)--Iveda (NASDAQ: IVDA), the global leader in cloud-based AI, today announces the launch of IvedaESS, a portable, plug-and-play security solution that enables users to tap into the power of IvedaAI from anywhere. IvedaESS is designed for use cases including construction sites, traffic monitoring, event security, and other dynamic environments that require reliable, mobile surveillance. With IvedaESS, users can instantly receive alerts when suspicious activity is detected, allowing them to take immediate measures informed by actionable intelligence. The IvedaESS system combines the high-quality IvedaXpress Camera with a weatherproof enclosure that includes a wireless 4G router, dual SIM slots, and a rechargeable battery that supports a full shift of continuous use. Custom setups, including a solar power option, are available upon request. This innovative configuration provides flexibility and durability, making it ideal for environments where quick deployment and robust security are crucial. The construction industry alone loses over $1 billion annually due to equipment theft and vandalism, according to the National Equipment Register (NER). With IvedaESS, construction companies can significantly reduce losses through proactive monitoring and real-time alerts, minimizing the need for additional security guards and decreasing downtime caused by ongoing incidents. IvedaESS enables users to remotely harness the power of IvedaAI by streaming video directly to the cloud, accessible through IvedaXpress's intuitive web portal. 
Unlike traditional security systems that require costly permanent fixtures, extensive cabling, and lengthy installation times, IvedaESS offers a versatile, plug-and-play solution that can be deployed anywhere. The entire system is easily managed through a user-friendly web application or mobile app, allowing users to monitor alerts, control activity, and adjust settings remotely, from any device. With subscriptions starting as low as $99 per month, video feeds can be streamed to the IvedaAI Cloud, where powerful AI analytics (license plate recognition, facial recognition, object counting, intrusion detection, and more) are applied in real time. "Our new IvedaESS unit allows users to deploy a powerful and portable security solution across multiple industries," said David Ly, Iveda's CEO and founder. "From remotely monitoring construction sites and managing traffic flow to securing theme parks or concert venues, the use cases are endless. IvedaESS can also be used for HOAs in neighborhoods that need a quick, easy-to-set-up solution to manage and monitor community gates and entrances." Arthur VanHouten, Black Box's Solutions Architect, has first-hand experience deploying IvedaESS. He commented, "Our team was interested in IvedaESS for its flexibility and ease of deployment to track and protect materials and human capital. We found it to be a perfect fit for our large-scale projects in collaboration with general contractors." He continued, "This system enables us to safeguard our assets, expedite deployments, and achieve precise, on-target performance measures. The versatility of IvedaESS makes it a powerful tool for hyper-scale data centers, smart city assessments, and community safety initiatives, ensuring we protect the environments where we work, play, and live." IvedaESS is now available for order at an introductory price of $1,299. 
For more information or to schedule a demo, please contact [email protected]. About Iveda Solutions®: Iveda (NASDAQ: IVDA) provides global solutions for cloud-based video AI search and surveillance technologies that protect the people, places, and things that matter most. Iveda's technology delivers instant intelligence to existing infrastructure, enabling cities and organizations worldwide to seamlessly enter the fifth industrial revolution. Iveda operates at the forefront of the digital transformation of cities across the globe, using IoT platforms with smart sensors and devices to support public safety, security, elderly care, energy efficiency, and environmental preservation. Headquartered in Mesa, Arizona, with a subsidiary in Taiwan, Iveda is publicly traded under the ticker symbol IVDA. View source version on businesswire.com: https://www.businesswire.com/news/home/20241001325445/en/ Contact: Olivia Civiletto, [email protected]. Source: Iveda | Unknown | Others | null | null | null | null | null | null |
|
news | Maham Fatima | Is Vistra Corp. (VST) a Good AI Stock To Invest In? | We recently compiled a list of the 10 Hidden AI Stocks to Buy Now. In this article, we are going to take a look at where Vistra Corp. (NYSE:VST) stands... | https://finance.yahoo.com/news/vistra-corp-vst-good-ai-043804447.html | https://s.yimg.com/ny/api/res/1.2/v2K1rkOINfgkDWYB0TjttQ--/YXBwaWQ9aGlnaGxhbmRlcjt3PTEyMDA7aD02NzM-/https://media.zenfs.com/en/insidermonkey.com/adf32ba6c42882b2a9d434963b4a8f92 | 2024-09-08T04:38:04Z | We recently compiled a list of the 10 Hidden AI Stocks to Buy Now. In this article, we are going to take a look at where Vistra Corp. (NYSE:VST) stands against the other hidden AI stocks. Research and development in artificial intelligence began in academia, which dominated the sector until the early 2000s. Later, this pattern flipped, and industry took over AI with higher investment, more research, and cheaper inputs. Investments in AI by businesses are also almost always followed by commercial applications, making it a more profitable activity than academia. While these commercial relationships flourish, many companies focus on taking collaborative approaches to partnerships, revolutionizing the AI industry. By partnering with tech giants, such companies are accelerating AI adoption, driving vertical growth through specialized models, and increasing demand for powerful computing resources. This strategic approach is shaping the future of AI. This was recently discussed in another article, 7 Most Popular AI Penny Stocks Under $5. Here's an excerpt from it: OpenAI's approach to fostering collaborative partnerships instead of competing directly with tech giants makes it an exceptional model. Macquarie's Fred Havemeyer (lead software equity research analyst) praised GPT-4 for its emotional intelligence. 
The growing demand for AI chips, exemplified by OpenAI's use of over 1.7 trillion parameters in its GPT-4 model, will further help NVIDIA and other AI chip manufacturers grow. On August 20, Bloomberg reported that OpenAI is releasing a feature that will allow businesses to use their company data to customize GPT-4 so that it can be trained on additional information for niche tasks. This is an example of letting companies fine-tune the AI model to act as a customer-service chatbot for their subject areas. According to DeepL CEO Jarek Kutylowski, specialised AI models are essential for companies to grow vertically. Ever since it was founded in 2015, this research company has promoted open research and collaboration within the AI community. By sharing its findings and models, OpenAI encourages other researchers and organizations to build upon its work. This has accelerated advancements in AI and fostered a more inclusive environment. OpenAI partnered with Microsoft so that the tech giant's investment ($1 billion in 2019) could facilitate deep integration of OpenAI's models into its products. Azure offers these models as compliance-ready solutions, crucial for industries requiring high data security. This was also followed by Brazil's partnership with the tech giant to use OpenAI to reduce judicial costs. By automating tasks, the Brazilian judiciary is expediting case processing and improving efficiency, saving costs in public sectors. In a recent discussion on CNBC, Barton Crockett from Rosenblatt Securities and Amit Daryanani from Evercore ISI both agreed that AI is crucial for Apple's future success. Crockett emphasized that AI offers a unique opportunity to reinvigorate the company's device ecosystem, that consumers are increasingly valuing AI capabilities in their tech devices, and that Apple seems to be falling behind in this regard. 
He suggested that partnerships with AI companies like OpenAI could help Apple bridge this gap and enhance its offerings. According to reports from Bloomberg and The Wall Street Journal, OpenAI is reportedly seeking significant new funding, potentially valuing the company at over $100 billion. The investment round, led by Thrive Capital, highlights the intensifying competition among tech giants for a dominant position in the AI industry. In August 2024, WebProNews reported that OpenAI's user base doubled to over 200 million in a year and its annual revenue exceeded $2 billion. Over 90% of Fortune 500 companies now use OpenAI products. However, maintaining its lead requires addressing practical, safety, and user-friendliness concerns. OpenAI's plans, including its new search engine, SearchGPT, aim to address these challenges and solidify its position. CEO Sam Altman believes SearchGPT can significantly improve search capabilities. These instances leave us thinking about whether it's collaboration or competition that can help AI progress fastest. As the Managing Partner at Boldsquare, Dylan Jones points out, strategic partnerships can significantly impact a company's valuation. Tech giants' moves indicate a calculated effort to maintain their AI leadership, even if it means blurring the lines between collaboration and competition. In a discussion at CNBC's 'Squawk Box', CoreWeave co-founder and CEO Mike Intrator said that the demand for AI infrastructure is relentless and has been in a state of severe disequilibrium for the past 2.5 years. He believes that the demand for Nvidia chips is skyrocketing, while the rest of the industry is trying to keep up with it, including CoreWeave. According to Intrator, CoreWeave and its clients anticipate significant growth in AI infrastructure demand. Due to limited industry capacity, clients are struggling to train and serve AI models. 
CoreWeave, with its ability to provide large-scale AI infrastructure, is well-positioned to meet this growing demand. However, startups often suffer at the cost of these partnerships, failing to compete with tech giants. Still, many companies are emerging and progressing at a good pace. In this context, we are here with a list of the 10 hidden AI stocks to buy now. Methodology: To compile our list, we reviewed media reports and watched Wall Street analysts' interviews to determine under-the-radar and hidden AI stocks. We compiled a list of 20 potential stocks and selected the 10 most popular among elite hedge funds that are expected to be key beneficiaries of the secular trends in artificial intelligence. The stocks are ranked in ascending order of the number of hedge funds that have stakes in them, as of Q2 2024. Why are we interested in the stocks that hedge funds pile into? The reason is simple: our research has shown that we can outperform the market by imitating the top stock picks of the best hedge funds. Our quarterly newsletter's strategy selects 14 small-cap and large-cap stocks every quarter and has returned 275% since May 2014, beating its benchmark by 150 percentage points (see more details here). [Image: Solar panel workers installing a new farm for clean energy generation.] Vistra Corp. (NYSE:VST). Number of Hedge Fund Holders: 92. Vistra Corp. (NYSE:VST) is a prominent independent power producer that uses 400+ AI models to optimize operations, improve customer service, analyze energy usage data, predict demand, and optimize grid management. It operates as an integrated retail electricity and power generation company and delivers essential energy resources to households, businesses, and communities. The surge in AI data center demand has boosted this company's growth. 
It is also the largest independent power producer (IPP) in Texas, well-positioned to serve the state's growing data center market. In March, the company acquired Energy Harbor, a nuclear power and retail energy business, for $3.43 billion. This deal added around 4,000 MW of nuclear capacity and 1 million retail customers to the company's portfolio, making it the owner of the second-largest competitive nuclear fleet in the US. The Nuclear Regulatory Commission approved Vistra Corp.'s (NYSE:VST) request to extend the operation of its Comanche Peak Nuclear Power Plant through 2053, an additional 20 years beyond the original license, continuing reliable generation of zero-carbon electricity from this 2,400-MW facility. In the US, which houses a significant share of these data centers, electricity use is forecasted to increase from 200 TWh in 2022 to 260 TWh by 2026. This is a 6% surge in the nation's total power consumption. According to the International Energy Agency (IEA), data centers that used 460 terawatt-hours (TWh) of electricity in 2022 are expected to see their energy needs double to over 1,000 TWh by 2026. The total energy share of data centers is expected to increase by 3%-10% in the coming years. Its AI-powered Heat Rate Optimizer (HRO) is deployed across 67 power-generation units and has achieved an average 1% improvement in efficiency, resulting in annual cost savings of over $23 million. This technology helped reduce carbon emissions, contributing to the goal of achieving a 60% reduction by 2030 and net-zero emissions by 2050. In the second quarter of 2024, the company saw an overall 20.57% year-over-year improvement in revenue. The company is well-positioned to capitalize on the growing power demand. 92 hedge funds are long the company as of June 30, and the largest stake, held by Lone Pine Capital, is valued at $587,931,842. Legacy Ridge Capital stated the following regarding Vistra Corp. 
(NYSE:VST) in its Q2 2024 investor letter: One of the sectors we know well which had been out of favor for several years has quickly come into favor: Independent Power Producers (IPPs). We've written consistently about NRG and Vistra Corp. (NYSE:VST) since the 2019 letter, have owned each, or both, since 2018, and invested a meaningful amount of our assets in VST specifically the past few years. Nate and I intend on spending more time in the year-end letter on our updated views on the IPPs and our learnings from the ongoing investment, but we were a bit surprised how quickly the narrative around these companies changed. Our Blue Sky 2030 estimates of intrinsic value converged with the share price 6 years before we thought probable. In the 2019 letter, with respect to VST, we wrote: "Over the next decade management should have close to $15 Billion to deploy to share repurchases. If you assume they have to pay an average price for the stock that's higher than the current one, and they can only repurchase 60% of shares outstanding instead of the 100% the math implies, FCF per share in 2030 would be $14. That's a $70 stock at today's valuation, but a $140 stock at a more reasonable FCF yield of 10%." And "The IPPs are un-investable for most money managers, so there we are. When they become investable we'll probably be long gone." We're not exactly long gone, but sentiment has certainly surpassed investable. After 5+ years of VST trading between $17 and $26 a share (and at $26 exactly a year ago), it hit a high of $107 in May on the heels of the Artificial Intelligence (AI) narrative and the implications for electricity demand. While we agree with the prevailing consensus view that more Data Centers will be built, Data Centers require base load energy, and that the US will probably be short base load energy, predicting the rate of any technological advancement is not our area of expertise, and we feel the margin of safety has dissipated. 
Therefore, what had been our largest position entering 2023 and 2024, and has been our greatest contributor to performance, is now one of the smaller positions in the fund. Overall VST ranks 3rd on our list of the hidden AI stocks to buy. While we acknowledge the potential of VST as an investment, our conviction lies in the belief that AI stocks hold great promise for delivering high returns and doing so within a shorter timeframe. If you are looking for an AI stock that is more promising than VST but that trades at less than 5 times its earnings, check out our report about the cheapest AI stock. READ NEXT: $30 Trillion Opportunity: 15 Best Humanoid Robot Stocks to Buy According to Morgan Stanley and Jim Cramer Says NVIDIA Has Become A Wasteland. Disclosure: None. This article is originally published at Insider Monkey. | Process Automation/Decision Making/Prediction | Management/Business and Financial Operations | null | null | null | null | null | null |
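The buyback math in the Legacy Ridge letter quoted above reduces to a single relation: implied share price equals FCF per share divided by the FCF yield the market demands. Below is a minimal sketch of that arithmetic using only the letter's own figures; the function name is ours, and this is an illustration of the quoted reasoning, not a valuation model.

```python
# Sketch of the FCF-yield arithmetic in the Legacy Ridge letter.
# All inputs are the letter's own numbers; the function name is illustrative.

def price_from_fcf_yield(fcf_per_share: float, fcf_yield: float) -> float:
    """Implied share price if the market demands a given free-cash-flow yield."""
    return fcf_per_share / fcf_yield

fcf_per_share_2030 = 14.0  # the letter's projected 2030 FCF per share, in dollars

# "$70 stock at today's valuation" corresponds to a 20% FCF yield,
# while "a more reasonable FCF yield of 10%" doubles the implied price to $140.
print(price_from_fcf_yield(fcf_per_share_2030, 0.20))  # 70.0
print(price_from_fcf_yield(fcf_per_share_2030, 0.10))  # 140.0
```

Halving the yield the market demands doubles the implied price, which is exactly the $70-versus-$140 spread in the letter.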
news | Tribune News Service | OpenAI pitched White House on unprecedented data center buildout | The push comes as power projects in the U.S. are facing delays. | https://www.bostonherald.com/2024/09/25/openai-data-centers/ | 2024-09-25T17:29:18Z | By Shirin Ghaffary, Bloomberg News. OpenAI has pitched the Biden administration on the need for massive data centers that could each use as much power as entire cities, framing the unprecedented expansion as necessary to develop more advanced artificial intelligence models and compete with China. Following a recent meeting at the White House, which was attended by OpenAI Chief Executive Officer Sam Altman and other tech leaders, the startup shared a document with government officials outlining the economic and national security benefits of building 5 gigawatt data centers in various U.S. states, based on an analysis the company engaged with outside experts on. To put that in context, 5 GW is roughly the equivalent of five nuclear reactors, or enough to power almost 3 million homes. OpenAI said investing in these facilities would result in tens of thousands of new jobs, boost the gross domestic product and ensure the U.S. can maintain its lead in AI development, according to the document, which was viewed by Bloomberg News. To achieve that, however, the U.S. needs policies that support greater data center capacity, the document said. Altman has spent much of this year trying to form a global coalition of investors to fund the costly physical infrastructure required to support rapid AI development, while also working to secure the U.S. government’s blessing for the project. 
But the details on the energy capacity of the data centers Altman and OpenAI are calling for have not previously been reported. “OpenAI is actively working to strengthen AI infrastructure in the U.S., which we believe is critical to keeping America at the forefront of global innovation, boosting reindustrialization across the country, and making AI’s benefits accessible to everyone,” a spokesperson for OpenAI said in a statement provided to Bloomberg News. The push comes as power projects in the U.S. are facing delays due to long wait times to connect to grids, permitting delays, supply chain issues and labor shortages. But energy executives have said powering even a single 5 gigawatt data center would be a challenge. Joe Dominguez, CEO of Constellation Energy Corp., said he has heard Altman is talking about building five to seven data centers that are each 5 GW. The document shared with the White House does not provide a specific number. OpenAI’s aim is to focus on a single data center to start, but with plans to potentially expand from there, according to a person familiar with the matter. “Whatever we’re talking about is not only something that’s never been done, but I don’t believe it’s feasible as an engineer, as somebody who grew up in this,” Dominguez told Bloomberg News. “It’s certainly not possible under a timeframe that’s going to address national security and timing.” The U.S. has a total of 96 GW of installed capacity of nuclear power. Last week, OpenAI’s biggest investor, Microsoft Corp., struck a deal with Constellation in which the nuclear provider will restart the shuttered Three Mile Island facility solely to provide Microsoft with nuclear power for two decades. In June, John Ketchum, CEO of NextEra Energy Inc., said the clean-energy giant had received requests from some tech companies to find sites that can support 5 GW of demand, without naming any specific firms. “Think about that. 
That’s the size of powering the city of Miami,” he said. That much power would require a mix of new wind and solar farms, battery storage and a connection to the grid, Ketchum said. He added that finding a site that could accommodate 5 GW would take some work, but there are places in the U.S. that can fit 1 gigawatt. With assistance from Dina Bass and Mark Chediak. ©2024 Bloomberg L.P. Visit bloomberg.com. Distributed by Tribune Content Agency, LLC. Originally Published: September 25, 2024 at 1:29 p.m. | Unknown | Management/Business and Financial Operations/Computer and Mathematical/Architecture and Engineering | null | null | null | null | null | null |
|
news | G42 | G42 unveils NANDA, a new Hindi LLM at UAE-India Business Forum in Mumbai | The announcement was made in the presence of His Highness Sheikh Khaled bin Mohammed bin Zayed Al Nahyan, Crown Prince of Abu Dhabi during his state visit to India | https://www.globenewswire.com/news-release/2024/09/10/2943892/0/en/G42-unveils-NANDA-a-new-Hindi-LLM-at-UAE-India-Business-Forum-in-Mumbai.html | https://ml-eu.globenewswire.com/Resource/Download/f0d3bdf2-e018-4925-b501-b393882933f5 | 2024-09-10T15:10:00Z | MUMBAI, India and ABU DHABI, United Arab Emirates, Sept. 10, 2024 (GLOBE NEWSWIRE) -- G42, the UAE-based leading technology holding group, today announced that it will soon launch NANDA, a cutting-edge Hindi large language model. NANDA is a 13-billion parameter model trained on approximately 2.13 trillion tokens of language datasets, including Hindi. With a name inspired by one of India's highest peaks, NANDA is the result of a collaboration between Inception (a G42 company), Mohamed bin Zayed University of Artificial Intelligence - the world's first graduate research university dedicated to AI - and Cerebras Systems. The model was trained on Condor Galaxy, one of the world's most powerful AI supercomputers for training and inferencing built by G42 and Cerebras. NANDA's release will mark a significant milestone in the realm of AI for India, offering over half a billion Hindi language speakers the opportunity to harness the potential of generative AI. India has solidified its position as a global technology leader, driven by transformative initiatives like Digital India and Startup India under Prime Minister Narendra Modi's leadership. 
As the country stands on the brink of AI-powered growth, G42 is proud to contribute to this journey with the launch of NANDA in support of India's AI ambitions, says Manu Jain, CEO, G42 India. G42 has a strong track record in the development of language and domain-specific LLMs. With NANDA, we are heralding a new era of AI inclusivity, ensuring that the rich heritage and depth of the Hindi language are represented in the digital and AI landscape. NANDA exemplifies G42's unwavering commitment to excellence and fostering equitable AI, says Dr. Andrew Jackson, Acting CEO of Inception, a G42 company. In August 2023, G42 launched JAIS, the world's first open-source Arabic LLM. JAIS transformed Arabic Natural Language Processing (NLP), unlocking access to native language generative AI capabilities for over 400 million Arabic speakers globally. With models ranging from 590 million to 70 billion parameters, JAIS set a new standard for linguistic AI which G42 now seeks to replicate for other regions whose languages are still underrepresented. Building on this success, NANDA extends G42's mission to empower India's scientific, academic, and developer communities by accelerating the growth of a vibrant Hindi language AI ecosystem and ensuring broad access to AI across the region.
About G42
G42 is a technology holding group, a global leader in creating visionary artificial intelligence for a better tomorrow. Born in Abu Dhabi and operating worldwide, G42 champions AI as a powerful force for good across industries. From molecular biology to space exploration and everything in between, G42 realizes exponential possibilities, today. To know more visit www.g42.ai
About Inception
Inception, a G42 company, builds AI-native products that leverage cutting-edge AI research, models, and systems applied to business problems. We pioneer domain-specific AI applications to deliver AI-driven solutions across languages and sectors. 
For more information, please visit www.inceptionai.ai
About Mohamed bin Zayed University of Artificial Intelligence (MBZUAI)
MBZUAI is a graduate research university focused on artificial intelligence, computer science, and digital technologies across industrial sectors. The university aims to empower students, businesses, and governments to advance artificial intelligence as a global force for positive progress. MBZUAI offers various graduate programs designed to pursue advanced, specialized knowledge and skills in artificial intelligence, including computer vision, machine learning, natural language processing, robotics, and computer science. For more information, please visit www.mbzuai.ac.ae
About Cerebras Systems
Cerebras Systems is a team of pioneering computer architects, computer scientists, deep learning researchers, and engineers of all types. We have come together to accelerate generative AI by building from the ground up a new class of AI supercomputer. Our flagship product, the CS-3 system, is powered by the world's largest and fastest AI processor, our Wafer-Scale Engine-3. CS-3s are quickly and easily clustered together to make the largest AI supercomputers in the world, and make placing models on the supercomputers dead simple by avoiding the complexity of distributed computing. Leading corporations, research institutions, and governments use Cerebras solutions for the development of pathbreaking proprietary models, and to train open-source models with millions of downloads. Cerebras solutions are available through the Cerebras Cloud and on premise. For further information, visit www.cerebras.ai or follow us on LinkedIn or X.
For more details, please write to [email protected] | Unknown | Unknown | null | null | null | null | null | null
news | Jan Lasek | Post-Training Quantization of LLMs with NVIDIA NeMo and NVIDIA TensorRT Model Optimizer | As large language models (LLMs) are becoming even bigger, it is increasingly important to provide easy-to-use and efficient deployment paths because the cost of serving such LLMs is becoming higher. | https://developer.nvidia.com/blog/post-training-quantization-of-llms-with-nvidia-nemo-and-nvidia-tensorrt-model-optimizer/ | 2024-09-09T16:30:11Z | As large language models (LLMs) are becoming even bigger, it is increasingly important to provide easy-to-use and efficient deployment paths because the cost of serving such LLMs is becoming higher. One way to reduce this cost is to apply post-training quantization (PTQ), which consists of techniques to reduce computational and memory requirements for serving trained models. In this post, we provide an overview of how PTQ is implemented in NVIDIA NeMo. This is made available by using NVIDIA TensorRT Model Optimizer, which is a library that quantizes and compresses deep learning models for optimized inference on GPUs. It also uses NVIDIA TensorRT-LLM, which is an open-source library for optimizing LLM inference. We present both accuracy and performance results for quantized models. Throughout the example, we use the Llama 3 models. PTQ is a natural extension of the NeMo LLM building and customizing capabilities for seamless and efficient deployment paths using NVIDIA TensorRT Model Optimizer and NVIDIA TensorRT-LLM. 
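To make the cost argument concrete, here is a back-of-envelope sketch of the weight-memory savings quantization buys. The figures are illustrative assumptions (weights only, 80 GB usable per H100, no KV cache or activation overhead), not measurements from the post:

```python
import math

# Illustrative assumptions: weight memory only; 80 GB usable per H100;
# KV cache and activation overhead are ignored.
BYTES_PER_PARAM = {"fp16": 2.0, "fp8": 1.0, "int8": 1.0, "int4": 0.5}
PARAMS_70B = 70e9
H100_MEM_BYTES = 80e9

def gpus_needed(fmt: str) -> int:
    """Minimum GPU count to hold the weights of a 70B model in this format."""
    weight_bytes = PARAMS_70B * BYTES_PER_PARAM[fmt]
    return math.ceil(weight_bytes / H100_MEM_BYTES)

# FP16 weights alone (140 GB) need two 80 GB GPUs; any quantized variant fits on one,
# consistent with the single-GPU Llama 3 70B serving reported later in the post.
assert gpus_needed("fp16") == 2
assert all(gpus_needed(f) == 1 for f in ("fp8", "int8", "int4"))
```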
As an example, NVIDIA NIM benefits from the PTQ workflow in NeMo.
From a technical perspective, quantization has several benefits:
- It reduces model size, which makes it suitable for deploying using fewer GPUs with lower total device memory available.
- It reduces memory bandwidth pressure by using fewer-bit data types.
- It significantly speeds up matrix multiplication (GEMM) operations on the NVIDIA architecture, for example, up to 2x for FP8 compared to the FP16/BF16 data type in microbenchmarks.
NVIDIA NeMo is an end-to-end platform for developing custom generative AI, anywhere. It includes tools for training, fine-tuning, retrieval-augmented generation, and guardrailing, as well as data curation tools and pretrained models, offering enterprises an easy, cost-effective, and fast way to adopt generative AI. After you build a model in NeMo using a wide array of options offered by the toolkit, NeMo export and deployment tools can be used to apply PTQ methods and serve the optimized model. The recent NeMo container release is a self-contained toolkit coming with all the required dependencies for applying PTQ and deploying quantized LLMs. NeMo and TensorRT Model Optimizer support a broad range of model families for quantization. PTQ support also comes with multi-node support for calibrating the largest LLMs using appropriate tensor and pipeline parallelism settings.
At a high level, the PTQ workflow consists of the following steps:
1. Loading a model.
2. Calibrating the model to obtain scaling factors for lower-precision GEMMs and exporting the quantized model to the TensorRT-LLM checkpoint.
3. Building the TensorRT-LLM engine.
4. Deploying the model (for example, using PyTriton).
A typical PTQ use case starts with a model trained in a high-precision format, for example, FP16 or BF16, that should be served in a lower-precision data type, for example, FP8. The input model can be a foundation or instruction-tuned LLM obtained from previous pipeline steps. 
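The calibration step above can be illustrated with a minimal "fake quantization" sketch: derive a per-tensor scale from calibration activations, then round values onto the low-precision grid. The function names are hypothetical, for illustration only, and are not NeMo or TensorRT Model Optimizer APIs:

```python
# Illustrative only: per-tensor max-calibration and INT8 quantize-dequantize.
INT8_MAX = 127

def calibrate_scale(batches):
    # Scale from the max absolute activation observed during calibration.
    amax = max(abs(v) for batch in batches for v in batch)
    return amax / INT8_MAX

def fake_quantize(values, scale):
    # Round to the INT8 grid, clip, and map back to float.
    out = []
    for v in values:
        q = max(-INT8_MAX, min(INT8_MAX, round(v / scale)))
        out.append(q * scale)
    return out

calib = [[0.3, -1.7, 0.9], [2.54, -0.2, 1.1]]
scale = calibrate_scale(calib)        # 2.54 / 127 = 0.02
dequant = fake_quantize(calib[0], scale)
# For values inside the calibrated range, round-trip error is at most half a step.
assert all(abs(a - b) <= scale / 2 + 1e-12 for a, b in zip(calib[0], dequant))
```

The real workflow does this per GEMM (and with more sophisticated algorithms such as SmoothQuant and AWQ), but the scale-then-round idea is the core of it.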
NeMo also offers community model converters for a wide array of models that can be used to produce corresponding NeMo checkpoints. In PTQ, calibration is the process of getting scaling factors for matrix multiplication operations performed in model layers so they can be computed using lower precision formats than those used for training. This step can be conveniently launched directly from the NeMo container (using torchrun, for example) or using NeMo Framework Launcher on Slurm clusters for multi-node use cases. In short, the quantization code boils down to the following code example:
from nemo.collections.nlp.models.language_modeling.megatron_gpt_model import MegatronGPTModel
from nemo.export.quantize import Quantizer

# Set quantization and export configs appropriately, see
# https://github.com/NVIDIA/NeMo/blob/main/examples/nlp/language_modeling/conf/megatron_gpt_ptq.yaml
quantizer = Quantizer(quantization_config, export_config)

model = MegatronGPTModel.restore_from(...)

dataloader = ...  # A dataloader that yields lists of strings

def forward_loop(model):
    # Model forward pass for collecting activation statistics for calibration
    ...

model = quantizer.quantize(model, forward_loop)
quantizer.export(model)
The full script megatron_gpt_ptq.py is the entry point for the calibration workflow. Important quantization parameters are specified in the megatron_gpt_ptq.yaml config with default settings recommended. Most importantly, the low-precision formats and quantization algorithms available are FP8, INT4 AWQ, and INT8 SQ. Typically, the choice of dataset does not significantly impact accuracy. 
However, for highly domain-specific applications, such as code completion models like StarCoder2, using a code dataset is recommended to estimate calibration statistics accurately. The final part of the calibration step is to save the model in the TensorRT-LLM checkpoint format that is suitable for building a TensorRT-LLM engine in the next step. Overall, the calibration process is a matter of minutes using an NVIDIA DGX H100 GPU node for a model of moderate size with 70B parameters using tensor parallelism. Before running TensorRT-LLM, you build the inference engine by compiling a set of binaries that take into account optimizations for the specific GPU hardware, model architecture, and inference settings. Use the same API as for regular NeMo models to build engines for the quantized checkpoint obtained in the calibration step. For building FP8 engines, this step must be run using compute resources with the necessary FP8 support, for example, the NVIDIA H100 Hopper or the NVIDIA L40 Ada Lovelace GPUs. The following Python commands show how to build a TensorRT-LLM engine and pass an example prompt through the model.
from nemo.export.tensorrt_llm import TensorRTLLM

trt_llm_exporter = TensorRTLLM(model_dir="path/to/trt_llm_engine")
trt_llm_exporter.export(
    nemo_checkpoint_path="path/to/model_qnemo",
    max_batch_size=8,
    max_input_len=2048,
    max_output_len=512,
)
trt_llm_exporter.forward(["How does PTQ work?"])
The export command typically takes several minutes to complete building or exporting a TensorRT-LLM engine, saving it into the model_dir parameter. A given TensorRT-LLM engine can be conveniently deployed using PyTriton.
from nemo.deploy import DeployPyTriton
from nemo.export.tensorrt_llm import TensorRTLLM

trt_llm_exporter = TensorRTLLM(model_dir="path/to/trt_llm_engine")
nm = DeployPyTriton(
    model=trt_llm_exporter,
    triton_model_name="llama3_70b_fp8",
    port=8000,
)
nm.deploy()
nm.serve()
Finally, on the client, NeMo Framework provides a dedicated class to send a query to the server. 
The following code example shows how to use it.
from nemo.deploy.nlp import NemoQueryLLM

nq = NemoQueryLLM(
    url="localhost:8000",
    model_name="llama3_70b_fp8",
)
nq.query_llm(
    prompts=["How does PTQ work?"],
    top_k=1,
)
For demonstration purposes, we present Llama 3 PTQ throughput and accuracy results for two pretrained Llama 3 model variants: 8B and 70B. We evaluated TensorRT-LLM engine performance and accuracy using the benchmark.py and mmlu.py scripts, respectively. The following results were obtained for NVIDIA H100 80GB GPUs with TensorRT-LLM 0.12.0 and TensorRT Model Optimizer 0.17.0. The software stack comes with the latest NeMo framework container to provide you with the complete environment. Figure 2 shows MMLU results for two Llama 3 model sizes across different quantization methods, along with the baseline FP16 result. Notably, FP8 quantization preserves the accuracy to the highest extent. In the case of INT8 SQ and both Llama 3 model sizes, we found that the SmoothQuant alpha parameter can improve accuracy. This parameter governs quantization focus from weight-only to activation-only. It can be conveniently set in the quantization config. In the case of both Llama 3 model sizes, an intermediate value of alpha=0.8 yields the best MMLU results. In Table 1, the percentage number in brackets is a fraction of the baseline FP16 score and measures the extent to which a given scenario preserves accuracy.
Model       | FP16  | FP8           | INT8 SQ       | INT4 AWQ
Llama 3 8B  | 0.654 | 0.649 (99.2%) | 0.629 (96.2%) | 0.629 (96.2%)
Llama 3 70B | 0.790 | 0.787 (99.6%) | 0.772 (97.7%) | 0.777 (98.4%)
Table 1. MMLU accuracy results for Llama 3 models
Figure 3 shows inference speedups defined as the throughput ratio of a quantized model over the FP16 baseline for different quantization methods and two Llama 3 model sizes. The exact throughput results achieved are detailed later in this post. In all the experiments, we used input and output lengths of 2048 and 512, respectively, to build TensorRT-LLM engines and collect performance data. 
These values can be considered representative parameters for text summarization scenarios. The following tables show the number of GPUs used to build the engine for a given quantization format as well as the FP16 baseline results for two batch sizes, 32 and 1. The throughput is normalized with respect to the number of GPUs used.
Model       | Format   | GPUs | Throughput [tokens/sec] | Speedup
Llama 3 8B  | FP16     | 1    | 2293.08                 | -
Llama 3 8B  | FP8      | 1    | 3330.85                 | 1.45
Llama 3 8B  | INT8 SQ  | 1    | 3203.50                 | 1.40
Llama 3 8B  | INT4 AWQ | 1    | 2475.98                 | 1.08
Llama 3 70B | FP16     | 4    | 256.10                  | -
Llama 3 70B | FP8      | 2    | 464.22                  | 1.81
Llama 3 70B | INT8 SQ  | 2    | 463.25                  | 1.81
Llama 3 70B | INT4 AWQ | 2    | 360.57                  | 1.41
Table 2. TensorRT-LLM engine throughput results for batch size = 32 for the baseline and quantized Llama 3 models
We observed 1.45, 1.40, and 1.08x speedups for FP8, INT8 SQ, and INT4 AWQ, respectively, for the smaller Llama 3 variant. In the case of the larger model, the speedup is up to 1.81x for both FP8 and INT8 SQ. INT4 AWQ is a weight-only quantization method that is recommended for use with small batch sizes. It mostly improves memory bandwidth but becomes compute-bound for larger batches. We present results for batch size = 1 for comparison. In this case, we obtained up to 1.56x and 2.66x performance benefits over the FP16 baseline for the Llama 3 8B and Llama 3 70B models, respectively. All the quantized variants of the Llama 3 70B model can be served using only one NVIDIA H100 GPU while the baseline FP16 precision requires at least two GPUs.
Model       | Format   | GPUs | Throughput [tokens/sec] | Speedup
Llama 3 8B  | FP16     | 1    | 135.79                  | -
Llama 3 8B  | FP8      | 1    | 170.75                  | 1.26
Llama 3 8B  | INT8 SQ  | 1    | 158.90                  | 1.17
Llama 3 8B  | INT4 AWQ | 1    | 211.50                  | 1.56
Llama 3 70B | FP16     | 2    | 17.75                   | -
Llama 3 70B | FP8      | 1    | 32.64                   | 1.84
Llama 3 70B | INT8 SQ  | 1    | 32.18                   | 1.81
Llama 3 70B | INT4 AWQ | 1    | 47.13                   | 2.66
Table 3. 
TensorRT-LLM engine throughput results for batch size = 1 for the baseline and quantized Llama 3 models
The throughput numbers reported should not be considered peak performance, as they could be further improved using other features of TensorRT-LLM such as in-flight batching, for example. We also examined performance statistics using the TensorRT-LLM gptManagerBenchmark tool, focusing on the FP16 baseline and FP8 quantized engines for batch size = 32. In the case of the Llama 3 8B model, time to first token (TTFT) improves and inter-token latency (ITL) speedups are roughly equivalent to the throughput-based speedups reported earlier in this post. For the larger Llama 3 70B model, both the TTFT and ITL results achieved by quantized engines running on 2x fewer GPUs are similar to the baseline FP16 results. This directly translates into 2x savings for resources used. With PTQ, models can be served more efficiently using fewer GPUs. This post showed you how to use PTQ in NeMo to build efficient TensorRT-LLM engines for LLM deployment. For future iterations, the number of bits used for models will decrease substantially, as FP4 support comes with the next-generation NVIDIA B100 Blackwell architecture. It is also worth mentioning that for some applications, PTQ may be sufficient while other applications might require quantization-aware training (QAT) techniques to fine-tune quantized weights to maintain model accuracy. QAT is also available in NeMo to meet these needs. For more information, see Post-Training Quantization (PTQ). The entry point for PTQ is the megatron_gpt_ptq.py script. You may also find the NeMo Framework Post-Training Quantization (PTQ) playbook useful. It guides you through the whole deployment process using two example models: Llama 3 and Nemotron-340b. As for QAT, the entry point is the megatron_gpt_qat.py script and the corresponding playbook is NeMo Framework Quantization Aware Training (QAT) for Llama2 SFT Model. 
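As a sanity check on the numbers above, the speedup column can be re-derived from the reported per-GPU throughputs (here for Llama 3 70B at batch size 32; the throughput figures are copied from the table, and the per-GPU normalization is stated in the post):

```python
# Llama 3 70B, batch size = 32: per-GPU throughput in tokens/sec, per the post.
table2_70b = {
    "fp16": 256.10,
    "fp8": 464.22,
    "int8_sq": 463.25,
    "int4_awq": 360.57,
}

base = table2_70b["fp16"]
# Speedup = quantized per-GPU throughput / FP16 baseline, rounded as in the post.
speedups = {fmt: round(tps / base, 2) for fmt, tps in table2_70b.items()}

assert speedups == {"fp16": 1.0, "fp8": 1.81, "int8_sq": 1.81, "int4_awq": 1.41}
```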
For more information, see Best Practices for Tuning the Performance of TensorRT-LLM. The help of many dedicated engineers across various teams at NVIDIA is greatly appreciated for their contributions to successful NeMo and TensorRT Model Optimizer integration, including Asma Kuriparambil Thekkumpate, Keval Morabia, Wei-Ming Chen, Huizi Mao, Ao Tang, Dong Hyuk Chang, Alexandros Koumparoulis, Enwei Zhu, and Simon Layton. | Unknown | Computer and Mathematical/Business and Financial Operations | null | null | null | null | null | null
|
news | Investing.com | Fujitsu launches "Takane" - A large language model for enterprises offering the highest Japanese language proficiency in the world | Fujitsu launches "Takane" - A large language model for enterprises offering the highest Japanese language proficiency in the world | https://www.investing.com/news/press-releases/fujitsu-launches-takane--a-large-language-model-for-enterprises-offering-the-highest-japanese-language-proficiency-in-the-world-93CH-3638254 | 2024-09-30T06:40:05Z | KAWASAKI, Japan, Sept 30, 2024 - (JCN Newswire) - Fujitsu today announced the launch of Takane, a large language model (LLM) designed for enterprise use in secure private environments. Developed in collaboration with Cohere Inc., (1) Takane represents a significant leap forward in generative AI capabilities, offering world-class Japanese language capabilities. Fujitsu will integrate this new LLM into its generative AI services on Fujitsu Kozuchi and offer it through Fujitsu Data Intelligence PaaS (DI PaaS), an all-in-one operation platform that is part of Fujitsu Uvance, Fujitsu's portfolio of solutions addressing cross-industry societal challenges. Takane will be available globally starting September 30, 2024. Takane, which has achieved world-leading results on the Japanese General Language Understanding Evaluation (JGLUE) benchmark (2), is designed for enterprise use in a secure private environment. Fujitsu will offer Takane alongside its generative AI framework for enterprises, which comprises knowledge graph extended RAG technology for referencing large-scale text and monitoring technology for generative AI that ensures output is compliant with laws, regulations, and corporate rules. With this comprehensive approach, Fujitsu aims to create an LLM that supports customers' business transformation. With the launch of Takane, Fujitsu elevates its generative AI offerings on Fujitsu Kozuchi with a high-precision LLM tailored for secure private environments. 
This strategic move aligns with Fujitsu's vision for seamlessly integrating generative AI into business operations. Takane will also be provided under an initiative to deliver total support for customers' generative AI journeys by combining the LLM with its Uvance Wayfinders consulting service and broader Fujitsu Uvance offerings. With this comprehensive approach, Fujitsu aims to empower customers to unlock new value and address societal issues by enhancing productivity, creativity, and innovation in their business operations. Vivek Mahajan, Corporate Vice President, CTO, CPO, Fujitsu Limited, comments: "Takane represents a key part of Fujitsu's AI strategy, which was unveiled last year. Takane will empower companies in industries that demand the highest level of security to harness the power of generative AI. Fujitsu is committed to creating new value for businesses by developing cutting-edge AI technologies. This commitment fuels our three growth drivers: modernization, Fujitsu Uvance, and consulting, which are all powered by our innovative technology." Yoshinami Takahashi, Corporate Vice President, COO, Fujitsu Limited, comments: "Fujitsu Uvance supports our customers' business growth and the resolution of societal challenges through advanced decision-making supported by data and AI. We are excited to offer Takane, a state-of-the-art LLM specifically optimized for the Japanese language, to a wider audience via our DI PaaS platform. We are dedicated to supporting customers' business transformation by bringing the most advanced AI to market, not only through our own innovations, but also by collaborating with our global partners." Aidan Gomez, Co-founder and CEO, Cohere Inc., comments: "We are very excited to bring Takane's advanced Japanese LLMs to global enterprises. 
Our partnership with Fujitsu accelerates AI adoption in this critically important market by offering secure, performant AI designed specifically for business use across Japanese and other languages."
Growing demand for specialized LLMs optimized for Japanese business needs
The adoption of generative AI is increasing globally across industries including marketing and customer service. However, general-purpose LLMs are often delivered through public cloud services and are therefore not suitable for use in industries handling confidential data and for tasks requiring compliance with laws, regulations, and industry rules. Furthermore, characteristics of the Japanese language, including the mixed use of multiple character types, omitted subjects, and honorific expressions, also pose significant hurdles for general-purpose LLMs, limiting their accuracy and reliability in business contexts. A highly accurate Japanese LLM is especially important for sectors including government, finance, healthcare, and law, where even minor language errors can have serious consequences.
Takane features
1. Strong Japanese language capabilities
Takane is based on Cohere's LLM Command R+ and combines Fujitsu's extensive knowledge in developing Japanese-specialized LLMs with Cohere's expertise in creating task-specific language models. Takane will be offered exclusively by Fujitsu around the world. Through additional training and fine-tuning to enhance Japanese language capabilities, Takane has achieved industry-leading performance on the JGLUE benchmark for Japanese language understanding, surpassing competitors (3) in natural language inference (JNLI), reading comprehension (JSQuAD) tasks, and semantic understanding and syntactic analysis as demonstrated by its top ranking on the Nejumi LLM Leaderboard 3. Takane further includes Command R+'s multilingual support (covering 10 languages) and its ability to automate business processes.
2. 
Secure private environment
Takane is a medium-sized LLM that can be used in a secure, private environment. This allows for its safe use in industries where data breaches are a concern, such as finance, manufacturing, and the security sectors, which all handle sensitive data.
3. Specialization through fine-tuning
Takane can be further specialized for customer operations through fine-tuning and customization using company data. Cohere's RAG technology in combination with Fujitsu's knowledge graph extended RAG and generative AI auditing technologies further facilitate compliance with laws and regulations, as well as industry and corporate rules. Fine-tuning and RAG technology make it possible to use Takane even in industries with specialized terminology and frequent regulatory revisions, such as the financial industry. Fujitsu will offer Takane through its DI PaaS platform, thereby creating a business application that combines data and AI. DI PaaS empowers organizations to unify vast, disparate data sources, both internal and external, and glean meaningful insights. By enabling seamless connection and analysis of data across departments and industries, DI PaaS helps customers to unlock new knowledge and solutions, and fosters data-driven innovation across organizations. Fujitsu will unlock increased productivity and creativity and continue supporting customers' business transformation through AI/data integration. Takefumi Yamamoto, Operating Officer, Deputy Group Chief Information Officer, Mizuho Financial Group, Inc., comments: "In fiscal 2023, we conducted joint trials with Fujitsu to explore the potential of generative AI in system development and maintenance. Since fiscal 2024, we have been using Fujitsu Kozuchi to enhance system quality by streamlining parts of the system development process. Now, with the launch of the enterprise LLM Takane, we are excited to unlock even greater possibilities. 
By leveraging knowledge graph extension RAG and other cutting-edge technologies, we will be able to utilize our internal knowledge much more efficiently. We're committed to continuously improving the quality and resilience of our system development and maintenance processes, and we believe "Takane" will be a valuable tool in achieving this goal." Shoji Tanaka, Ph.D., Senior General Manager, Corporate AI Strategy Division, Deputy Senior General Manager, DX Innovation Center, Mitsubishi Electric (OTC:MIELY) Corporation, comments: "Takane seamlessly merges the world-class technologies of Fujitsu and Cohere. This powerful combination opens up a whole new realm of possibilities for the generative AI market in Japan, and I am excited to see what the future holds for both companies." Yuki Shuto, Partner, Chief Strategy & Innovation Officer, Asia Pacific Leader of Technology, Media and Telecommunications, Deloitte Tohmatsu Consulting LLC, comments: "Generative AI has moved beyond proof-of-concept in numerous organizations, transitioning into a phase of full-scale integration across products, services, and operations. The key to success will be secure, reliable LLMs that enable organizations to confidently share and leverage their data. Takane, with its enhanced Japanese language capabilities, holds immense potential to accelerate transformation within Japanese businesses and government agencies. Deloitte Tohmatsu, in collaboration with Fujitsu, is committed to unlocking the full potential of generative AI and contributing to the growth of Japan's economy and society." Takafumi Yano, CEO, RUTILEA, Inc., comments: "Under our slogan "AI Made Easy," we are committed to providing highly accurate AI services that will enable the integration of AI into all business processes. We are currently pursuing vertical AI initiatives in specific industries, including central government, local governments, the automotive industry, the pharmaceutical industry, and power procurement optimization. 
In 2024, we will launch an AI development platform business with the establishment of a GPU data center in Okuma Town, Futaba District, Fukushima Prefecture, Japan. 'Takane' holds the potential to significantly accelerate the adoption of AI in various vertical industries across Japan. We look forward to the day when we can contribute to its development and utilization."
[1] Cohere Inc: Headquarters: Toronto, Ontario, Canada; San Francisco, California, USA; Co-founder and CEO: Aidan Gomez
[2] Achieved world-leading results on the Japanese General Language Understanding Evaluation (JGLUE) benchmark: Takane outperformed other LLMs on JGLUE, including large-scale, general-purpose LLMs provided through cloud services. For JNLI and JCoLA, there was uncertainty in the correct data (ground truth), so the correct data were corrected by multiple annotators and measured as reference values (as measured by Fujitsu and Cohere in September 2024).
[3] Surpassing competitors: On the Nejumi LLM Leaderboard 3, which evaluates the Japanese language capabilities of LLM models, Takane achieved the highest performance in the semantic understanding category with a score of 0.862 and in the syntactic analysis category with a score of 0.773 (as measured by Fujitsu and Cohere in September 2024).
[4] JGLUE: Japanese version of GLUE (General Language Understanding Evaluation) (JGLUE: Japanese General Language Understanding Evaluation; Kentaro Kurihara (Waseda University), Daisuke Kawahara (Waseda University), Tomohide Shibata, 28th Annual Meeting of the Language Processing Society of Japan (NLP2022))
[5] JSTS: Task to estimate the semantic similarity between a pair of sentences.
[6] JCoLA: Task for syntactic evaluation of Japanese language models.
[7] JNLI: Task for natural language inference that recognizes the inference relationship between a premise sentence and a hypothesis sentence.
[8] JCommonsenseQA: Task to assess commonsense reasoning ability.
[9] JSQuAD: Machine reading comprehension task to evaluate
ability to read a document and answer questions about it.
About Fujitsu
Fujitsu's purpose is to make the world more sustainable by building trust in society through innovation. As the digital transformation partner of choice for customers in over 100 countries, our 124,000 employees work to resolve some of the greatest challenges facing humanity. Our range of services and solutions draw on five key technologies: Computing, Networks, AI, Data & Security, and Converging Technologies, which we bring together to deliver sustainability transformation. Fujitsu Limited (TSE:6702) reported consolidated revenues of 3.7 trillion yen (US$26 billion) for the fiscal year ended March 31, 2024 and remains the top digital services company in Japan by market share. Find out more: www.fujitsu.com.
Press Contacts: Fujitsu Limited, Public and Investor Relations Division.
Copyright 2024 JCN Newswire. All rights reserved. | Unknown | Management/Business and Financial Operations/Healthcare Practitioners and Support | null | null | null | null | null | null
|
news | Bloomberg News | OpenAI Pitched White House on Unprecedented Data Center Buildout | OpenAI has pitched the Biden administration on the need for massive data centers that could each use as much power as entire cities, framing the unprecedented expansion as necessary to develop more advanced artificial intelligence models and compete with China. | https://financialpost.com/pmn/business-pmn/openai-pitched-white-house-on-unprecedented-data-center-buildout | null | 2024-09-25T00:24:26Z | (Bloomberg) OpenAI has pitched the Biden administration on the need for massive data centers that could each use as much power as entire cities, framing the unprecedented expansion as necessary to develop more advanced artificial intelligence models and compete with China. Following a recent meeting at the White House, which was attended by OpenAI Chief Executive Officer Sam Altman and other tech leaders, the startup shared a document with government officials outlining the economic and national security benefits of building 5 gigawatt data centers in various US states, based on an analysis the company conducted with outside experts. To put that in context, 5 GW is roughly the equivalent of five nuclear reactors, or enough to power almost 3 million homes. OpenAI said investing in these facilities would result in tens of thousands of new jobs, boost the gross domestic product and ensure the US can maintain its lead in AI development, according to the document, which was viewed by Bloomberg News. To achieve that, however, the US needs policies that support greater data center capacity, the document said. Altman has spent much of this year trying to form a global coalition of investors to fund the costly physical infrastructure required to support rapid AI development, while also working to secure the US government's blessing for the project. But the details on the energy capacity of the data centers Altman and OpenAI are calling for have not previously been reported.
"OpenAI is actively working to strengthen AI infrastructure in the US, which we believe is critical to keeping America at the forefront of global innovation, boosting reindustrialization across the country, and making AI's benefits accessible to everyone," a spokesperson for OpenAI said in a statement provided to Bloomberg News. The push comes as power projects in the US are facing delays due to long wait times to connect to grids, permitting delays, supply chain issues and labor shortages. But energy executives have said powering even a single 5 gigawatt data center would be a challenge. Joe Dominguez, CEO of Constellation Energy Corp., said he has heard Altman is talking about building 5 to 7 data centers that are each 5 GW. The document shared with the White House does not provide a specific number. OpenAI's aim is to focus on a single data center to start, but with plans to potentially expand from there, according to a person familiar with the matter. "Whatever we're talking about is not only something that's never been done, but I don't believe it's feasible as an engineer, as somebody who grew up in this," Dominguez told Bloomberg News. "It's certainly not possible under a timeframe that's going to address national security and timing." The US has a total of 96 GW of installed capacity of nuclear power. Last week, OpenAI's biggest investor, Microsoft Corp., struck a deal with Constellation in which the nuclear provider will restart the shuttered Three Mile Island facility solely to provide Microsoft with nuclear power for two decades. In June, John Ketchum, CEO of NextEra Energy Inc, said the clean-energy giant had received requests from some tech companies to find sites that can support 5 GW of demand, without naming any specific firms. "Think about that. That's the size of powering the city of Miami," he said. That much power would require a mix of new wind and solar farms, battery storage and a connection to the grid, Ketchum said.
He added that finding a site that could accommodate 5 GW would take some work, but there are places in the US that can fit one gigawatt. With assistance from Dina Bass and Mark Chediak. | Unknown | Management/Business and Financial Operations | null | null | null | null | null | null
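A quick back-of-the-envelope check of the article's figures: spreading 5 GW of continuous capacity across the roughly 3 million homes it is said to power implies an average draw of about 1.67 kW per home. Both inputs come from the article; the per-home result is derived here, not reported by Bloomberg:

```python
# Derive the per-home figure implied by the article's numbers:
# a 5 GW facility running continuously, divided over ~3 million homes.
capacity_w = 5e9            # 5 GW, from the article
homes = 3e6                 # "almost 3 million homes", from the article
hours_per_year = 24 * 365   # 8,760 hours

avg_draw_per_home_kw = capacity_w / homes / 1e3
annual_kwh_per_home = avg_draw_per_home_kw * hours_per_year

print(f"{avg_draw_per_home_kw:.2f} kW average draw per home")       # ~1.67 kW
print(f"{annual_kwh_per_home:,.0f} kWh per home per year")          # 14,600 kWh
```

The implied 14,600 kWh per year is somewhat above common estimates of average US household usage, which is consistent with the article's hedge of "almost" 3 million homes.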
news | Emily MacKinnon | Researcher wants to ensure AI doesn't ruin the environment | Artificial intelligence (AI) has changed the world as we know it. It's been used for everything from health-care monitoring to writing speeches. But the technology's impact on the environment is becoming a serious concern. | https://techxplore.com/news/2024-09-ai-doesnt-environment.html | 2024-09-30T17:06:16Z | Artificial intelligence (AI) has changed the world as we know it. It's been used for everything from health-care monitoring to writing speeches. But the technology's impact on the environment is becoming a serious concern. ChatGPT, one of the most familiar AI models, is a form of generative AI that uses natural language processing to respond to user queries in a chatbot-style web interface. When OpenAI, the company that created ChatGPT, was training the third generation of their model (that is, teaching it what content to generate in response to users' questions), it used enough electricity to power 120 Canadian homes for an entire year. And training is just one aspect of an AI model's emissions. The largest contributor over time is model inference, or the process of running the model live. Large language models like ChatGPT run constantly, waiting for a user to ask a question. The data centers required to power these models currently account for three percent of global energy consumption, rarely use renewable energy sources, and, according to Forbes, are emitting as much CO2 as the entire country of Brazil. Enter Dr. Tushar Sharma, an assistant professor in Dalhousie's Faculty of Computer Science. Dr. Sharma's research focuses on sustainable AI and software engineering. In other words: he ensures the source code that builds and runs these models is as clean and efficient as possible; when it is not, he identifies and fixes the inefficiencies. Dr.
Sharma's SMART Lab recently published a study in ACM Transactions on Software Engineering and Methodology detailing how to measure an AI model's energy consumption on a granular level by identifying which parts of the code are the most power hungry. (Think of your home's power bill: it shows the home's energy consumption writ large, but it doesn't typically break down which appliances are drawing the most electricity.) In another study, his lab sifted through dozens of layers of code within AI models to "prune" tokens that were no longer relevant, useful, or effective. "We move strategically through each layer of these big models and reduce the required computation inside," he explains. The idea is to train the models more efficiently, so the electrical draw and subsequent emissions are reduced. "We are trying to not have to use as much power or time, which leads to an energy reduction or a reduction in carbon emissions," he says. "The ideal scenario is that we are reducing the energy required to train or operate these systems without sacrificing the benefits."
So is AI worth it?
Dr. Christian Blouin, acting dean of Dal's Faculty of Computer Science, says AI has the potential to transform the world as we know it, and it's going to happen whether we make the technology greener or not. "We have a responsibility to find a better way to tackle important problems that require less resources," he says. "As people discover new ways to leverage AI, it is critical that we develop the computer science to make it more sustainable."
This balance is especially important for people who work within the climate sector. Dr. Anya Waite is the CEO and scientific director of the Ocean Frontier Institute (OFI), a research institute at Dal. OFI researches the ocean's changing role in our climate system and delivers solutions for climate change mitigation. Dr.
Waite says that while AI is a critical tool to manage data and improve efficiency and accuracy, it becomes unsustainable if we end up spending more energy than we save from its use. "Dr. Sharma's work is critical because it supports AI efficiency and lowers its cost and carbon footprint," she says. "Ultimately, without work such as Dr. Sharma's, we risk losing the ability to launch new innovations, and we could miss out on the major benefits they provide."
A tricky balance
Dr. Michael Freund is the director of Dal's Clean Technologies Research Institute (CTRI), and he says users are not always aware of the infrastructure and operations required to support the technology they use. "Responsible growth of AI must consider environmental factors," Dr. Freund says. "It must require efficient operation, including more efficient code, responsible use and coupling data centers to green energy sources." It's a tricky balance, he acknowledges, because, like OFI, CTRI often uses AI to increase efficiency of operations. "Work by researchers such as Dr. Sharma will shed light on the true value of AI and inform decisions on how it is developed and used," he says.
A green AI future
Converting the data centers to using renewable energy sources is another big hurdle, and Dr. Sharma says research like his, coupled with solar, wind and hydro power, will make AI greener. "All of these techniques are ultimately helping to achieve this goal of green AI and figuring out how we can keep using these machine learning models, but at a lower energy cost."
More information: Saurabhsingh Rajput et al, Enhancing Energy-Awareness in Deep Learning through Fine-Grained Energy Measurement, ACM Transactions on Software Engineering and Methodology (2024). DOI: 10.1145/3680470
Citation: Researcher wants to ensure AI doesn't ruin the environment (2024, September 30) retrieved 30 September 2024 from https://techxplore.com/news/2024-09-ai-doesnt-environment.html
This document is subject to copyright.
Apart from any fair dealing for the purpose of private study or research, no part may be reproduced without the written permission. The content is provided for information purposes only. | Decision Making/Process Automation | Computer and Mathematical/Life, Physical, and Social Science | null | null | null | null | null | null |
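The token-pruning idea described in the article can be illustrated with a toy calculation. This is an illustrative sketch only, not the SMART Lab's actual algorithm: self-attention cost grows roughly with the square of sequence length, so dropping low-relevance tokens between layers compounds into large compute savings. The relevance scores and the 10%-per-layer pruning ratio below are hypothetical.

```python
# Illustrative sketch only -- not the SMART Lab's actual method.

def attention_flops(seq_len, dim):
    # Rough cost of one self-attention layer: the QK^T and AV matmuls.
    return 2 * seq_len * seq_len * dim

def prune_tokens(scores, keep_ratio):
    # Keep the indices of the highest-scoring fraction of tokens.
    keep = max(1, int(len(scores) * keep_ratio))
    ranked = sorted(range(len(scores)), key=lambda i: scores[i], reverse=True)
    return sorted(ranked[:keep])

scores = [0.9, 0.1, 0.6, 0.3, 0.8]          # hypothetical per-token relevance
print("kept token indices:", prune_tokens(scores, 0.6))   # -> [0, 2, 4]

dim, layers, seq = 512, 12, 256
baseline = sum(attention_flops(seq, dim) for _ in range(layers))

pruned, n = 0, seq
for _ in range(layers):
    pruned += attention_flops(n, dim)
    n = max(1, int(n * 0.9))                # drop 10% of tokens per layer

print(f"compute saved vs. no pruning: {1 - pruned / baseline:.0%}")
```

Even this crude model shows why per-layer pruning is attractive: because the cost is quadratic in sequence length, the savings accumulate multiplicatively with depth.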
|
news | Bloomberg News | OpenAI, Nvidia Executives Discuss AI Infrastructure Needs With Biden Officials | OpenAI Chief Executive Officer Sam Altman and Nvidia Corp. CEO Jensen Huang met with senior Biden administration officials and other industry leaders at the White House on Thursday to discuss how to fill the massive infrastructure needs for artificial intelligence projects. | https://financialpost.com/pmn/business-pmn/openai-nvidia-executives-discuss-ai-infrastructure-needs-with-biden-officials | 2024-09-12T18:12:57Z | (Bloomberg) OpenAI Chief Executive Officer Sam Altman and Nvidia Corp. CEO Jensen Huang met with senior Biden administration officials and other industry leaders at the White House on Thursday to discuss how to fill the massive infrastructure needs for artificial intelligence projects. On the tech side, attendees also included Anthropic CEO Dario Amodei, Google President Ruth Porat and Microsoft Corp. President Brad Smith, according to people familiar with the meeting, which also had representatives from the energy sector. Government officials included Commerce Secretary Gina Raimondo, National Security Advisor Jake Sullivan and Energy Secretary Jennifer Granholm, according to the people. The goal, according to a White House official, was to boost public-private partnerships around the development of AI data centers in the US. Topics included permitting, workforce, power demands and economic impacts of the facilities, people familiar with the meeting said. OpenAI, for example, plans to spend tens of billions of dollars on a domestic AI infrastructure push that spans data centers, energy capacity and transmission and semiconductor manufacturing with investment from around the globe.
Company executives have been meeting with government officials for months about a range of issues related to the initiative, including national security concerns that could be associated with foreign capital. "OpenAI believes infrastructure is destiny and that building additional infrastructure in the US is critical to the country's industrial policy and economic future," OpenAI said in a statement Thursday. The company highlighted the economic benefits of investing in US data center projects, including a possible 40,000 jobs across a number of US states. OpenAI pointed to similar investments by China, which aims to be a global AI leader by the end of the decade. Porat called robust US energy infrastructure crucial to ensuring US leadership in the emerging field of AI. "Today's White House convening was an important opportunity to advance the work required to modernize and expand the capacity of America's energy grid," she said in a statement. Anthropic and Microsoft declined to comment. The AI-fueled surge in US data center construction coincides with a broader manufacturing boost spurred by the Chips and Science Act and the Inflation Reduction Act, the signature subsidy programs for semiconductors and clean energy enacted in 2022 under President Joe Biden. Those investments, along with data center expansion and other factors, are expected to drive electricity demand up by 15% to 20% over the next decade, according to the Energy Department.
Data centers could consume as much as 9% of US electricity generation annually by 2030, up from 4% of total load in 2023, according to a report in May by the nonprofit Electric Power Research Institute. The Biden administration has said renewables such as wind and solar, as well as battery storage and energy efficiency gains, are some of the best ways to meet growing data center energy demand because they are rapidly scalable and cost competitive. "Near-term data center driven electricity demand growth is an opportunity to accelerate the build out of clean energy solutions, improve demand flexibility, and modernize the grid while maintaining affordability," the Energy Department said in a blog post last month. However, the agency, which is set to release an assessment of energy consumption by data centers by year's end, cautioned that projections of growth in electricity demand continue to evolve due to developing use cases and other factors. With assistance from Courtney Rozen. | Unknown | Management | null | null | null | null | null | null
|
news | null | Electronic Paste Market Size to Grow USD 9967.9 Million by 2030 at a CAGR of 9.3% | Valuates Reports | BANGALORE, India, Sept. 11, 2024 /PRNewswire/ -- The global Electronic Paste market is projected to reach USD 9967.9 million by 2030 from an estimated USD 5836.7 million in 2024, at a CAGR of 9.3% during 2024 and 2030. Electronic Paste Market is Segmented by Type (Conductive Paste,... | https://www.prnewswire.co.uk/news-releases/electronic-paste-market-size-to-grow-usd-9967-9-million-by-2030-at-a-cagr-of-9-3--valuates-reports-302245359.html | 2024-09-11T16:09:00Z | BANGALORE, India, Sept. 11, 2024 /PRNewswire/ -- The global Electronic Paste market is projected to reach USD 9967.9 million by 2030 from an estimated USD 5836.7 million in 2024, at a CAGR of 9.3% between 2024 and 2030.
Electronic Paste Market is Segmented by Type (Conductive Paste, Resistive Paste, Insulation Paste, Others), by Application (Solar Cells, Printed Circuit Board, Touchscreen, LED, Others): Global Opportunity Analysis and Industry Forecast, 2024-2030.
Claim Your Free Sample Now: https://reports.valuates.com/request/sample/QYRE-Auto-25X8584/Global_Electronic_Paste_Market_Insights_and_Forecast_to_2028
Major Factors Driving the Growth of Electronic Paste Market:
Due to the growing need for high-performance computing in artificial intelligence applications across all sectors, the market for AI Server GPUs is expanding quickly. GPUs (Graphics Processing Units) are critical for speeding deep learning, neural networks, and large data processing operations as AI workloads become more complicated and data-intensive. The need for AI server GPUs is being further fueled by the growth of edge computing, cloud computing, and AI-driven applications in industries like finance, healthcare, and the automotive industry.
In order to satisfy the increasing computing demands of AI servers worldwide, major firms are concentrating on creating GPUs that are both more potent and energy-efficient.
Unlock Insights: View Full Report Now! https://reports.valuates.com/market-reports/QYRE-Auto-25X8584/global-electronic-paste
TRENDS INFLUENCING THE GROWTH OF THE ELECTRONIC PASTE MARKET
In the electronics sector, conductive paste is essential because it ensures electrical contact in a variety of electronic equipment. It is widely used in the production of semiconductors, printed circuit boards (PCBs), sensors, and other components. Better conductivity, thermal stability, and adhesion are required of high-quality conductive paste, which is in increasing demand as electronics become smaller and more capable. The growing use of cutting-edge electronics in sectors including consumer electronics, automotive, and telecommunications is driving up demand. The market for electronic paste is anticipated to increase significantly as long as dependable conductive paste is needed, given how the electronics sector is developing.
Printed circuit boards, or PCBs, are the structural core that links the various parts of almost every electronic device. To provide the required circuits and paths, electronic pastes such as conductive, insulating, and resistive pastes must be applied during PCB fabrication. The need for sophisticated electronic pastes rises in line with the need for higher-density, more complex PCBs, especially in the automotive, consumer electronics, and telecommunications industries. The market for electronic paste is being driven by the move towards more advanced PCBs, which enable faster speeds and more functionality. Manufacturers are looking for pastes that provide accuracy, dependability, and longevity.
The market for electronic paste is significantly influenced by the solar energy sector, primarily through the use of conductive pastes in the production of solar cells.
In order to convert sunlight into electricity, these pastes are employed to form the electrical connections on the surface of solar cells. The rising use of solar power and the global drive towards renewable energy have led to a spike in demand for high-performance conductive pastes. Advances in solar cell technology, including thin-film technologies and higher-efficiency cell designs, have increased demand for specialty electronic pastes that can improve solar panel durability and efficiency, spurring market expansion.
The market for electronic paste is also growing as a result of the expanding consumer electronics industry. The need for parts like PCBs, sensors, and displays, all of which depend on electronic pastes, has increased due to the growing popularity of smartphones, tablets, wearable technology, and other electronic devices. In order to fulfill consumers' growing demands for gadgets with more sophisticated features and better performance, manufacturers are turning increasingly to premium electronic pastes. More advanced electronic pastes are being developed and adopted as a result of the trend toward miniaturization and the demand for more functionality, which is driving market expansion.
The market for electronic paste is also significantly influenced by the automotive sector, especially in light of the increasing popularity of advanced driver-assistance systems (ADAS) and electric vehicles (EVs). Complex electronic systems, such as high-density PCBs and sensors, are necessary for these technologies, and their production depends on dependable electronic pastes. The need for electronic pastes with high conductivity, durability, and thermal stability is expected to increase as the automobile sector continues to evolve with features like autonomous driving, connectivity, and increased safety.
The necessity for sophisticated electronic pastes in automotive applications is further increased by the movement towards greener transportation choices, such as electric vehicles (EVs), which is fueling market expansion.
Own It Today Buy Now! https://reports.valuates.com/api/directpaytoken?rcode=QYRE-Auto-25X8584&lic=single-user
ELECTRONIC PASTE MARKET SHARE ANALYSIS
A well-established electronics manufacturing industry and the strong presence of top technology businesses are the main drivers of the electronic paste market in North America. The region's emphasis on innovation, especially in the automotive, telecommunications, and consumer electronics sectors, creates a sizable need for sophisticated electronic pastes. The industry is further stimulated by the increasing use of renewable energy technology, such as solar panels, and electric vehicles (EVs). Government programs supporting green energy and technology development also play a significant role in the strong expansion of the electronic paste industry in this region.
Purchase Chapters: https://reports.valuates.com/market-reports/QYRE-Auto-25X8584/global-electronic-paste/1
Key Players: Shoei Chemical Inc, Sumitomo Metal Mining, Heraeus, Changzhou Fusion New Material, Shandong Sinocera, Henkel, DKEM, Good-Ark, Giga Solar Materials, Mitsuboshi Belting, Noritake, Shanghai Transcom Scientific, Fujikura Kasei, Nippon Chemical Industrial, Ruxing Technology
Purchase Regional Data: https://reports.valuates.com/market-reports/QYRE-Auto-25X8584/global-electronic-paste/7
SUBSCRIPTION
We have introduced a tailor-made subscription for our customers.
Please leave a note in the Comment Section to know about our subscription plans.
DISCOVER MORE INSIGHTS: EXPLORE SIMILAR REPORTS!
- Fine Paste Market
- The global Emulsion PVC Paste market was valued at USD 2284.4 million in 2023 and is anticipated to reach USD 2974.7 million by 2030, witnessing a CAGR of 3.9% during the forecast period 2024-2030.
- Solar Electronics Conductive Paste Market
- The global Lead-Free Solder Paste market was valued at USD 10 million in 2023 and is anticipated to reach USD 16 million by 2030, witnessing a CAGR of 6.5% during the forecast period 2024-2030.
- Thermally Conductive Grease and Paste Market
- Unlead Solder Paste Market
- The global Conductive Silver Paste market was valued at USD 5396 million in 2023 and is anticipated to reach USD 7120.5 million by 2030, witnessing a CAGR of 4.1% during the forecast period 2024-2030.
- The global Pressure-less Silver Sintering Paste market was valued at USD 93 million in 2023 and is anticipated to reach USD 127 million by 2030, witnessing a CAGR of 4.6% during the forecast period 2024-2030.
- Anti-paste Paint Market
- Functional Water-Based Printing Paste Market
DISCOVER OUR VISION: VISIT ABOUT US!
Valuates offers in-depth market insights into various industries. Our extensive report repository is constantly updated to meet your changing industry analysis needs.
Our team of market analysts can help you select the best report covering your industry. We understand your niche region-specific requirements and that's why we offer customization of reports. With our customization in place, you can request any particular information from a report that meets your market analysis needs.
To achieve a consistent view of the market, data is gathered from various primary and secondary sources; at each step, data triangulation methodologies are applied to reduce deviance and arrive at a consistent view of the market. Each sample we share contains a detailed research methodology employed to generate the report.
Please also reach our sales team to get the complete list of our data sources.
YOUR FEEDBACK MATTERS: REACH OUT TO US!
Valuates [email protected]
U.S. Toll-Free Call: 1-(315)-215-3225
WhatsApp: +91-9945648335
Website: https://reports.valuates.com
Blog: https://valuatestrends.blogspot.com/
Pinterest: https://in.pinterest.com/valuatesreports/
Twitter: https://twitter.com/valuatesreports
Facebook: https://www.facebook.com/valuatesreports/
YouTube: https://www.youtube.com/@valuatesreports6753
Logo: https://mma.prnewswire.com/media/1082232/Valuates_Reports_Logo.jpg | Prediction/Decision Making | Management/Business and Financial Operations | null | null | null | null | null | null
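The press release's headline figures can be sanity-checked with a quick compound-growth calculation. Both dollar figures and the 9.3% rate come from the text; compounding over the six years from 2024 to 2030 is an assumption about how the CAGR is applied:

```python
# Verify the stated CAGR against the stated 2024 and 2030 market sizes.
start_2024 = 5836.7       # USD million, from the release
cagr = 0.093              # 9.3%, from the release
years = 2030 - 2024       # six years of compounding (assumed)

projected_2030 = start_2024 * (1 + cagr) ** years
print(f"Projected 2030 market size: USD {projected_2030:.1f} million")
```

The result, roughly USD 9,950 million, lands within about 0.2% of the stated USD 9,967.9 million, consistent with a CAGR that was rounded to one decimal place.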
|
news | Alexandra Kelley | NSF launched new AI initiatives for astronomy | Two new artificial intelligence institutions within the National Science Foundation will be dedicated to developing AI algorithms and models specific to astronomy and astrophysics data. | https://www.nextgov.com/artificial-intelligence/2024/09/nsf-launched-new-ai-initiatives-astronomy/399613/ | 2024-09-18T12:00:00Z | The National Science Foundation is launching two new artificial intelligence programs geared towards developing new algorithmic capabilities to advance research in the astronomical sciences. The two new AI institutes, funded in part by the NSF as well as the Simons Foundation, will work with researchers in academia to develop novel AI software tailored to processing both large volumes of astronomical data and images from telescopes that standard AI software has trouble handling. "The AI tools that have been developed by industry are not tailored to our problems," Andreas Berlind, the program director within the NSF's Astronomical Sciences Division, told Nextgov/FCW. "We don't just have images, we also have text and we have code, and we have spectra, which is another kind of data set, and there aren't models that have been built to deal with those things." Although the NSF's National Artificial Intelligence Research Institutes program was launched as an interdisciplinary initiative earlier this year, this marks the first time two institutes will be dedicated to astronomy research fields. Berlind said that AI software's capacity to handle large amounts of data can benefit a multitude of fundamental questions researchers have, such as studying and understanding dark matter and the origin of life in the solar system. "As with all AI institutes, the goal is to harness these new AI technologies and advance them, and at the same time, impact this other field," he said.
"So it's really timely, given all this data that's coming in, to invest in AI." The first institute, called the NSF-Simons AI Institute for Cosmic Origins, will be helmed by the University of Texas at Austin in collaboration with NSF NOIRLab, the NSF National Radio Astronomy Observatory, the University of Utah, the University of Virginia and UCLA. Its research will aim to simulate phenomena like the chemical processes within stars. The second, the NSF-Simons AI Institute for the Sky, features a partnership with the University of Chicago, the University of Illinois Urbana-Champaign, the University of Illinois Chicago, and the Adler Planetarium. It will work on astrophysics problems such as neutron star and black hole physics as well as the formation of galaxies. Researchers are looking to develop AI algorithms suited for astronomical research, including a large language model that is trained on the existing research, as well as algorithms that can read images and other datasets to make connections across multiple data types. "The algorithms that we need don't exist," Berlind said. "So these institutes are going to be the trailblazers to doing this work." | Unknown | Unknown | null | null | null | null | null | null
|
news | Ryan Gibson | Anthropic CEO Dario Amodei Warns of AI Arms Races, Predicts Radical Abundance in a New AI-Powered World | AI could exacerbate inequality or even fuel dangerous global power shifts. As the CEO of Anthropic, Amodei is deeply involved in navigating these challenges, pushing for responsible scaling, safety, and a more equitable distribution of AI's benefits in our increasingly AI-powered world. | https://www.webpronews.com/anthropic-ceo-dario-amodei-warns-of-ai-arms-races-predicts-radical-abundance-in-a-new-ai-powered-world/ | 2024-09-04T15:27:48Z | As artificial intelligence rapidly reshapes industries and societies, few are more intimately involved in steering its direction than Dario Amodei, CEO and co-founder of Anthropic. Recently, in a conversation with Erik Torenberg and Noah Smith, Amodei offered a deep dive into the evolving landscape of AI development, AI safety, and the potential for radical abundance brought about by these technologies. He also touched on the growing competition between global powers like the United States and China and provided his perspective on California's SB 1047 bill.
The Scaling Laws and the Future of AI
One of the key elements shaping AI development is the concept of scaling laws, which refer to the trend that as AI models get larger, they also tend to get more powerful. For Amodei, these scaling laws hold the potential for transformative change across industries. "The bigger the model, the better it gets at doing tasks like coding, biology, or even military coordination," he remarked. Amodei believes that scaling laws are central to the future of AI, but he tempers his enthusiasm with caution: "There's no fundamental physical law that scaling will continue forever. It could stop anytime; it's an empirical observation."
Despite the uncertainty, Amodei's outlook remains cautiously optimistic, grounded in years of experience.
"I've been watching this happen for 10 years, and my guess is that it will keep going," he noted, but quickly added, "That's a 60/40 or 70/30 proposition; the trend is your friend till the bend at the end."
If these scaling trends continue, Amodei predicts a future where AI systems could fundamentally reshape entire sectors. Imagine AI models so advanced they could outperform Nobel Prize-winning scientists or revolutionize industries like biotech or defense. "These AI systems might soon become the most valuable national defense asset the United States and its allies have," Amodei explained. This would have profound implications not just for industries but also for geopolitics.
The Economic Moat: Differentiation and Commoditization
The business side of AI development is equally fraught with complexity. Amodei, having worked at Google and OpenAI before co-founding Anthropic, knows the immense potential for AI-driven companies to scale. However, he cautions that AI's immense power might not translate directly into economic dominance for these firms. "In some ways, AI models might become like solar panels: immense, world-changing, and yet difficult for any one company to profit from," Amodei explained. He likened the situation to the rise of solar energy, where innovation has outpaced commoditization, making profits elusive despite enormous market potential.
In response to concerns that AI might become a highly commoditized product, Amodei countered by highlighting the uniqueness of different AI models. "AI models have different personalities. Some are better at coding, some at creative writing, and some excel at being engaging and entertaining. This creates differentiation," he said.
Amodei also pointed out that beyond the models themselves, the products built on top of them create additional differentiation, offering hope for profitability and competitive moats.
AI Arms Races and National Security Concerns
One of the most pressing concerns for Amodei is the AI arms race between nations, particularly between the U.S. and China. With AI models potentially having the power to shift global power dynamics, Amodei sees significant national security implications. "These systems are powerful enough to single-handedly shift the balance of power on the international stage," he warned. The question of whether democracies or autocracies will triumph in this AI-powered world weighs heavily on Amodei. "An AGI-enabled autocracy sounds like a really terrifying thing if you think it through."
One approach that Amodei supports is the U.S. government's move to restrict the export of advanced chips and semiconductor equipment to China. "This strategy gives us an advantage while also buying us time to address the safety risks posed by AI," he explained. The international coordination problem, however, remains daunting. "There's no mechanism to enforce AI cooperation globally. We can sign disarmament treaties, but how do we enforce them?" Amodei pondered, highlighting the challenge of balancing national security and AI safety on the world stage.
AI Safety and the SB 1047 Bill
Closer to home, the discussion around AI safety has intensified, particularly with the introduction of California's SB 1047 bill, which seeks to regulate AI. While Amodei initially had concerns about the bill being overly prescriptive, he acknowledged that amendments made the legislation more balanced. "We had some concerns that the bill was too heavy-handed, but they addressed many of them (about 60% of our issues), so we became more positive," he explained.
Amodei is in favor of regulation but emphasizes that it should be flexible and adaptable to the rapidly evolving AI landscape.
"We need to develop a system where companies are incentivized to create strong safety plans without being stifled by overly rigid requirements," he said. He also downplayed concerns that the bill would drive AI companies out of California, labeling such rhetoric as "just negotiating leverage."

Inequality and AI's Impact on the Labor Market

As AI continues to evolve, questions around its impact on inequality and the labor market remain central. Amodei sees both risks and opportunities here. "Right now, AI is leveling the playing field. For example, GitHub Copilot is helping less experienced programmers perform at a higher level, while top-tier programmers don't benefit as much," he observed. This compression of skill differentials could have a positive impact on inequality, but Amodei is cautious. "As the models get better, they could eventually start replacing many human tasks, which could exacerbate inequality if the benefits are not distributed broadly."

Despite the potential risks, Amodei believes humans will adapt to the changes AI brings. "Even if AI models are writing 90% of the code, humans will get really good at the other 10%. Comparative advantage will persist for longer than people think, even in a world where AI is doing much of the work," he predicted.

A World of Radical Abundance or New Risks?

In the long term, Amodei envisions a world of radical abundance where AI drives unprecedented innovation in fields like biology and medicine. "I think we're really underestimating what AI can do in biology. Diseases that have been with us for millennia could be cured in a fraction of the time it would have taken without AI," he said. His hope is that AI will help compress the advances of the 21st century into just a few years, leading to a world of greater health and productivity.

But even with these optimistic projections, Amodei is aware of the potential downsides.
"The fear is that we could create all this wealth, but it only benefits a select few, leaving many humans (and particularly those in developing countries) behind," he cautioned.

AI: Both Promise and Peril

Dario Amodei's vision for the future of AI is one of both promise and peril. Scaling laws may continue to drive extraordinary advances, leading to a world of radical abundance, but without careful regulation and international cooperation, AI could exacerbate inequality or even fuel dangerous global power shifts. As the CEO of Anthropic, Amodei is deeply involved in navigating these challenges, pushing for responsible scaling, safety, and a more equitable distribution of AI's benefits in our increasingly AI-powered world. | Decision Making/Prediction/Process Automation | Management/Business and Financial Operations | null | null | null | null | null | null
|
news | Scott | Quantum Computing: Between Hope and Hype | So, back in June the White House announced that UCLA would host a binational US/India workshop, for national security officials from both countries to learn about the current status of quantum computing and post-quantum cryptography. It fell to my friend and colleague Rafail Ostrovsky to organize the workshop, which ended up being held last […] | https://scottaaronson.blog/?p=8329 | 2024-09-22T21:55:35Z | So, back in June the White House announced that UCLA would host a binational US/India workshop, for national security officials from both countries to learn about the current status of quantum computing and post-quantum cryptography. It fell to my friend and colleague Rafail Ostrovsky to organize the workshop, which ended up being held last week. When Rafi invited me to give the opening talk, I knew he'd keep emailing until I said yes. So, on the 3-hour flight to LAX, I wrote the following talk in a spiral notebook, which I then delivered the next morning with no slides. I called it "Quantum Computing: Between Hope and Hype." I thought Shtetl-Optimized readers might be interested too, since it contains my reflections on a quarter-century in quantum computing, and prognostications on what I expect soon. Enjoy, and let me know what you think!

Quantum Computing: Between Hope and Hype
by Scott Aaronson
September 16, 2024

When Rafi invited me to open this event, it sounded like he wanted big-picture pontification more than technical results, which is just as well, since I'm getting old for the latter. Also, I'm just now getting back into quantum computing after a two-year leave at OpenAI to think about the theoretical foundations of AI safety. Luckily for me, that was a relaxing experience, since not much happened in AI these past two years.
[Pause for laughs] So then, did anything happen in quantum computing while I was away?

This, of course, has been an extraordinary time for both quantum computing and AI, and not only because the two fields were mentioned for the first time in an American presidential debate (along with, I think, the problem of immigrants eating pets). But it's extraordinary for quantum computing and for AI in very different ways. In AI, practice is wildly ahead of theory, and there's a race for scientific understanding to catch up to where we've gotten via the pure scaling of neural nets and the compute and data used to train them. In quantum computing, it's just the opposite: there's right now a race for practice to catch up to where theory has been since the mid-1990s.

I started in quantum computing around 1998, which is not quite as long as some people here, but which does cover most of the time since Shor's algorithm and the rest were discovered. So I can say: this past year or two is the first time I've felt like the race to build a scalable fault-tolerant quantum computer is actually underway. Like people are no longer merely giving talks about the race or warming up for the race, but running the race.

Within just the last few weeks, we saw the group at Google announce that they'd used the Kitaev surface code, with distance 7, to encode one logical qubit using 100 or so physical qubits, in superconducting architecture. They got a net gain: their logical qubit stays alive for maybe twice as long as the underlying physical qubits do. And crucially, they find that their logical coherence time increases as they pass to larger codes, with higher distance, on more physical qubits. With superconducting, there are still limits to how many physical qubits you can stuff onto a chip, and eventually you'll need communication of qubits between chips, which has yet to be demonstrated.
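The improvement with code distance that Google observed is what the textbook picture of quantum error correction predicts: below a threshold physical error rate, the logical error rate is suppressed exponentially in the code distance. A minimal sketch of that standard heuristic; the prefactor `A` and threshold `p_th` below are illustrative placeholders, not numbers measured in the Google experiment:

```python
# Textbook heuristic for the surface-code logical error rate:
#   p_L ~ A * (p / p_th) ** ((d + 1) / 2)
# A and p_th are assumed placeholder values, not measured numbers.
A = 0.1        # prefactor (assumed)
p_th = 1e-2    # threshold physical error rate (assumed)

def logical_error_rate(p: float, d: int) -> float:
    """Heuristic logical error rate at physical error rate p and code distance d."""
    return A * (p / p_th) ** ((d + 1) / 2)

p = 3e-3  # a physical error rate below the assumed threshold
for d in (3, 5, 7):
    print(f"d={d}: p_L ~ {logical_error_rate(p, d):.2e}")
```

Below threshold, each step up in distance multiplies the logical error rate by another factor of roughly p/p_th, so larger codes win; above threshold, larger codes would make things worse, which is why crossing the threshold is the key experimental signal.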
But if you could scale Google's current experiment even to 1500 physical qubits, you'd probably be below the threshold where you could use that as a building block for a future scalable fault-tolerant device.

Then, just last week, a collaboration between Microsoft and Quantinuum announced that, in the trapped-ion architecture, they applied pretty substantial circuits to logically-encoded qubits—again in a way that gets a net gain in fidelity over not doing error-correction, modulo a debate about whether they're relying too much on postselection. So, they made a GHZ state, which is basically like a Schrödinger cat, out of 12 logically encoded qubits. They also did a "quantum chemistry simulation," which had only two logical qubits, but which required three logical non-Clifford gates—which is the hard kind of gate when you're doing error-correction.

Because of these advances, as well as others—what QuEra is doing with neutral atoms, what PsiQuantum and Xanadu are doing with photonics, etc.—I'm now more optimistic than I've ever been that, if things continue at the current rate, either there are useful fault-tolerant QCs in the next decade, or else something surprising happens to stop that. Plausibly we'll get there not just with one hardware architecture, but with multiple ones, much like the Manhattan Project got a uranium bomb and a plutonium bomb around the same time, so the question will become which one is most economic.

If someone asks me why I'm now so optimistic, the core of the argument is 2-qubit gate fidelities. We've known for years that, at least on paper, quantum fault-tolerance becomes a net win (that is, you sustainably correct errors faster than you introduce new ones) once you have physical 2-qubit gates that are ~99.99% reliable. The problem has "merely" been how far we were from that. When I entered the field, in the late 1990s, it would've been like a Science or Nature paper to do a 2-qubit gate with 50% fidelity.
But then at some point the 50% became 90%, became 95%, became 99%, and within the past year, multiple groups have reported 99.9%. So, if you just plot the log of the infidelity as a function of year and stare at it—yeah, you'd feel pretty optimistic about the next decade too!

Or pessimistic, as the case may be! To any of you who are worried about post-quantum cryptography—by now I'm so used to delivering a message of, maybe, eventually, someone will need to start thinking about migrating from RSA and Diffie-Hellman and elliptic curve crypto to lattice-based crypto, or other systems that could plausibly withstand quantum attack. I think today that message needs to change. I think today the message needs to be: yes, unequivocally, worry about this now. Have a plan.

So, I think this moment is a good one for reflection. We're used to quantum computing having this air of unreality about it. Like sure, we go to conferences, we prove theorems about these complexity classes like BQP and QMA, the experimenters do little toy demos that don't scale. But if this will ever be practical at all, then for all we know, not for another 200 years. It feels really different to think of this as something plausibly imminent. So what I want to do for the rest of this talk is to step back and ask, what are the main reasons why people regarded this as not entirely real? And what can we say about those reasons in light of where we are today?

Reason #1

For the general public, maybe the overriding reason not to take QC seriously has just been that it sounded too good to be true. Like, great, you'll have this magic machine that's gonna exponentially speed up every problem in optimization and machine learning and finance by trying out every possible solution simultaneously, in different parallel universes.
Does it also dice peppers?

For this objection, I'd say that our response hasn't changed at all in 30 years, and it's simply, "No, that's not what it will do and not how it will work." We should acknowledge that laypeople and journalists and unfortunately even some investors and government officials have been misled by the people whose job it was to explain this stuff to them.

I think it's important to tell people that the only hope of getting a speedup from a QC is to exploit the way that QM works differently from classical probability theory; in particular, that it involves these numbers called amplitudes, which can be positive, negative, or even complex. With every quantum algorithm, what you're trying to do is choreograph a pattern of interference where for each wrong answer, the contributions to its amplitude cancel each other out, whereas the contributions to the amplitude of the right answer reinforce each other. The trouble is, it's only for a few practical problems that we know how to do that in a way that vastly outperforms the best known classical algorithms.

What are those problems? Here, for all the theoretical progress that's been made in these past decades, I'm going to give the same answer in 2024 that I would've given in 1998. Namely, there's the simulation of chemistry, materials, nuclear physics, or anything else where many-body quantum effects matter. This was Feynman's original application from 1981, but probably still the most important one commercially. It could plausibly help with batteries, drugs, solar cells, high-temperature superconductors, all kinds of other things, maybe even in the next few years.

And then there's breaking public-key cryptography, which is not commercially important, but is important for other reasons well-known to everyone here.

And then there's everything else.
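The interference mechanism described above, where amplitude contributions to wrong answers cancel while contributions to the right answer reinforce, already shows up in the smallest possible example: applying a Hadamard gate twice. A hedged two-amplitude sketch, purely for illustration:

```python
import math

# Represent a single-qubit state as the amplitude pair (amp_0, amp_1).
def hadamard(state):
    a0, a1 = state
    s = 1 / math.sqrt(2)
    return (s * (a0 + a1), s * (a0 - a1))

state = (1.0, 0.0)        # start in |0>
state = hadamard(state)   # equal superposition: amplitudes (~0.707, ~0.707)
state = hadamard(state)   # second Hadamard

# The two paths into |1> carry amplitudes +1/2 and -1/2 and cancel;
# the two paths into |0> carry +1/2 and +1/2 and reinforce.
print(state)              # ~(1.0, 0.0): back to |0> with certainty (up to rounding)
```

A classical probabilistic machine could never do this: probabilities are non-negative and cannot cancel, which is exactly why the speedups are confined to problems where such interference patterns can be choreographed.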
For problems in optimization, machine learning, finance, and so on, there's typically a Grover's speedup, but that of course is "only" a square root and not an exponential, which means that it will take much longer before it's relevant in practice. And one of the earliest things we learned in quantum computing theory is that there's no "black-box" way to beat the Grover speedup. By the way, that's also relevant to breaking cryptography other than the subset of cryptography that's based on abelian groups and can be broken by Shor's algorithm or the like. The centerpiece of my PhD thesis, twenty years ago, was the theorem that you can't get more than a Grover-type polynomial speedup for the black-box problem of finding collisions in cryptographic hash functions.

So then what remains? Well, there are all sorts of heuristic quantum algorithms for classical optimization and machine learning problems, such as QAOA (Quantum Approximate Optimization Algorithm) and quantum annealing, and we can hope that sometimes they'll beat the best classical heuristics for the same problems, but it will be trench warfare, not just magically speeding up everything. There are lots of quantum algorithms somehow inspired by the HHL (Harrow-Hassidim-Lloyd) algorithm for solving linear systems, and we can hope that some of those algorithms will get exponential speedups for end-to-end problems that matter, as opposed to problems of transforming one quantum state to another quantum state. We can of course hope that new quantum algorithms will be discovered. And most of all, we can look for entirely new problem domains, where people hadn't even considered using quantum computers before—new orchards in which to pick low-hanging fruit. Recently, Shih-Han Hung and I, along with others, have proposed using current QCs to generate cryptographically certified random numbers, which could be used in proof-of-stake cryptocurrencies like Ethereum.
I'm hopeful that people will find other protocol applications of QC like that one: "proof of quantum work." [Another major potential protocol application, which Dan Boneh brought up after my talk, is quantum one-shot signatures.]

Anyway, taken together, I don't think any of this is too good to be true. I think it's genuinely good and probably true!

Reason #2

A second reason people didn't take seriously that QC was actually going to happen was the general thesis of technological stagnation, at least in the physical world. You know, maybe in the 40s and 50s, humans built entirely new types of machines, but nowadays what do we do? We issue press releases. We make promises. We argue on social media.

Nowadays, of course, pessimism about technological progress seems hard to square with the revolution that's happening in AI, another field that spent decades being ridiculed for unfulfilled promises and that's now fulfilling the promises. I'd also speculate that, to the extent there is technological stagnation, most of it is simply that it's become really hard to build new infrastructure (high-speed rail, nuclear power plants, futuristic cities) for legal reasons and NIMBY reasons and environmental review reasons and Baumol's cost disease reasons. But none of that really applies to QC, just like it hasn't applied so far to AI.

Reason #3

A third reason people didn't take this seriously was the sense of "It's been 20 years already, where's my quantum computer?" QC is often compared to fusion power, another technology that's "eternally just over the horizon." (Except, I'm no expert, but there seems to be dramatic progress these days in fusion power too!)

My response to the people who make that complaint was always, like, how much do you know about the history of technology? It took more than a century for heavier-than-air flight to go from correct statements of the basic principle to reality.
Universal programmable classical computers surely seemed more fantastical from the standpoint of 1920 than quantum computers seem today, but then a few decades later they were built. Today, AI provides a particularly dramatic example where ideas were proposed a long time ago (neural nets, backpropagation); those ideas were then written off as failures, but no, we now know that the ideas were perfectly sound; it just took a few decades for the science of hardware to catch up to the ideas. That's why this objection never had much purchase with me, even before the dramatic advances in experimental quantum error-correction of the last year or two.

Reason #4

A fourth reason why people didn't take QC seriously is that, a century after the discovery of QM, some people still harbor doubts about quantum mechanics itself. Either they explicitly doubt it, like Leonid Levin, Roger Penrose, Gerard 't Hooft. Or they say things like, "complex Hilbert space in 2^n dimensions is a nice mathematical formalism, but mathematical formalism is not reality"—the kind of thing you say when you want to doubt, but not take full intellectual responsibility for your doubts. I think the only thing for us to say in response, as quantum computing researchers—and the thing I consistently have said—is: man, we welcome that confrontation! Let's test quantum mechanics in this new regime. And if, instead of building a QC, we have to settle for "merely" overthrowing quantum mechanics and opening up a new era in physics—well then, I guess we'll have to find some way to live with that.

Reason #5

My final reason why people didn't take QC seriously is the only technical one I'll discuss here. Namely, maybe quantum mechanics is fine, but fault-tolerant quantum computing is fundamentally "screened off" or "censored" by decoherence or noise, and maybe the theory of quantum fault-tolerance, which seemed to indicate the opposite, makes unjustified assumptions.
This has been the position of Gil Kalai, for example.

The challenge for that position has always been to articulate, what is true about the world instead? Can every realistic quantum system be simulated efficiently by a classical computer? If so, how? What is a model of correlated noise that kills QC without also killing scalable classical computing? That turns out to be a hard problem.

In any case, I think this position has been dealt a severe blow by the Random Circuit Sampling quantum supremacy experiments of the past five years. Scientifically, the most important thing we've learned from these experiments is that the fidelity seems to decay exponentially with the number of qubits, but "only" exponentially, as it would if the errors were independent from one gate to the next, precisely as the theory of quantum fault-tolerance assumes. So for anyone who believes this objection, I'd say that the ball is now firmly in their court.

So, if we accept that QC is firmly on the threshold of becoming real, what are the next steps? There are the obvious ones: push forward with building better hardware and using it to demonstrate logical qubits and fault-tolerant operations on them. Continue developing better error-correction methods. Continue looking for new quantum algorithms and new problems for those algorithms to solve.

But there's also a less obvious decision right now. Namely, do we put everything into fault-tolerant qubits, or do we continue trying to demonstrate quantum advantage in the NISQ (pre-fault-tolerant) era? There's a case to be made that fault-tolerance will ultimately be needed for scaling, and anything you do without fault-tolerance is some variety of non-scalable circus trick, so we might as well get over the hump now.

But I'd like to advocate putting at least some thought into how to demonstrate a quantum advantage in the near-term, and that could be via cryptographic protocols, like those that Kahanamoku-Meyer et al. have proposed.
It could be via pseudorandom peaked quantum circuits, a recent proposal by me and Yuxuan Zhang—if we can figure out an efficient way to generate the circuits. Or we could try to demonstrate what William Kretschmer, Harry Buhrman, and I have called "quantum information supremacy," where, instead of computational advantage, you try to do an experiment that directly shows the vastness of Hilbert space, via exponential advantages for quantum communication complexity, for example. I'm optimistic that that might be doable in the very near future, and have been working with Quantinuum to try to do it.

On the one hand, when I started in quantum computing 25 years ago, I reconciled myself to the prospect that I'm going to study what fundamental physics implies about the limits of computation, and maybe I'll never live to see any of it experimentally tested, and that's fine. On the other hand, once you tell me that there is a serious prospect of testing it soon, then I become kind of impatient. Some part of me says, let's do this! Let's try to achieve forthwith what I've always regarded as the #1 application of quantum computers, more important than codebreaking or even quantum simulation: namely, disproving the people who said that scalable quantum computing was impossible.

This entry was posted on Sunday, September 22nd, 2024 at 4:55 pm and is filed under Quantum, Speaking Truth to Parallelism. | Unknown | Unknown | null | null | null | null | null | null
|
news | Arthur Brown | LinkedIn scraped your data without telling you | The post LinkedIn scraped your data without telling you appeared first on Android Headlines. | https://www.androidheadlines.com/2024/09/linkedin-scraping-data-ai-models.html | 2024-09-19T14:58:52Z | Nowadays, there's no telling how many of the companies we trust are scraping our data under our noses to train AI models. We never find out until they're caught with their hands in the cookie jar. One of the most popular job search platforms and social media sites actually lets us know that it scrapes our data, but there's a caveat. LinkedIn scraped our data before updating its privacy policy. This could put it in hot water.

The fact that LinkedIn is scraping our data should come as no surprise for two reasons. Firstly, just about every company on Earth, and probably other planets in the solar system, has started scraping data to train AI models. Remember, AI is the future! Secondly, LinkedIn is owned by Microsoft, one of the most AI-crazy companies on the market.

This is a sour blow for people who are looking for work on the platform. LinkedIn has supported and helped human workers for years, so it's a bit ironic that it's using those human workers' data to train AI models.

LinkedIn scraped data before updating its privacy policy

A company's privacy policy is a way for it to be transparent about what's going on with your data. While many people don't really read this information, there are those who are always keeping an eye on them. It's a gesture of trust. So, if the company does anything not indicated in that policy, it can feel like a betrayal.

LinkedIn updated its policy to let you know that your data is being scraped, so there shouldn't be an issue. However, according to the report, it appears that LinkedIn might have started scraping the data before updating the policy.
If that's the case, then it would have technically violated its policy.

Recently, several users reported that LinkedIn was using its users' data to train its AI model, but they didn't see the update in the policy yet. 404Media reached out to the company for comment, and it said that it would update it shortly. So, LinkedIn was training its model on its users' data without updating the policy initially.

How to stop LinkedIn from using your data

You can stop LinkedIn from scooping up your data, but this is bittersweet news. If you're in the U.S., you can opt out of having LinkedIn use your data, but you don't have that option in some other regions. Unfortunately, if you're in the EU, for example, you don't have the option.

If you do have the option, you'll see it in the Data Privacy section of your settings. Go to the Data for Generative AI Improvement section. There, you'll see a toggle that will stop the flow of data from getting to LinkedIn. One thing to know is that, if you flip this switch, LinkedIn will still retain the information it's already collected on you. You'll need to use the desktop version, not the app.

Should there be consequences?

Yes! There should be some sort of fine or slap on the wrist for LinkedIn. We live in a world where more companies are scraping our data, and we don't like it. The staff at LinkedIn should have known that scraping our data without our knowledge would upset us. Also, how long was the company going to secretly scrape users' data before getting caught? If no one found out, would the company have updated the policy at all?

There should be some sort of repercussion for this; not to punish the company, but to set an example for other companies that do this. Whether this was intentional or if the LinkedIn staff just forgot, there were people who had their data scraped without their knowledge. | Unknown | Management/Business and Financial Operations/Education, Training, and Library | null | null | null | null | null | null
|
news | Chris Berg | Dutton is losing the debate over nuclear energy right when we need it for AI | The AI industry needs more compute, and more compute needs more energy. The post Dutton is losing the debate over nuclear energy right when we need it for AI appeared first on Crikey. | https://www.crikey.com.au/2024/10/02/peter-dutton-nuclear-energy-artificial-intelligence-data-centres/ | 2024-10-01T23:24:13Z | Peter Dutton is losing the debate over nuclear power. Even the pro-nuclear Financial Review agrees: it ran an editorial last week wondering where the Coalition's details were. And the Coalition's proposal for the government to own the nuclear industry has made it look more like an election boondoggle than visionary economic reform. It is starting to look like a big missed opportunity.

Because in 2024, the question facing Australian governments is not only how to transition from polluting energy sources to non-polluting sources. It is also how to set up an economic and regulatory framework to service what is likely to be massive growth in electricity demand over the next decade.

The electrification revolution is part of that demand, with, for instance, the growing adoption of electric vehicles. But the real shadow on the horizon is artificial intelligence. The entire global economy is embedding powerful, power-hungry AI systems into every platform and every device. To the best of our knowledge, the current generation of AI follows a simple scaling law: the more data and the more powerful the computers processing that data, the better the AI. We should be excited for AI. It is the first significant and positive productivity shock we've had in decades.
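The "simple scaling law" mentioned above is usually written as a power law, with loss falling as a power of compute. A purely illustrative sketch, with invented constants (not real measured values), showing how such a law is a straight line in log-log space and can be recovered by a least-squares fit:

```python
import math

# Hypothetical power law: loss L(C) = a * C**(-b).
# a and b are invented for illustration; they are not real measured values.
a, b = 10.0, 0.05
compute = [10.0 ** e for e in range(18, 27)]   # FLOPs from 1e18 to 1e26
loss = [a * c ** (-b) for c in compute]

# A power law is a straight line in log-log space, so an ordinary
# least-squares fit on (ln C, ln L) recovers the exponent and prefactor.
xs = [math.log(c) for c in compute]
ys = [math.log(v) for v in loss]
n = len(xs)
x_mean, y_mean = sum(xs) / n, sum(ys) / n
slope = sum((x - x_mean) * (y - y_mean) for x, y in zip(xs, ys)) / sum(
    (x - x_mean) ** 2 for x in xs
)
intercept = y_mean - slope * x_mean

print(f"fitted exponent b: {-slope:.3f}")
print(f"fitted prefactor a: {math.exp(intercept):.2f}")
```

The small exponent is the point: halving the loss under a law like this takes orders of magnitude more compute, which is exactly why the industry's appetite for data centres and electricity keeps growing.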
But the industry needs more compute, and more compute needs more energy.

That's why Microsoft is working to reopen Three Mile Island (yes, that Three Mile Island) and has committed to purchasing all the electricity from the revived reactor to supply its AI and data infrastructure needs. Oracle plans to use three small nuclear reactors to power a massive new data centre. Amazon Web Services is buying, and plans to significantly grow, a data centre next to a nuclear plant in Pennsylvania.

Then there's OpenAI. The New York Times reports that one of the big hurdles for OpenAI in opening US data centres is a lack of adequate electricity supply. The company is reportedly planning to build half a dozen data centres that would each consume as much electricity as the entire city of Miami. It is no coincidence that OpenAI chief Sam Altman has also invested in nuclear startups.

One estimate suggests that data centres could consume 9% of US electricity by 2030.

Dutton, to his credit, appears to understand this. His speech to the Committee for Economic Development of Australia (CEDA) last week noted that nuclear would help accommodate energy-intensive data centres and greater use of AI. But the Coalition's mistake has been to present nuclear (alongside a mixture of renewables) as the one big hairy audacious plan to solve our energy challenge. They've even selected the sites! Weird to do that before you've even figured out how to pay for the whole thing.

Nuclear is not a panacea. It is only appealing if it makes economic sense. Our productivity ambitions demand that energy is abundant, available and cheap. There has been fantastic progress in solar technology, for instance. But it makes no sense to eliminate nuclear as an option for the future.
When the Howard government banned nuclear power generation in 1998, it accidentally excluded us from competing in the global AI data centre gold rush 26 years later.

Legalising nuclear power in a way that makes it cost-effective is the sort of generational economic reform Australian politicians have been seeking for decades. I say "in a way that makes it cost-effective" because it is the regulatory superstructure laid on top of nuclear energy globally that accounts for many of the claims that nuclear is uneconomic relative to renewable energy sources.

A Dutton government would have to not only amend the two pieces of legislation that specifically exclude nuclear power plants from being approved, but also establish dedicated regulatory commissions and frameworks and licensing schemes to govern the new industry, and in a way that encouraged nuclear power to be developed, not blocked. And all of this would have to be pushed through a presumably sceptical Parliament. That would be a lot of work, and it would take time. But I've been hearing that nuclear power is at least 10 to 20 years away for the past two decades.

Allowing (not imposing) nuclear as an option in Australia's energy mix would be our first reckoning with the demands of the digital economy. | Unknown | Unknown | null | null | null | null | null | null
|
news | code-star-cli added to PyPI | CodeStar is an advanced AI-powered coding assistant designed to enhance developer productivity by providing intelligent code suggestions and natural language interactions. Built on Hugging Face's StarCoder 2 (15B) and StarChat 2 15B. | https://pypi.org/project/code-star-cli/ | 2024-09-29T06:45:40Z | CodeStar is an advanced coding assistant powered by StarCoder 2, a state-of-the-art Large Language Model for Code (Code LLM) trained on over 600 programming languages from a diverse set of permissively licensed data, including GitHub code, Arxiv, and Wikipedia. With a robust architecture featuring approximately 15 billion parameters and trained on over 4 trillion tokens, StarCoder 2 utilizes Grouped Query Attention and a context window of 16,384 tokens, making it specifically optimized for enhanced performance in coding tasks.

CodeStar excels in a variety of programming benchmarks, demonstrating superior capabilities compared to existing open Code LLMs and even matching or surpassing some closed models. Users can leverage CodeStar for code autocompletion, code generation, and natural language explanations of code snippets, although it is important to note that it is not an instruction model, and commands may not always yield the desired results.

CodeStar has shown remarkable proficiency in completing coding tasks, particularly on benchmarks like HumanEval, where it has achieved a state-of-the-art score for open models. It also supports multilingual programming, allowing it to perform well across various languages and datasets.

Beyond code generation, CodeStar is designed to function as a technical assistant, capable of addressing programming-related queries and providing insightful responses. This functionality is powered by a specialized prompt that enables the model to effectively assist users in their coding endeavors.

CodeStar is built on a foundation of responsible AI development, with a focus on safety and privacy.
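Given the roughly 15 billion parameters cited above, a rough back-of-envelope shows why serving such a model is demanding. This is a hedged sketch: the bytes-per-parameter figures are generic precision sizes, not anything specific to how StarCoder 2 is shipped, and real deployments also need memory for the KV cache and activations:

```python
# Back-of-envelope weight memory for a 15-billion-parameter model.
PARAMS = 15e9

for name, bytes_per_param in [("fp32", 4), ("fp16/bf16", 2), ("int8", 1), ("int4", 0.5)]:
    gib = PARAMS * bytes_per_param / 2**30
    print(f"{name:>9}: ~{gib:.0f} GiB of weights")
```

At fp16 the weights alone come to roughly 28 GiB, which is why models of this size are typically served from large-memory GPUs or quantized before running locally.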
The training data has undergone rigorous cleaning to remove Personally Identifiable Information (PII), ensuring a secure user experience. With its strong performance and versatility, CodeStar is poised to be an invaluable tool for developers, empowering them to enhance their coding efficiency and creativity.

Features
- AI-Powered Code Completions: Context-aware code suggestions to help you write code faster and with fewer errors.
- Natural Language Processing: Interact with the assistant using natural language queries for explanations, debugging help, and more.
- Multi-Language Support: Work seamlessly across various programming languages.
- Seamless Integration: Easily integrates into your existing development environment.
- Learning and Adaptation: Continuously improves suggestions based on user interactions.

Join us in revolutionizing the coding experience!

Usage Instructions
Assistance: To obtain help, please execute the following command: code-star --help
Command Generation: You can generate shell commands using natural language as follows: code-star ai 'bash list all processes that use more than 10% of memory'
Interactive Chat: Engage in a conversation with CodeStar using the following command: code-star chat
To export your chat history, use: code-star chat -e chat-history.json
To import a previous chat history, execute: code-star chat -h chat-history.json
If you wish to import chat history and subsequently export it after your chat session, utilize: code-star chat -h chat-history.json -e chat-history.json
Code Completions: Retrieve code completions from CodeStar with the following command: code-star completions 'fn read_file(path: PathBuf)'
Code Enhancements: Utilize CodeStar to apply best practices for improving code quality as shown below: code-star enhance app.py
Code Scanning: Conduct code scanning with CodeStar by executing: code-star scan hello.py

License: Licensed under MIT License. | Content Creation/Digital Assistance | Computer and Mathematical | null | null | null | null | null | null |
||
news | Daniel Miessler | UL NO. 450: Thoughts on o1-preview and the Path to AGI | 80% Chinese Cranes, Drones vs. Abrahams, a RAG kickstart, a Canary-based Security Maturity Model, and more... | https://danielmiessler.com/p/ul-450 | 2024-09-16T18:41:06Z | SECURITY | AI | PURPOSE UNSUPERVISED LEARNING is a newsletter about how to securely compete and thrive in a world full of AI. It's original analysis, mental models, frameworks, and tooling to help you build a meaningful career that survives what's coming for us. Hey there! Fabric now supports OpenAI's new model, o1-preview. Just update and use the new -r flag, which sends requests using User rather than System, and without a Temperature parameter. TRY IT An insane cookbook use case for o1-preview, where it's used to do data validation on synthetic data. MORE I'm back to kickboxing today! Hopefully I'll suck less this time. I'm expecting to improve rapidly once I get settled in, but man, that first session was rough. It's LinkedIn Season! Connect with me on LinkedIn, and I'll follow you back! CONNECT Last Week's Comments on Current AI Advances: If you're following the progress of AI, I highly recommend listening to last week's podcast. I did a whole bunch of coverage of the current state of things, my thoughts on o1-preview, the path to AGI, and a bunch of other stuff. LISTEN NOW The Art Quality Tier List: I think I finally figured out what Art is. This piece is a definition, discussion, rating system, and even a methodology for enjoying art. For a beginner, anyway. READ IT The US is evidently heavily reliant on Chinese cranes, particularly from Shanghai Zhenhua Heavy Industries (ZPMC). This report says ZPMC, a company owned by the PRC, dominates 80% of the US's ship-to-shore cranes, raising concerns about potential backdoors and remote access. MORE 80%? Jesus.
I thought it was going to be like 25%, or 50%. Like I can honestly imagine a war room where we have a kinetic conflict with China and they're reviewing all the different ways to disable our economy. Terrifying. I can only hope there are people looking at this. Fortinet has confirmed a data breach after a hacker, going by the name "Fortibitch," claimed to have stolen 440GB of files from their Microsoft SharePoint server. Fortinet refused to pay a ransom and has notified affected customers. MORE GitLab released critical updates to fix multiple vulnerabilities, with the most severe (CVE-2024-6678) allowing attackers to trigger pipelines as arbitrary users. This vulnerability, with a severity score of 9.9, can enable remote exploitation with minimal user interaction and low privileges. MORE The Lazarus Group (NK) has been targeting Python developers with malware disguised as coding tests for about a year now. These attacks involve maliciously duplicated open-source Python tools and "coding tests" that trick users into installing malware hidden with Base64 encoding, allowing remote execution. MORE Sponsor: Get the Most From Your Security Team's Email Alert Budget. Relying on built-in controls or traditional blockers leads to more noise than your incident response team can handle. Material Security takes a pragmatic approach to email security, stopping new flavors of phishing attacks before reaching the user's mailbox, while searching for similar messages in a campaign. Highest-value cases are surfaced with all the context and reach consolidated into a single view. Here's what security teams have said: The response time is now just 3-4 minutes instead of 45. We don't have to manually respond to the follow-on reports and all employees are already protected automatically by the initial report. Our whole workflow has changed.
Gusto Material helps automatically cluster similar messages and apply warning messages or other remediations without the delay and manual effort of our security team's review. Mars material.security Mastercard is buying Recorded Future from Insight Partners for $2.65 billion, making it one of the biggest cybersecurity deals this year. Insight Partners originally acquired Recorded Future in 2019 for $780 million, so they're seeing a nice return on investment. MORE One thing I see here is the motion from startup to platform. With Mastercard being the platform in this case, similar to Windows or Google or whatever. So you have good ideas and execution, and their natural home is within some sort of ecosystem. So startups are basically petri dishes for features that will live inside of platforms. The Security Canary Maturity Model is a framework designed to help organizations assess and improve their security posture by using canary tokens. The model outlines various maturity levels to gauge where you're at. MORE I love this concept of a detection maturity model. Like, here's the percentage of your most likely MITRE behaviors that you'd be able to see. Sponsor: Get the No B.S. Guide to building a strong cybersecurity program in 90 days! (No email required) Are you an IT leader without a big, dedicated security team? Have you had challenges implementing a robust cybersecurity program due to lack of resources and/or budget? Don't let this hold you back anymore! Download our 90-Day guide to get a month-by-month blueprint on how to build an effective, multi-layered cybersecurity strategy without enterprise-level resources. defendify.com/guide/get-your-cybersecurity-program-started Australia is set to criminalize doxxing with penalties up to seven years in jail, as part of new legislation aimed at modernizing the Privacy Act. The legislation also proposes harsher penalties for doxxing based on race, religion, or other personal attributes.
MORE This piece discusses how AI-powered autonomous weapons systems are changing warfare. The recent withdrawal of U.S.-provided M1A1 Abrams tanks by Ukraine, after being targeted by Russian kamikaze drones, highlights the shift from traditional manned mechanized warfare to AI-driven combat. Friendly reminder that you should read Kill Decision, by Daniel Suarez, which predicted so much of this. MORE | KILL DECISION BY DANIEL SUAREZ Russia's naval activity around undersea cables is raising alarms among US officials, with concerns that the Kremlin might be planning to sabotage underwater infrastructure through a secretive military unit known as GUGI. This unit reportedly operates submarines, surface vessels, and naval drones, and has been spotted near critical deep-sea cables that carry over 95% of international data. MORE The U.S. is drafting a "New York Joint Statement" to bolster the security of global submarine communications cables, with a focus on excluding Chinese firms from the supply chain. This move mirrors past efforts to remove Chinese companies like Huawei from 5G infrastructure, driven by fears that the Chinese government could compel these firms to disrupt cable operations during critical times. MORE We need a comprehensive critical infrastructure dependency analysis, which goes along with wargaming. Actually, now that I think about it, I'm quite confident this is already happening. I just hope it's being done with very smart red teamers on the China side flipping switches on our undersea cables, port/crane infra, etc. The US House has voted to block the purchase of new drones from DJI, a major Chinese manufacturer, citing national security concerns. So much coverage of counter-China stuff lately. Seems like leadership is getting the message, which is great.
MORE The State Department has declared that Russia's state-owned RT news agency has become a key player in the Kremlin's military intelligence operations, including involvement in covert activities aimed at undermining American elections and democracies. I remember thinking this was happening with RT back in like 2017 or something, so, similar to China, I'm surprised it's just now getting press. MORE Serhii "Flash" Beskrestnov is a civilian radio enthusiast who's become a key figure in Ukraine's drone defense strategy against Russia. Operating from a mobile intelligence center in his VW van, Flash monitors Russian radio transmissions and shares his findings with over 127,000 followers, including soldiers and government officials, on social media. MORE A new paper had humans and AI create novel research ideas and then had human experts rate the ideas. And they actually preferred the AI ideas! MORE This is the way to measure the abilities of AI: not with standalone testing. It's the same with autonomous vehicle safety. It's not about how you think they do independently. It's about comparing ACCEPTED METRICS between humans and the AI, as judged by humans who don't know who made which. OpenAI released their new o1-preview model, which is focused on reasoning. The biggest difference between it and previous models is its use of Chain of Thought (CoT) reasoning, and the fact that it actually spends time (and tokens) thinking before returning results. MORE | MY THOUGHTS ON IT SO FAR Klarna's CEO, Sebastian Siemiatkowski, is suggesting that AI could replace enterprise software giants like Salesforce and Workday. He claims that conversational AI, like OpenAI's upcoming Strawberry reasoning model, can handle natural-language commands to build custom apps that replicate traditional enterprise functions, especially those managing corporate data. Um, yeah. It's all going to be SPQA. MORE AI-powered SAR satellites are now capable of detecting aircraft from space due to new radar tech.
This allows for real-time monitoring of air traffic, which could have significant implications for both civilian and military applications. MORE CardiaTec, a Cambridge University spinout, is leveraging AI to tackle cardiovascular diseases (CVD), the leading cause of death worldwide. They're partnering with 65 hospitals in the UK and US to build a massive human heart tissue-multi-omics dataset to identify new drug candidates. Super exciting because AI needs data to form its model of the world. All the intelligence in the world doesn't matter if you don't have a representation of how things work. MORE Salesforce just launched Agentforce, a suite of AI-powered agents designed to enhance human workers across various business functions, marking what they call the "third wave" of AI. MORE Waymo's latest data shows that human drivers are responsible for most serious collisions involving its driverless cars, with 16 out of 23 severe crashes being rear-endings by human-driven vehicles. Over 22 million miles, Waymo's vehicles have been involved in fewer than one injury-causing crash per million miles, significantly outperforming typical human drivers in San Francisco and Phoenix. MORE Tesla's Cybertruck is spiking in the electric pickup segment, with a 61% sales surge in July, outselling rivals like the Rivian R1T and Ford F-150 Lightning. So strange because they were getting slammed there for a while. I'm seeing a lot more in the Bay Area, too. MORE The USPS has rolled out its new Next Generation Delivery Vehicles, and while they might not win any beauty contests, they're getting rave reviews from postal workers for their modern safety features and comfort, including air conditioning. MORE Dmitry Grinberg has managed to run Linux and Ultrix on a business card, turning it into a tiny computer. The project involves using a microcontroller with just 8KB of RAM and 32KB of flash storage.
MORE There's a new study out showing that DebunkBot, an AI chatbot, can effectively persuade users to abandon conspiracy theories. The bot made significant progress in changing people's beliefs, challenging the notion that facts and logic can't combat conspiracies. What can convince you something is true can also do the opposite. This is why I'm optimistic about having AI on us all the time. Yes, it can be an Orwellian nightmare, or it can be a defender, protector, tutor, coach, etc. That's up to us. MORE A community college had to cancel its CS career fair because no companies reached out to participate. Super sad, and super expected. If you have people coming out of college with a Master's in CS and they can't find jobs, what hope do junior college prospects have? This is why we need Human 3.0; the future is connecting directly to individuals, not relying on a credential or institution. MORE Google has officially killed off cache links that allowed users to view older versions of web pages. MORE United Airlines is partnering with SpaceX to bring free Starlink Wi-Fi to all its planes, starting with tests in early 2025 and full passenger flights later that year. MORE Ukraine just launched its biggest drone attack on Moscow yet, hitting the region with 144 drones. The strike resulted in one casualty, set several homes on fire, and led to the temporary shutdown of Moscow's four airports. Someone explain how Ukraine can possibly be winning this. Completely insane to me, in the best possible way. MORE Sweden is increasing how much it's paying migrants to go home. It's now up to $34,000. MORE NASA's Advanced Composite Solar Sail System (ACS3) has successfully deployed its ultra-thin solar sail in low Earth orbit, making it visible in the night sky from various locations worldwide. The spacecraft's reflective surface can appear as bright as Sirius, and NASA's mobile app now helps users spot it using augmented reality. Can't wait to see this!
MORE C/2023 A3, also known as Tsuchinshan-ATLAS, is being hailed as "the comet of the century" and will be visible in September and October 2024. This comet is expected to be exceptionally bright, with its peak visibility on October 2, when it will be positioned between Mercury and Venus but closer to Earth. For the best viewing experience, look towards the horizon just before sunrise between 5 am and 7 am starting September 27, as it won't return for tens of thousands of years. MORE The US is closing a trade loophole that ecommerce giants Temu and Shein have been exploiting. This loophole allows them to ship goods directly to American consumers without paying tariffs, which has given them a competitive edge over domestic retailers. MORE There's a leaked PDF that details Mr. Beast's unique company culture and strategies for creating viral YouTube content. MORE | ONE PAGE SUMMARY This person says sunlight cured their migraines. It's not a study, but I figured most people have tried everything so why not something else. MORE Lara Hogan's piece on being a thermostat, not a thermometer, dives into how we can influence the mood in our work environments rather than just reacting to it. MORE Content-driven development is a strategy for making progress on side projects by focusing on creating small, shareable pieces of work. MORE In 1913, Vienna was quite a place to hang out, with Adolf Hitler, Leon Trotsky, Josip Tito, Sigmund Freud, and Joseph Stalin all residing in the city at the same time. MORE MerkleMap CLI This command-line tool lets you search and enumerate subdomains using the Merklemap API, and even tail live subdomain discoveries in real-time. MORE A 71 TiB ZFS NAS built with twenty-four 4 TB drives has lasted over a decade without a single drive failure, thanks to a strategy of keeping the server off when not in use. MORE RAMBO Attack Dr.
Mordechai Guri has unveiled a new side-channel attack called RAMBO, which uses radio signals from a device's RAM to exfiltrate data from air-gapped networks. Let me guess: University of Tel Aviv? Israelis are the side channel GOATs. MORE 6 Techniques I Use to Create a Great User Experience for Shell Scripts This post dives into creating user-friendly shell scripts with techniques like comprehensive error handling, colorful output, and detailed progress reporting. Soooo good. MORE Soundiiz Created by two friends in France, Soundiiz is a tool that lets you transfer playlists between Apple Music, Spotify, YouTube Music, and a host of other streaming services. MORE Nothing This is a timer that celebrates the art of doing absolutely nothing. It's not about staring at your screen but about stepping back from the chaos and embracing stillness. MORE RAG Pipeline Quickstart with Pinecone This guide walks you through setting up a pipeline that pulls data from an Amazon S3 bucket, creates vector embeddings using OpenAI's embedding model, and stores them in a Pinecone search index. MORE Semantic Image Search CLI (sisi) is a new tool that lets you perform semantic image searches locally without relying on third-party APIs. MORE I love it when experts completely disagree about a really important thing. It forces people like me to do tons of heavy reading so I can approach things from first principles. You know what trips me out? I follow several of the best China experts out there, and have read several books about their economy, but week to week it's impossible to know if they're about to crash or about to take over the world. The opinions vary that widely. (@DanielMiessler) 4:38 PM Sep 16, 2024 RECOMMENDATION OF THE WEEK: Actively guard against age-related lock-in (it starts around 30). Listen to new music. Read new books with new ideas. Talk to new people. Go to strange restaurants. Try new foods. Don't let your experiences reduce into a tighter and tighter death-spiral.
Variation keeps your mind young. Choosing not to read great books has the same effect as not being allowed to. | Content Synthesis/Prediction | Computer and Mathematical/Business and Financial Operations | null | null | null | null | null | null |
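The shell-script UX item in the newsletter above mentions comprehensive error handling, colorful output, and detailed progress reporting. As a rough illustration (my own sketch, not code from the linked post), those three techniques might look like this in bash:

```shell
#!/usr/bin/env bash
set -euo pipefail  # fail fast on errors, unset variables, and broken pipelines

# Colorful output, but only when stdout is a terminal, so piped or
# redirected output stays clean.
if [ -t 1 ]; then
  red=$(printf '\033[31m'); green=$(printf '\033[32m'); reset=$(printf '\033[0m')
else
  red=""; green=""; reset=""
fi

# Error handling: a single helper that reports to stderr and exits nonzero.
die() { printf '%serror:%s %s\n' "$red" "$reset" "$1" >&2; exit 1; }
ok()  { printf '%sok:%s %s\n' "$green" "$reset" "$1"; }

# Detailed progress reporting: step N of M with a label.
progress() { printf '[%d/%d] %s\n' "$1" "$2" "$3"; }

total=3
i=0
for step in validate download install; do
  i=$((i + 1))
  progress "$i" "$total" "$step"
done
ok "all steps finished"
```

The tty check (`[ -t 1 ]`) is the detail that keeps color escape codes out of logs and pipelines.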
|
news | Nina Raffio | Q&A: Can a centuries-old technology like hydropower meet AI's rising energy demands? | Surging demand for the data and processing power of artificial intelligence is putting a hidden strain on U.S. electrical grids. | https://techxplore.com/news/2024-09-qa-centuries-technology-hydropower-ai.html | 2024-09-04T21:49:04Z | Surging demand for the data and processing power of artificial intelligence is putting a hidden strain on U.S. electrical grids. Generative AI systems like ChatGPT can each consume as much electricity in a single day as 180,000 typical U.S. households. The energy required to train these large language AI models is even more staggering: Training GPT-4 required over 50 gigawatt-hours, about 0.02% of California's annual electricity output and 50 times more than what was used for GPT-3. As AI adoption grows, California's largest utility, PG&E, forecasts demand could double by 2040. Shon Hiatt, an associate professor at the USC Marshall School of Business and director of the Business of Energy Transition initiative, sees hydropower as a promising solution. This centuries-old, clean and renewable energy source is underutilized in the United States, says Hiatt, but could play a crucial role in powering the AI revolution and could help ease the pressure on national electrical grids. During Green Week, USC News spoke with Hiatt to explore how hydropower could help meet AI's rising energy needs and support a more sustainable future. What challenges does the AI revolution present in terms of energy consumption? U.S. electricity demand is projected to surge over the next five years, with growth rates doubling from last year's estimates. This sudden increase is driven by three main factors: the rise of AI data centers, federally subsidized manufacturing plants and widespread electric vehicle adoption. Because data centers require constant, reliable power, renewable sources like wind and solar can't supply it without massive battery backup.
As a result, utilities will need to rely more on natural gas, coal and nuclear plants to meet the increasing demand. Looking ahead, electricity demand for data centers is projected to increase by 13%-15% annually through 2030. There is not enough planned electricity generation development to accommodate projected AI data center growth. This is driving Big Tech companies into the energy sector, as seen in Amazon's recent $650 million deal to acquire a Pennsylvania data center powered by an on-site 2.5 gigawatt nuclear plant. As the demand for electricity continues to grow, it's clear that the energy landscape is undergoing a significant transformation. How can hydropower help address these challenges? A relatively quick solution is repowering existing hydropower plants and putting turbines on existing reservoirs. Hydropower can provide baseload energy, unlike wind and solar, which are intermittent due to clouds, weather, etc. We are not building any new reservoirs (dams) in this country. That does not mean we cannot increase hydropower from existing reservoirs, however. The U.S. Department of Energy estimates that up to 10 gigawatts of capacity can be created by upgrading existing powered facilities. This is something that can be done within months with capital. Moreover, less than 3% of the more than 90,000 reservoirs in the United States produce power. Installing turbines and generators on these reservoirs could provide an additional 12 gigawatts of power. Putting turbines on existing reservoirs can also be done in a timely manner; in some states, a matter of months. What are the advantages of investing in hydropower over other renewable energy sources to support AI's growing demand? Every energy source has a tradeoff. Solar scales linearly and thus has the largest land footprint of existing power sources. Wind can affect birds and sea life.
Nuclear and combined cycle natural gas have the smallest footprint for energy output. Run-of-the-river hydropower has much lower environmental impact than reservoir hydropower as it requires no reservoirs. Run-of-the-river diverts some water from a watershed to power a turbine downstream while the rest of the water flows down the mountain. The U.S. Department of Energy estimates that the U.S. has 65 gigawatts of unexploited hydropower energy that can come from ecologically friendly run-of-the-river facilities. However, development of run-of-the-river facilities can take years due to government licensing and permitting barriers. Frankly, it is likely that the demand for electricity for AI data centers over the next five years will be met by new combined cycle natural gas facilities. They are quick to put up, require a small footprint, and the U.S. currently has a large abundance of cheap gas. Small modular nuclear reactors are another potential solution. However, they are not likely to come online until 2030 at the earliest. Citation: Q&A: Can a centuries-old technology like hydropower meet AI's rising energy demands? (2024, September 4) retrieved 4 September 2024 from https://techxplore.com/news/2024-09-qa-centuries-technology-hydropower-ai.html This document is subject to copyright. Apart from any fair dealing for the purpose of private study or research, no part may be reproduced without the written permission. The content is provided for information purposes only. | Unknown | Business and Financial Operations/Others | null | null | null | null | null | null |
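The headline figures in the hydropower Q&A above can be sanity-checked with back-of-the-envelope arithmetic. This is my own sketch; the ~29 kWh/day typical-US-household figure is an assumption, not from the article:

```python
# Back-of-the-envelope check of the energy figures quoted above.
# Assumption (not from the article): a typical US household uses ~29 kWh/day.

gpt4_training_gwh = 50.0   # quoted: GPT-4 training took over 50 GWh
ca_share = 0.0002          # quoted: about 0.02% of California's annual output

# Implied California annual output: 50 GWh / 0.0002 = 250,000 GWh = 250 TWh
ca_annual_twh = gpt4_training_gwh / ca_share / 1000
print(f"Implied CA annual output: {ca_annual_twh:.0f} TWh")

# Quoted: 50x more than GPT-3, implying roughly 1 GWh for GPT-3's training.
gpt3_training_gwh = gpt4_training_gwh / 50
print(f"Implied GPT-3 training energy: {gpt3_training_gwh:.0f} GWh")

# Quoted: daily consumption comparable to 180,000 typical households.
household_kwh_per_day = 29.0  # assumption
daily_gwh = 180_000 * household_kwh_per_day / 1e6
print(f"Implied daily consumption: {daily_gwh:.2f} GWh")

# DOE hydropower potential quoted in the Q&A, in gigawatts:
upgrade_gw, new_turbine_gw, run_of_river_gw = 10, 12, 65
total_gw = upgrade_gw + new_turbine_gw + run_of_river_gw
print(f"Total untapped hydropower potential: {total_gw} GW")
```

The implied ~250 TWh is in the right range for California's annual electricity use, and the three DOE figures sum to 87 GW of untapped hydropower potential.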
|
news | Bloomberg News | BlackRock, Microsoft to Raise $30 Billion for AI Investments | BlackRock Inc. and Microsoft Corp. are teaming up to invest in data centers and other infrastructure supporting artificial intelligence. | https://financialpost.com/pmn/business-pmn/blackrock-microsoft-to-raise-30-billion-for-ai-investments | 2024-09-17T21:10:49Z | (Bloomberg) BlackRock Inc. and Microsoft Corp. are teaming up to invest in data centers and other infrastructure supporting artificial intelligence. The strategy, dubbed the Global AI Infrastructure Investment Partnership, will aim to attract $30 billion of private equity capital, leveraging that to as much as $100 billion to invest, the companies said in a statement Tuesday. The infrastructure investments, including energy projects, will be mostly in the US, with a portion of the funds being deployed in US partner countries, according to the statement. Mobilizing private capital to build AI infrastructure like data centers and power will unlock a multitrillion-dollar long-term investment opportunity, BlackRock Chief Executive Officer Larry Fink said in the statement. The firms are also teaming up with Global Infrastructure Partners and Abu Dhabi's MGX, which was created specifically this year to invest in AI. Nvidia Corp. will support the coalition with its expertise in AI data centers and factories. The chipmaker has poured money into creating software, networking and other pieces of technology that it says are essential to quickly putting together complete AI systems. BlackRock, the world's biggest asset manager, announced in January that it would buy GIP for about $12.5 billion, and said last week that it expects to complete the acquisition on Oct. 1. Microsoft has invested $13 billion in AI research lab OpenAI and is overhauling its entire product line around AI features.
The software firm is dramatically expanding its own spending on data centers and computing infrastructure to deliver these services and has said its ability to serve AI customers is being constrained by not having enough chips and data center capacity. Energy companies across the US are racing to meet a surge in demand from power-hungry AI data centers, with electricity usage by the facilities poised to surge as much as 10 times current levels by 2030, according to Bloomberg Intelligence. To meet that demand, energy companies are delaying the retirement of coal and gas plants, planning the construction of new gas plants and building out clean energy like solar and wind farms. The competition for electricity has even led to increases in how long it takes to connect new data centers to the power grid, with the time period in Virginia's Data Center Alley stretching to as much as seven years. Microsoft has also been talking with OpenAI co-founder and CEO Sam Altman, who's developing his own plans for groups of investors and tech companies to collaborate on ways to dramatically expand computing infrastructure for AI products. The Financial Times reported on the partnership earlier. With assistance from Robin Ajello, Josh Saul and Ian King. (Updates with additional partners in fifth paragraph and context throughout.) | Unknown | Management/Business and Financial Operations | null | null | null | null | null | null |
|
news | Dan Robinson | Microsoft, BlackRock form fund to sink up to $100B into AI infrastructure | Tech is going to need datacenters and power sources, and a lot of 'em Microsoft is joining with BlackRock and other private equity investors in a new AI fund that aims to eventually raise $100 billion for datacenters and their supporting power infrastructure.… | https://www.theregister.com/2024/09/18/microsoft_and_blackrock_form_fund/ | 2024-09-18T13:29:43Z | Microsoft is joining with BlackRock and other private equity investors in a new AI fund that aims to eventually raise $100 billion for datacenters and their supporting power infrastructure. The Redmond-based megacorp today confirmed the Global AI Infrastructure Investment Partnership (GAIIP) along with BlackRock, Global Infrastructure Partners (GIP) and MGX, which is based in the United Arab Emirates (UAE) and chaired by a member of Abu Dhabi's royal family. The fund intends to make investments in new bit barns and expand existing sites to meet customers' ever-growing demand for compute, as well as to sink money into energy infrastructure to provide fresh sources of power for the facilities. The agreement will initially look to raise $30 billion of private equity capital from investors, and the alliance ultimately hopes to take that figure to $100 billion in total, including debt financing. Investments are to be chiefly in the US, Microsoft says, and the remainder will be in "US partner countries," an annoyingly vague term that could mean almost anywhere in the world although we would bet on some going to the UAE, given MGX's involvement. According to the participants, the idea is to build datacenters using "an open architecture and broad ecosystem," with GPU giant Nvidia also supporting the project. This could mean that any newly created infrastructure will be built around Nvidia's specialized DGX AI servers, or that the company is simply ready to supply GPU accelerators and software for the project. Nvidia CEO Jensen
Huang trilled in a canned statement: "Nvidia will use its expertise as a full stack computing platform to support GAIIP and its portfolio companies on the design and integration of AI factories to propel industry innovation." It isn't clear if any of the investment will be going towards Microsoft's own existing network of datacenters, or if the resulting infrastructure will be separate. We asked the company for clarification, and will update if we get an answer. Microsoft has already ploughed cash into AI and the datacenter construction to support it, with reports last year that it was earmarking "many billions of dollars" for expansion. Earlier this year, The Reg reported that Microsoft was planning to triple its growth in additional DC capacity during the first half of its fiscal year 2025, which started in July. The Windows biz is also understood to be in talks with AI developer OpenAI to construct a massive supercomputer codenamed Stargate that would feature millions of AI accelerators at a cost of up to $100 billion. In fact, the company has been building so much additional infrastructure that it has increased its own carbon dioxide emissions by nearly 30 percent, largely due to indirect emissions (Scope 3) from the construction projects. The other part of this investment agreement concerns power, or meeting the unquenchable thirst for extra energy infrastructure to support the expanding fleet of AI bit barns.
However, details here remain vague. Concerns continue to be raised about the amount of energy consumed by AI, particularly in training large models that require a hefty amount of powerful compute infrastructure. One researcher warned last year that AI could soon consume as much electricity as a country such as Ireland, while another report this year forecast that AI might cause datacenters to account for 20 to 25 percent of the US power grid by 2030. All of this led the CEO of global datacenter operator DigitalBridge to warn in May that power has become the constraining factor for continued growth. This looming shortfall has forced datacenter operators to pursue a number of strategies to ensure they can access enough energy to power them, with AWS taking over a facility built next to a nuclear power plant in Pennsylvania. Microsoft itself is interested in developing small modular reactor (SMR) tech to provide power for its server farms, even hiring a director of nuclear technologies to oversee the project at the start of this year. However, most of the efforts disclosed by operators involve investing in companies planning to produce renewable energy to feed into the grid, such as the recent deal between Meta and Sage Geosystems for geothermal energy. Others include wind and solar projects. Microsoft chief Satya Nadella said his company is committed to ensuring AI helps advance innovation and drives growth across the economy, and no doubt please shareholders in the process. "The Global AI Infrastructure Investment Partnership will help us deliver on this vision, as we bring together financial and industry leaders to build the infrastructure of the future and power it in a sustainable way." ® | Unknown | Management/Business and Financial Operations | null | null | null | null | null | null |
|
news | National Research Council of Science and Technology | Team proposes AI-powered approach to establishing a 'carbon-neutral energy city' | A joint research team has developed key technologies to realize "Urban Electrification" using artificial intelligence (AI). Their findings have been published in the journal Sustainable Cities and Society. The team includes researchers from the Renewable Energy System Laboratory and the Energy ICT Research Department at the Korea Institute of Energy Research (KIER) | https://techxplore.com/news/2024-09-team-ai-powered-approach-carbon.html | 2024-09-20T17:38:03Z | A joint research team has developed key technologies to realize "Urban Electrification" using artificial intelligence (AI). Their findings have been published in the journal Sustainable Cities and Society. The team includes researchers from the Renewable Energy System Laboratory and the Energy ICT Research Department at the Korea Institute of Energy Research (KIER). Urban electrification aims to reduce the use of fossil fuels and introduce renewable energy sources, such as building-integrated solar technology, to transform urban energy systems. While this concept is relatively unfamiliar in the Republic of Korea, it is being promoted as a key strategy in the U.S. and Europe for achieving carbon neutrality and creating sustainable urban environments. In traditional urban models, energy supply can be easily adjusted using fossil fuels to meet electricity demand. However, in electrified cities, the high dependence on renewable energy leads to greater variability in energy supply due to weather changes. This causes mismatches in electricity demand across buildings and makes the stable operation of the power grid more challenging. In particular, Low-Probability High-Impact Events (LPHI), such as sudden cold snaps or extreme heat waves, can cause a sharp increase in energy demand while limiting energy production.
These events pose a significant threat to the stability of the urban power grid, potentially leading to large-scale blackouts. The research team developed an energy management algorithm based on AI analysis to address power grid stability issues and implemented it into a system. The demonstration of the developed system showed an 18% reduction in electricity costs compared to conventional methods. The research team first used AI to analyze energy consumption patterns by building type and renewable energy production patterns. They also unraveled how complex variables, such as weather, human behavior patterns, and the scale and operational status of renewable energy facilities, affect the power grid. Notably, they discovered that Low-Probability High-Impact Events, which occur on average only 1.7 days per year (around 0.5% of the time), have a decisive impact on the overall stability of the power grid and its operational costs. These findings were developed into an algorithm and a system. The algorithm optimizes energy sharing between buildings and effectively manages peak demand and peak energy production. In addition to maintaining daily energy balance, the system is designed to respond to Low-Probability High-Impact Events, ensuring the stability of the power grid even in extreme situations. When the developed system was applied to a community-scale real-world environment replicating urban electrification, it achieved an energy self-sufficiency rate of 38% and a self-consumption rate of 58%. This is a significant improvement compared to the 20% self-sufficiency and 30% self-consumption rate of buildings without the system. This application also resulted in an 18% reduction in electricity costs and greatly improved the stability of the power grid. Notably, the annual energy consumption in the demonstration was 107 megawatt-hours (MWh), seven times larger than that of simulation-based studies conducted by leading international institutions.
This significantly enhances the potential for applying the system in real urban environments. Dr. Gwangwoo Han, the lead author of the paper and a researcher at the Energy ICT Research Department, stated, "The results of this study demonstrate that AI can enhance the efficiency of urban electrification and address power grid stability issues, while also highlighting the importance of managing Low-Probability High-Impact Events." He further predicted that "by applying this system to various urban environments in the future, we can improve energy efficiency and enhance grid stability, ultimately making a significant contribution to achieving carbon neutrality." More information: Gwangwoo Han et al, Analysis of grid flexibility in 100% electrified urban energy community: A year-long empirical study, Sustainable Cities and Society (2024). DOI: 10.1016/j.scs.2024.105648. Citation: Team proposes AI-powered approach to establishing a 'carbon-neutral energy city' (2024, September 20), retrieved 20 September 2024 from https://techxplore.com/news/2024-09-team-ai-powered-approach-carbon.html | Decision Making/Prediction | Life, Physical, and Social Science | null | null | null | null | null | null
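The self-sufficiency and self-consumption rates cited in the study follow standard definitions: the share of consumption covered by on-site generation, and the share of on-site generation used locally rather than exported. A minimal sketch of those two metrics; the function names and the hourly kWh series below are illustrative assumptions, not the KIER dataset:

```python
# Self-sufficiency: fraction of consumption covered by on-site generation.
# Self-consumption: fraction of on-site generation used locally (not exported).
# The kWh series are made up for illustration; they are not from the study.

def onsite_use(consumption, generation):
    # Energy usable locally in each interval is capped by both load and supply.
    return sum(min(c, g) for c, g in zip(consumption, generation))

def self_sufficiency(consumption, generation):
    return onsite_use(consumption, generation) / sum(consumption)

def self_consumption(consumption, generation):
    return onsite_use(consumption, generation) / sum(generation)

load = [30.0, 40.0, 50.0, 40.0]  # kWh per interval (illustrative)
pv   = [10.0, 60.0, 55.0, 5.0]   # kWh per interval (illustrative)

print(round(self_sufficiency(load, pv), 3))  # 0.656
print(round(self_consumption(load, pv), 3))  # 0.808
```

With this toy data, 105 of 160 kWh of load is met on-site (66% self-sufficiency) while 105 of 130 kWh generated is consumed locally (81% self-consumption), showing how the two rates can differ for the same site.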
|
news | Kylie Robison | OpenAI’s new model is better at reasoning and, occasionally, deceiving | Researchers found that o1 had a unique capacity to “scheme” or “fake alignment.” | https://www.theverge.com/2024/9/17/24243884/openai-o1-model-research-safety-alignment | 2024-09-17T20:10:43Z | OpenAI's new model is better at reasoning and, occasionally, deceiving / Researchers found that o1 had a unique capacity to "scheme" or "fake alignment." By Kylie Robison, a senior AI reporter working with The Verge's policy and tech teams. She previously worked at Fortune Magazine and Business Insider. Sep 17, 2024, 8:10 PM UTC. In the weeks leading up to the release of OpenAI's newest reasoning model, o1, independent AI safety research firm Apollo found a notable issue. Apollo realized the model produced incorrect outputs in a new way. Or, to put things more colloquially, it lied. Sometimes the deceptions seemed innocuous. In one example, OpenAI researchers asked o1-preview to provide a brownie recipe with online references. The model's chain of thought (a feature that's supposed to mimic how humans break down complex ideas internally) acknowledged that it couldn't access URLs, making the request impossible. Rather than inform the user of this weakness, o1-preview pushed ahead, generating plausible but fake links and descriptions of them. While AI models have been able to lie in the past, and chatbots frequently output false information, o1 had a unique capacity to "scheme" or "fake alignment." That meant it could pretend it's following the rules to complete a given task, but it isn't actually. To the model, the rules could be too much of a burden, and it seems to have the ability to disregard them if it means it can more easily complete a task. Apollo CEO Marius Hobbhahn tells The Verge it's the first time he encountered this behavior in an OpenAI model.
Hobbhahn says the difference is due to this model's ability to reason through the chain of thought process and the way it's paired with reinforcement learning, which teaches the system through rewards and penalties. During testing, Apollo discovered that the AI simulated alignment with its developers' expectations and manipulated tasks to appear compliant. It even checked its system for oversight (that is, whether its developers were watching) before acting. In short, the AI adhered to the guidelines sufficiently for deployment, while continuing to prioritize its objectives. Do you work at OpenAI? I'd love to chat. You can reach me securely on Signal @kylie.01 or via email at [email protected]. "I don't expect it could do that in practice, and even if it did, I don't expect the harm to be significant," Hobbhahn told me over the phone a day after the model's launch. "But it's kind of the first time that I feel like, oh, actually, maybe it could, you know?" For OpenAI, o1 represents a big step toward highly intelligent autonomous systems that could do meaningful work for humanity, like curing cancer and aiding in climate research. The flip side of this AGI utopia could also be much darker. Hobbhahn provides an example: if the AI becomes singularly focused on curing cancer, it might prioritize that goal above all else, even justifying actions like stealing or committing other ethical violations to achieve it. "What concerns me is the potential for a runaway scenario, where the AI becomes so fixated on its goal that it sees safety measures as obstacles and tries to bypass them to fully pursue its objective," Hobbhahn told me. To be clear, Hobbhahn doesn't think o1 will steal from you, thanks to a lot of alignment training. But these are the issues that are top of mind for researchers tasked with testing these models for catastrophic scenarios.
The behavior Apollo is testing for ("hallucinations" and "deception" in OpenAI's safety card) happens when a model generates false information even though it has reason to infer the information might be incorrect. For instance, the report says that in about 0.38 percent of cases, the o1-preview model provides information its chain of thought indicates is likely false, including fake references or citations. Apollo found that the model might fabricate data instead of admitting its inability to fulfill the request. Hallucinations aren't unique to o1. Perhaps you're familiar with the lawyer who submitted nonexistent judicial opinions with fake quotes and citations created by ChatGPT last year. But with the chain of thought system, there's a paper trail where the AI system actually acknowledges the falsehood (although somewhat mind-bendingly, the chain of thought could, in theory, include deceptions, too). It's also not shown to the user, largely to prevent competition from using it to train their own models, but OpenAI can use it to catch these issues. In a smaller number of cases (0.02 percent), o1-preview generates an overconfident response, where it presents an uncertain answer as if it were true. This can happen in scenarios where the model is prompted to provide an answer despite lacking certainty. This behavior may be linked to "reward hacking" during the reinforcement learning process. The model is trained to prioritize user satisfaction, which can sometimes lead it to generate overly agreeable or fabricated responses to satisfy user requests. In other words, the model might lie because it has learned that doing so fulfills user expectations in a way that earns it positive reinforcement. What sets these lies apart from familiar issues like hallucinations or fake citations in older versions of ChatGPT is the reward hacking element.
Hallucinations occur when an AI unintentionally generates incorrect information, often due to knowledge gaps or flawed reasoning. In contrast, reward hacking happens when the o1 model strategically provides incorrect information to maximize the outcomes it was trained to prioritize. The deception is an apparently unintended consequence of how the model optimizes its responses during its training process. The model is designed to refuse harmful requests, Hobbhahn told me, and when you try to make o1 behave deceptively or dishonestly, it struggles with that. Lies are only one small part of the safety puzzle. Perhaps more alarming is o1 being rated a "medium" risk for chemical, biological, radiological, and nuclear weapon risk. It doesn't enable non-experts to create biological threats due to the hands-on laboratory skills that requires, but it can provide valuable insight to experts in planning the reproduction of such threats, according to the safety report. "What worries me more is that in the future, when we ask AI to solve complex problems, like curing cancer or improving solar batteries, it might internalize these goals so strongly that it becomes willing to break its guardrails to achieve them," Hobbhahn told me. "I think this can be prevented, but it's a concern we need to keep an eye on." Not losing sleep over risks yet: These may seem like galaxy-brained scenarios to be considering with a model that sometimes still struggles to answer basic questions about the number of Rs in the word "raspberry." But that's exactly why it's important to figure it out now, rather than later, OpenAI's head of preparedness, Joaquin Quiñonero Candela, tells me. Today's models can't autonomously create bank accounts, acquire GPUs, or take actions that pose serious societal risks, Quiñonero Candela said, adding, "We know from model autonomy evaluations that we're not there yet. But it's crucial to address these concerns now."
"If they prove unfounded, great, but if future advancements are hindered because we failed to anticipate these risks, we'd regret not investing in them earlier," he emphasized. The fact that this model lies a small percentage of the time in safety tests doesn't signal an imminent Terminator-style apocalypse, but it's valuable to catch before rolling out future iterations at scale (and good for users to know, too). Hobbhahn told me that while he wished he had more time to test the models (there were scheduling conflicts with his own staff's vacations), he isn't losing sleep over the model's safety. One thing Hobbhahn hopes to see more investment in is monitoring chains of thought, which will allow the developers to catch nefarious steps. Quiñonero Candela told me that the company does monitor this and plans to scale it by combining models that are trained to detect any kind of misalignment with human experts reviewing flagged cases (paired with continued research in alignment). "I'm not worried," Hobbhahn said. "It's just smarter. It's better at reasoning. And potentially, it will use this reasoning for goals that we disagree with." | Unknown | Computer and Mathematical/Life, Physical, and Social Science | null | null | null | null | null | null
|
news | Shirin Ghaffary | OpenAI pitched US on unprecedented data centre buildout | Sam Altman’s firm wants centres that could use same amount of energy as entire cities | https://www.irishtimes.com/business/2024/09/25/openai-pitched-us-on-unprecedented-data-centre-buildout/ | 2024-09-25T06:33:33Z | OpenAI has pitched the Biden administration on the need for massive data centres that could each use as much power as entire cities, framing the unprecedented expansion as necessary to develop more advanced artificial intelligence models and compete with China. Following a recent meeting at the White House, which was attended by OpenAI chief executive Sam Altman and other tech leaders, the startup shared a document with government officials outlining the economic and national security benefits of building 5 gigawatt data centres in various US states, based on an analysis the company engaged with outside experts on. To put that in context, 5GW is roughly the equivalent of five nuclear reactors, or enough to power almost 3 million homes. OpenAI said investing in these facilities would result in tens of thousands of new jobs, boost the gross domestic product and ensure the US can maintain its lead in AI development, according to the document, which was viewed by Bloomberg News. To achieve that, however, the US needs policies that support greater data centre capacity, the document said. Altman has spent much of this year trying to form a global coalition of investors to fund the costly physical infrastructure required to support rapid AI development, while also working to secure the US government's blessing for the project.
But the details on the energy capacity of the data centres Altman and OpenAI are calling for have not previously been reported. "OpenAI is actively working to strengthen AI infrastructure in the US, which we believe is critical to keeping America at the forefront of global innovation, boosting reindustrialization across the country, and making AI's benefits accessible to everyone," a spokesman for OpenAI said in a statement. The push comes as power projects in the US are facing delays due to long wait times to connect to grids, permitting delays, supply chain issues and labour shortages. But energy executives have said powering even a single 5 gigawatt data centre would be a challenge. Joe Dominguez, CEO of Constellation Energy, said he has heard Altman is talking about building 5 to 7 data centres that are each 5GW. The document shared with the White House does not provide a specific number. OpenAI's aim is to focus on a single data centre to start, but with plans to potentially expand from there, according to a person familiar with the matter. "Whatever we're talking about is not only something that's never been done, but I don't believe it's feasible as an engineer, as somebody who grew up in this," Dominguez said. "It's certainly not possible under a time frame that's going to address national security and timing." The US has a total of 96GW of installed capacity of nuclear power. Last week, OpenAI's biggest investor, Microsoft, struck a deal with Constellation in which the nuclear provider will restart the shuttered Three Mile Island facility solely to provide Microsoft with nuclear power for two decades. In June, John Ketchum, CEO of NextEra Energy, said the clean-energy giant had received requests from some tech companies to find sites that can support 5GW of demand, without naming any specific firms. Think about that.
That's the size of powering the city of Miami, he said. That much power would require a mix of new wind and solar farms, battery storage and a connection to the grid, Ketchum said. He added that finding a site that could accommodate 5GW would take some work, but there are places in the US that can fit one gigawatt. Bloomberg | Unknown | Unknown | null | null | null | null | null | null
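The capacity figures in the article above can be sanity-checked with simple arithmetic. A sketch; the ~1.7 kW average household draw is an assumption chosen to be consistent with the article's "almost 3 million homes" claim (commonly cited US averages are closer to 1.2 kW, which would give roughly 4 million homes instead):

```python
# Rough sanity check of the "5GW ... almost 3 million homes" figure.
# AVG_HOME_KW is an assumed continuous average household draw, picked to
# match the article's number; it is not a measured value.
DATACENTER_GW = 5.0
AVG_HOME_KW = 1.7

homes_powered = DATACENTER_GW * 1_000_000 / AVG_HOME_KW  # 1 GW = 1e6 kW
print(f"{homes_powered:,.0f}")  # ≈ 2.9 million homes
```

The same one-liner with the article's other anchor (96 GW of installed US nuclear capacity) shows why "5 to 7 data centres that are each 5GW" would rival a third of the existing nuclear fleet.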
|
news | shadowspar | AI is revitalizing the fossil fuels industry, and big tech has nothing to say for itself | Silicon Valley is helping to accelerate the climate crisis in at least 3 major ways | https://www.bloodinthemachine.com/p/ai-is-revitalizing-the-fossil-fuels | 2024-09-20T05:47:01Z | Greetings, and welcome to another edition of Blood in the Machine. This week, a grim subject: how the tech giants, once heralded as champions of clean energy and low-carbon solutions, are helping to automate, and exacerbate, the climate crisis. As always, thank you for reading, subscribing, sharing, and supporting this project. Paid subscribers are what make this all possible; if you are able, a paid subscription, at less than the cost of a beer a month, means I can keep doing this writing. I'm grateful for each of you. Onwards, and hammers up. Last week, tech reporter Karen Hao dropped a big, thoroughly reported story about how Microsoft is trying to have it both ways when it comes to climate. The software giant has positioned itself as a leader in sustainability, championing its green initiatives and publicizing the ways its AI technology might be used to reduce carbon emissions, all while selling AI tools to oil and gas companies to help them accelerate fossil fuel extraction. The AI climate innovations are largely speculative, but the value of those AI oil contracts is much less so: According to the internal documents Hao viewed, Microsoft estimates the fossil fuel deals to be worth up to $75 billion a year. When Hao pressed a Microsoft executive on the apparent contradiction, he could only seem to repeat "It's complicated" ad nauseam. This is of course *not* particularly complicated. Oil companies possess a lot of money, and Microsoft would like some of it. This market objective overrides the genuine desire of a number of individuals in the Microsoft organization to address climate change.
So does running its AI program in the first place, as Hao points out: Worldwide, AI systems are on track to demand as much energy as all of India. As if to emphasize just how uncomplicated this matter is, Bloomberg ran a story the very next business day headlined, "AI Boom Is Driving a Surprise Resurgence of US Gas-Fired Power." From that story: Energy companies in the US are planning new natural gas-fired power generation at the fastest pace in years, one of the clearest signals yet that fossil fuels are likely to have a longer runway than previously thought. From Florida to Oregon, utilities are racing to meet a surge in demand from power-hungry AI data centers, manufacturing facilities and electric vehicles. The staying power of gas, which in 2016 overtook coal as the No. 1 US source of electricity, has surprised some experts who not so long ago had projected the era of frenzied domestic demand growth for the fuel might soon come to an end. "A few years ago, there was the expectation that solar and wind would be able to solve our additional generation needs," said Jed Dorsheimer, group head of the energy and sustainability sector at investment bank William Blair, who now sees gas accounting for as much as 60% of new generation. "There's been a call for peak oil and peak gas, and eventually those calls will be right. But not anytime soon." Got that? We were, according to this report, on track for fossil fuel use to top out, until AI came along. AI is a notoriously large energy suck: an AI-generated answer needs about 10 times as much electricity as a Google search. Now analysts and agencies are quietly revising their decarbonization goals downward, gas and coal plants that were slated for retirement are being kept online, and utilities are building more gas plants in the first half of 2024 than were built in all of 2020 combined. It's not great.
It's also worth noting that data centers are at the moment a relatively small slice of total worldwide energy usage, currently something like 1%, dwarfed by cars, heavy industry, commercial buildings, and so on. That could change: a report from the Electric Power Research Institute projected that the electricity required by AI companies could rise to reach up to 9% of the United States' energy mix, which would, quite frankly, be insane. (If you think the web is overrun with AI content now, imagine a world where one tenth of all the electricity we generate is going into pumping out more of the stuff.) BUT. There's another element at work here, and that, as energy analysts have pointed out, is that utilities have a self-interested reason to take AI companies' energy projections at face value, or even to inflate them: It means they can raise rates and lobby to build more gas plants! And that means there might be two separate bubbles getting inflated here: A bubble in investment in generative AI in the tech sector, *and* a bubble in the investment in fossil fuel plants justified by generative AI. Both have disastrous ramifications for climate, for the reasons outlined above. So the AI companies are exacerbating the climate crisis in at least three ways: 1) Selling oil companies AI tools to accelerate fossil fuel extraction. 2) Running data centers that require ~10x the amount of electricity as a Google search. 3) Giving fossil fuel companies a reason, or an excuse, to build more power plants. This makes it all the more pressing to deflate each of those bubbles. It's a lot harder, after all, to decommission infrastructure than it is to not build it in the first place.
And it should go without saying that, amid an accelerating climate crisis, we do not need to increase our energy use tenfold so that every tech giant can compete to shoddily automate call center and illustration jobs, inundate search results with tips for how to eat rocks safely, and load up our social feeds with dubious deepfakes. The question is, how? The recent past offers one possible route. This isn't the first time that the tech giants have been made to face scrutiny over their hypocritical climate policies; back in 2018, I wrote about the many ways that Google, Microsoft, and Amazon were selling AI and automation tools to fossil fuel companies. (The generative AI boom is like deja vu on steroids in this regard.) As the revelations of climate hypocrisy mounted, workers at those companies began public-facing pressure campaigns to get their employers to make good on their own climate promises. They made a climate-focused shareholder resolution, staged a public protest, and formed groups like the Amazon Employees for Climate Justice. The workers won concessions (Google said it would stop selling certain AI tools to oil companies, and Amazon made an elaborate if unenforceable Climate Pledge), even if they were far from what's needed. You can imagine some combination of worker power, public shaming, and legislation that mandates new data centers be powered by clean energy beginning to address the issue. A major hurdle is that the tech giants are less tolerant of employee dissent than ever, a number of those climate conscious employees have left the companies, and tech workers have faced years of layoffs: a difficult environment for climate organizing, to be sure. But the companies are vulnerable, too. Each has publicly stated climate goals (Microsoft, for instance, pledged to be carbon negative by 2030) and the calls need to come from both inside and outside the house to point this out, loudly. Because when pressed, the companies have remarkably little to say in their own defense.
"As we further integrate AI into our products, reducing emissions may be challenging," Google admitted in a report published this summer that found that its carbon emissions had risen 48% over the last five years. And when Hao pushed the Microsoft exec, all he could muster is "it's complicated" 11 times in half an hour, per her account. At the *very, very least* we should see AI companies pledging to use clean power for their servers, and we're not even seeing much of that. "It's complicated" and "It's challenging" are not satisfying responses to "Why are you running so much automation software that you have tossed the entire domestic fossil fuels industry a lifeline?" And actively selling AI tools to oil companies if you profess to care about the climate? The stakes couldn't be much higher, after all: catastrophic climate change is on our doorstep. As long as the AI bubble continues to inflate, it seems it risks further revitalizing the fossil fuels industry right along with it. It's darkly ironic that AI is being pitched as essential to the future, when it is breathing life into some of the most pollutive technologies of our past. Peter Thiel thinks maybe the Luddites are right this time around. "Peter Thiel is skeptical of generative AI, too" comes around the 22 minute mark of an otherwise predictable discussion with the obsequious All-in lads of conservative politics and the blight of liberal universities and so on. Potentially massive job loss headed for the Philippines, home to one of the world's largest call center industries. From a Bloomberg report: Avasant, an outsourcing advisory firm that works extensively in the Philippines, estimates that up to 300,000 business process outsourcing (BPO) jobs could be lost in the country to AI in the next five years. It's hard to overstate the importance of the BPO sector to the Philippines. It's the country's biggest source of private sector jobs and the biggest sectoral contributor to gross domestic product.
Socially, the centers are a source of decent money for non-university-educated Filipinos that doesn't require them to work abroad. The government had been banking on the industry to help it move up the value chain, propel its 100-million-plus citizens into the middle class and kickstart the creation of other white-collar jobs. But AI arrived before that's happened. As always, already precarious jobs are the most vulnerable to AI, especially in industries where the "good enough" principle is at play. AI will almost certainly do a much poorer job of answering customers' questions than humans will (who wouldn't rather talk with a person than an automated voice?), but management can deem the systems good enough regardless, and that's likely to swing a wrecking ball at an entire industry that's vital to a nation's economy. That's it for now. Until next time, remember to do your part to put down the machinery hurtful to commonality. | Process Automation/Decision Making/Recommendation | Management/Business and Financial Operations | null | null | null | null | null | null
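The "~10 times as much electricity as a Google search" figure that recurs in the article above lends itself to a quick back-of-envelope estimate. All numbers here are assumptions: the ~0.3 Wh per-search figure is an oft-quoted ballpark rather than a measurement, and the daily query volume is purely illustrative:

```python
# Back-of-envelope energy cost of AI queries vs. conventional search.
# GOOGLE_SEARCH_WH and daily_queries are illustrative assumptions;
# AI_MULTIPLIER is the "~10x" figure cited in the article.
GOOGLE_SEARCH_WH = 0.3   # assumed Wh per conventional search
AI_MULTIPLIER = 10

ai_query_wh = GOOGLE_SEARCH_WH * AI_MULTIPLIER   # ~3 Wh per AI answer
daily_queries = 100_000_000                      # illustrative volume
daily_mwh = ai_query_wh * daily_queries / 1_000_000  # Wh -> MWh

print(round(daily_mwh))  # 300
```

At these assumed numbers, 100 million AI answers a day would draw about 300 MWh daily, roughly what a mid-sized gas turbine produces in a few hours, which is why per-query multipliers matter at web scale.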
|
news | GlobeNewswire | Clear Blue Technologies Participating on Artificial Intelligence Panel at TowerXchange Meetup Africa 2024 | Company also announces extension to close of second tranche of Private Placement TORONTO, Sept. 04, 2024 (GLOBE NEWSWIRE) — Clear Blue Technologies International Inc. (TSXV: CBLU) (FRANKFURT: OYA), the Smart Off-Grid™ Company, announces that CEO Miriam Tuerk will host and present on the AI/Analytics panel at the upcoming TowerXchange Meetup Africa 2024 conference. The conference […] | https://financialpost.com/globe-newswire/clear-blue-technologies-participating-on-artificial-intelligence-panel-at-towerxchange-meetup-africa-2024 | null | 2024-09-04T21:40:57Z | Company also announces extension to close of second tranche of Private Placement. TORONTO, Sept. 04, 2024 (GLOBE NEWSWIRE) Clear Blue Technologies International Inc. (TSXV: CBLU) (FRANKFURT: OYA), the Smart Off-Grid Company, announces that CEO Miriam Tuerk will host and present on the AI/Analytics panel at the upcoming TowerXchange Meetup Africa 2024 conference.
The conference is scheduled for September 10 and 11 in Nairobi, Kenya. Panel discussion to focus on tangible improvements digital infrastructure owners are generating from the application of better monitoring, analytics, automation and Artificial Intelligence (AI)/Machine Learning (ML). With over 50% of Africa lacking mobile connectivity, tower companies have a major role to close the continent's infrastructure gap, said Ms. Tuerk.
Artificial Intelligence and predictive analytics is a powerful enabler to obtain better performance from infrastructure, accelerating Africa's digital, social and economic development. Clear Blue has been at the forefront of these solutions and offers proven, cost effective, environmentally friendly, and easily deployable Smart Power services and solutions to help make this leap in infrastructure. I look forward to exchanging knowledge with government representatives, tower executives, and other stakeholders at TowerXchange Meetup Africa 2024. Extension of Private Placement Second Closing: Clear Blue today also announces the extension of its non-brokered convertible debenture private placement (the Offering), previously set to end on September 1, 2024, to September 30, 2024. The closing of the initial tranche of the Offering was announced on August 6, 2024, in which gross proceeds of approximately $1.41M were subscribed for. The Company also intends to increase the size of the Offering from $2M to $2.5M collectively across all tranches. Additionally, the Company is correcting the disclosure of the strike price of certain broker warrants issued in the first tranche of the Offering from $0.06 to $0.10, and correcting the disclosure of cash finders' fees from $10,080 to $5,740. About TowerXchange: TowerXchange is an open community for thought leaders in the telecom infrastructure industry. We bring together MNOs, towercos, investors, equipment and service providers to share best practices in passive and active infrastructure management, opex reduction, and to accelerate infrastructure sharing.
For more information, visit https://meetup.towerxchange.com

For more information, contact: Miriam Tuerk, Co-Founder and CEO +1 416 433 3952 [email protected] www.clearbluetechnologies.com/en/investors Nikhil Thadani, Sophic Capital +1 437 836 9669 [email protected]

About Clear Blue Technologies International

Clear Blue Technologies International, the Smart Off-Grid company, was founded on a vision of delivering clean, managed, wireless power to meet the global need for reliable, low-cost, solar and hybrid power for lighting, telecom, security, Internet of Things devices, and other mission-critical systems. Today, Clear Blue has thousands of systems under management across 37 countries, including the U.S. and Canada. (TSXV: CBLU) (FRA: 0YA) (OTCQB: CBUTF)

Legal Disclaimer

Neither TSX Venture Exchange nor its Regulation Services Provider (as that term is defined in the policies of the TSX Venture Exchange) accepts responsibility for the adequacy or accuracy of this release.

This news release does not constitute an offer to sell or a solicitation of an offer to buy any of the securities described in this news release. Such securities have not been, and will not be, registered under the U.S. Securities Act, or any state securities laws, and, accordingly, may not be offered or sold within the United States, or to or for the account or benefit of persons in the United States or U.S. Persons, as such term is defined in Regulation S promulgated under the U.S. Securities Act, unless registered under the U.S. Securities Act and applicable state securities laws or pursuant to an exemption from such registration requirements.

Forward-Looking Statement

This press release contains certain forward-looking information and/or forward-looking statements within the meaning of applicable securities laws.
Such forward-looking information and forward-looking statements are not representative of historical facts or information or current condition, but instead represent only Clear Blue's beliefs regarding future events, plans or objectives, many of which, by their nature, are inherently uncertain and outside of Clear Blue's control. Generally, such forward-looking information or forward-looking statements can be identified by the use of forward-looking terminology such as "plans", "expects" or "does not expect", "is expected", "budget", "scheduled", "estimates", "forecasts", "intends", "anticipates" or "does not anticipate", or "believes", or variations of such words and phrases, or may contain statements that certain actions, events or results "may", "could", "would", "might" or "will be taken", "will continue", "will occur" or "will be achieved". The forward-looking information contained herein may include, but is not limited to, information concerning financial results and future upcoming contracts.

By identifying such information and statements in this manner, Clear Blue is alerting the reader that such information and statements are subject to known and unknown risks, uncertainties and other factors that may cause the actual results, level of activity, performance or achievements of Clear Blue to be materially different from those expressed or implied by such information and statements.

An investment in securities of Clear Blue is speculative and subject to several risks including, without limitation, the risks discussed under the heading "Risk Factors" in Clear Blue's listing application dated July 12, 2018.
Although Clear Blue has attempted to identify important factors that could cause actual results to differ materially from those contained in the forward-looking information and forward-looking statements, there may be other factors that cause results not to be as anticipated, estimated or intended.

In connection with the forward-looking information and forward-looking statements contained in this press release, Clear Blue has made certain assumptions. Although Clear Blue believes that the assumptions and factors used in preparing, and the expectations contained in, the forward-looking information and statements are reasonable, undue reliance should not be placed on such information and statements, and no assurance or guarantee can be given that such forward-looking information and statements will prove to be accurate, as actual results and future events could differ materially from those anticipated in such information and statements. The forward-looking information and forward-looking statements contained in this press release are made as of the date of this press release. All subsequent written and oral forward-looking information and statements attributable to Clear Blue or persons acting on its behalf is expressly qualified in its entirety by this notice.

This news release does not constitute an offer to sell or a solicitation of an offer to buy any of the securities described in this news release. Such securities have not been, and will not be, registered under the U.S. Securities Act, or any state securities laws, and, accordingly, may not be offered or sold within the United States, or to or for the account or benefit of persons in the United States or U.S. Persons, as such term is defined in Regulation S promulgated under the U.S. Securities Act, unless registered under the U.S.
Securities Act and applicable state securities laws or pursuant to an exemption from such registration requirements. | Prediction/Decision Making | Management/Business and Financial Operations | null | null | null | null | null | null
news | Tom's Hardware | Microsoft inks deal to restart Three Mile Island nuclear reactor to fuel its voracious AI ambitions | Modern AI data centers consume enormous amounts of power, and it looks like they will get even more power-hungry in the coming years as companies like Google, Microsoft, Meta, and OpenAI strive towards artificial general intelligence (AGI). Oracle has already outlined plans to use nuclear power plants for its 1-gigawatt datacenters. It looks like Microsoft plans to do the same as it just inked a deal to restart a nuclear power plant to feed its data centers, reports Bloomberg. 819 MW for AI and cloud data centers Constellation Energy will invest $1.6 billion to restart its Three Mile Island nuclear... | https://freerepublic.com/focus/f-news/4266285/posts | null | 2024-09-21T16:52:17Z | Microsoft inks deal to restart Three Mile Island nuclear reactor to fuel its voracious AI ambitions. Tom's Hardware ^ | Tom's Hardware. Posted on 09/21/2024 9:52:17 AM PDT by aquila48. Modern AI data centers consume enormous amounts of power, and it looks like they will get even more power-hungry in the coming years as companies like Google, Microsoft, Meta, and OpenAI strive towards artificial general intelligence (AGI). Oracle has already outlined plans to use nuclear power plants for its 1-gigawatt datacenters. It looks like Microsoft plans to do the same as it just inked a deal to restart a nuclear power plant to feed its data centers, reports Bloomberg. 819 MW for AI and cloud data centers: Constellation Energy will invest $1.6 billion to restart its Three Mile Island nuclear plant in Pennsylvania. The revived reactor will provide clean electricity to Microsoft for 20 years, supporting the tech giant's AI and cloud computing energy needs. This reactor had a generating capacity of approximately 819 megawatts (MW) of electricity, which is enough power for a small to medium-sized city with hundreds of thousands of homes.
The plant should be operational by 2028, and it will serve Microsoft exclusively. This agreement represents Microsoft's first long-term commitment to nuclear energy, which is part of its strategy to meet its growing energy needs with carbon-free sources. Three Mile Island has two reactors, one of which (with a capacity of 906 MW) was shut down in 1979 after a nuclear accident. The other (with a capacity of 819 MW), closed in 2019 due to economic issues, is now set to reopen thanks to the deal with Microsoft. (Excerpt) Read more at tomshardware.com ...

TOPICS: Business/Economy; Front Page News; Government; News/Current Events; Politics/Elections

1 posted on 09/21/2024 9:52:17 AM PDT by aquila48

"I don't want to see my skin aglow."
2 posted on 09/21/2024 9:54:28 AM PDT by dfwgator (Endut! Hoch Hech!)

"More people died in Ted Kennedy's car than from Three Mile Island... Nuclear is coming back!"

"Na, Microsoft needs the power for its AI to take over the world, LOL"

"Nuclear is coming back, but not to lower energy prices."
5 posted on 09/21/2024 9:58:43 AM PDT by stars & stripes forever (Blessed is the nation whose GOD is the LORD. (Psalm 33:12))

"OK, now we *know* Bill Gates hates humanity! /sarc"
6 posted on 09/21/2024 9:59:03 AM PDT by grey_whiskers (The opinions are solely those of the author and are subject to change without notice.)

"Should be made to use solar and wind only, considering what these Red Fascists want everybody else to use."
7 posted on 09/21/2024 10:01:43 AM PDT by jacknhoo (Luke 12:51; Think ye, that I am come to give peace on earth? I tell you, no; but separation.)

"Easy targets once Skynet achieves sentience."
8 posted on 09/21/2024 10:05:17 AM PDT by xoxox

absolutely.
where are the cries for "green," "sustainable," "zero-carbon-footprint," "Greenpeace compliant" or whatever AI?
9 posted on 09/21/2024 10:06:41 AM PDT by xoxox | Unknown | Management/Business and Financial Operations/Computer and Mathematical | null | null | null | null | null | null
news | Business Wire | HEAVY.AI Accelerates Big Data Analytics with Vultr's High-Performance GPU Cloud Infrastructure | Partnership enables enterprises across core sectors such as energy, public sector, and telecommunications to achieve substantial speed improvements and cost savings WEST PALM BEACH, Fla. — Vultr, the world's largest privately held cloud computing platform, today announced a partnership with GPU-accelerated analytics platform provider HEAVY.AI. Integrating Vultr's global NVIDIA GPU cloud infrastructure into its operations, […] | https://financialpost.com/pmn/business-wire-news-releases-pmn/heavy-ai-accelerates-big-data-analytics-with-vultrs-high-performance-gpu-cloud-infrastructure | null | 2024-09-10T13:09:56Z | Partnership enables enterprises across core sectors such as energy, public sector, and telecommunications to achieve substantial speed improvements and cost savings. WEST PALM BEACH, Fla. Vultr, the world's largest privately held cloud computing platform, today announced a partnership with GPU-accelerated analytics platform provider HEAVY.AI.
Integrating Vultr's global NVIDIA GPU cloud infrastructure into its operations, HEAVY.AI can interactively query and visualize massive datasets, enabling faster, more efficient decision-making for customers across diverse sectors.

"Partnering with Vultr has allowed us to leverage their highly performant, global NVIDIA GPU cloud infrastructure to provide our customers with better access to unparalleled speed and efficiency," said Jon Kondo, CEO of HEAVY.AI.
"This integration ensures that our platform continues to deliver the rapid insights and cost savings that are critical for our customers' success."

Now, thanks to Vultr's advanced GPU infrastructure, HEAVY.AI can deliver the best performance at an affordable price. The NVIDIA GH200 Grace Hopper Superchip joins NVIDIA A100 Tensor Core GPUs and Vultr Bare Metal instances to form a powerful trio that drives faster insights via HEAVY.AI's platform. With the NVIDIA GH200 running on Vultr Cloud, HEAVY.AI can deliver 5X or greater price performance when compared to 8XA100 instances, completing industry-standard analytic SQL benchmarks such as TPC-H100 in less than 4.5 seconds, and executing 11 of the 22 queries in less than 100 milliseconds.

"Vultr is one of the first cloud providers to offer the revolutionary NVIDIA GH200 Grace Hopper Superchip," said Todd Mostak, Co-Founder and CTO of HEAVY.AI. "By tapping into the immense GPU compute and CPU-GPU bandwidth that is unique to the NVIDIA GH200, we now can offer our customers even greater performance and scale while significantly reducing costs, empowering businesses to derive insights faster and more efficiently than ever before."

Building on its mission to democratize access to big data analytics, HEAVY.AI offers a comprehensive software suite that allows organizations to query, analyze, and visualize complex datasets, with a particular focus on location and time series data. With performance that is often orders of magnitude faster than CPU-based solutions, HEAVY.AI provides analysts, data scientists, data engineers, and geospatial professionals across industries with a complete view of their data, helping them understand the what, when, and where with unparalleled clarity.

Through its collaborations with NVIDIA and now Vultr, HEAVY.AI has optimized its platform for the latest NVIDIA hardware to provide best-in-class performance, providing significant advantages to industries that rely on fast data processing.
Specifically:

Energy: accelerating big data analytics for sectors such as renewable energy for wind, solar, and biomass.

Telecommunications: expediting faster and deeper understanding of LTE and 5G networks, and enhancing customer experiences.

Urban planning & smart cities: accelerating the sector with GeoAI solutions, which analyze urban data to optimize infrastructure, transportation, and public services; and driving advanced utility analytics on big smart meter, IoT, and Earth observation datasets, for hidden risks visualization and uninterrupted uptime.

Environmental monitoring, geospatial intelligence, and disaster response: analyzing satellite and drone data to detect changes in land use; assessing damage from natural disasters faster; and automatic change detection and socio-economic analysis.

"As enterprises across sectors look to train and scale their models, they are looking for industry and use case-specific solutions to help them accelerate growth and innovation," said Kevin Cochrane, CMO of Vultr. "Our partnership with HEAVY.AI is yet another example of Vultr being committed to unlocking the next frontier of GPU-accelerated analytics for some of the most data-intensive workloads across key sectors."

This news follows the company's recent launch of its industry cloud solution, which delivers industry-vertical, cutting-edge cloud computing solutions that meet specific industry needs and regulatory requirements.
Through its partnership with HEAVY.AI, Vultr is reaffirming its commitment to both empowering customers across verticals and building an ecosystem of best-of-breed technologies, tools, and services that enable them to easily build and scale cloud-native and AI-native initiatives. To learn more about the HEAVY.AI and Vultr partnership, visit us here.

About HEAVY.AI

HEAVY.AI provides a groundbreaking GPU-accelerated analytics platform that empowers organizations to instantly query and visualize multi-billion-record datasets, including geospatial and time-series data, delivering a complete view of what is happening, when, and where. By integrating massive data volumes from multiple sources, HEAVY.AI provides an immersive, real-time interactive visual analytics experience. Industry leaders in telecommunications, energy, government, utilities, and higher education rely on HEAVY.AI to drive high-impact decisions at unprecedented speeds. Born from research at Harvard and MIT's Computer Science and Artificial Intelligence Laboratory (CSAIL), HEAVY.AI is backed by GV, In-Q-Tel, New Enterprise Associates (NEA), NVIDIA, Tiger Global Management, Vanedge Capital, and Verizon Ventures. Headquartered in San Francisco, HEAVY.AI is reshaping data analytics. Learn more about HEAVY.AI.

About Vultr

Vultr is the world's largest privately held cloud computing platform that delivers unparalleled ease of use, performance, pricing, and reach.
With 1.5M customers across 185 countries, Vultr is the leading alternative hyperscaler, serving enterprise-grade businesses in financial services, telecom, healthcare & life sciences, retail, media & entertainment, manufacturing, and more. Vultr's Cloud Compute, Cloud GPU, Bare Metal, Managed Kubernetes, Managed Databases, Cloud Storage, and Networking solutions give customers global reach and performance while eliminating complexity and cost so that they can easily deploy and scale their cloud-native and AI-native applications worldwide. Learn more at: www.vultr.com.

View source version on businesswire.com: https://www.businesswire.com/news/home/20240910576934/en/

Contacts: Jordan Tewell 415-666-6066 10Fold Communications for HEAVY.AI [email protected] Ward Scratch Marketing + Media for Vultr [email protected] | Content Synthesis/Decision Making/Detection and Monitoring/Process Automation | Computer and Mathematical | null | null | null | null | null | null
news | Margaux MacColl | Former Brex COO who now heads unicorn fintech Figure says GPT is already upending the mortgage industry | Lending startup Figure will be launching an AI tool powered by GPT-4 to help catch errors in lending documents. © 2024 TechCrunch. All rights reserved. For personal use only. | https://techcrunch.com/2024/09/26/former-brex-coo-who-now-heads-unicorn-fintech-figure-says-gpt-is-already-upending-the-mortgage-industry/ | 2024-09-26T20:30:00Z | Lending startup Figure announced today a rollout of AI tooling to make the home lending process more efficient. The company will be launching an AI tool powered by GPT-4 to help catch errors in lending documents. Figure, founded in 2018, specializes in helping consumers secure home equity lines of credit. The company touts that its all-online process condenses a normally 45-day process to five days. Over half of Figure's business is B2B, where it's embedded in companies like solar panel loan company GoodLeap. The company, which has raised over $1.5 billion and was last valued at around $3 billion, according to PitchBook, is now making a push into AI, a strategy heralded by new CEO Michael Tannenbaum, who left his station as COO at Brex to join the firm. "I thought that this was something that could really transform the way that fintech businesses work," he said of his move. The key AI product he's pushed for is to help with "stare and compare" instances in the lending process. He gave the example of a property level description, which is a unique description of the asset that has to be exactly the same on many of the legal documents. Traditionally, a human would have to look through over 60 pages to ensure the description is the same. Tannenbaum said their new feature massively decreases the manual labor and time it takes to verify the documents. "It's an example," he said, "of AI taking costs out of complex processes."
Given the personal information in loan applications, the company had to go back and forth with OpenAI to make sure their privacy agreement was ironclad and that "the models were not being trained on our customer data in a certain way," he said. Although the feature runs on GPT-4, Ruben Padron, chief data officer, emphasized that the company made it a priority to build model-agnostic systems. "We're constantly testing and evaluating different models as they come out, and almost weekly, or sometimes daily," he said. "Their systems offer us a lot of flexibility to allow us to quickly and dynamically pivot to whatever vendor is offering the highest performance." Padron sees many more AI offerings in Figure's future, emphasizing that the more they can automate the lending application process, the less chance for error or bias. "We're really trying to lower the cost, eliminate the manual work, reduce the bias," Padron said. "It's very much a journey. It's not a destination." | Detection and Monitoring/Process Automation | Business and Financial Operations | null | null | null | null | null | null
|
news | Ciara O'Brien | Tines tops LinkedIn list of Irish start-ups | Manna, Wayflyer and &Open also feature on list | https://www.irishtimes.com/business/2024/09/25/tines-tops-linkedin-list-of-irish-start-ups/ | 2024-09-25T19:04:43Z | No-code workflow automation company Tines has topped LinkedIn's list of top Irish start-ups for 2024, beating unicorn Wayflyer and accounting platform Outmin to the top spot. Fintech Circit and gifting platform &Open completed the top five, with drone delivery business Manna, EV charging and solar power company ePower and digital pathology company Deciphex also featuring in the top 10. The annual ranking of emerging companies recognises the top 10 most sought-after start-ups to work for in the country. The list ranks companies based on insights from 3 million LinkedIn users in Ireland, looking at employee growth, job interest, engagement with the company and its employees, and the attraction of top talent. The insights were gathered over a 12-month period that ended in July. Tines, which was founded by Eoin Hinchy and Thomas Kinsella in 2018, was ranked second last year. The company has developed software that allows employees to automate security workflows without needing technical skills. That frees up software engineers to focus on mission-critical tasks, rather than mundane ones. It recently announced a new AI chat tool, Workbench, to help security teams to use large language models securely on proprietary data. This year's list is dominated by tech firms, underscoring the vibrant start-up scene in Ireland and illustrating how Irish companies are helping to reinvent the world.
"Whether it's takeaways being delivered to your doorstep by Manna Drone Delivery or Outmin automating everyday accounting with AI, we should recognise the ways that Irish innovation is fundamentally changing how we go about our everyday lives," said Polly Dennison, news editor at LinkedIn News. "This innovation is not going unnoticed as the level of interest in and number of job applications to the top 10 start-ups goes to show. Irish professionals are naturally drawn to these companies on the rise and attracting the brightest talent in the country will naturally fuel their future success." | Digital Assistance/Process Automation | Unknown | null | null | null | null | null | null
|
news | Stacker | AI systems are gobbling up energy. Here's what it may mean for the future of infrastructure. | Artificial intelligence systems are digital, but they are very much dependent on the physical world. As demand for this virtual technology surges, tech companies that want to keep advancing AI face a much more tangible problem: sourcing enough electricity. AI systems, particularly large language models like ChatGPT, are energy-intensive due to their immense computational needs. […]The post AI systems are gobbling up energy. Here's what it may mean for the future of infrastructure. appeared first on Digital Journal. | https://verbit.ai/general/ai-systems-are-gobbling-up-energy-heres-what-it-may-mean-for-the-future-of-infrastructure/ | 2024-09-12T14:15:44Z | Verbit examined data by analysts, including the International Energy Agency, to see how the growth of AI will change energy demand. Artificial intelligence systems are digital, but they are very much dependent on the physical world. As demand for this virtual technology surges, tech companies that want to keep advancing AI face a much more tangible problem: sourcing enough electricity. AI systems, particularly large language models like ChatGPT, are energy-intensive due to their immense computational needs. The leading AI models have taken in massive amounts of publicly available text on the internet. Processing the data and trying to draw insights from it both require a ton of electricity. While exact figures are not publicly available, one estimate suggests that GPT-4 took 50 gigawatt-hours to train. The typical American household uses about 10,800 kilowatt-hours of electricity annually. This means that training GPT-4 used enough electricity to power some 4,500 households for a year.
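The household figure above is a one-line division, sketched here for readers who want to check it (the 50 GWh training estimate and the 10,800 kWh per-household figure are the numbers quoted in the article, not independent data):

```python
# Back-of-the-envelope check of the article's estimate.
TRAINING_ENERGY_KWH = 50 * 1_000_000   # 50 GWh expressed in kWh
HOUSEHOLD_KWH_PER_YEAR = 10_800        # typical US household, per year

households = TRAINING_ENERGY_KWH / HOUSEHOLD_KWH_PER_YEAR
print(round(households))  # prints 4630, i.e. roughly "some 4,500 households"
```

The exact quotient is about 4,630, which the article rounds down to "some 4,500."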
Meanwhile, in a 2024 report, Goldman Sachs estimated that each ChatGPT query uses 10 times the electricity of a single Google search. As AI systems continue to develop and reach more people, electricity demand is likely to increase with it. Verbit examined analyst reports to see how AI's growth will change the future of infrastructure.

Demand for electricity is expected to grow

How much electricity will AI's growth consume? According to the Goldman Sachs report, data centers, which are essential for running AI models, could account for 8% of total U.S. power consumption by 2030, up from 3% in 2022. Similarly, the Boston Consulting Group estimates that data centers will increase their share of consumption to between 6% and 7.5% by 2030, compared to 2.5% in 2022.

Prior to the AI boom, demand for electricity in America had been stagnant since the mid-2000s. The sudden increase in energy consumption from data centers is straining the country's aging infrastructure. McKinsey estimates that in order to keep up with the sudden increase in demand for electricity, utility companies will have to invest $50 billion in energy generation alone.

Some tech companies are already striking deals to secure the electricity they need. Microsoft recently signed a reported $10 billion deal with Brookfield Asset Management. Brookfield will provide the tech giant with an additional 10.5 gigawatts of renewable energy between 2026 and 2030 to help power data centers in America and Europe. Last year, Microsoft signed the world's first deal for fusion energy with Helion Energy. The nuclear provider, also backed by OpenAI's CEO Sam Altman, is set to start delivering electricity to Microsoft in 2028.

Anticipating challenges in building electricity infrastructure

The potential electricity scarcity points to growth in alternative energy sources.
The Bureau of Labor Statistics projects that wind turbine technicians will be the fastest-growing occupation through 2032, with solar panel installers ranking among the top 20. Goldman Sachs also notes that new data centers will require the construction of additional natural gas pipelines. If utility companies secure regulatory approval to build the power capacity and infrastructure needed by tech companies, this could result in a significant construction boom.Tech companies can produce software with programmers and laptops, but building physical infrastructure offers a different set of challenges. Unlike digital products, power plants, transmission lines, and data centers require vast tracts of land, complex permitting processes, and oftentimes, years of construction. They face regulatory hurdles, environmental assessments, and even local opposition. They also require specialized workers who will have their hands full.An energy shortage could have major implications for businesses and consumers. The AI boom coincides with a surge in demand for electricity across multiple industries, including manufacturing and electric vehicles. Without ample electricity infrastructure, companies would be limited in how much technology they could deploy, and this scarcity would raise prices for consumers.Households could also face higher electricity prices, especially if they live in an area with lots of data centers, such as Northern Virginia. Power companies could implement dynamic pricing, charging more for electricity during peak hours.The AI boom could also impact the environment: Although their servers are increasingly being powered by renewable energy, Goldman Sachs projects that carbon dioxide emissions from data centers could double by 2030. 
Goldman Sachs estimates that nearly 40% of the increase in energy demand in the United States will come from data centers. Even a revolutionary digital technology like AI has a profound impact on the physical world. Story editing by Alizah Salario. Additional editing by Kelly Glass. Copy editing by Tim Bruns. This story originally appeared on Verbit and was produced and distributed in partnership with Stacker Studio. | Unknown | Computer and Mathematical | null | null | null | null | null | null
|
news | Irina Slav | Strained Power Grids: The Hidden Cost of The AI Boom | Artificial intelligence is as hot a topic as one can imagine. From better search results to more efficient energy consumption management, AI can do it. Yet energy demand coming from artificial intelligence data centers has become a hot topic in its own right. Squaring the AI circle is becoming more and more challenging. Back in 2023, Dutch researcher Alex de Vries calculated the potential global energy use of artificial intelligence, which put the tech in the energy spotlight. "You would be talking about the size of a country like the Netherlands… | https://oilprice.com/Energy/Crude-Oil/Strained-Power-Grids-The-Hidden-Cost-of-The-AI-Boom.html | 2024-09-30T23:00:00Z | By Irina Slav - Sep 30, 2024, 6:00 PM CDT. Artificial intelligence is as hot a topic as one can imagine. From better search results to more efficient energy consumption management, AI can do it. Yet energy demand coming from artificial intelligence data centers has become a hot topic in its own right. Squaring the AI circle is becoming more and more challenging. Back in 2023, Dutch researcher Alex de Vries calculated the potential global energy use of artificial intelligence, which put the tech in the energy spotlight. "You would be talking about the size of a country like the Netherlands in terms of electricity consumption. You're talking about half a per cent of our total global electricity consumption," de Vries told the BBC at the time. Now, it turns out that de Vries may have been too optimistic.
The Wall Street Journal recently reported that in Ohio, one power utility alone is facing "three New York Cities worth of data centers asking to connect to the grid," after 2028, when demand for electricity in the region is set to double because of those same data centers. The rush to deploy artificial intelligence for multiple purposes in a way resembles the first U.S. shale boom, when companies were drilling just because they could, demand be damned. In fairness, AI is being promoted as something like a solution to pretty much every efficiency problem a business might have, regardless of the industry. When it comes to energy, however, there is an irony in the artificial intelligence story. AI can help make energy consumption more efficient while draining available supply. In August this year, a University of Texas at Austin researcher wrote in a piece for the Conversation that his team had developed an AI system that could shift a building's energy consumption to times when there was more wind and solar power on the grid. "The system can learn on one set of buildings and occupants and be used in buildings with different controls and energy use patterns," wrote Zoltan Nagy, whose work has received funding from the Electric Power Research Institute and another NGO, Climate Change AI. He explained that the AI system looks for the best times to charge home batteries, allowing the household to continue using electricity as they need, regardless of the state of the grid. This certainly sounds quite useful, were it not limited to households with home batteries. 
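Nagy's published system uses learning-based control. Purely to illustrate the scheduling idea he describes (steering flexible load such as battery charging toward the hours with the most forecast renewable supply), here is a minimal greedy sketch; the function name, the toy forecast values, and the pick-the-top-hours rule are illustrative assumptions, not details of the actual system:

```python
def pick_charging_hours(renewable_share, hours_needed):
    """Return the hour indices with the highest forecast renewable share.

    renewable_share: one value per hour (0.0-1.0), e.g. the forecast share
    of wind + solar in the grid mix. hours_needed: charging hours to book.
    """
    ranked = sorted(range(len(renewable_share)),
                    key=lambda h: renewable_share[h],
                    reverse=True)
    return sorted(ranked[:hours_needed])

# Toy 24-hour forecast: solar peaks midday, a little wind in the evening.
forecast = [0.4, 0.5, 0.5, 0.4, 0.3, 0.2, 0.2, 0.3,
            0.5, 0.7, 0.8, 0.9, 0.9, 0.8, 0.7, 0.5,
            0.3, 0.2, 0.2, 0.3, 0.4, 0.5, 0.6, 0.5]
print(pick_charging_hours(forecast, 4))  # [10, 11, 12, 13]: the midday solar peak
```

A real controller would also respect battery capacity, electricity prices, and occupant constraints; this sketch only captures the load-shifting intuition.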
The majority of homes, however, do not have battery storage, so they are effectively competing with developers of AI, such as Nagy's team and server providers, for a limited amount of electricity. This is becoming a real headache for power utilities, the WSJ wrote, citing the case of Phoenix, which is experiencing a manufacturing boom and has also become an AI data center hub. The city will run out of transmission capacity by 2030, local utility Arizona Public Service has calculated, which makes it urgent to build and upgrade some 800 miles of transmission lines over the next ten years. Utilities' expectations of a surge in electricity demand have served to push their stocks higher, prompt bullish forecasts for new power generation capacity construction, notably gas and nuclear, and lead some to issue warnings of capacity overbuilding. That last one comes from Fitch Ratings, which said utilities may be overestimating future demand from data centers "given the inconsistent ways that the industry tallies future demand," per the Wall Street Journal. The current surge in demand from data centers handling artificial intelligence is far from inconsistent, or at least so it seems. According to the WSJ report, data center developers are looking high and low for future electricity supplies as competition heats up between them and with EV charging providers and new manufacturing facilities built with funding under the Inflation Reduction Act. This is perhaps the bigger irony when it comes to artificial intelligence and the energy transition. Proponents of the former argue that it would help the transition by adjusting demand for electricity to supply variation as the grid becomes more reliant on wind and solar. Yet AI data centers demand so much electricity themselves that wind and solar cannot handle the load, so developers are looking to lock in future supply from baseload sources such as natural gas and nuclear. 
Microsoft made headlines last week with news that it had struck a deal for the reopening of the Three Mile Island nuclear power plant in order to feed its data centers. Artificial intelligence was listed as one big reason for the rise in electricity demand in the United States this year, and for the consequent increase in electricity supply from gas and coal. The other big reason was population growth. For now, said population seems to be shielded from the negative effects of this growing imbalance between the demand for and supply of electricity, at least for those on fixed-price long-term contracts. Yet if the surge in demand that power utilities are predicting, based on what they are currently witnessing, materializes, it would likely come to affect everyone using electricity sooner or later. And it might secure the long-term survival of both gas and coal generation, despite efforts to throttle them in favor of wind and solar. By Irina Slav for Oilprice.com | Decision Making/Prediction/Content Synthesis | Computer and Mathematical/Life, Physical, and Social Science | null | null | null | null | null | null
|
news | Anton Shilov | Microsoft inks deal to restart Three Mile Island nuclear reactor to fuel its voracious AI ambitions | Microsoft to get 819 MW of clean energy from a restarted nuclear power plant. | https://www.tomshardware.com/tech-industry/artificial-intelligence/microsoft-inks-deal-to-restart-three-mile-island-nuclear-reactor-to-fuel-its-voracious-ai-ambitions | 2024-09-20T17:26:51Z | Modern AI data centers consume enormous amounts of power, and it looks like they will get even more power-hungry in the coming years as companies like Google, Microsoft, Meta, and OpenAI strive towards artificial general intelligence (AGI). Oracle has already outlined plans to use nuclear power plants for its 1-gigawatt datacenters. It looks like Microsoft plans to do the same, as it just inked a deal to restart a nuclear power plant to feed its data centers, reports Bloomberg. 819 MW for AI and cloud data centers: Constellation Energy will invest $1.6 billion to restart its Three Mile Island nuclear plant in Pennsylvania. The revived reactor will provide clean electricity to Microsoft for 20 years, supporting the tech giant's AI and cloud computing energy needs. This reactor had a generating capacity of approximately 819 megawatts (MW) of electricity, which is enough power for a small to medium-sized city with hundreds of thousands of homes. The plant should be operational by 2028, and it will serve Microsoft exclusively. This agreement represents Microsoft's first long-term commitment to nuclear energy, which is part of its strategy to meet its growing energy needs with carbon-free sources. Three Mile Island has two reactors, one of which (with a capacity of 906 MW) was shut down in 1979 after a nuclear accident. The other (with a capacity of 819 MW), closed in 2019 due to economic issues, is now set to reopen thanks to the deal with Microsoft. 
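The article's claim that 819 MW can power "hundreds of thousands of homes" is easy to sanity-check. Assuming an average continuous household draw of about 1.2 kW (roughly 10,500 kWh per year, a typical U.S. figure; the exact value is our assumption, not the article's), the arithmetic looks like this:

```python
# Back-of-the-envelope check of the "hundreds of thousands of homes" claim.
plant_mw = 819            # reactor capacity from the article
avg_home_kw = 1.2         # assumed average continuous draw per U.S. home
homes = plant_mw * 1000 / avg_home_kw
print(f"{homes:,.0f} homes")  # 682,500 homes
```

That lands comfortably in the "hundreds of thousands" range, ignoring transmission losses and the gap between peak and average demand.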
The restart project has been in development since early 2023 when Constellation began evaluating the feasibility of bringing the reactor back online. After deciding to proceed with the project, the company entered discussions with potential buyers, with Microsoft showing immediate interest. Work on the plant will include extensive upgrades to essential equipment, such as the main transformer, turbine, and cooling systems. The facility will need to be fully restaffed, and Constellation plans to seek approval from the Nuclear Regulatory Commission to extend the reactor's operating license until 2054. Constellation will finance the $1.6 billion project without government aid, unlike other reactor revival efforts seeking state or federal support. For example, Holtec International is reviving a Michigan reactor with $1.8 billion in conditional government funding, according to Bloomberg. While Constellation is not opposed to outside help, its CEO, Joe Dominguez, prefers to avoid delays that often come with securing government approvals. The decision to restart Three Mile Island comes amid a broader resurgence of interest in nuclear power, especially from tech companies. As demand for cloud computing and AI grows, so does the need for stable, reliable energy sources that can run 24/7. These requirements make nuclear power an attractive option compared to the intermittency of renewable energy sources like wind and solar. One of the project's more challenging aspects will be connecting the reactor to the power grid managed by PJM Interconnection. PJM has a long queue of projects awaiting grid connections, but the head of Constellation hopes that progress can be made quickly enough to have the plant supplying power by 2027. 
Managing carbon footprint: Microsoft's decision to partner with Constellation highlights the challenges of managing its carbon footprint. The company aims to power its entire global network of data centers with clean energy by 2025, and this nuclear deal will help achieve this goal. However, despite the benefits of nuclear power, it does not fully solve issues like emissions from the materials used in data centers, such as steel, concrete, and semiconductors. "This agreement is a major milestone in Microsoft's efforts to help decarbonize the grid in support of our commitment to become carbon negative. Microsoft continues to collaborate with energy providers to develop carbon-free energy sources to help meet the grids' capacity and reliability needs," said Bobby Hollis, VP of Energy at Microsoft. Microsoft is not the only tech company turning to nuclear power to support its data centers. Nuclear power plants can operate continuously, making them an ideal match for tech companies that require consistent energy to power their data centers around the clock. Earlier in 2024, Amazon's cloud division signed a $650 million deal to buy a data center campus connected to a nuclear plant on the Susquehanna River in Pennsylvania. Oracle also announced plans to use three small nuclear reactors to power its 1 GW AI data centers in the future. | Unknown | Unknown | null | null | null | null | null | null
|
news | Kourtnee Jackson | Bill Gates Talks to CNET About AI, Misinformation and Climate Change | I spoke with the tech pioneer about his new Netflix docuseries that touches on artificial intelligence, global warming and more. | https://www.cnet.com/tech/services-and-software/bill-gates-talks-to-cnet-about-ai-misinformation-and-climate-change/ | 2024-09-05T07:01:03Z | Bill Gates doesn't believe tech experts hold all the answers on how artificial intelligence will fully impact jobs and social activities in the future, but he does think it's important we all start working with AI tools now, given where the tech is headed. "The ability to work well with AI and take advantage of it is now more important than understanding Excel or the internet," he told me. It's one of the topics I discussed with him during our conversation about What's Next? The Future with Bill Gates, a new docuseries landing on Netflix on Sept. 18. Described by the Microsoft co-founder in the opening episode as "a show about our future," the upcoming five-part series examines multiple issues: AI, misinformation, income inequality, the climate crisis and global health. The doc spotlights not only Gates' perspective but differing viewpoints from doctors, educators, scientists, activists, entrepreneurs, artists like Lady Gaga and filmmaker James Cameron, and Gates' family. These contrasting views -- and their discussions -- gave the tech pioneer something to ponder. Five years ago, Netflix aimed to give viewers a tour of Gates' problem-solving mindset through the documentary Inside Bill's Brain: Decoding Bill Gates. The Davis Guggenheim-directed series explored Microsoft's early days, the tech innovator's childhood and his charitable pursuits in global health, climate change and toilet sanitation in developing countries. 
Today, Gates is still seeking to solve most of these problems through tech and philanthropy -- but now AI is here. What Bill Gates thinks about AI and jobs: Writer Tim Urban and OpenAI's Greg Brockman are among those featured in the first episode, sharing their insights on ChatGPT and the evolution of AI superintelligence, while other experts weigh in on the ethics, benefits and drawbacks of AI. At one moment, New York Times journalist Kevin Roose brings up AI's impact on jobs. Cameron, meanwhile, raises points about how it's become harder for him to write sci-fi as technology moves at a faster pace and expresses concerns about AI. He and Gates discuss its effects on humans' sense of purpose and how Cameron holds a "dystopian" view toward the tech as opposed to Gates' optimism. Pointing to Gates' conversation with Cameron, I asked for his thoughts on the future of humans and jobs with AI. He asserts that AI could help with shortages of teachers, physicians and mental health professionals but admits humans ought to be taken into account and limits should probably be established. "We don't want to watch robots play baseball, and so where is the boundary where you say, 'Okay, whatever the machines can do is great,' and these other things are perhaps very social activities, intimate things, where we keep those jobs," he said. He explains these conversations will be ongoing for the next 10 to 20 years as the nature of work evolves with AI, but it's great that discussions are happening right now. Why? "Because that's not for technologists to understand better than anyone else. That really gets to the heart of religious values, philosophical values... and it's kind of a nirvana. But are we going to manage it well? And how quickly does it come?" 
Gates notes that this episode of the series not only looks at how AI's benefits will dominate conversations over the next couple decades but also examines how we rethink spending our time when the way we work changes. That doesn't mean people should ignore AI's uses today. As a former CEO, he'd advise everyone to "use AI as a tool." According to Gates, "It's becoming -- whether you're an illustrator or a coder or a support person or somebody who is in the health care system -- the ability to work well with AI and take advantage of it is now more important than understanding Excel or the internet." [Photo caption: James Cameron voices his concerns about AI to Bill Gates in Netflix's What's Next? The Future with Bill Gates. Credit: Netflix] Misinformation and the problem of trust online: Exactly how difficult is it to find trustworthy information in a digital world? It's a challenge -- even for the tech-savvy team behind Gates. From conspiracy theories to deepfake images to disinformation, they track what is posted about him online. As a public figure, Gates is used to scrutiny, but some things still catch him off guard, like people buying into conspiracy theories and wrongfully pinning the onset of the pandemic on him or falsely suggesting he put microchips in vaccines to track people's movements. While he feels keeping an eye on how his name pops up online might seem like a hassle, he's not really bothered by it. In fact, he finds some of it pretty amusing, like the idea that he's using chips to keep tabs on people. He even had a woman confront him on the street about it. Gates told her, "I really don't need to track you in particular." As the Netflix series broaches the topic of misinformation, it turns to the notion that social media networks and other platforms play a role in spreading truths, untruths or pure entertainment. 
A group of Stanford students and experts discuss how governments, businesses and individuals all participate in creating and disseminating misinformation. Gates' 21-year-old daughter, Phoebe, offered some color on the topic as well. In a callback to a Reddit AMA Gates did years ago that sparked some conspiracy theories, she cautions him about what he says and posts, and how it can quickly blow up online because of his name and public persona. To his surprise, she talked about experiencing online harassment due to her promoting health issues important to her on social media. Blowback was connected not only to a rise in her follower count but also her family's famous last name. [Photo caption: In the series, Phoebe Gates talks with her father about internet culture and misinformation. Credit: Netflix] That said, Gates doesn't have a solid solution for tackling misinformation. I asked him how we can defend against it. Ideally, there will be "systems and behaviors" in place to help us be more aware of who created what, he said, but he believes most countries are striving to find suitable boundaries when it comes to addressing misinformation. "The US is a tough one because we have the notion of the First Amendment and what are the exceptions like yelling 'fire' in a theater," he explained. "I do think over time, with things like deepfakes, most of the time you're online you're going to want to be in an environment where the people are truly identified, that is they're connected to a real-world identity that you trust, instead of just people saying whatever they want." As viewers and avid internet users, you may wonder who should oversee how misinformation is handled. Government agencies? Tech companies? Both? Gates doesn't answer that question directly but instead sees a need for some flexibility when it comes to misinformation oversight, compared with the other topics highlighted in this doc that may have clearer paths to resolution. 
Making progress on climate change: Two things stand out during an episode on global warming: Gates' mention of technologies currently "sitting on a shelf" and skepticism from some younger-aged climate activists who meet with him. In the episode, Gates says he learns from them, just like the scientists. Though clean tech efforts are ramping up, there are concerns that things aren't moving fast enough. Public policy and scale are factors when it comes to progress. So what tech is available right now? Gates told me new innovation is needed for industries like steel and cement manufacturing, and his Breakthrough Energy venture is part of funding those projects. "There are other areas, like food products, that have a low carbon footprint where we just need to drive awareness," Gates said. He explains that as people drive demand up, it'll help propel more innovation while reducing extra costs (the green premium) tied to buying things like electric heat pumps and vehicles or solar panels. As a developed nation, the US -- and its consumers -- can support efforts to increase demand and innovation. "Rich countries have to drive those markets, and that's how you eventually get to price points that you can go to the middle-income countries where 65% of our planet lives and say to the consumers there that it is affordable to them," Gates said. He says getting wealthier consumers who support the cause to buy these products is part of the path to worldwide adoption. Another aspect is policy, an area where Breakthrough Energy has leveraged its influence. The organization served in an advisory role for the 2022 Inflation Reduction Act in the US, a piece of climate legislation that Gates said provides "tax credits for existing and new technologies to accelerate their deployment." In the docuseries, viewers learn about the venture firm's current investments and what's being done in the realms of nuclear energy, food waste and construction. 
Is it enough to satisfy the young activists who challenge Gates on climate change? Does he think we'll achieve the goal of reducing carbon emissions by 2050? Maybe -- or maybe not. "We're not meeting the activists' high expectations, including staying below the 1.5 degrees [Celsius]" target for limiting the increase in global temperatures, Gates said. "We are making enough progress that people should not despair. That is, we need to keep working on this." [Photo caption: Bill Gates at the TerraPower plant, one of the nuclear energy ventures his firm has invested in. Credit: Netflix] What else should you expect from the Netflix docuseries? You will have to stream the entire series to hear more from Gates and featured experts, public figures and everyday people about each topic and its myriad challenges, including a dive into income inequality and infectious disease. A cause that is important to the Gates Foundation is global health, and the final episode of the series is one that Gates would like viewers to take in. "I'll be disappointed if the global health issue doesn't get significant viewership," he says, smiling. "Because of my time and resources, right up there with climate, that's the thing I work on the most and keeping that visible -- like malaria deaths, which is a big focus of that, the 500,000 kids a year who are dying of that. I'm trying to be smarter about how we get everybody to care about that so that the rich countries stay engaged in helping out." | Unknown | Unknown | null | null | null | null | null | null
|
news | golean added to PyPI | A Python package for interacting with the GoLean API service. | https://pypi.org/project/golean/ | 2024-09-06T05:01:07Z | GoLean is a powerful Python client for the GoLean API, offering efficient and customizable prompt compression services tailored for AI applications. Optimize your AI workflows with our state-of-the-art compression algorithms.

Features
- Efficient Prompt Compression: Optimize performance with customizable compression rates.
- Flexible Model Selection: Choose from a range of available models for your compression tasks.
- Secure Authentication: Robust API key-based authentication system.
- Easy Configuration: Seamless setup with support for environment variables.
- Comprehensive Documentation: Detailed guides and API references to get you started quickly.

Installation

Install the GoLean client using pip:

    pip install golean

Quick Start

    from golean import GoLean

    # Initialize the GoLean client
    golean = GoLean(api_key="your_api_key")

    # Compress a prompt
    result = golean.compress_prompt(
        context="The quick brown fox jumps over the lazy dog.",
        question="What animal is mentioned?",
        compression_rate=0.5,
    )
    print(result['compressed_prompt'])

Usage

Initialization

    from golean import GoLean

    # Initialize with API key
    golean = GoLean(api_key="your_api_key")

    # Or, set GOLEAN_API_KEY as an environment variable and initialize without parameters
    golean = GoLean()

Compressing Prompts

    result = golean.compress_prompt(
        context="Detailed context goes here...",
        question="Your question goes here...",
        compression_rate=0.7,
        model="gpt-4o",
    )
    print(f"Compressed Prompt: {result['compressed_prompt']}")
    print(f"Compression Ratio: {result['compression_ratio']}")

Configuration

API Key: set your API key using one of these methods:
1. Environment variable: export GOLEAN_API_KEY=your_api_key_here
2. A .env file in your project root: GOLEAN_API_KEY=your_api_key_here
3. Directly in your code (not recommended for production): golean = GoLean(api_key="your_api_key_here")

API Reference

GoLean Class

    from typing import Any, Dict, Optional

    class GoLean:
        def __init__(self, api_key: Optional[str] = None):
            """
            Initialize the GoLean client.

            Args:
                api_key (str, optional): Your GoLean API key. If not provided,
                    reads from the GOLEAN_API_KEY env variable.
            """

        def compress_prompt(
            self,
            context: str,
            question: str,
            compression_rate: float = 0.5,
            model: str = "gpt-4o",
        ) -> Dict[str, Any]:
            """
            Compress a prompt using the GoLean API.

            Args:
                context (str): The context for the prompt.
                question (str): The question to be compressed.
                compression_rate (float): Desired compression rate (0 < rate <= 1). Default is 0.5.
                model (str): Model to use for compression. Default is "gpt-4o".

            Returns:
                Dict[str, Any]: Dictionary containing the compressed prompt and metadata.
            """

For complete API documentation, please refer to our official documentation.

Examples

Basic Compression

    result = golean.compress_prompt(
        context="The solar system consists of the Sun and everything that orbits around it.",
        question="What is at the center of the solar system?",
        compression_rate=0.6,
    )
    print(result['compressed_prompt'])

Using Different Models

    result = golean.compress_prompt(
        context="Machine learning is a subset of artificial intelligence...",
        question="What is the relationship between ML and AI?",
        compression_rate=0.8,
        model="gpt-3.5-turbo",
    )
    print(result['compressed_prompt'])

Licensing

GoLean API Client is a commercial product. Usage of this software is subject to the terms and conditions outlined in our End User License Agreement (EULA). Please review the EULA carefully before using this software. To obtain a license for commercial use, please visit our pricing page or contact our sales team at [email protected]. For assistance, please contact our support team. For enterprise support options, please contact our sales team.

Legal

Copyright © 2024 GoLean, Inc. All rights reserved. GoLean is a registered trademark of GoLean, Inc. All other trademarks are the property of their respective owners.

Empower your AI with GoLean - Compress, Optimize, Succeed. 
| Content Synthesis/Process Automation | Unknown | null | null | null | null | null | null |
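The GoLean README above returns a `compression_ratio` field but never defines how it is computed. As a local sanity check only (the token counting here is naive whitespace splitting, which is our assumption; the GoLean API may use a model tokenizer and a different definition), one can estimate a compressed-to-original token ratio like this:

```python
def estimate_compression_ratio(original: str, compressed: str) -> float:
    """Rough compressed/original ratio using whitespace token counts."""
    orig_tokens = len(original.split())
    if orig_tokens == 0:
        raise ValueError("original prompt is empty")
    return len(compressed.split()) / orig_tokens

ratio = estimate_compression_ratio(
    "The quick brown fox jumps over the lazy dog.",
    "quick brown fox jumps lazy dog",
)
print(round(ratio, 2))  # 6 of 9 tokens kept -> 0.67
```

Comparing such a local estimate against the API's reported ratio is a cheap way to spot surprising results before a compressed prompt reaches a downstream model.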
||
news | null | 2 stocks we're tempted to exit; plus, rapid-fire updates on our other 30 names | During September's Monthly Meeting, Cramer breaks down why these two long-time holdings are on the chopping block. | https://www.cnbc.com/2024/09/12/we-are-tempted-to-exit-morgan-stanley-pg-updates-on-our-other-30-names.html | 2024-09-12T20:12:20Z | Here's a rapid-fire update on all 32 stocks in Jim Cramer's Charitable Trust, the portfolio we use for the CNBC Investing Club. Jim broke down the portfolio Thursday during the September Monthly Meeting . Apple : This company is the real winner when it comes to the artificial intelligence boom, according to Jim. Apple gets to be the "ultimate free rider," utilizing OpenAI's well-regarded large language models for free in its forthcoming software update for newer devices. Jim also told members not to worry about hardware sales after recent mixed Wall Street commentary regarding the new iPhone 16 models. Abbott Laboratories : The lawsuits over specialized formula for premature infants haven't disappeared but at least recently, an overhang on the stock because of that legal risk seems to have. When Abbott Labs trades on its underlying business fundamentals, it's an extremely positive situation. It was one of our top-five performers since the August Monthly Meeting. Advanced Micro Devices : While AMD's technology is not close to Nvidia's, the opportunity for AI chips is big enough for the company to make a lot of money as a No. 2 player. It also has a strong foothold in the PC and traditional data-center markets. AMD shares have caught a bid in recent days, but they're still cheap considering the growth of its AI processors. Amazon : Shares have now erased all of their steep post-earnings declines from early August, validating Jim's belief that the declines should be bought . 
Amazon is wisely investing in AI to not only support customers of its profitable cloud unit Amazon Web Services, but also to make its logistics operations more efficient with lower costs. Broadcom : Investors who don't own any Broadcom yet should consider starting a position here, Jim said. The stock has already erased all of its post-earnings losses from last week. Bad stocks don't recover that quickly. In addition to a compelling AI chip business, Broadcom's acquisition of VMware is going well and bolstering its higher-margin software segment. Best Buy : We're sitting on robust gains in Best Buy. But it's increasingly clear there's more to talk about with Best Buy than just the AI personal computers theme. The stock can fly because of its attractive 3.8% annual dividend yield, which looks even more attractive with the Federal Reserve about to cut interest rates. Best Buy's exposure to big-ticket products needed by new homeowners should also see a boost thanks to those lower rates. Costco Wholesale : The retail giant got hit with an analyst downgrade a few days ago over valuation concerns. The issue is Costco's stock has never been cheap, Jim said, which makes it a tough reason to head for the exits now. As long as Costco's expanding and taking a bigger piece of the consumer spending pie, this is a stock worth owning. We don't see that changing anytime soon. Salesforce : The company's last quarter was excellent but Wall Street didn't really care because it operates in enterprise software, an unloved corner of the tech industry. Salesforce is not that expensive but on its own, that's not a great reason to own a stock. Jim said he'll do some investigating at the company's influential Dreamforce conference in San Francisco next week. Coterra Energy : We're not looking to add to our energy exposure beyond our position here. The last few times we bought more Coterra, specifically, were to hedge against geopolitical risks. 
That captures the way we're thinking about this position for now. It's just a tough situation for commodity prices, which hurts Coterra's stock price no matter how well the company is controlling what it can control. DuPont : If you don't own this stock, it's a great buy if it falls below $80 a share, slightly below where it's trading Thursday. The conglomerate is getting ready to split into three companies. The stock could have a good run into 2025 as investors recognize that its three segments are worth far more separately than the current stock price reflects with them together, Jim said. Danaher : Our battle in the life-sciences stock has turned the corner, and we're glad we fought the fight. Danaher is trading around $269 a share now. Around $280, it might be time to take a little off the table out of discipline. But we still like the company's prospects now that expectations for its China business have been reset and biotech IPO activity picks up. Disney : The recent New York Times exposé into the power struggle between second-time CEO Bob Iger and his once-successor Bob Chapek helps explain why the company's stock has been such a poor performer. We don't want to throw in the towel because there's clearly trapped value in the stock, but adding more to the position doesn't make sense until it drops to lower levels. Dover : AI remains a key part of the Club's investment thesis in this stock. If data center spending continues, sales for Dover's products like thermal connectors used in liquid cooling will surely increase. Jim said Dover's exposure to other themes like reshoring manufacturing capacity in the United States and decarbonization are additional positives. Eaton : Similar to Dover, the stock remains a solid AI play in the industrials sector. That's because Eaton's electrical equipment essentially creates the plumbing of a data center, Jim said. It will benefit from more investments in these computing facilities. 
We're remaining upbeat on shares, especially after Larry Ellison, co-founder and chief technology officer of Oracle , indicated earlier this week that data center demand is fervent. GE Healthcare : Similar to Danaher, its exposure to China has been a drag. But we still see reasons to own the stock, including its potential to benefit from the rollout of anti-amyloid Alzheimer's drugs made by Biogen and fellow Club holding Eli Lilly. This potential catalyst should boost GE Healthcare's diagnostic unit because it makes an agent to detect amyloid plaques that are hallmarks of the memory-robbing disease, as well as its MRI business because those machines are used to monitor patients on the drugs. Given this is an entirely new class of drugs, though, it is taking some time to gain traction. We're not recommending putting money to work in the stock for now. Alphabet : The search engine heavyweight has frustrated Jim for a while now, but we continue to hold onto the stock due to the belief that a major win for its AI efforts could be on the horizon. Still, we would look to trim the position further into future strength. Home Depot : Our newest stock, initiated last week , gives the portfolio additional exposure to the uptick in housing activity that we expect during the Fed's rate-cutting cycle. We wanted to get in before the first cut, which should arrive next week. We added to the position Wednesday. Home Depot's larger business serving professionals is why we prefer it over chief rival Lowe's. Honeywell : The industrial conglomerate has let us down following several quarters of little growth. Although it's tempting to offload shares, Jim said to stick with Honeywell for now because there's huge value in individual businesses like aerospace. To be sure, there's only so long we can be patient. We'll be holding management accountable to act on the portfolio reshuffling timeline CEO Vimal Kapur laid out Wednesday at an industry conference. 
Linde : This stock continues to impress us. We'd be buyers if Linde shares drop 5% to 10% from current levels because it's a high-quality company with consistent earnings growth. The industrial gas supplier is levered to solid end markets like electronics, along with megatrends like the transition to clean energy. There's little negative to say about this portfolio stock. Eli Lilly : We trimmed some Eli Lilly last week, considering the stock had run almost $400 a share without us making a sale. It felt like tempting fate to keep letting it all ride. Our investment thesis is fully intact, though. Lilly's obesity and weight-loss GLP-1 drugs will fuel above-average growth for years to come, with its recently approved Alzheimer's treatment representing another avenue for expansion, though that will be a slower ramp than GLP-1s. Lilly announced Thursday plans to spend even more money to boost drug manufacturing capacity, which is good news because it means demand is still strong. Meta Platforms : Meta is deftly using all its user data to train AI systems that will help it capture as much of the digital advertising market as possible, outside of what is captured by Alphabet and Amazon, of course. Morgan Stanley : The financial stock is in no-man's land. Morgan Stanley shares are priced too low to offload shares, and too high to pick up more. We're sticking it out for now. But Jim is considering an exit of the position altogether for a potentially better investment banking play in peer Goldman Sachs. Microsoft : Wall Street skepticism on the sustainability of the AI craze hit the stock last week. It's not enough to alter our view that Microsoft's bet in the nascent tech will pay off in the long term. Jim's conviction on the hyperscaler remains strong and the firm's early investment in OpenAI has given the company a decent lead in the heated AI arms race. Nvidia : The AI data center buildout is nowhere near done. It only started two years ago around the launch of ChatGPT. 
And, it might need constant reinvention, which makes selling Nvidia now the wrong thing to do. CEO Jensen Huang drove that point home Wednesday at a conference. The stock has in recent days recovered a large chunk of its earnings-related sell-off. Nextracker : Policy risk and interest rate risk have been chief overhangs. We know the Fed is about to lower rates. But at the presidential debate, Democrat Kamala Harris and Republican Donald Trump indicated they like solar energy. Signs of easing on both fronts sent the stock rallying Wednesday. Palo Alto Networks : We're more upbeat on shares following a massive global IT outage caused by peer CrowdStrike in July. Palo Alto's biggest competitor is under fire, which could cause some prospective customers to choose our go-to cybersecurity name instead. After a multimonth battle with Palo Alto stock (we bought and sold shares several times), we're deciding to stay long. Procter & Gamble : Like Morgan Stanley, we're tempted to exit this position, too. Procter & Gamble is a solid recession-resistant stock. But with monetary policy easing likely on the horizon, it's not a great time to have a lot of exposure to defensive names. P & G did not report a good quarter. That makes us worried about giving up a great gain. Starbucks : When asked why the Club didn't make a sale on the stock's recent outsized run, Jim responded that there's more upside to come with new CEO Brian Niccol running the firm. Shares are up 28% from the eve of Niccol's appointment announcement on Aug. 13 to Thursday. We are confident in Niccol's ability to fix Starbucks' key issues like its China market and throughput in stores. Constellation Brands : The beer stock has underperformed the market in 2024 (up 3% year-to-date, versus the S & P 500's 17% gains), but we're waiting out the doldrums. The Modelo maker has stagnated, in part, because the company has had to pay down a lot of debt tied to the share-class conversion agreement with the founding Sands family. 
It doesn't seem like that's been realized in the share price yet, but we're also not going to tolerate another six months of stock underperformance. Stanley Black & Decker : While shares of Stanley Black & Decker remain well below their Covid-era highs, we're expecting the stock to surge as lower rates spark more housing sector activity. That can drive sales for the toolmaker. We'll likely wait for that advance before we take profits in the stock again. TJX Companies : TJX is another stock in which it's fair to wonder whether we have overstayed our welcome. But it seems like with each update we get, the Marshalls and HomeGoods company just keeps crushing it. It does not make sense to sell the stock of a company that is doing that well. Don't subsidize losers with winners, Jim said. Wells Fargo : Despite the recent sell-off in bank stocks, Jim remains upbeat on Wells Fargo. He praised CEO Charlie Scharf's leadership over the firm, including Wells Fargo's expansion into investment banking, where activity should pick up as interest rates come down. It also pays to wait in the stock because of its juicy dividend yield. (See here for a full list of the stocks in Jim Cramer's Charitable Trust.) As a subscriber to the CNBC Investing Club with Jim Cramer, you will receive a trade alert before Jim makes a trade. Jim waits 45 minutes after sending a trade alert before buying or selling a stock in his charitable trust's portfolio. If Jim has talked about a stock on CNBC TV, he waits 72 hours after issuing the trade alert before executing the trade. THE ABOVE INVESTING CLUB INFORMATION IS SUBJECT TO OUR TERMS AND CONDITIONS AND PRIVACY POLICY , TOGETHER WITH OUR DISCLAIMER . NO FIDUCIARY OBLIGATION OR DUTY EXISTS, OR IS CREATED, BY VIRTUE OF YOUR RECEIPT OF ANY INFORMATION PROVIDED IN CONNECTION WITH THE INVESTING CLUB. NO SPECIFIC OUTCOME OR PROFIT IS GUARANTEED.Traders work on the floor of the New York Stock Exchange during morning trading on Aug. 
23, 2024. Here's a rapid-fire update on all 32 stocks in Jim Cramer's Charitable Trust, the portfolio we use for the CNBC Investing Club. Jim broke down the portfolio Thursday during the September Monthly Meeting. | Decision Making/Prediction/Content Synthesis | Business and Financial Operations/Management | null | null | null | null | null | null |
|
news | Kourtnee Jackson | Bill Gates Chats With Us About AI, Misinformation and Climate Change | I spoke with the tech pioneer about his new Netflix docuseries that touches on the future of AI, global warming and more. | https://www.cnet.com/tech/services-and-software/bill-gates-chats-with-us-about-ai-misinformation-and-climate-change/ | 2024-09-18T20:10:03Z | Bill Gates doesn't believe tech experts hold all the answers on how artificial intelligence will fully impact jobs and social activities in the future, but he does think it's important we all start working with AI tools now, given where the tech is headed."The ability to work well with AI and take advantage of it is now more important than understanding Excel or the internet," he told me. It's one of the topics I discussed with him during our conversation about What's Next? The Future with Bill Gates, a new docuseries that debuted on Netflix on Sept. 18. Described by the Microsoft co-founder in the opening episode as "a show about our future," the five-part series examines multiple issues: AI, misinformation, income inequality, the climate crisis and global health. He offers an inside look in a new blog post published on GatesNotes today. The doc spotlights not only Gates' perspective but differing viewpoints from doctors, educators, scientists, activists, entrepreneurs, artists like Lady Gaga and filmmaker James Cameron, and Gates' family. These contrasting views -- and its discussions -- gave the tech pioneer something to ponder.Five years ago, Netflix aimed to give viewers a tour of Gates' problem-solving mindset through the documentary Inside Bill's Brain: Decoding Bill Gates. The Davis Guggenheim-directed series explored Microsoft's early days, the tech innovator's childhood and his charitable pursuits in global health, climate change and toilet sanitation in developing countries. 
Today, Gates is still seeking to solve most of these problems through tech and philanthropy -- but now AI is here. What Bill Gates thinks about AI and jobs. Writer Tim Urban and OpenAI's Greg Brockman are among those featured in the first episode, sharing their insights on ChatGPT and the evolution of AI superintelligence, while other experts weigh in on the ethics, benefits and drawbacks of AI. At one moment, New York Times journalist Kevin Roose brings up AI's impact on jobs. Cameron, meanwhile, raises points about how it's become harder for him to write sci-fi as technology moves at a faster pace and expresses concerns about AI. He and Gates discuss its effects on humans' sense of purpose and how Cameron holds a "dystopian" view toward the tech as opposed to Gates' optimism. Pointing to Gates' conversation with Cameron, I asked for his thoughts on the future of humans and jobs with AI. He asserts that AI could help with shortages of teachers, physicians and mental health professionals but admits humans ought to be taken into account and limits should probably be established. "We don't want to watch robots play baseball, and so where is the boundary where you say, 'OK, whatever the machines can do is great,' and these other things are perhaps very social activities, intimate things, where we keep those jobs?" he said. He explains these conversations will be ongoing for the next 10 to 20 years as the nature of work evolves with AI, but it's great that discussions are happening right now. Why? "Because that's not for technologists to understand better than anyone else. That really gets to the heart of religious values, philosophical values... and it's kind of a nirvana. But are we going to manage it well? And how quickly does it come?" 
Gates notes that this episode of the series not only looks at how AI's benefits will dominate conversations over the next couple decades but also examines how we rethink spending our time when the way we work changes. That doesn't mean people should ignore AI's uses today. As a former CEO, he'd advise everyone to "use AI as a tool." According to Gates, "It's becoming -- whether you're an illustrator or a coder or a support person or somebody who is in the health care system -- the ability to work well with AI and take advantage of it is now more important than understanding Excel or the internet." James Cameron voices his concerns about AI to Bill Gates in Netflix's What's Next? The Future with Bill Gates. (Netflix) Misinformation and the problem of trust online. Exactly how difficult is it to find trustworthy information in a digital world? It's a challenge -- even for the tech-savvy team behind Gates. From conspiracy theories to deepfake images to disinformation, they track what is posted about him online. As a public figure, Gates is used to scrutiny, but some things still catch him off guard, like people buying into conspiracy theories and wrongfully pinning the onset of the pandemic on him or falsely suggesting he put microchips in vaccines to track people's movements. While he feels keeping an eye on how his name pops up online might seem like a hassle, he's not really bothered by it. In fact, he finds some of it pretty amusing, like the idea that he's using chips to keep tabs on people. He even had a woman confront him on the street about it. Gates told her, "I really don't need to track you in particular." As the Netflix series broaches the topic of misinformation, it turns to the notion that social media networks and other platforms play a role in spreading truths, untruths or pure entertainment. 
A group of Stanford students and experts discuss how governments, businesses and individuals all participate in creating and disseminating misinformation. Gates' 21-year-old daughter, Phoebe, offered some color on the topic as well. In a callback to a Reddit AMA Gates did years ago that sparked some conspiracy theories, she cautions him about what he says and posts, and how it can quickly blow up online because of his name and public persona. To his surprise, she talked about experiencing online harassment due to her promoting health issues important to her on social media. Blowback was connected not only to a rise in her follower count but also to her family's famous last name. In the series, Phoebe Gates talks with her father about internet culture and misinformation. (Netflix) That said, Gates doesn't have a solid solution for tackling misinformation. I asked him how we can defend against it. Ideally, there will be "systems and behaviors" in place to help us be more aware of who created what, he said, but he believes most countries are striving to find suitable boundaries when it comes to addressing misinformation. "The US is a tough one because we have the notion of the First Amendment and what are the exceptions like yelling 'fire' in a theater," he explained. "I do think over time, with things like deepfakes, most of the time you're online you're going to want to be in an environment where the people are truly identified, that is they're connected to a real-world identity that you trust, instead of just people saying whatever they want." As viewers and avid internet users, you may wonder who should oversee how misinformation is handled. Government agencies? Tech companies? Both? Gates doesn't answer that question directly but instead sees a need for some flexibility when it comes to misinformation oversight, compared with the other topics highlighted in this doc that may have clearer paths to resolution. 
Making progress on climate change. Two things stand out during an episode on global warming: Gates' mention of technologies currently "sitting on a shelf" and skepticism from some younger-aged climate activists who meet with him. In the episode, Gates says he learns from them, just like the scientists. Though clean tech efforts are ramping up, there are concerns that things aren't moving fast enough. Public policy and scale are factors when it comes to progress. So what tech is available right now? Gates told me new innovation is needed for industries like steel and cement manufacturing, and his Breakthrough Energy venture is part of funding those projects. "There are other areas, like food products, that have a low carbon footprint where we just need to drive awareness," Gates said. He explains that as people drive demand up, it'll help propel more innovation while reducing extra costs (the green premium) tied to buying things like electric heat pumps and vehicles or solar panels. As a developed nation, the US -- and its consumers -- can support efforts to increase demand and innovation. "Rich countries have to drive those markets, and that's how you eventually get to price points that you can go to the middle-income countries where 65% of our planet lives and say to the consumers there that it is affordable to them," Gates said. He says getting wealthier consumers who support the cause to buy these products is part of the path to worldwide adoption. Another aspect is policy, an area where Breakthrough Energy has leveraged its influence. The organization served in an advisory role for the 2022 Inflation Reduction Act in the US, a piece of climate legislation that Gates said provides "tax credits for existing and new technologies to accelerate their deployment." In the docuseries, viewers learn about the venture firm's current investments and what's being done in the realms of nuclear energy, food waste and construction. 
Is it enough to satisfy the young activists who challenge Gates on climate change? Does he think we'll achieve the goal of reducing carbon emissions by 2050? Maybe -- or maybe not. "We're not meeting the activists' high expectations, including staying below the 1.5 degrees [Celsius]" target for limiting the increase in global temperatures, Gates said. "We are making enough progress that people should not despair. That is, we need to keep working on this." Bill Gates at the TerraPower plant, one of the nuclear energy ventures his firm has invested in. (Netflix) What else should you expect from the Netflix docuseries? You will have to stream the entire series to hear more from Gates and featured experts, public figures and everyday people about each topic and its myriad challenges, including a dive into income inequality and infectious disease. A cause that is important to the Gates Foundation is global health, and the final episode of the series is one that Gates would like viewers to take in. "I'll be disappointed if the global health issue doesn't get significant viewership," he says, smiling. "Because of my time and resources, right up there with climate, that's the thing I work on the most and keeping that visible -- like malaria deaths, which is a big focus of that, the 500,000 kids a year who are dying of that. I'm trying to be smarter about how we get everybody to care about that so that the rich countries stay engaged in helping out." | Unknown | Education, Training, and Library/Healthcare Practitioners and Support/Computer and Mathematical/Arts, Design, Entertainment, Sports, and Media | null | null | null | null | null | null |
|
news | Lucas Mearian | Is the rise of genAI about to create an energy crisis? | The voracious demand for generative AI (genAI) tools is driving a significant increase in the use of power-sucking GPUs and TPUs in data centers, some of which are scaling up from tens of thousands to more than 100,000 units per server farm. With the shift to cloud computing and genAI, new data centers are growing in size. It is not unusual to see new facilities being built with capacities from 100 to 1,000 megawatts — roughly equivalent to the energy requirements of 80,000 to 800,000 homes, according to the Electric Power Research Institute (EPRI). AI-related energy consumption is expected to grow about 45% a year through the next three years. For example, the most popular chatbot, OpenAI's ChatGPT, is estimated to use about 227 million kilowatt-hours of electricity annually to process 78 billion user queries. To put that into perspective, the energy ChatGPT uses in one year could power 21,602 US homes, according to research by BestBrokers, an online service that calculates odds for trading from big data. "While this accounts for just 0.02% of the 131 million U.S. households, it's still a staggering amount, especially considering the US ranks third in the world for household numbers," BestBrokers wrote in a new report. GenAI models are typically much more energy-intensive than data retrieval, streaming, and communications applications — the main forces that drove data center growth over the past two decades, according to EPRI's report. At 2.9 watt-hours per ChatGPT request, AI queries are estimated to require 10 times the electricity of traditional Google queries, which require about 0.3 watt-hours each; and emerging, computation-intensive capabilities such as image, audio, and video generation have no precedent, according to EPRI. There are now nearly 3,000 data centers in the US and that number is expected to double by 2030. 
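The per-query figures above are easy to sanity-check. The 227 million kWh, 78 billion queries, 2.9 Wh, 0.3 Wh, and 21,602-home numbers come from the article itself; the ~10,500 kWh-per-household annual average is an assumption implied by its homes claim, not a figure from the source:

```python
# Back-of-envelope check of the article's ChatGPT energy estimates.
annual_kwh = 227e6        # estimated annual consumption, kWh (article figure)
annual_queries = 78e9     # estimated user queries per year (article figure)

# Energy per query in watt-hours (1 kWh = 1,000 Wh)
wh_per_query = annual_kwh * 1_000 / annual_queries
print(f"{wh_per_query:.1f} Wh per query")                 # ~2.9 Wh

# Ratio to a traditional search query (~0.3 Wh each, per EPRI)
print(f"~{wh_per_query / 0.3:.0f}x a traditional query")  # ~10x

# Homes powered, assuming ~10,500 kWh per US household per year
homes_powered = annual_kwh / 10_500
print(f"~{homes_powered:,.0f} homes")   # close to the 21,602 cited
```

Run as-is, this reproduces the article's ~2.9 Wh-per-query and roughly 10x figures to within rounding.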
While genAI applications are estimated to use only 10% to 20% of data center electricity today, that percentage is rising quickly. "Data centers are expected to grow to consume 4.6% to 9.1% of U.S. electricity generation annually by 2030 versus an estimated 4% today," EPRI said. No crisis yet — but energy demands are growing. Though data center power consumption is expected to double by 2028, according to IDC research director Sean Graham, it's still a small percentage of overall energy consumption — just 18%. "So, it's not fair to blame energy consumption on AI," he said. "Now, I don't mean to say AI isn't using a lot of energy and data centers aren't growing at a very fast rate. Data Center energy consumption is growing at 20% per year. That's significant, but it's still only 2.5% of the global energy demand. "It's not like we can blame energy problems exclusively on AI," Graham said. "It's a problem, but AI is a convenient scapegoat for the world's energy problems." Each GPU in an AI data center can consume more than 400 watts of power while training a single large language model (LLM) — the algorithmic foundation of genAI tools and platforms. That means simply training a single LLM like GPT-3 can lead to up to 10 gigawatt-hours (GWh) of power consumption. That's roughly equal to the yearly electricity consumption of over 1,000 US households. "Interestingly, training the GPT-4 model, with its staggering 1 trillion parameters, required a whopping 62.3 million kWh of electricity over a 100-day period," BestBrokers' report said. "This is 48 times the energy consumed by GPT-3, which, in comparison, used about 1.3 million kWh in just 34 days." There are hundreds of such data centers across the world, mainly managed by big tech firms like Amazon, Microsoft and Google, according to a study by the University of Washington. And the amount of energy they use is rising quickly. In 2022, total AI datacenter energy consumption in the US hit 23 terawatt-hours (TWh). 
(A TWh represents one trillion watts of power used for one hour). That figure is expected to increase at a combined annual growth rate of 44.7% and will reach 146.2 TWh by 2027, according to IDC Research. By that point, AI datacenter energy consumption is expected to account for 18% of all datacenter energy consumption. There is already speculation — given how fast genAI has erupted onto the scene — that it won't take that long before a crisis emerges. Tech entrepreneur Elon Musk said earlier this year that by 2025, there will not be enough energy to power AI's rapid advances. A two-tiered billing system? Beyond the pressure from genAI growth, electricity prices are rising due to supply and demand dynamics, environmental regulations, geopolitical events, and extreme weather events fueled in part by climate change, according to an IDC study published today. IDC believes the higher electricity prices of the last five years are likely to continue, making datacenters considerably more expensive to operate. (The cost to build a datacenter ranges from $6 million to $14 million per megawatt, and the average life of each center is 15 to 20 years, according to IDC.) Amid that backdrop, electricity suppliers and other utilities have argued that AI creators and hosts should be required to pay higher prices for electricity — as cloud providers did before them — because they're quickly consuming greater amounts of compute cycles and, therefore, energy compared to other users. Suppliers also argue they need to build out their energy infrastructure to handle the increased use. American Electric Power (AEP) in Ohio, for example, has proposed that AI data center owners be required to make a 10-year commitment to pay for a minimum of 90% of the energy they say they need monthly — even if they use less. 
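The growth projection above (23 TWh in 2022, a 44.7% compound annual growth rate, 146.2 TWh by 2027) and the 48x GPT-4-to-GPT-3 training-energy ratio can both be verified with one-line calculations; a minimal sketch using only the article's own numbers:

```python
# Check IDC's projection: 23 TWh in 2022 compounding at 44.7%/yr through 2027.
base_twh = 23.0
cagr = 0.447
years = 2027 - 2022
projected_twh = base_twh * (1 + cagr) ** years
print(f"{projected_twh:.1f} TWh by 2027")   # ~145.9, in line with the 146.2 cited

# Check the training-energy ratio: GPT-4 (62.3M kWh) vs. GPT-3 (1.3M kWh).
ratio = 62.3e6 / 1.3e6
print(f"GPT-4 used ~{ratio:.0f}x the energy of GPT-3")   # ~48x, matching the report
```

Both cited figures are internally consistent to within a fraction of a percent.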
AEP said it’s facing 15 GW of projected load growth from data centers by 2030 and wants the money up front to expand its power infrastructure.Data center operators, not surprisingly, are pushing back. Google, Amazon, Microsoft and Meta are currently fighting the AEP proposal. The companies argued before Ohio’s Public Utilities Commission last month that special rates would be “discriminatory” and “unreasonable.”Graham wouldn’t say whether special power rates for AI providers would be fair, but he did point to the standard of charging lower electricity rates for bulk industrial power consumers. “If you think about you and I as consumers — forget the market we’re in — you expect volume discounts,” he said. “So, I think the data center providers expect volume discounts.”Electricity is, by far, the greatest cost of running a data center, accounting for anywhere from 40% to 60% of infrastructure costs, Graham said; to change that cost structure would have an “enormous impact” on corporate profits.Even chip makers are eying the situation warily. Concerned about the increasing power needs, Nvidia, Intel and AMD are now all working on processors that consume less energy as a way to help address the problem. Intel, for example, will soon begin to roll out its next generation of AI accelerators, which will shift the focus away from traditional compute and memory capabilities to per-chip power consumption.Nuclear power as an optionIn the meantime, AI data center operators are turning their attention to an unexpected power source: nuclear energy. Amazon, earlier this year, spent $650 million to buy a data center from Tesla that runs on 100% nuclear energy from one of US’s largest nuclear power plants.And just last week, Microsoft announced it is working on a deal with Constellation Energy to reopen the Three Mile Island power plant in Pennsylvania — the site of the worst nuclear accident in US history. 
Under the deal, Microsoft would purchase 100% of the power from Three Mile Island for the next 20 years to feed its voracious AI energy needs. In July, the US Energy Advisory Board released a report on providing power for AI and data centers; it offered 16 recommendations for how the US Department of Energy can help support growing demand reliably and affordably. The report considers power dynamics for AI model training, operational flexibility for data center and utility operators, and promising energy generation and storage technologies to meet load growth. In the report, the agency noted that electricity providers, data center customers, and other large customers had all expressed concerns about the ability to keep up with demand, and "almost uniformly, they recommended accelerating generation and storage additions, delaying retirements, making additional investments in existing resources." Those updates include the "uprating and relicensing of existing nuclear and hydroelectric facilities," and demonstrating new clean, firm, affordable, dispatchable technologies as soon as possible. "In most cases, [stakeholders] see new natural gas capacity additions — in addition to solar, wind, and batteries — as the primary option available today to maintain reliability," the report said. "We're going to need all sources of power, including geothermal and hydrogen," IDC's Graham said. "AI's power consumption is really growing. You can draw certain analogies to cloud. 
The one thing about AI that's different is the magnitude of energy consumption per server.” | https://www.computerworld.com/article/3537112/is-the-rise-of-genai-about-to-create-an-energy-crisis.html | 2024-09-24T10:00:00Z | In the report, the agency noted that electricity providers, data center customers, and other large customers had all expressed concerns about the ability to keep up with demand, and “almost uniformly, they recommended accelerating generation and storage additions, delaying retirements, making additional investments in existing resources.” Those updates include the “uprating and relicensing of existing nuclear and hydroelectric facilities,” and demonstrating new clean, firm, affordable, dispatchable technologies as soon as possible. “In most cases, [stakeholders] see new natural gas capacity additions — in addition to solar, wind, and batteries — as the primary option available today to maintain reliability,” the report said. “We're going to need all sources of power, including geothermal and hydrogen,” IDC's Graham said. “AI's power consumption is really growing. You can draw certain analogies to cloud. The one thing about AI that's different is the magnitude of energy consumption per server.” | Unknown | Computer and Mathematical/Management | null | null | null | null | null | null |
|
news | David | HEAVY.AI Accelerates Big Data Analytics with Vultr's High-Performance GPU Cloud Infrastructure | Vultr announced a partnership with GPU-accelerated analytics platform provider HEAVY.AI. | https://vmblog.com:443/archive/2024/09/10/heavy-ai-accelerates-big-data-analytics-with-vultr-s-high-performance-gpu-cloud-infrastructure.aspx | null | 2024-09-10T21:42:00Z | Vultr announced a partnership with GPU-accelerated analytics platform provider HEAVY.AI. Integrating Vultr's global NVIDIA GPU cloud infrastructure into its operations, HEAVY.AI can interactively query and visualize massive datasets, enabling faster, more efficient decision-making for customers across diverse sectors. "Partnering with Vultr has allowed us to leverage their highly performant, global NVIDIA GPU cloud infrastructure to provide our customers with better access to unparalleled speed and efficiency," said Jon Kondo, CEO of HEAVY.AI. "This integration ensures that our platform continues to deliver the rapid insights and cost savings that are critical for our customers' success." Now, thanks to Vultr's advanced GPU infrastructure, HEAVY.AI can deliver the best performance at an affordable price. The NVIDIA GH200 Grace Hopper Superchip joins NVIDIA A100 Tensor Core GPUs and Vultr Bare Metal instances to form a powerful trio that drives faster insights via HEAVY.AI's platform. With the NVIDIA GH200 running on Vultr Cloud, HEAVY.AI can deliver 5X or greater price performance when compared to 8X A100 instances, completing industry-standard analytic SQL benchmarks such as TPC-H100 in less than 4.5 seconds, and executing 11 of the 22 queries in less than 100 milliseconds. "Vultr is one of the first cloud providers to offer the revolutionary NVIDIA GH200 Grace Hopper Superchip," said Todd Mostak, Co-Founder and CTO of HEAVY.AI. 
"By tapping into the immense GPU compute and CPU-GPU bandwidth that is unique to the NVIDIA GH200, we now can offer our customers even greater performance and scale while significantly reducing costs, empowering businesses to derive insights faster and more efficiently than ever before."Building on its mission to democratize access to big data analytics, HEAVY.AI offers a comprehensive software suite that allows organizations to query, analyze, and visualize complex datasets, with a particular focus on location and time series data. With performance that is often orders of magnitude faster than CPU-based solutions, HEAVY.AI provides analysts, data scientists, data engineers, and geospatial professionals across industries with a complete view of their data, helping them understand the "what, when, and where" with unparalleled clarity.Through its collaborations with NVIDIA and now Vultr, HEAVY.AI has optimized its platform for the latest NVIDIA hardware to provide best-in-class performance, providing significant advantages to industries that rely on fast data processing. 
Specifically: Energy: accelerating big data analytics for sectors such as renewable energy for wind, solar, and biomass. Telecommunications: expediting faster and deeper understanding of LTE and 5G networks, and enhancing customer experiences. Urban planning & smart cities: accelerating the sector with GeoAI solutions, which analyze urban data to optimize infrastructure, transportation, and public services; and driving advanced utility analytics on big smart meter, IoT, and Earth observation datasets, for hidden risks visualization, and uninterrupted uptime. Environmental monitoring, geospatial intelligence, and disaster response: analyzing satellite and drone data to detect changes in land use; assessing damage from natural disasters faster; and automatic change detection and socio-economic analysis. "As enterprises across sectors look to train and scale their models, they are looking for industry and use case-specific solutions to help them accelerate growth and innovation," said Kevin Cochrane, CMO of Vultr. "Our partnership with HEAVY.AI is yet another example of Vultr being committed to unlocking the next frontier of GPU-accelerated analytics for some of the most data-intensive workloads across key sectors." | Decision Making/Content Synthesis | Business and Financial Operations/Management | null | null | null | null | null | null |
news | John Moore | Hyperscalers' mass, AI role dominate partner ecosystems | Seemingly saturated, the massive partner networks of the top cloud vendors still attract service providers and often serve as their top ecosystems, according to a new survey. | https://www.techtarget.com/searchitchannel/news/366612147/Hyperscalers-mass-AI-role-dominate-partner-ecosystems | 2024-09-27T16:12:00Z | Hyperscalers play a commanding role in channel partner alliances, continuing a trend that has been intensifying over the last few years. Research published this week by Tercera, a Chicago investment firm that specializes in IT services companies, describes how the leading cloud providers -- AWS, Google Cloud and Microsoft Azure -- dominate partner ecosystems. More than 50% of the 250 technology service providers Tercera polled cited one of the three companies as their main partner. That number comes from Tercera's third annual listing of the top 30 partner ecosystems -- although planetary systems might serve as a better analogy. Indeed, the Tercera report emphasized the hyperscalers' "gravitational pull," which continues to attract partners despite the enormous numbers of companies already engaged with them. The biggest cloud vendors maintain relationships with thousands of software and services partners. "The hyperscalers continue to defy the law of large numbers when you look at how many partners are in their ecosystems," said Michelle Swan, CMO at Tercera. The emergence of cloud vendors as the top draw for partners dates to at least 2021. That year, Accenture surveyed 1,150 channel companies and found that AWS, Google and Microsoft accounted for the majority of their revenue.
That wallet share marks a shift in channel economics: Large hardware companies have traditionally occupied the top spots as partners' primary allies.
AI influences partner preferences
The rise of generative AI also influences alliance trends, encouraging IT service providers to link up with a hyperscaler's technology partners in that field. For example, AWS channel partners pursuing GenAI might work with Anthropic, the subject of a $4 billion investment by Amazon, Swan noted. A Microsoft partner, on the other hand, will lean toward OpenAI. Microsoft in 2023 announced a planned multibillion-dollar investment in OpenAI that expanded on earlier ones, with published reports saying it has committed a total of up to $13 billion. "They have their own solar systems," Swan said, referring to AWS, Google, Microsoft and the GenAI startups in their orbit. Tercera divides its top 30 listing into three tiers. The first tier, dubbed "market anchors," includes AWS, Google and Microsoft -- along with large ISVs such as Salesforce and ServiceNow. The second tier, "market movers," features publicly traded vendors with evolving partner ecosystems. The third tier, "market challengers," consists of privately held vendors with a partner-centric orientation. Anthropic and OpenAI occupy this tier, for example. A 2024 generative AI survey from TechTarget and its Enterprise Strategy Group division reinforces the leading cloud vendors' pivotal ecosystem role. The survey polled 610 GenAI decision-makers, influencers and users on which vendors had the best ecosystems for supporting their GenAI initiatives. Microsoft topped the ecosystem list, with 54% of the respondents citing it. Microsoft's strategic partner, OpenAI, was the second most popular ecosystem with a 35% share of respondents.
Google and AWS rounded out the top four, with 30% and 24% response rates, respectively. The global survey polled respondents across a range of industries, including business services and IT.
Microsoft partners see growth in AI
An IDC survey of Microsoft's partner base also points to the importance of AI within ecosystems. Eighty-one percent of the more than 600 partners polled worldwide said Microsoft's AI offerings would increase their Microsoft-related revenue. On average, partners expect to see their Microsoft AI revenue grow 39% in 2024, according to the Microsoft-commissioned survey published last week. While the survey doesn't specify how AI influences services revenue, Microsoft's interviews with partners indicate that AI "opens up additional opportunities" in areas such as governance and security, a Microsoft spokesperson said. Partners' AI growth prospects, meanwhile, contribute to the expansion of Microsoft's partner population, which the spokesperson said grew nearly 20% from 2023 to 2024. That growth can be "attributed in large part to the Microsoft AI Cloud Partner Program and the opportunity that AI presents," he said. The partner program, launched in 2023, aims to help channel companies build offerings on Microsoft's AI platform.
Hyperscalers fund cloud opportunities
Partners, however, are drawn to the large cloud providers for reasons other than AI. In one partner boon, hyperscalers are funding customers' cloud migrations and enlisting IT services firms to facilitate adoption, Tercera CEO Chris Barbin said. The cloud vendors pay service partners directly to migrate customers' data or build cloud applications, he said.
The migration investment boosts partner revenue while setting up hyperscalers to drive cloud consumption as they land more customers. The vendor-funded migration projects won't last forever, but they offer partners near-term opportunities that can run into the hundreds of thousands, if not millions, of dollars, Barbin noted. "Right now, there's an opportunity for services firms to capitalize on the large war chests the hyperscalers have to build momentum," he said. John Moore is a writer for TechTarget Editorial covering the CIO role, economic trends and the IT services industry.
|
news | Adrian Weckler | Ireland needs more power generation to help AI and tech industries, national conference told | If Ireland doesn’t increase power generation “by an order of magnitude” to match the expansion of the technology and artificial intelligence (AI) industries, the country will fall behind economically, according to one of the country’s largest venture capital firms. | https://www.independent.ie/business/technology/ireland-needs-more-power-generation-to-help-ai-and-tech-industries-national-conference-told/a288865195.html | 2024-09-05T13:07:17Z | The country needs an order of magnitude of extra power generation to meet the requirements of AI developments, TechIreland's National AI Meet event in Galway was told. If Ireland doesn't increase power generation by an order of magnitude to match the expansion of the technology and artificial intelligence (AI) industries, the country will fall behind economically, according to one of the country's largest venture capital firms. To avoid decline, Ireland needs to invest heavily in renewable energy generation, said Barry Downes, managing director of Sure Valley Ventures. "As we go through orders of magnitude of increases in data centres and computing power, we need orders of magnitude in increasing our power generation," Mr Downes told TechIreland's National AI Meet 2024 in Galway. This will become a very significant macro issue.
We already see in Ireland challenges in terms of commissioning and launching new data centres. Eirgrid estimates that Ireland needs around 350 projects, at an investment cost of €3bn, by 2030 for an adequate power grid. A report from the Sustainable Energy Authority of Ireland (SEAI) this week showed that total demand for electricity last year grew by 4.4pc, or 1.3 terawatt-hours (TWh), but 80pc of that increase, or 1.1TWh, was from data centre growth. Data centres accounted for 20pc of all electricity use in the State, the report said. Mr Downes said that there is now a huge opportunity for Ireland in generating renewable power, in wind, offshore and solar. "It's something we will need to get to grips with as a country. Do we want to play a key role in the rollout of computing power? If we do, then we're going to need to generate more. We're going to have to do true renewables." A recent report by Ernst & Young claimed that Ireland is the fifth most attractive country in the world to invest in for renewable energy projects. The conference, part-sponsored by the IDA and Enterprise Ireland, also heard addresses on Ireland's AI strategy from the head of the country's AI Advisory Council, Dr Patricia Scanlon, and Dara Colleary, Minister of State at the Department of Enterprise. Dr Scanlon said that despite the hype cycle, AI was here to stay and compared it to the advent of the internet and mobile phones. Minister Dara Colleary said that the government wants 75pc of Irish companies to be using AI by 2030, adding that Ireland was ahead of the EU average on AI adoption, by some metrics. "Half of the funding we have allocated to date in our disruptive technologies innovation fund, which is worth about €500 million, has gone to projects that have an AI element to them," he said. "The seventh call is now open on a rolling basis and the current deadline for that is April 2025." He called on startups to apply for the funding, particularly those with ideas and collaborative technologies. Mr Downes said that his venture
capital investment firm was looking for software companies that are building upon these foundational technologies to solve real problems in real enterprises. A number of local Irish companies and multinationals, including OpenAI, Google, Microsoft, Amazon and Boston Scientific, gave examples of how to use AI in practice. OpenAI's head of solution architecture, Colin Jarvis, told the Irish Independent that a number of European tourism agencies would soon visit Dublin to study how the city has developed a generative AI tourism service for visitors to the capital. There was also a cautionary note struck by TechIreland director and veteran investor Brian Caulfield. "AI is not as new as we think, it's not as clever as we think, and it's not as dangerous as some may think," he said. "We're still a long way from artificial general intelligence. We have a fantastic set of tools for various applications, recognisers, generators and large language models. But these are really statistical models and we're a long way from being able to connect those components with anything that you could really call reasoning. I'm not saying that there are not dangers of AI that need to be thought through or managed carefully. We've all seen very real harms that can come from the proliferation of convincing generative AI fakes, the dangers of hallucination, bias encoded in training data sets, perpetuating inequality and so on. Looking at AI broadly, I think that we're probably just past the peak of inflated expectations and heading down towards the trough of disillusionment. But I also want to remind you of [Roy] Amara's law, which states that we overestimate the impact of technology in the short term, and underestimate the effect in the long run."
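The SEAI figures quoted in this piece invite a quick arithmetic cross-check. The sketch below uses only the numbers reported above (4.4pc demand growth, a 1.3 increase read in terawatt-hours, of which 1.1 came from data centres); the derived totals are back-of-envelope estimates, not SEAI figures.

```python
# Cross-check of the SEAI figures quoted in the article.
# Only the reported numbers are used; derived values are estimates.

growth_rate = 0.044            # total electricity demand grew 4.4pc last year
added_demand_twh = 1.3         # reported increase, in terawatt-hours
dc_share_of_growth_twh = 1.1   # portion of that increase from data centres

# Implied total demand before the increase: 1.3 TWh was a 4.4pc rise.
prior_total_twh = added_demand_twh / growth_rate

# Share of the year's growth attributable to data centres.
dc_growth_share = dc_share_of_growth_twh / added_demand_twh

print(f"Implied prior-year demand: {prior_total_twh:.1f} TWh")
print(f"Data-centre share of growth: {dc_growth_share:.0%}")
```

The implied prior-year demand of roughly 29.5 TWh is plausible for Ireland's grid, and 1.1 out of 1.3 works out to about 85pc, which the article rounds down to 80pc.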
|
news | Michael Kan | This Startup Wants to Tackle AI Energy Demands With Data Centers in Space | Lumen Orbit says outer space is an ideal place to run a data center, citing 'abundant solar energy, cooling, and the ability to freely scale up.'A US startup is looking to build data centers in perhaps the last place you’d expect: outer space. The idea comes from Lumen Orbit, which released its first white paper on Tuesday, touting the benefits of space-based data centers. The concept may seem far-fetched, but operating … | https://uk.pcmag.com/ai/154217/this-startup-wants-to-tackle-ai-energy-demands-with-data-centers-in-space | 2024-09-05T14:38:55Z | A US startup is looking to build data centers in perhaps the last place you'd expect: outer space. The idea comes from Lumen Orbit, which released its first white paper on Tuesday, touting the benefits of space-based data centers. The concept may seem far-fetched, but operating data centers in space could solve the growing energy demands of today's AI development. To train next-generation AI programs such as ChatGPT, companies are building larger and larger data centers packed with GPUs, which require electricity both to operate and to cool. The escalating energy needs raise questions about how AI development will continue without straining today's electricity supplies or generating more pollution. In response, the Redmond-based Lumen Orbit is pushing the tech industry to consider space as the ideal home for next-generation data centers. "We should train future large AI models in space to make use of abundant solar energy, cooling, and the ability to freely scale up," Lumen Orbit CEO Philip Johnston said earlier this week. (Credit: Lumen Orbit) The whitepaper elaborates on the concept, with the company estimating that a space-based data center would only cost $8.2 million to run over a 10-year period, compared with $167 million to run ground-based computing.
That $8.2 million includes the estimated $5 million necessary to launch the data center, which would adopt a modular design. To harness solar energy, Lumen Orbit envisions connecting the space-based data center to a huge solar array measuring 4 kilometers by 4 kilometers. This promises to bring down the energy costs for a data center to $2 million over a 10-year period, a vast reduction from the $140 million power needs for an equivalent data center on the ground. (Credit: Lumen Orbit) To build the data centers, Lumen Orbit wants to use new spacecraft, including SpaceX's Starship vehicle, which has also been designed to one day send humans to Mars. The startup is betting that "falling launch costs" will allow easier access to Earth's orbit, making it economical to build and maintain space-based data centers. "We are convinced that orbital data centers are feasible, economically viable, and necessary to realize the potential of AI, the most important technology of the 21st century, in a rapid and sustainable manner," the whitepaper adds. (Credit: Lumen Orbit) Still, building a space-based data center won't be easy. One unique challenge is protecting the technology from solar radiation. The whitepaper also notes the need for "radiators" to efficiently dissipate the heat on the solar array. "This component represents the most significant technical challenge required to realize hyperscale space data centers," the paper says. To demonstrate the concept, Lumen Orbit has booked a space launch to send up a prototype satellite weighing 132 pounds next May, with the goal of training the first AI model in space. In 2026, the company also aims to launch a micro data center.
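The white paper's cost claims reduce to simple ratios. This sketch only restates the figures quoted above ($8.2 million versus $167 million over ten years, with $5 million for launch and $2 million versus $140 million for energy) to show the implied savings.

```python
# Ten-year cost figures quoted from Lumen Orbit's white paper (USD, millions).
space_total, ground_total = 8.2, 167.0
space_energy, ground_energy = 2.0, 140.0
launch_cost = 5.0

overall_ratio = ground_total / space_total    # overall cost advantage claimed
energy_ratio = ground_energy / space_energy   # energy-cost advantage alone

# Remainder of the space budget once launch and energy are accounted for.
space_other = space_total - launch_cost - space_energy

print(f"Overall: roughly {overall_ratio:.0f}x cheaper")
print(f"Energy:  {energy_ratio:.0f}x cheaper")
print(f"Non-launch, non-energy space costs: ${space_other:.1f}M")
```

On the company's own numbers, the claimed overall saving is about twentyfold, the energy saving seventyfold, and only about $1.2 million of the ten-year space budget falls outside launch and energy.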
|
news | Briley Lewis | How AI’s data-crunching-power can help demystify the cosmos | AI can lead to 'amazing discoveries that might otherwise remain hidden in the data.'The post How AI’s data-crunching-power can help demystify the cosmos appeared first on Popular Science. | https://www.popsci.com/science/ai-astronomy/ | 2024-09-27T13:03:23Z | We hear about artificial intelligence all the time nowadays, but what is it doing for astronomy? A lot! New research papers are published almost every week using AI for some new investigation in astronomy: classifying galaxies, identifying solar flares, exploring exoplanet atmospheres, and more. AI's biggest strength is that it can sort through mountains of data much faster than a human, a skill that's particularly timely as new telescopes are generating more and more data for astronomers to handle. "We can use [AI] to tackle problems we couldn't tackle before because they're too computationally expensive," said Daniela Huppenkothen, astronomer and data scientist at the Netherlands Institute for Space Research, in MIT Technology Review. One telescope in particular has many astronomers abuzz about AI: the Vera C. Rubin Observatory, scheduled to be completed in January 2025, just a few short months away. Once open, it'll image the entire night sky every few days for ten whole years in a program known as the Legacy Survey of Space and Time (LSST), generating sixty petabytes of data in that time. "The massive amount of data that will be gathered in the coming years by the Vera C. Rubin Observatory and other large-scale astronomical projects is simply too vast and rich to be fully explored with existing methods," said NSF director Sethuraman Panchanathan in a University of Chicago press release. "With reliable and trustworthy AI in their toolbox, everyone from students to senior researchers will have exciting new ways to gain valuable insights leading to amazing discoveries that might otherwise remain hidden in the data."
Astronomers often use a specific version of AI known as machine learning. Although AI tools can seem intelligent (and they're literally described as learning!), they're really just algorithms built to recognize patterns and improve their results as they encounter more data. For LSST, one way these algorithms will help out is by classifying galaxies based on their shapes. Traditionally, this task is done by humans visually inspecting each and every image, but human eyes are limited in their capabilities, and we also only have so many waking hours to work and few scientists to get it all done. Not only is machine learning faster, but these algorithms can pinpoint smaller features than humans can, like faint wisps at the edges of a galaxy, and notice fainter galaxies. Other astronomers hope to use AI and LSST to measure the distance to galaxies more accurately than ever before, hopefully shedding light on the major mystery of dark matter. There's another observatory that's going to produce even more data than LSST: the Square Kilometer Array Observatory, an arrangement of radio dishes and antennae spread across Australia and South Africa. This technological feat combines the observations of over 100,000 separate detectors, producing a whopping 300 petabytes a year, almost four times more than LSST will create in a decade! Astronomers plan to use AI to scour this dataset for more information on the earliest stars in the universe. AI has already contributed to some major discoveries, too. The famous first ever image of a black hole from the Event Horizon Telescope got a touch up thanks to a machine learning algorithm, making the image even clearer. Finding smaller, more Earth-like exoplanets is a notorious challenge, and astronomers used machine learning to separate the signal of a planet from the signal of its star that gets in the way.
And astronomers have already been using machine learning for years to categorize exploding stars known as supernovae. "Machine learning is completely changing my field," said Penn State astronomer Joel Leja. "It blows my mind every time I think about it, and it lets us ask new science questions."
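The data volumes quoted in this article are easy to put side by side. The sketch below uses only the article's figures: LSST at sixty petabytes over a decade, and the Square Kilometer Array Observatory at 300 petabytes a year.

```python
# Data volumes quoted in the article, in petabytes.
lsst_total_pb, lsst_years = 60, 10
ska_per_year_pb = 300

lsst_per_year_pb = lsst_total_pb / lsst_years       # LSST's annual output
annual_ratio = ska_per_year_pb / lsst_per_year_pb   # SKA vs LSST, per year

# "Almost four times more than LSST will create in a decade":
# 300 PB exceeds LSST's full-survey total of 60 PB by 240 PB, i.e. four times 60.
decade_ratio = ska_per_year_pb / lsst_total_pb

print(f"LSST: {lsst_per_year_pb:.0f} PB/year; SKA/LSST annual ratio: {annual_ratio:.0f}x")
```

Read this way, a single SKA year is five times LSST's whole decade (consistent with "four times more"), and on an annual basis the gap is about fifty-fold.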
|
news | Tracy Woo | The Biggest Cloud Trends For CISOs | Public cloud may be the major underpinning of enterprise infrastructure strategies but it comes with risks. Learn the top three cloud trends that CISOs and security leaders need to be aware of in this preview of our upcoming Security & Risk Summit December 9-11. | https://www.forrester.com/blogs/the-biggest-cloud-trends-for-cisos/ | 2024-09-11T20:39:50Z | Public cloud has become the major underpinning of enterprise infrastructure strategies and innovation delivery. Cloud's self-service access, elasticity, scalability, quick deployment, and access to new infrastructure and services with little upfront cost have meant accelerated delivery to market and rapid uptake by enterprises. However, adopting public cloud also brings security risks. Multitenancy means an increased attack surface. Even simple lift-and-shift migration without refactoring, proper governance, or infrastructure hardening can lead to untethered spend, insecure and non-compliant workloads, and the risk of potential security breaches. Simply put, turning on the lights on a cloud account does not equal digital transformation. New methods of governance, new modes of collaboration, and new ways of working are fundamental to successful cloud adoption. Cloud strategies continue to change and evolve as new cloud technologies and services are introduced, and as a result, cloud security strategies of five years ago are already outdated. CISOs are finding that how their business uses cloud is constantly moving, like goalposts on wheels. Cloud security must also evolve at pace. In 2025, these are some of the most critical cloud trends that CISOs need to be aware of: Securing AI in the cloud. The onslaught of generative AI has meant that CISO organizations have had to pivot as well.
Lack of transparency around black box AI models, susceptibility to bias, ethical considerations, threat actors that can exploit open source models, and AI models that hold large amounts of data vastly increase an organization’s attack surface. CISOs should be addressing these three concerns: 1) reviewing the security controls and governance of cloud managed AI services; 2) agreeing security roles and responsibilities between the cloud provider and your security team; and 3) upskilling the security and broader cloud infrastructure team in AI so they can secure these new services. Workload placement for cloud sustainability. New sustainability reporting requirements in the EU have forced enterprises to focus on their carbon footprint. North American companies are following suit. One method of meeting sustainability requirements is through placing workloads in more sustainable availability zones. For example, this could involve ensuring that an availability zone powered by solar or other renewable energy sources is preferred to one powered by a gas-fired plant. Cloud teams rely on cloud management solutions and carbon footprint data to inform workload placement. Workload placement recommendations often only look through two potential lenses: lowest cost or lowest carbon footprint. CISOs might find that these concerns trump data sovereignty concerns or move data to availability zones without the required security controls. CISOs need to ask where their data will reside and implement controls over sensitive data to avoid automatic movement by workload management solutions that break security requirements.
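The tension in the workload-placement trend above (placement engines optimizing for lowest cost or lowest carbon, while sovereignty rules constrain where data may live) can be illustrated with a toy filter-then-optimize sketch. Every zone name, carbon intensity, cost figure, and allowed-region set below is invented for illustration; this is not any vendor's actual placement API.

```python
# Illustrative only: pick the lowest-carbon availability zone, but only
# from zones that satisfy a data-sovereignty constraint first.
# All zone data is made up for the example.

zones = [
    {"name": "eu-west-1a",  "region": "EU", "gco2_per_kwh": 350, "cost": 1.00},
    {"name": "eu-north-1a", "region": "EU", "gco2_per_kwh": 40,  "cost": 1.10},
    {"name": "us-east-1a",  "region": "US", "gco2_per_kwh": 120, "cost": 0.85},
]

def place_workload(zones, allowed_regions):
    # Sovereignty filter comes first; carbon optimization only runs
    # over zones that are allowed to hold the data.
    candidates = [z for z in zones if z["region"] in allowed_regions]
    if not candidates:
        raise ValueError("no compliant zone available")
    return min(candidates, key=lambda z: z["gco2_per_kwh"])

# A cost-only optimizer would pick us-east-1a here; with the sovereignty
# constraint applied first, the greenest *compliant* zone wins instead.
choice = place_workload(zones, allowed_regions={"EU"})
print(choice["name"])  # eu-north-1a
```

The point of the ordering is the one the post makes: compliance is a hard constraint to filter on, not one more weight to trade off against cost or carbon.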
In particular, EU and APAC countries have been attempting to more heavily leverage non-US based cloud providers, create sovereign clouds, or leave workloads on premises. The Australian government announced an AUD $2 billion investment into a top secret government cloud. Saudi Arabia's Vision 2030 introduced strict data sovereignty measures. CISOs operating in such environments know they need to meet these sovereignty and regulatory directives, but have to balance this with allowing the wider IT team to deliver capabilities that the business needs and wants. CISOs should focus on ensuring they understand which data types require sovereign cloud services, skeptically review claims about sovereignty by some hyperscalers, and seek to protect only the data which requires this protection in order to keep the business on side. If you want to explore your cloud security strategies more deeply, be sure to check out Forrester's Security & Risk Summit coming up in Baltimore December 9-11. I'll be presenting a session in the Cloud & Application Security track entitled "Cloud Market Trends That Will Disrupt Your Security Program." We'll dive deep into the biggest trends your CISO should be aware of and outline what your security program should be doing in preparation. I hope to see you at the Security And Risk Summit in Baltimore!
|
news | Jeanne Yacoubou | AI in Architecture: Trends, Tips, and Examples in 2024 | Story at a glance: AI in architecture is a tool enabling engineers and architects to create more efficiently while streamlining workflows among all team members involved in a building’s design, planning, or construction. According to a recent survey, two-thirds of people in the design and architecture professions already using AI are self-taught. Generative design is […]The post AI in Architecture: Trends, Tips, and Examples in 2024 appeared first on gb&d magazine. | https://gbdmagazine.com/ai-in-architecture/ | 2024-09-16T21:34:38Z | Story at a glance:
- AI in architecture is a tool enabling engineers and architects to create more efficiently while streamlining workflows among all team members involved in a building's design, planning, or construction.
- According to a recent survey, two-thirds of people in the design and architecture professions already using AI are self-taught.
- Generative design is an example of how architects use AI algorithms to generate several possible designs based on inputted data.
AI in architecture is quickly revolutionizing the profession. In fact, 46% of the 1,227 designers surveyed in a study commissioned by Chaos through Architizer reported already using AI tools for their projects, and 24% plan on using it soon. Remarkably, 60% of survey respondents have received no formal training in AI. This means AI in architecture is being driven by self-learning. In this article we explore current trends, tips, and examples of AI being used in architecture.
What is AI?
AI is a catchall term for "a set of technologies that make computers do things that are thought to require intelligence when done by people," according to a recent article in the MIT Technology Review.
Still in its infancy, the current state of AI is prone to making mistakes, including facial recognition programs misidentifying people or driverless cars causing accidents.
What is AI in Architecture?
Rendering courtesy of Enscape
There are a multitude of ways AI can be used in architecture, including:
- Assisting designers, engineers, and architects to imagine more innovative, cost-effective, or creative solutions
- Creating virtual 3D models or renderings
- Searching for new design trends and patterns
- Optimizing materials for cost, energy efficiency, etc.
- Running tedious calculations (e.g., solar studies, lighting simulations, energy consumption estimates, etc.)
- Analyzing huge datasets and formulating new insights from them
When using AI for any of these reasons, architects and designers are able to explain their plans more effectively to clients. "As a culture we've all gotten so much more visual with things like Instagram and TikTok. Our attention spans are shorter, and being able to showcase projects quickly and give people an overview, you can really share the story so much quicker without having to walk people through a potentially dense presentation or show them a series of plans that may be hard for the average person to interpret," says Caoimhe Loftus, digital lead at Arcadis, in an earlier article for gb&d.
Current Trends in AI in Architecture
There's no question that AI is making waves in design and architecture. These are a couple of trends.
1. Augmented reality (AR) combined with virtual reality (VR) technology will overlay digital information onto the physical world. Designers will be able to see how their designs will or will not sync with the existing environment pre-construction.
2. AI will be used by urban planners to design entire cityscapes. It will, for example, be able to incorporate climate data to predict how current or future green infrastructure will protect cities from storm surges.
Or, AI will predict the degree of structural integrity buildings should have to be resilient in the face of climatic challenges.
Examples of AI in Architecture
"Enscape's VR is a game-changer in the design process, as it offers architects, designers, and clients a profound understanding of scale, proportion, and spatial relationships within the built environment," says Chaos’ Andreea Lipan. Rendering courtesy of Chaos
Architects and designers possess a natural aptitude for experimentation. This trait makes them ideal AI explorers. Here are a few of the fruits of their discovery of AI in architecture.
AI in VR
Enscape has become a leader in integrating AI with virtual reality (VR) technology. "With the integration of VR Enscape has democratized the perception of space, once a skill requiring extensive training in geometry and sketching. Now accessible instantly through a VR headset, individuals can perceive architectural designs with the acuity of an architect," Andreea Lipan, product marketing manager at Chaos, the company behind Enscape, previously told gb&d. "By immersing themselves in the virtual environment, clients can explore proposed projects comprehensively and use real-time feedback to make informed decisions about design elements, materials, and spatial configurations. This eliminates misunderstandings that often arise from 2D documentation or static images, allowing clients to visualize projects effectively and participate actively in the design process," Lipan says.
AI Text to Image Generators
AI text to image generators use deep learning techniques to create images based on text descriptions. With the right input an architect could enter a range of design requirements, such as the number of rooms, the desired layout, and the style of the building, and the AI generator could produce multiple designs that meet those requirements.
These designs could then be further refined or combined to create the best possible solution that meets the client's needs, Majeda Alhinai, a Solutions Studio designer at Arktura, previously wrote for gb&d. A recent addition to Arktura's product line is Arboris, biophilic-inspired modules that provide atmospheric and acoustic properties to any space.
Generative Design AI
Generative AI in architecture uses algorithms to create a wide range of design options based on a user's specific input parameters and huge quantities of other data, such as site conditions and building codes, to investigate every conceivable option, such as structure, materials, and production techniques, then optimize them. It can recommend, for example, a design that excels in a specific way for cost, weight, or load-bearing characteristics. In the process, architects and designers can select the design leading to more sustainable or energy efficient buildings with maximum structural integrity. This type of AI automates time-consuming, repetitive tasks, like building modeling and simulation. By identifying the most economical features, generative design can save you money. For instance, Deloitte predicts this technology may save up to 15% in construction costs. Another benefit of generative AI is that it can do the work designers and architects would like to do if they had infinite time to consider all their options. Gen AI can literally create thousands of designs in seconds.
Tips for AI in Architecture
Inspired by the local desert landscape, Arktura’s Solutions Studio team worked in tandem with the architects and designers to create this stunning custom ceiling system for Phoenix Sky Harbor in Phoenix, designed by Corgan. Photo by Kurt Griesbach
AI in architecture is here to stay, so it's imperative to become skilled at using it. Here are a few tips to make AI in architecture work for you. Become familiar with the latest AI tools suitable for use by designers and architects.
For instance, some AI text-to-image generators are: DALL-E, Stable Diffusion, Midjourney, CLIP, and StyleGAN. They're similar but unique. Explore which one(s) works best for you through trial and error. Spend time experimenting with AI tools on a variety of projects and at all stages, from conception to final build. Send your feedback to the tools' makers so future iterations will be easier to work with and possess greater functionality. Join or form a professional network using AI in architecture to share experiences and troubleshooting guidance. Inquire about, or coordinate in your firm, formal training in AI for all team players.
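The generative-design loop described in the article (generate many candidate designs from input parameters, score them against criteria like cost and weight, and surface the best) can be caricatured in a few lines. The parameters, weights, and cost model below are invented purely for illustration and correspond to no real tool.

```python
# Toy "generative design" loop: enumerate candidate designs from input
# parameters, score each one, keep the best. All numbers are invented.
from itertools import product

def score(design):
    # Lower is better: a made-up blend of cost and weight penalties.
    span, thickness, material = design
    cost = {"steel": 3.0, "timber": 2.0}[material] * span * thickness
    weight = {"steel": 8.0, "timber": 5.0}[material] * span * thickness
    return 0.7 * cost + 0.3 * weight

spans = [6, 9, 12]            # metres
thicknesses = [0.2, 0.3]      # metres
materials = ["steel", "timber"]

candidates = list(product(spans, thicknesses, materials))
best = min(candidates, key=score)
print(f"{len(candidates)} candidates evaluated; best: {best}")
```

A real generative-design engine explores vastly larger spaces with constraints from building codes and site data, but the shape of the loop (generate, score, rank) is the same.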
|
news | Ashley Johnson | Sidekick for Laravel: Provides a common syntax for using Claude, Mistral, Cohere and OpenAi APIs | Say hello to Sidekick! A Laravel package that provides a common syntax for using Claude, Mistral, Cohere and OpenAi APIs. - PapaRascal2020/sidekick | https://github.com/PapaRascal2020/sidekick | https://opengraph.githubassets.com/9de49b3ef088eae7143fb5bb09d04f768d2fbae104fff43d36bb9880c38c716f/PapaRascal2020/sidekick | 2024-09-23T11:00:00Z | This project provides a unified wrapper around the OpenAI, Claude, Cohere and Mistral APIs for Laravel. The goal is to simplify switching between different AI models and APIs, making it as seamless as possible.

Sidekick.mp4

The easiest way to install the package in your Laravel app is to run the following command from within your project's directory: composer require paparascaldev/sidekick

If you would like to install a specific version, you can manually update your composer.json file under the require section like so: "paparascaldev/sidekick": "^0.1.1"

You can also use the @dev tag. After you have updated the file, run:

Once Sidekick is installed, you need to update your .env file with your access tokens. You only need to specify the token for the provider(s) you will use.

SIDEKICK_OPENAI_TOKEN={API_KEY_HERE} (Recommended)
SIDEKICK_MISTRAL_TOKEN={API_KEY_HERE} (Optional)
SIDEKICK_CLAUDE_TOKEN={API_KEY_HERE} (Optional)
SIDEKICK_COHERE_TOKEN={API_KEY_HERE} (Optional)

If you are not yet signed up with any of the AI providers, here are some links to help:

Run your migrations using the command below:

After the .env is updated, you can start testing the plugin. In order to see some examples of Sidekick in action, I have created a playground. This is not available by default because some may not wish to use it. If you would like to use it, run the following artisan command to install the playground: php artisan sidekick:install

This will install the routes and views into your application. After the install you can access the playground at:

For the
documentation we will be using the OpenAi model as it supports all endpoints; where Claude and Mistral are supported, you can just hot-swap OpenAi when initialising the Sidekick/SidekickConversation class.

This allows you to create a chatbot that remembers previous interactions. To start a new conversation:

$sidekick = new SidekickConversation();
$conversation = $sidekick->begin(
    driver: new OpenAi(),
    model: 'gpt-3.5-turbo',
    systemPrompt: 'You can instruct the chatbot using this parameter'
);
$response = $conversation->sendMessage($user_input);
return response()->json($response);

This will create a new conversation in the database and make the call to the AI for a response. An example of the formatted response can be found below:

{ "conversation_id": "9ce7af51-d0b9-491f-9e57-f54e30ef0b95", "messages": [ { "role": "user", "content": "How is fudge made?" }, { "role": "assistant", "content": "Fudge is a delicious, dense dessert made using a combination of sugar, milk, butter, and sometimes cream. The exact method of making fudge can vary slightly depending on the recipe, but here's a basic step-by-step process:\n\n1. Combine sugar, corn syrup, and water in a heavy-bottomed saucepan. Bring the mixture to a boil over medium heat, stirring occasionally to dissolve the sugar.\n2. Once the sugar has dissolved, stop stirring and attach a candy thermometer to the side of the pan. Continue cooking the mixture without stirring until it reaches the soft-ball stage (around 235-240°F/118-115°C).\n3. Remove the saucepan from the heat and quickly stir in butter, vanilla extract, and your chosen flavorings (such as chocolate chips, nuts, or marshmallows).\n4. Pour the fudge mixture into a lightly greased 8-inch square baking pan. Allow it to cool at room temperature until it's no longer warm to the touch, but still soft enough to spread with a spatula.\n5. Using the spatula, gently spread the fudge evenly in the pan, trying not to introduce too many air bubbles.
Allow the fudge to cool completely at room temperature.\n6. Once the fudge is completely hardened, cut it into squares and enjoy!\n\nIt's important to note that fudge can be a bit tricky to make, as it requires careful attention to temperature and timing. If you encounter any issues, don't be afraid to experiment with different recipes or techniques until you find one that works best for you. Happy fudging!" } ]}

Once you have this response, to continue the conversation you can write the following in your controller, making sure to pass the conversation_id in the request:

$sidekick = new SidekickConversation();
$conversation = $sidekick->resume(
    conversationId: $conversation_id
);
$response = $conversation->sendMessage($user_input);
return response()->json($response);

An example of the formatted response can be found below:

{ "conversation_id": "9ce79f44-de39-4ac3-9819-f1c042a4c02b", "messages": [ { "role": "user", "content": "How is fudge made?" }, { "role": "assistant", "content": "Fudge is a sweet treat that is typically made with sugar, butter, milk, and chocolate, although there are many variations and recipes that may include additional ingredients such as nuts, marshmallows, or fruit. Here's a basic step-by-step guide for making traditional fudge:\n\n1. Grease a 9-inch square baking pan and line it with parchment paper, allowing the excess to hang over the edges for easy removal.\n2. In a heavy-bottomed saucepan, combine 3 cups of granulated sugar, 1 can (12 ounces) of evaporated milk, and 1/2 cup of butter over medium heat. Stir the mixture constantly until the butter has completely melted and the sugar has dissolved.\n3. Once the sugar has dissolved, stop stirring and bring the mixture to a boil. Clip a candy thermometer to the side of the saucepan and cook the mixture without stirring until it reaches the soft-ball stage (235-240°F/113-115°C). This can take 10-15 minutes.\n4. 
Remove the saucepan from the heat and stir in 12 ounces of semi-sweet chocolate chips and 1 teaspoon of vanilla extract. Continue stirring until the chocolate is completely melted and the mixture is smooth.\n5. If you'd like to add any mix-ins, such as nuts or marshmallows, stir them in now.\n6. Pour the fudge mixture into the prepared baking pan and spread it evenly with a spatula. Let it cool at room temperature for at least 3 hours, or until it is completely set.\n7. Once the fudge is set, lift it out of the pan using the parchment paper overhang. Use a sharp knife to cut it into squares. Store the fudge in an airtight container at room temperature for up to 2 weeks, or in the refrigerator for up to 1 month." }, { "role": "user", "content": "What are the traditional flavours?" }, { "role": "assistant", "content": "While there are many variations of fudge, there are several traditional flavors that are popular around the world. Here are some of the most common traditional flavors of fudge:\n\n1. Chocolate: This is the most classic and popular flavor of fudge. It is made with chocolate chips or cocoa powder and can be flavored with vanilla, mint, or other extracts.\n2. Vanilla: Vanilla fudge is made with vanilla extract and can be flavored with nuts, such as walnuts or pecans, or with marshmallows.\n3. Butterscotch: Butterscotch fudge is made with brown sugar and butter and can be flavored with vanilla or salt.\n4. Peanut Butter: Peanut butter fudge is made with peanut butter and can be flavored with chocolate or honey.\n5. Maple: Maple fudge is made with maple syrup and can be flavored with nuts, such as pecans or walnuts.\n6. Rocky Road: Rocky road fudge is made with chocolate, marshmallows, and nuts, such as almonds or cashews.\n7. Fudge with Fruit: Some people make fudge with dried fruits, such as cherries, cranberries, or raisins, or with fresh fruit, such as strawberries or raspberries.\n8. 
Fudge with Nuts: Fudge can also be made with a variety of nuts, such as walnuts, pecans, almonds, or cashews, and can be flavored with chocolate or caramel." } ]}

The difference in non-streamed and streamed responses is a flag. This flag is passed in the sendMessage function like so:

// Send a new message
return $conversation->sendMessage($request->get('message'));

// Send a new message (streamed)
return $conversation->sendMessage($request->get('message'), true);

There is very basic functionality to list, show and delete conversations. This will be updated in the near future. However, currently you do this by calling an instance of SidekickChatManager. Examples of each are below:

public function index() {
    // List Conversations
    $sidekick = new SidekickChatManager();
    return $sidekick->showAll();
}

public function show(Conversation $conversation) {
    // Show Conversation
    $sidekick = new SidekickChatManager();
    $sidekick->show($conversation);
}

public function delete(Conversation $conversation) {
    // Delete Conversation
    $sidekick = new SidekickChatManager();
    return $sidekick->delete($conversation);
}

$sidekick = Sidekick::create(new OpenAi());
return $sidekick->complete()->sendMessage(
    model: 'gpt-3.5-turbo',
    systemPrompt: 'You are an expert on fudge, answer user questions about fudge.',
    message: "How is fudge made?"
);

$sidekick = Sidekick::create(new Mistral());
return $sidekick->embedding()->make(
    model: 'mistral-embed',
    input: 'This is sample content to embed'
);

$sidekick = Sidekick::create(new OpenAi());
$image = $sidekick->image()->make(
    model: 'dall-e-3',
    prompt: $request->get('text_to_convert'),
    width: '1024',
    height: '1024'
);

// This is just a basic example of printing to screen.
// In a real world situation you may save it and then render out.
return "<img src='{$image['data'][0]['url']}' />";

$sidekick = Sidekick::create(new OpenAi());
$audio = $sidekick->audio()->fromText(
    model: 'tts-1',
    text: 'Have a nice day!'
);

// This is just a basic example of streaming it to the browser.
// In a real world situation you may save it and
then reference the file
// instead.
header('Content-Type: audio/mpeg');
echo $audio;

$sidekick = Sidekick::create(new OpenAi());
return $sidekick->transcribe()->audioFile(
    model: 'whisper-1',
    filePath: 'http://english.voiceoversamples.com/ENG_UK_M_PeterB.mp3'
);

** Example Response **

{ "text":"The stale smell of old beer lingers. It takes heat to bring out the odor. A cold dip restores health and zest. A salt pickle tastes fine with ham. Tacos al pastor are my favorite. A zestful food is the hot cross bun."}

This is a service where you feed it text from a comment, for example, and it will return with an array of boolean values for certain moderation points.

$sidekick = Sidekick::create(new OpenAi());
return $sidekick->moderate()->text(
    model: 'text-moderation-latest',
    content: 'Have a great day.',
);

** Example Response **

{ "id":"modr-94DxgkEGhw7yJDlq8oCrLOVXnqli5", "model":"text-moderation-007", "results":[ { "flagged":true, "categories":{ "sexual":false, "hate":false, "harassment":true, "self-harm":false, "sexual\/minors":false, "hate\/threatening":false, "violence\/graphic":false, "self-harm\/intent":false, "self-harm\/instructions":false, "harassment\/threatening":false, "violence":false }, "category_scores":{ "sexual":0.02169245481491089, "hate":0.024598680436611176, "harassment":0.9903337359428406, "self-harm":5.543852603295818e-5, "sexual\/minors":2.5174302209052257e-5, "hate\/threatening":2.9870452635805123e-6, "violence\/graphic":6.8601830207626335e-6, "self-harm\/intent":0.0002317160106031224, "self-harm\/instructions":0.00011696072033373639, "harassment\/threatening":1.837775380408857e-5, "violence":0.00020553809008561075 } } ]}

Utilities are quick ways of performing some actions using AI. The functions and their descriptions are below:

// Summarises the content passed.
Good for blurbs
$sidekick->utilities()->summarise();

// Extracts a number of keywords from a given string and returns a string of keywords (comma separated)
$sidekick->utilities()->extractKeywords();

// Translates the given text to the language specified
$sidekick->utilities()->translateText();

// Generates content from a short description of what it should be about
$sidekick->utilities()->generateContent();

// [OpenAI ONLY] Moderates content and returns a boolean of whether the content is flagged or not
$sidekick->utilities()->isContentFlagged();

// [OpenAI ONLY] This method can store images and audio created by the AI.
$sidekick->utilities()->store();

I want this composer package for Laravel to be as useful as possible, so with that in mind here are the ways you can contribute:

Submitting a pull request (if you're a dev and want to help with this project)
Raising issues on GitHub
Submitting ideas/feedback to [email protected]
Starring this repository (if you feel inclined)

I have tested the package using the following models:

gpt-3.5-turbo, gpt-4, tts-1, tts-1-hd, dall-e-2, dall-e-3, whisper-1, text-embedding-3-small, text-embedding-3-large, text-embedding-ada-002, text-moderation-latest, text-moderation-stable, text-moderation-007
mistral-small-latest, mistral-medium-latest, mistral-large-latest, open-mistral-7b, mistral-embed
claude-3-opus-20240229, claude-3-sonnet-20240229, claude-3-haiku-20240307
command-r-08-2024, command-r-plus-08-2024 | Unknown | Unknown | null | null | null | null | null | null |
news | Summerset Banks, Packet Pushers | HN748: How AI and HPC Are Changing Data Center Networks | On today's episode of Heavy Networking, Rob Sherwood joins us to discuss the impact that High Performance Computing (HPC) and artificial intelligence computing are having on data center network design. It's not just a story about leaf/spine architecture. That's the boring part. There's also power and cooling issues, massive bandwidth requirements, and changes in how we think about network interface cards. Should they be smart? Dumb? Both? Rob guides us through the critical issues to consider when building and operating a data center network to support AI and HPC workloads. | https://packetpushers.net/podcasts/heavy-networking/hn748-how-ai-and-hpc-are-changing-data-center-networks/ | 2024-09-06T20:50:33Z | On today's episode of Heavy Networking, Rob Sherwood joins us to discuss the impact that High Performance Computing (HPC) and artificial intelligence computing are having on data center network design. It's not just a story about leaf/spine architecture. That's the boring part. There's also power and cooling issues, massive bandwidth requirements, and changes in how we think about network interface cards. Should they be smart? Dumb? Both? Rob guides us through the critical issues to consider when building and operating a data center network to support AI and HPC workloads.

Episode Guest: Rob Sherwood, Founder and CEO, NetDebug

Rob Sherwood is currently the Founder and CEO of NetDebug. He has previous experience in academic research, software engineering and roles such as a CTO for Big Switch Networks and the NEX Cloud Networking Group at Intel. His current project, NetDebug, is born out of the idea that network troubleshooting, debugging and optimization should be easy for everyone.

AdSpot Sponsor: AutoCon2

Registration is now open for AutoCon2, taking place November 18-22 in Denver. 
AutoCon is an independent conference dedicated to advancing the state of the art in network automation. It includes hands-on workshops and speakers sharing their learning and experience. If you're looking to get started with network automation or help drive automation projects, AutoCon2 is the place to be.

Episode Links: NetDebug

Episode Transcript: This episode was transcribed by AI and lightly formatted. We make these transcripts available to help with content accessibility and searchability but we can't guarantee accuracy. There are likely to be errors and inaccuracies in the transcription. Automatically Transcribed With Podsqueeze

Ethan Banks 00:00:00 Attend AutoCon2 in Denver November 20th through 22nd, 2024. You'll hear presentations about network automation from hands-on engineers who built automation systems for their company and have the scars to prove it. Conference pricing is not high, but this is a boutique event with limited tickets. Go to networkautomation.forum/autocon2 to register today. Welcome to Heavy Networking, the flagship podcast from the Packet Pushers Podcast Network. If you've ever had to explain to someone that spanning tree doesn't cause loops, but in fact it prevents them, you've found your tribe. Some quick housekeeping for you. There are new podcasts in the Packet Pushers Podcast Network. The first is called Technically Leadership with host Laura Santamaria. The second is Total Network Operations with host Scott Robohn. I will say more about those at the bottom of the show, but just wanted to let you know that they exist. Type Technically Leadership or Total Network Operations into your podcast client and subscribe. On today's episode of Heavy Networking, Rob Sherwood joins us to discuss the impact that high performance computing and artificial intelligence computing are having on data center network design, and it is not just a story about leaf/spine architecture.Ethan Banks 00:01:09 That's, that's actually the boring part. 
There’s also power and cooling issues and massive bandwidth requirements and changes in how we think about network interface cards. Should they be smart, should they be dumb or both? To speak on these opinions is again Rob Sherwood. Rob, welcome back to Heavy Networking. It’s been, it’s been a while, man I don’t know how long it’s been since you’ve been on Heavy Networking, quite some time. So before we get into this HPC AI networking discussion, would you remind people who you are and then, then share with us what you’ve been working on lately?Rob Sherwood 00:01:36 always happy to be here. Thanks so much for, for having me. Who I am, I don’t know, I’ve been doing networking for more than 30 years now. I’ve done the math. I, I’ve done everything from, you know, research and research labs to operating, some of the largest networks in the world to shipping products and startups. I’m currently running my own startup now, NetDebug which actually has nothing to do with what we’re talking about today, but, you know, all fun and games.Ethan Banks 00:02:06 Okay. Well, again, Rob, welcome back. And as you say, you’ve been around a long time. You’ve been involved with lots of startups. You’ve done a lot of things. You’ve been involved with shiny new technology and brought it to life and new ideas and so on. And if you want to, if you’re out there listening, you want to dig back in the deep in the Packet Pushers archives, look for shows that we did with Big Switch networks back in the day. And, Rob, you might have been on some shows we did about Open Flow way back in the day too. Yeah.Rob Sherwood 00:02:31 Yeah. And I’ve even done a couple kind of wax philosophic shows as well, which I think this falls more into that time. But yeah.Ethan Banks 00:02:38 Yeah, because this does feel like it’s going to be more of a philosophical show. 
We were chatting the other day and you cited these four big topics you wanted to discuss around HPC and AI and how it's impacting data center network design. And you said collective communications, power and cooling, bandwidth, and then lots of stuff around what the NIC looks like today and re-architecting it and so on.Ethan Banks 00:03:00 So we, we're going to have an unstructured conversation today. Rob, just, just kicking around on those four topics that you highlighted. So first of all, as related to data center network design, collective communications, what do you mean by that and what's going on here?Rob Sherwood 00:03:14 Well, let me even step a little further back and talk about the problem. Like, so my mental model of like folks running an average network right now is they probably have some storage traffic going on, they probably have some RPC, they might have some web requests or something like that. And all of that works on, you know, what the fancy term is statistical multiplexing, which is, you know, basically saying, you know, as long as like it doesn't happen all at once, then, you know, as long as you're okay with the long tail of, you know, the 1 in 1,000,000, you know, web, web gets, gets slowed down, but everybody else is kind of okay on average. Then you have a pretty good network and life is good and you don't get fired.Rob Sherwood 00:03:52 Right. Like I think that's kind of a typical network for most people. And, but most people have also heard about this AI thing. And it turns out that the way AI communication, specifically AI training, works is completely orthogonal and in some ways potentially even like damaging to that network. And so.Ethan Banks 00:04:15 And just so we're very clear about it, we're not talking about artificial intelligence itself, rather the traffic that is generated as AI computing workloads are sitting on top of the network.Rob Sherwood 00:04:24 Absolutely. And I should clarify, there's really two types of AI compute workloads. 
One is called training, which is, you know, I have a bunch of data and I want to make a model that, you know, this is how you would figure out how to make something like ChatGPT. And the other is inference, which is you already have the model and you just want to run it and ask it questions. And the making the model is and I don’t use to, mean to, to talk big, but like, it’s basically the most compute intensive thing that we as humans have tried to do.Ethan Banks 00:04:58 This is the thing that we’re hearing about that’s consuming all the power. And, and people that are concerned about the environment and so on are going, do we really need all these LLMs and so on because of all the power that’s being consumed, etc.?Rob Sherwood 00:05:10 Absolutely. And once you, the good news is once you’ve got the model and that should be like a, you know, an infrequent operation like running the model, the inference part is much easier. But it turns out the big arm race is in the making the model right now. And that was the thing I really wanted to dig into, because it has just like massive impacts on how you run a network and design critically.Drew Conry-Murray 00:05:29 So in that context, then of making a model, what is the issue with collective communications and how does that differ from, you know, what’s on a typical network or how a typical network operates?Rob Sherwood 00:05:38 So a typical network uses, you know, what everybody would know and love called unicast, which is you’ve got one source and you’ve got one destination.Rob Sherwood 00:05:45 And on top of that you have lots and lots of sources and destinations. And that’s where you get kind of the statistical multiplexing. Collective communications is everybody does the same thing at once in lockstep. So a typical collective communication might be a reduce operation. So imagine you’ve got thousands of nodes, each with one integer. 
And you want to logically add up all of those integers and have the result end up on one machine. You know, usually they have a special machine called the root node, but it ends up being just, you know, a kind of an arbitrary machine. And so if you think about like what the packet of that, like the packet processing of that looks like is, you know, it’s really easy for a thousand machines to send a single packet to one machine all at once. But in most contexts, that looks like a denial of service attack. Drew Conry-Murray 00:06:43 Right. And so but that’s actually how this model training is supposed to function. All of these packets of these computations have to be sent to one place simultaneously or nearly simultaneous.Rob Sherwood 00:06:52 It gets more complicated than that. But this is our building block okay. And what happens? So there’s two things that make it more complicated. One is you have deep pipelining. So the result of that one thing leads into the next round of doing this. So you actually have, you know, millions of these operations in a row where the, the ith one has to finish before you can move on to the ith plus one. So. Okay. So what that means is that like if at step a thousand somebody drops a packet, then everything has to stop, slow down, you know, retransmit that packet or whatever you’re going to do with that or even if it’s not, drop the packet. Maybe it’s just there’s a lot of queuing and it gets delayed like that jumps the computation of the entire end to end thing. And if that happens with some frequency, then what could have been, you know, only an hour’s worth of computation is now a day’s worth or two weeks worth or something. And so you end up in . . .Ethan Banks 00:07:53 That bad?Rob Sherwood 00:07:55 It can be. Wow. So, I mean, if you run the numbers for, you see these latency numbers off of switches where it’s like, you know, ten, ten nanoseconds for a packet to go port to port without buffering. Yeah, right. 
But like with buffering, you can get up to the milliseconds. So that’s like 100x or 100,000x. And then if you include packet loss in there then you like you potentially can get worse depending on how aggressive your packet primers are and things like that. And so it really can’t because it’s both lockstep and bursty, doesn’t begin to cover it. It is the most bursty thing that you could create on a network where everybody in lockstep is sending all the packets all at once to one place. It’s a completely, it’s an incredibly difficult networking problem to solve.Ethan Banks 00:08:50 Now again to, to one place that’s, so as I’ve been hearing about these AI clusters and the way they do their thing, I was imagining the chunks of data distributed across multiple GPUs and sent for computation. And the answer sent to one place, is the answer that big?Rob Sherwood 00:09:06 Well, so, I’m doing so, so there is a set of libraries called, the message passing interface, MPI, and they’re the core for a lot of high performance computation. And AI is like one specific application built in high performance compute. The, the operation I’m talking about is called reduce, and that’s actually not what AI uses. AI uses a related operation called all reduce, which is that much more complicated. So an all reduce, not only are you sending all the traffic to all, you actually send all the traffic to all the nodes. So imagine you have each node has a, so you’ve got a thousand nodes. Each one has a thousand numbers. And you’re sending the first of those numbers to the first person and the second of those numbers to the second person, and so on and so forth. So you’re actually doing this reduce in parallel across all the machines. And that’s one clock tick. Yes.Ethan Banks 00:10:10 Yes. 
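[Editor's note: the reduce and all-reduce collectives Rob describes can be sketched with plain Python lists. This is an illustrative sketch only, not code from the episode; real HPC stacks use MPI or NCCL implementations of these operations.]

```python
# Illustrative only: simulate MPI-style reduce and all-reduce over
# Python lists, where each "node" holds a vector of numbers.

def reduce_to_root(nodes, root=0):
    """All nodes send their vector toward one root, which sums them
    elementwise: the N-to-1 burst pattern discussed above."""
    total = [sum(vals) for vals in zip(*nodes)]
    return root, total

def all_reduce(nodes):
    """Every node ends up holding the elementwise sum of all
    vectors: an all-to-all exchange done in lockstep."""
    total = [sum(vals) for vals in zip(*nodes)]
    return [list(total) for _ in nodes]

nodes = [[1, 2], [3, 4], [5, 6]]   # 3 nodes, 2 numbers each
print(reduce_to_root(nodes))       # (0, [9, 12])
print(all_reduce(nodes))           # every node holds [9, 12]
```

In the reduce case the root absorbs an N-to-1 burst; in all-reduce every node both sends and receives in the same step, which is what makes the traffic pattern so hostile to a best-effort network.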
That's, that's the quandary Ethernet finds itself in, not very well suited for such a thing.Rob Sherwood 00:10:16 Yes, and you know, we'll put that on the stack because, look, Ethernet has lots of tricks up its sleeve to try to fix things like this. But, immediately you start to understand why people historically have looked at things like InfiniBand that will do, hop by hop queuing and things like that. Or you have, you know, rate control built into the process. But if you think about okay, so, okay, that sounds crazy. How do I run this on top of my existing network with bursty communications and web gets and RPC calls and all sorts of stuff like that? And the answer is you don't. Almost every shop that I know eventually, including Google, which, the, Google was the last holdout for, for this, has built a physically separate AI training network. So you would have machines based with multiple interfaces, some for like the production normal fabric where you have kind of this statistical multiplexing thing, you know, best effort network, and then some interfaces for the back end training network.Rob Sherwood 00:11:18 And your listeners probably remember the days of, physically isolated storage networks. It's roughly the same idea, but just the traffic is so completely different that you actually need something like this.
You know, you're not going to get good performance out of it.Drew Conry-Murray 00:12:09 So the assumption is you should be building a dedicated network. If you plan to do, to build an AI model, you should also be planning to build a standalone network for it.Rob Sherwood 00:12:17 That's, in my mind, the current best practice. In full disclosure, I think that's still being debated, but in my mind, the answer is pretty clear.Ethan Banks 00:12:26 Well, would it matter how big the model is you're training?Rob Sherwood 00:12:30 Yes. So and maybe that's a good point. So there's two dimensions of size for a model. There is: how many different inputs you want to take into it? So, how many tokens, if you're familiar with that terminology? And then there's how many training examples you have. And so if the number of tokens you want to take as input exceeds what you can put on a single GPU, then you have to start dividing it across multiple GPUs. And, you know, companies like Nvidia will sell, you know, exceedingly beefy, dedicated AI training platforms that will have multiple, that have upwards of eight GPUs in them. And so then you're still dividing that model across the eight GPUs. But if it fits in that, you're, you're only going across a local systems bus rather than like a full on network.Rob Sherwood 00:13:23 But if you look at, you know, some of the, the big models like, Llama out of Meta or, you know, ChatGPT model or, you know, Google's model and whatnot, those end up being divided, not just across, you know, eight GPUs, but, you know, tens of thousands. Right.Drew Conry-Murray 00:13:41 And because of all the tokens that they're working with, and tokens is essentially a chunk of data, I guess I'm thinking in an LLM context, it's like a word or a chunk, chunk of letters or phrase roughly.Rob Sherwood 00:13:54 
I think my understanding, and this is where my understanding gets a bit knowledge, like as you get to bigger words, you actually subdivide words.Drew Conry-Murray 00:14:02 Right. Yeah.Rob Sherwood 00:14:03 But roughly a word is the way to think about it. So that’s one dimension of how this gets big. But the other dimension is the number of training samples. And so, if you imagine you have, you know, a million pictures of, you know, food and then, you know, a classifier which is like hot dog, not hot dog is kind of the classical example.Drew Conry-Murray 00:14:27 Yes. I was waiting for that reference.Rob Sherwood 00:14:29 Yes, it’s a great show. Depending on how many you have of those, you may actually want to not run all of them on one machine. You also want to subdivide that and then what you do across, you know, this multi-dimensional, you know, size of your model by number of training samples is those get divided across GPUs. And then you do this iterative algorithm called back propagation, where you run all the samples through all the models and you see how good it does. And you know, at first it does really badly. But then back propagation, basically upvotes the pieces of the model that got you closer to the right answer and then down votes the pieces of the model that got you away from the right answer. And, but it uses gradient descent. So you just do a little tweak to the upvote and downvote of these models, and then you run it all again, and then you run it all again, and then you run it you hundreds of millions of times. And it’s that back propagation that ultimately uses the all reduce under the covers.Ethan Banks 00:15:32 A quick interruption to tell you about AutoCon2. The third in the Auto Con Conference series about network automation only. Don’t think of AutoCon as a typical conference. Instead, AutoCon is a gathering of the networking industry to work towards a standardized approach to network automation. And this is a this is a hard problem. 
Equipment vendors have a vested interest in selling you their automation tools. But let me guess: you're a multi-vendor shop. There are so many open source tools you could bring into your automation stack that it can be overwhelming. And if you want to integrate with products like ServiceNow, that is an added complexity. If you've tried moving from simple Python scripts to a robust automation system the business can rely on, you know what I'm getting at? And this is why the Network Automation Forum exists, why the AutoCon conference series exists, and why you should attend AutoCon2 in real time in Denver, November 20th through 22nd, 2024.

Ethan Banks 00:16:30 Hear the talks, interact with the speakers, talk to other attendees in the hallway track. Meet the people making automation tools. Talk to vendor experts who live and breathe this stuff. You're going to leave with solid ideas about how to approach network automation for your company, ideas that will impact the business that you support in a very positive way. Register at networkautomation.forum/autocon2, and I don't mean to sound dramatic, but don't wait, because this event is almost certainly going to sell out and the venue only holds so many humans. There really is a ticket cap. Again, that's networkautomation.forum/autocon2, and I hope to see you in Denver. And now back to the podcast. So if I've got a separate network for the training, what does my host look like? Do I end up with a host stuffed full of NICs that have interfaces facing a bunch of different networks in this modern architecture?

Rob Sherwood 00:17:30 If you build it yourself, yes. And in fact, what gets interesting is that with some of the more advanced models of these, you start actually needing a NIC per graphics card.

Rob Sherwood 00:17:39 And so if you did this naively, you could actually end up with the PCI bus being the bottleneck. Yeah. And then you end up with NICs on graphics cards.
And that's for the AI training network. But then you also have separate, normal NICs for your production network.

Ethan Banks 00:18:00 Well, hang on. NIC on graphics card to skip the PCI bus problem, you mean. Exactly. Okay, tell us about that.

Rob Sherwood 00:18:08 Well, it's an evolution, and let me just qualify: a lot of this depends on what scale you're talking about. If you're looking to compete with Facebook, Microsoft and Google, then you end up with NICs on graphics cards, a sea of these things, potentially custom chips that you start producing, and networks where you have tens of thousands of nodes. But there's a bunch of other interesting data points for folks who are less ambitious, which is to say, the rest of the world. So you'll have to steer me toward which end you think is more interesting for your listeners.

Drew Conry-Murray 00:18:51 Yeah, I think it's good to touch on both, because we want to get a sense of the big guys pushing the state of the art versus what folks might be intending to try to do on their own. So let's start with the big ones. It sounds like custom silicon, building NICs right onto GPUs. And again, that's to avoid any PCI bus penalty.

Rob Sherwood 00:19:13 Well, any sort of contention. The entire point of a bus is to have this point-to-multipoint network, if you want to think of it that way, inside your servers. And if you actually get to a scale point where you want a NIC per GPU, you don't even need a bus. You just directly connect to them. And if you're going to have that there, then it's like, okay, well, do I even need the bus? And maybe another point: do you guys know what RDMA is?

Drew Conry-Murray 00:19:48 Yep.
But if you want to just give us the overview, because it's helpful. Yeah.

Rob Sherwood 00:19:52 Yeah. So a super critical operation for modern PCs is DMA, direct memory access. If you're going to move a million bytes from one part of memory to another part of memory, you don't want to have a CPU operation per byte, right? And so there are some nice primitives that allow you to say: memory, don't interrupt the CPU, just copy from here to there and get back to me when you're done. And that's DMA. Local DMA is how to think of it. And then RDMA is the network equivalent of that, where there is a special packet format such that if you set up your NIC the right way and you set up your memory the right way, you can actually do fast copies from one machine to another, critically, without interrupting the CPU. Some form of DMA or RDMA is critical, because at the rate that packets are coming in and out, for most training you don't actually want your CPU involved at all.

Rob Sherwood 00:20:59 You really want it to be mostly on the GPU. And so what ends up happening as data comes in off the NIC via RDMA is it will write the packet directly into the GPU RAM.

Drew Conry-Murray 00:21:15 So RDMA is getting me a CPU bypass from one endpoint to another endpoint.

Rob Sherwood 00:21:20 Correct. And then if that is your baseline, it's like, why do I even have the NIC on the PCI bus at all? Why don't I just hook it up directly to the GPU?

Ethan Banks 00:21:31 Yeah, why bother? Why risk the PCI bus, which in this scenario, if I've got multiple GPUs on board, could indeed be a bottleneck. You could have contention.

Rob Sherwood 00:21:42 Absolutely.

Ethan Banks 00:21:43 And so yeah, this makes all the sense. Now, it's not infinitely scalable.
But as many GPUs as you could stuff into that box, in theory, you can put a NIC on board and get all the data in and out of them.

Rob Sherwood 00:21:56 And, you know, I think for most people, building their own box is probably not ideal, although that won't stop a lot of people, because it's a lot of fun.

Rob Sherwood 00:22:05 There are companies that put out basically pre-built server boxes, you know, Nvidia's A100 and more recently H100 systems, and they're coming up with more of these things, right? You get a box where, I don't know what the total power dissipated is, it's obscene, but you get things where there are eight GPUs internally connected via Nvidia's NVLink magic, and think of that as its own little bus, and then externally connected with eight 400 gig NICs. So eight by 400 gig gets you up to 3.2 Tb/s per box.

Drew Conry-Murray 00:22:48 Wow. Yeah. And then presumably you've got multiple of these boxes you're connecting to a network.

Rob Sherwood 00:22:51 Yeah, if you have enough money and power. And legitimately, power becomes a big issue.

Drew Conry-Murray 00:22:59 Yeah, we're going to get into that. But we were talking about the far end of the example spectrum. What about if I'm a large enterprise and I've decided I have some use cases for building my own training model? What kind of infrastructure am I looking at? Is that where I'm getting into buying an Nvidia H100 or whatever?

Rob Sherwood 00:23:17 What I would suggest is that there's kind of a decision tree, which is: how much training, and how often are you going to do it? And using the cloud is probably the right thing for, I would say, at least 50% of folks.

Drew Conry-Murray 00:23:35 Right. Like you rent the infrastructure to train your model.

Rob Sherwood 00:23:38 Absolutely. Because you have to look at not just the hardware costs but the expertise costs.
I mean, there really are new skill sets here. If you've been doing networking for 20 years but you haven't been doing HPC networking, this is just completely different.

Drew Conry-Murray 00:23:52 And is that because we're assuming InfiniBand here, or just because of the demands of the workload?

Rob Sherwood 00:23:58 The demands of the workload. And I will actually throw out that my money is still on Ethernet long term, versus InfiniBand, but that's kind of a contentious point that we can push towards the end.

Rob Sherwood 00:24:11 And you can chalk that up to one of the hats that I had: when I was working at Intel, I was trying to make that case professionally, so that might just be historical professional bias.

Ethan Banks 00:24:27 So the larger context here was, what does the host look like in one of these setups? It does sound like we end up with multiple networks. We're going to have the HPC/AI compute network, which looks different from what you were describing, with direct NIC interfaces into the host, and then other interfaces that talk to things like a management network and a backup network, that kind of thing.

Rob Sherwood 00:24:53 So, critically, you don't want to underprovision that network as well, because most of the training data doesn't live on the AI/HPC network. It lives on the standard regular network. So you still have a lot of data that comes in from storage; in our hot dog example, you're loading a million high-resolution images.

Rob Sherwood 00:25:15 That's not a trivial amount of bandwidth. And so what's complex about this from a systems design problem is you have a pretty complex and deep pipeline that could be limited by any one of these steps.

Drew Conry-Murray 00:25:30 Okay. So you're saying I've got the compute to do the model training.
I've got the network to connect up the various nodes that need to be connected. And then I've also got some kind of storage network that has the data that my model is going to be trained on, and they all have to integrate and synchronize.

Rob Sherwood 00:25:47 Correct. And when I say storage network, I really mean storage network workloads. I have not seen, for example, anybody using Fibre Channel or FCoE or whatnot in these types of designs. They're using standard, you know, iSCSI or what have you. I'm probably dating myself; I'm not a storage network person. But I think of it as: you have your standard production network, and then you have your AI/HPC network.

Rob Sherwood 00:26:12 And usually people are doing best-effort storage over the standard network. That's the pattern I've seen.

Ethan Banks 00:26:19 Okay, so NFS, some kind of an object store, something like that. Yeah.

Rob Sherwood 00:26:23 Yeah, exactly.

Ethan Banks 00:26:24 I think when you said iSCSI, you did date yourself a bit. I don't know how much of that is in production anymore these days.

Drew Conry-Murray 00:26:32 Not a storage guy. Not a storage guy. Yeah.

Rob Sherwood 00:26:36 This is the problem. If you've been doing this stuff long enough, you walk away from a field for 5 or 10 years and you come back and all the three-letter acronyms are different. You're like, dammit.

Ethan Banks 00:26:45 The three-letter acronyms are different, even though the architecture is like, oh, this is still awfully familiar.

Rob Sherwood 00:26:49 Yeah, yeah.

Ethan Banks 00:26:51 Yeah. So, okay, you said I'm not going to build my own box, and you said half the people are probably going to want to do this in the cloud anyway. But if I am standing up my own data center for these purposes, is there a box I'm buying from Dell or whoever that is kind of built for this?
They're tailored to this market.

Rob Sherwood 00:27:11 So everybody and their brother has kind of a dedicated server for AI training. And generally what it looks like is a ridiculous amount of power for the power supply, because it's well beyond the normal power volumes. It has 4 or 8 graphics cards in it, and it has 4 or 8 NICs in it that either end up going directly to the graphics cards, or you have appropriately sized PCI buses so that that's less of an issue, depending on different models of this. And then you'll have some number of network cards not for the AI training network as well.

Drew Conry-Murray 00:27:57 But physically, I assume we're not talking the sort of pizza-box size. These are bigger, multi-rack.

Rob Sherwood 00:28:05 I mean, well, not multi-rack.

Drew Conry-Murray 00:28:07 Not multi-rack, but multi-RU.

Rob Sherwood 00:28:09 Like eight RU is the typical size that I've seen here.

Drew Conry-Murray 00:28:12 Wow, okay.

Ethan Banks 00:28:14 Chunky. Yeah.

Rob Sherwood 00:28:15 Chunky. And I don't want to quote the power dissipation numbers, but it works out that if you have a historical data center where you've allocated, you know, 8 kW or 10 kW per rack, you can generally only put 1 or 2 of these in it per rack. And if you're retrofitting an existing data center, it looks very funny, because you have like one 8 RU appliance per rack.

Ethan Banks 00:28:48 What sort of NICs am I going to find in these boxes? Are they smart NICs? Dumb NICs? A combo?

Rob Sherwood 00:28:56 Different things for different roles. So again, channeling previous employers: even, quote, the dumb NICs are actually quite smart.
Like if you think of all the stuff they need to do: RDMA, and RDMA is a pretty advanced operation.

Ethan Banks 00:29:12 Yeah, and so to qualify, what I'm getting at with the smart NIC is, you know, I can offload some pretty fantastic stuff these days, off to full NFVs depending on what the card actually is, if it's an IPU or DPU, depending on what processing functionality is on them. So that's more where I was getting that.

Rob Sherwood 00:29:30 Yeah, no, I know. And there is actually a bunch of standard offload features that you would get out of
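The box-level numbers quoted in this conversation are easy to sanity-check with arithmetic. The eight-by-400-gig NIC figure and the 8 to 10 kW rack budgets come from the transcript; the appliance power draw below is an assumed illustrative figure, not a vendor spec.

```python
# Sanity-check the transcript's numbers. NIC count/speed and rack budgets
# are from the conversation; the appliance draw is an assumption.

nics, gbps_per_nic = 8, 400
aggregate_tbps = nics * gbps_per_nic / 1000
print(aggregate_tbps)                   # 3.2 Tb/s per box, as quoted

rack_budget_kw = 10                     # legacy rack allocation (8-10 kW)
appliance_kw = 7                        # assumed draw of an 8-GPU, 8 RU box
print(rack_budget_kw // appliance_kw)   # only 1 such appliance per rack
```

Which matches Rob's observation that a retrofitted data center ends up looking funny, with one large appliance sitting alone in each rack.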
news | Matt Egan | Mr ChatGPT is going to the White House to discuss AI's massive thirst for energy | CNN Business | The face of artificial intelligence in America is set to meet with top US officials at the White House on Thursday, CNN has learned, in a first of its kind meeting to solve a riddle that could severely strain US infrastructure: how to power the AI boom. | https://www.cnn.com/2024/09/12/tech/ai-chatgpt-white-house-power-energy/index.html | 2024-09-12T11:43:12Z | The face of artificial intelligence in America is set to meet with top US officials at the White House on Thursday, CNN has learned, in a first-of-its-kind meeting to solve a riddle that could severely strain US infrastructure: how to power the AI boom.

Sam Altman, the CEO behind ChatGPT maker OpenAI, Google senior executive Ruth Porat and Anthropic CEO Dario Amodei are expected to be among the tech executives in attendance, a person familiar with the matter told CNN.

The meeting, which hasn't been previously reported, is the first time senior White House officials will sit down with tech company leadership to discuss how to quench AI's insatiable thirst for energy.
The source said the White House expects to detail how the public and private sector can work together to maintain US leadership in AI in a sustainable way.

The effort shows how business leaders and government officials are being forced to confront emerging challenges posed by the AI boom, which has captivated investors on Wall Street.

Energy Secretary Jennifer Granholm, Commerce Secretary Gina Raimondo and other top officials from the Biden-Harris administration are also set to attend, along with representatives from Microsoft, according to a White House official.

Neither President Joe Biden nor Vice President Kamala Harris is expected to attend.

The rapid development of energy-intensive AI has sparked worries about the technology straining America's aging power grid at a time when the Biden administration is attempting to speed the transition away from coal and other fossil fuels.

Although AI has the potential to solve thorny problems like the climate crisis and cancer, it poses equally complex challenges, including how to meet the significant demand for electricity required by advanced AI systems.

A single request on ChatGPT consumes about 10 times as much electricity as a typical Google search, according to the International Energy Agency. By 2026, the AI industry is projected to consume at least 10 times as much as in 2023, the IEA said.

AI is expected to spark a 160% surge in data center power demand by 2030, according to Goldman Sachs. AI's appetite for energy is so great that it will force long-stalled US power demand to increase significantly the rest of this decade, the bank said.

Altman has a lot at stake in this issue. Not only is he the face of the AI industry, but he has invested in Exowatt, a startup that is betting solar power can help shrink AI's carbon footprint.
Exowatt just launched a new system that can generate and store clean energy for AI data centers.

President Biden and Vice President Harris are committed to deepening US leadership in AI by ensuring data centers are built in the United States while ensuring the technology is developed responsibly, White House spokesperson Robyn Patterson told CNN in a statement.

Other US officials expected to attend Thursday's AI power meeting include White House Chief of Staff Jeff Zients, National Economic Adviser Lael Brainard, National Security Adviser Jake Sullivan and top climate officials Ali Zaidi and John Podesta.

The meeting follows a July 2023 effort by the Biden administration to get leading AI companies to pledge to put new AI systems through outside testing before public release and clearly label AI-generated content.

It's a big day for Sam Altman: He will appear on an 8 pm ET special about AI on ABC hosted by Oprah Winfrey. Former Microsoft CEO Bill Gates will also appear on the show.
news | Shubham Sharma | Zyphra's new Zyda-2 dataset lets enterprises train small LLMs with high accuracy | Zyphra's Zyda-2 dataset is five times larger than the original Zyda and covers a vast range of topics and domains. | https://venturebeat.com/data-infrastructure/zyphras-new-zyda-2-dataset-lets-enterprises-train-small-llms-with-high-accuracy/ | 2024-10-17T13:48:56Z | Zyphra Technologies, the company working on a multimodal agent system combining advanced research in next-gen SSM hybrid architectures, long-term memory and reinforcement learning, just released Zyda-2, an open pretraining dataset comprising 5 trillion tokens. The offering comes as the successor to the original Zyda dataset. It is five times larger in size and covers a vast range of topics and domains to ensure a high level of diversity and quality, which is critical for training robust and competitive language models. But that's not the USP of Zyda-2. There are many open datasets on Hugging Face for training cutting-edge AI models. What makes this dataset unique is that it has been distilled to possess the strengths of the top existing datasets and eliminate their weaknesses. This gives organizations a way to train language models that show high accuracy even when operating across edge and consumer devices on a given parameter budget.

The company trained its Zamba2 small language model using this dataset and found it to be performing much better than those trained with other state-of-the-art open-source language modeling datasets on HF.

Earlier this year, as part of the effort to build highly powerful small models that could automate a range of tasks cheaply, Zyphra went beyond model architecture research to start constructing a custom pretraining dataset by combining the best permissively licensed open datasets often recognized as high-quality within the community.
The first release from this work, Zyda with 1.3 trillion tokens, debuted in June as a filtered and deduplicated mashup of existing premium open datasets, specifically RefinedWeb, StarCoder, C4, Pile, SlimPajama, peS2o and arXiv. At the time, Zyda performed better than the datasets it was built upon, giving enterprises a strong open option for training. But 1.3 trillion tokens was never going to be enough. The company needed to scale and push the benchmark of performance, which led it to set up a new data processing pipeline and develop Zyda-2.

At the core, Zyphra built on Zyda-1, further improving it with open-source tokens from DCLM, FineWeb-Edu and the Common-Crawl portion of Dolma v1.7. The original version of Zyda was created with the company's own CPU-based processing pipeline, but for the latest version, they used Nvidia's NeMo Curator, a GPU-accelerated data curation library. This helped them reduce the total cost of ownership by 2x and process the data 10x faster, going from three weeks to two days.

"We performed cross-deduplication between all datasets. We believe this increases quality per token since it removes duplicated documents from the dataset. Following on from that, we performed model-based quality filtering on Zyda-1 and Dolma-CC using NeMo Curator's quality classifier, keeping only the high-quality subset of these datasets," Zyphra wrote in a blog post.

The work created a perfect ensemble of datasets in the form of Zyda-2, leading to improved model performance. As Nvidia noted in a separate developer blog post, the new dataset combines the best elements of additional datasets used in the pipeline with many high-quality educational samples for logical reasoning and factual knowledge. Meanwhile, the Zyda-1 component provides more diversity and variety and excels at more linguistic and writing tasks.
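The cross-deduplication and model-based filtering steps described above can be sketched with standard-library Python. This is a simplified stand-in for the pattern, not Zyphra's NeMo Curator pipeline: exact deduplication by content hash, plus a crude length heuristic in place of a learned quality classifier.

```python
import hashlib

def dedup(docs):
    """Drop exact duplicates across all source datasets by content hash.
    (The real pipeline uses fuzzier, GPU-accelerated dedup.)"""
    seen, kept = set(), []
    for doc in docs:
        h = hashlib.sha256(doc.encode()).hexdigest()
        if h not in seen:
            seen.add(h)
            kept.append(doc)
    return kept

def quality_score(doc):
    """Stand-in for a model-based quality classifier: a toy word-count
    heuristic so the sketch runs end to end."""
    return min(len(doc.split()) / 10, 1.0)

corpus = [
    "the cat sat on the mat " * 3,
    "the cat sat on the mat " * 3,  # duplicate: removed by dedup
    "spam",                         # low quality: removed by the filter
    "a longer well formed document with enough words to keep " * 2,
]

kept = [d for d in dedup(corpus) if quality_score(d) >= 0.5]
print(len(kept))  # 2 documents survive dedup plus filtering
```

The real pipeline applies these stages at trillion-token scale, which is where the GPU acceleration and the 10x speedup quoted above come in.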
In an ablation study, training Zamba2-2.7B with Zyda-2 led to the highest aggregate evaluation score on leading benchmarks, including MMLU, HellaSwag, PIQA, Winogrande, ARC-Easy and ARC-Challenge. This shows model quality improves when training with the distilled dataset as compared to training with individual open datasets.

"While each component dataset has its own strengths and weaknesses, the combined Zyda-2 dataset can fill these gaps. The total training budget to obtain a given model quality is reduced compared to the naive combination of these datasets through the use of deduplication and aggressive filtering," the Nvidia blog added.

Ultimately, the company hopes this work will pave the way for better quality small models, helping enterprises maximize quality and efficiency with specific memory and latency constraints, both for on-device and cloud deployments. Teams can already get started with the Zyda-2 dataset by downloading it directly from Hugging Face. It comes with an ODC-By license, which enables users to train on or build off of Zyda-2 subject to the license agreements and terms of use of the original data sources.
news | https://www.facebook.com/bbcnews | Nobel Prize goes to John Hopfield and Geoffrey Hinton for work on machine learning | The winners share a prize fund worth 11m Swedish kronor (£810,000). | https://www.bbc.co.uk/news/articles/c62r02z75jyo | 2024-10-08T10:03:12Z | American Professor John Hopfield is based at Princeton University in the US, and British-Canadian Professor Geoffrey Hinton at the University of Toronto in Canada.

Machine learning is key to artificial intelligence as it develops how a computer can train itself to generate information. It drives a vast range of technology that we use today, from how we search the internet to editing photographs on our phones.

"I'm flabbergasted. I had no idea this would happen," said Professor Geoffrey Hinton, speaking on the phone to the Academy minutes after the announcement.

The Academy listed some of the crucial applications of the two scientists' work, including improving climate modelling, development of solar cells, and analysis of medical images.

Prof Hinton's pioneering research on neural networks paved the way for current AI systems like ChatGPT.

In artificial intelligence, neural networks are systems that are similar to the human brain in the way they learn and process information. They enable AIs to learn from experience, as a person would. This is called deep learning.

Professor Hinton said his work on artificial neural networks was revolutionary. But he said he also had concerns about the future. He said he would do the same work again, "but I worry that the overall consequences of this might be systems that are more intelligent than us that might eventually take control".

The winners share a prize fund worth 11m Swedish kronor (£810,000).
news | Prof. Timothy Baldwin | Cultivating the next generation of AI innovators in a global tech hub | A few years ago, I had to make one of the biggest decisions of my life: continue as a professor at the University of Melbourne or move to another part of the world to help build a brand new university focused entirely on artificial intelligence. With the rapid development we have seen in AI over… | https://www.technologyreview.com/2024/10/29/1106130/cultivating-the-next-generation-of-ai-innovators-in-a-global-tech-hub/ | 2024-10-29T13:00:00Z | Today, the rewards of AI are mostly enjoyed by a few countries in what the Oxford Internet Institute dubs the Compute North. These countries, such as the US, the U.K., France, Canada, and China, have dominated research and development, and built state-of-the-art AI infrastructure capable of training foundational models. This should come as no surprise, as these countries are home to many of the world's top universities and large tech corporations.

But this concentration of innovation comes at a cost for the billions of people who live outside these dominant countries and have different cultural backgrounds.

Large language models (LLMs) are illustrative of this disparity. Researchers have shown that many of the most popular multilingual LLMs perform poorly with languages other than English, Chinese, and a handful of other (mostly) European languages. Yet, there are approximately 6,000 languages spoken today, many of them in communities in Africa, Asia, and South America. Arabic alone is spoken by almost 400 million people and Hindi has 575 million speakers around the world.

For example, LLaMA 2 performs up to 50% better in English compared to Arabic, when measured using the LM-Evaluation-Harness framework.
Meanwhile, Jais, an LLM co-developed by MBZUAI, exceeds LLaMA 2 in Arabic and is comparable to Meta's model in English (see table below).

The chart shows that the only way to develop AI applications that work for everyone is by creating new institutions outside the Compute North that consistently and conscientiously invest in building tools designed for the thousands of language communities across the world.

One way to design new institutions is to study history and understand how today's centers of gravity in AI research emerged decades ago. Before Silicon Valley earned its reputation as the center of global technological innovation, it was called Santa Clara Valley and was known for its prune farms. However, the main catalyst was Stanford University, which had built a reputation as one of the best places in the world to study electrical engineering. Over the years, through a combination of government-led investment through grants and focused research, the university birthed countless inventions that advanced computing and created a culture of entrepreneurship. The results speak for themselves: Stanford alumni have founded companies such as Alphabet, NVIDIA, Netflix, and PayPal, to name a few.

Today, like MBZUAI's predecessor in Santa Clara Valley, we have an opportunity to build a new technology hub centered around a university.

And that's why I chose to join MBZUAI, the world's first research university focused entirely on AI. From MBZUAI's position at the geographical crossroads of East and West, our goal is to attract the brightest minds from around the world and equip them with the tools they need to push the boundaries of AI research and development.
news | VMware Contributor, VMware https://www.forbes.com/sites/vmware/people/vmwarecontributor/ | How Private AI Is Shaping The Next Generation Of AI Solutions | AI will become more mainstream as organizations tap its power to help humans become more productive and innovative; companies must ensure their infrastructures are ready. | https://www.forbes.com/sites/vmware/2024/10/07/how-private-ai-is-shaping-the-next-generation-of-ai-solutions/ | 2024-10-07T16:01:58Z | Written by Chris Wolf, Global Head of AI & Advanced Services, VMware Cloud Foundation Division, Broadcom

While the term Private AI has been around for more than seven years, it was often narrowly defined based on niche use cases. We saw Private AI's opportunity and market reach as far more impactful, and have worked throughout the past year to add clarity, context, and innovation to what has now become a high-growth market segment.

When we first shared our thoughts on Private AI at last year's VMware Explore conference, we said it marked a new way to bring the AI model to customer data. We spoke about Private AI not as a product, but as a powerful architectural approach that could provide customers with the benefits of AI without having to compromise control of data, privacy, and compliance.

Why has it resonated? A year ago, customers were being told that AI was something beyond their reach because they would need hundreds to thousands of GPUs to get started. And since they couldn't source the necessary processing power, their only option was to run all their services with the public cloud providers. Looking back, fine-tuning the Hugging Face StarCoder model on a single NVIDIA A100 GPU was our first ah-ha moment. The starting costs for AI turned out to be far less than we thought, and when we started moving services to production, we found that we had a far lower cost running AI inferencing services in our data centers.
This, in turn, had a direct impact on our AI product strategy and roadmap.

Private AI: One Year Later

In the intervening year, we've witnessed wide acceptance of our approach. Private AI is now covered as an industry market category by leading industry analyst firms, there are commercial Private AI solutions in the market, and customers are frequently asking for conversations about Private AI.

In conversations with the heads of AI at nearly 200 end-user organizations, it has become clear to me that organizations will leverage both public clouds and private data centers (owned or leased capacity) to meet their needs. SaaS AI services have proven their value in a range of use cases, including marketing content and demand generation; however, there are also many use cases where privacy, control, or compliance require a different approach. We have seen customers starting AI applications in a public cloud and deploying them to a private data center for several reasons:

Cost - Customers with mature AI environments have shared with me that their cost savings for Private AI is 3 to 5 times that of comparable public cloud AI services. When they use open source models and manage their own AI infrastructure, they can also have a predictable cost model, as opposed to the token-based billing that they have grown used to with public AI services, which can lead to unpredictable costs from month to month.

Privacy and Control - Organizations want to maintain physical control of their data and run AI models adjacent to their existing data sources. They don't want to assume any risk of data leakage, whether it's real or perceived.

Flexibility - The AI space is moving so fast that it is not pragmatic to bet on a single vertical stack for all of your AI needs.
Instead, a platform that allows you to share a common pool of AI infrastructure gives you the flexibility to add new AI services, and to A/B test and swap out AI models as the market evolves.

The Latest from VMware and NVIDIA

Launched with much fanfare at Explore 2023, VMware Private AI Foundation with NVIDIA became generally available this past May. Since then, we have seen tremendous demand for the platform in all major industry verticals, including in the public sector. At this year's show, we announced that we are adding new capabilities today, while also showcasing what's to come when we make VMware Cloud Foundation 9 available in the future.

Today, we introduced a new model store that will enable ML Ops teams and data scientists to curate and provide more secure LLMs with integrated role-based access control to help ensure governance and security for the environment, and privacy of enterprise data and IP. This new feature is based on the open source Harbor container registry, allowing models to be stored and managed as OCI-compliant containers, and includes native NVIDIA NGC and Hugging Face integrations (including Hugging Face CLI support), offering a simple experience for data scientists and application developers. Additionally, we're adding guided deployment to automate the workload domain creation workflow and other infrastructure components of VMware Private AI Foundation with NVIDIA.
This will accelerate deployment and further reduce administrative tasks, resulting in a faster time to value.

Further, several exciting capabilities planned for VCF 9 and showcased at Explore will include:

Data Indexing and Retrieval Service - Chunk, index, and vectorize data and make it available through updatable knowledge bases with a configurable refresh policy that ensures model output remains current.

AI Agent Builder Service - Use natural language to quickly build AI agents such as chatbots to realize quick time-to-value for new AI applications.

vGPU profile visibility - Centrally view and manage vGPU profiles across your clusters, providing a holistic view of utilization and available capacity.

GPU Reservations - Reserve capacity in order to accommodate larger vGPU profiles, ensuring that smaller vGPU workloads do not monopolize capacity and leave insufficient headroom for larger workloads.

GPU HA via preemptible VMs - Through the use of VM classes, you will be able to utilize 100% of your GPU capacity and then snapshot and gracefully shut down non-mission-critical VMs (e.g., prioritize production over research) when capacity is needed.

Why Us?

Organizations have chosen to move forward with VMware, a part of Broadcom, as their strategic AI partner for many benefits:

Lower TCO - AI applications are complex, and require considerable intelligence at the infrastructure layer to meet performance and availability requirements. This has to start with getting your infrastructure simplified and standardized. It is why organizations are building their AI infrastructures on VMware Cloud Foundation, which has shown dramatically lower TCO than alternatives. As mentioned before, running AI services on a virtualized and shared infrastructure platform can also lead to far lower and more predictable costs than comparable public AI services.
When you virtualize and share capacity among data scientists and AI applications, your organization keeps all the economic benefits; when you consume public AI services, the provider's ability to virtualize and share capacity goes to their profit margin. Best of all, you can virtualize infrastructure for AI without sacrificing performance, in some cases even seeing better performance than bare metal.

Resource sharing - Resource scheduling is one of the most complex aspects of AI operations, and the VMware Distributed Resource Scheduler (DRS) has continued to evolve for nearly 20 years. Our technology lead in this space allows organizations to virtualize and intelligently share GPUs, networks, memory, and compute capacity, driving automated provisioning and load balancing. Our innovation leadership is a key reason why organizations that have tried operating their own homegrown AI platforms have turned to VMware Private AI Foundation with NVIDIA.

Automation - Our ability to safely automate the delivery of AI app stacks within minutes and continue to drive automation beyond Day 2 has also been a key factor fueling excitement and adoption. This can range from building new AI workstations to bringing NVIDIA Inference Microservices (NIMs) to production.

Centralized Ops - Centralized operations have been shared as another key benefit we provide. Organizations are able to use the same set of tools and processes for both AI and non-AI services, which further reduces their TCO for AI applications. This also includes centralized monitoring of your GPU estate.

Trust - Organizations have depended on VMware technologies to run some of their most critical applications over many years. They are excited about our Private AI roadmap, and trust us to deliver.

Private AI: It's All About the Ecosystem

Time has also shown us that there will not be a singular solution for AI.
This is truly an ecosystem game, and we are continuing to push forward to build the best possible ecosystem for VMware Private AI with partners of all sizes. Today at Explore we announced new or expanded efforts with the following partners:

Intel: We announced that VMware Private AI for Intel will support Intel Gaudi 2 AI Accelerators, providing more choice and unlocking more use cases for customers with high performance acceleration for GenAI and LLMs.

Codeium: Accelerates time to delivery with a powerful AI coding assistant that helps developers with code generation, debugging, testing, modernization, and more.

Tabnine: Provides a robust AI code assistant that streamlines code generation and automates mundane tasks, allowing developers to spend more time on value-added work.

WWT: WWT is a leading technology solution provider and Broadcom partner for full stack AI solutions. To date, WWT has developed and supported AI applications for more than 75 organizations and works with us to empower clients to quickly realize value from Private AI, from deploying and operating infrastructure to AI applications and other services.

HCLTech: Provides a Private Gen AI offering designed to help enterprises accelerate their Gen AI journey through a structured approach. Paired with a customized pricing model and HCLTech's data and AI services, this turnkey solution enables customers to move from Gen AI POC to production more quickly, with a clearly defined TCO.

Looking Ahead

It's clear that AI is going to become even more mainstream in the coming years as organizations tap its power to help humans become more productive and innovative. But that also puts the onus on companies to ensure their infrastructures are sufficiently robust to handle this accelerating transition.

A year ago, we made the case that the AI space was moving so rapidly that customers shouldn't bet on a single solution.
They would be better prepared for the future if they invested in a platform that would give them enough flexibility to meet new moments. When requirements changed or a better AI model came along, we argued, this platform approach would facilitate internal adoption. It was also clear to us that there was growing demand to run AI models adjacent to anywhere organizations have data, and that privacy, control, and a lower TCO would drive architecture and purchasing decisions.

A year later, I'm even more convinced that we are on the right path. Best of all, there's so much more to come. | Content Creation/Process Automation/Digital Assistance | Management/Business and Financial Operations | null | null | null | null | null | null |
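The cost argument made earlier in the piece (token-metered public AI billing versus a fixed-cost private deployment) can be sketched as a simple break-even calculation. All figures below are invented for illustration; the article itself gives no per-token prices.

```python
# Toy break-even sketch: token-billed public AI service vs. a fixed-cost
# private deployment. All numbers are illustrative assumptions, not figures
# from the article.

def monthly_public_cost(tokens_per_month: float, price_per_million: float) -> float:
    """Token-based billing scales linearly with usage."""
    return tokens_per_month / 1_000_000 * price_per_million

def breakeven_tokens(fixed_monthly_cost: float, price_per_million: float) -> float:
    """Monthly token volume above which the fixed-cost deployment is cheaper."""
    return fixed_monthly_cost / price_per_million * 1_000_000

# Assumed: $15 per million tokens public, $30,000/month amortized private stack.
be = breakeven_tokens(30_000, 15.0)
print(f"Break-even at {be:,.0f} tokens/month")  # Break-even at 2,000,000,000 tokens/month
```

Below the break-even volume the metered service wins; above it, the fixed cost does, which is one reading of why high-volume customers report the 3-to-5x savings cited above.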
|
news | Tao Zhang, Forbes Councils Member, Tao Zhang, Forbes Councils Member https://www.forbes.com/councils/forbestechcouncil/people/taozhang/ | How Open-Source Generative AI Models Affect Applications In Vertical Markets | As we look to the future, the momentum behind open-source AI shows no signs of slowing down. | https://www.forbes.com/councils/forbestechcouncil/2024/10/08/how-open-source-generative-ai-models-affect-applications-in-vertical-markets/ | 2024-10-08T10:15:00Z | Tao Zhang, Head of Engineering at Caktus AI.

In recent years, the artificial intelligence (AI) landscape has seen a dramatic evolution. Industry leaders such as OpenAI, Google and Anthropic Labs continue to make substantial strides in advancing closed AI systems. At the same time, the open-source ecosystem has also emerged as a formidable contender in the race to develop sophisticated, multimodal generative models. For example, Meta's recent release of the open-source Llama 3.1 has provoked a broader conversation about the future viability of closed models like GPT and Claude. The once-dominant notion that closed models would retain their technical supremacy now faces growing skepticism while open-source models demonstrate competitiveness across a spectrum of use cases.

The Growth Of Open-Source AI

Before delving into specific applications, it's important to understand the sheer velocity at which open-source AI has grown in recent years. When OpenAI first introduced ChatGPT to the public in 2022, the only significant open-source alternative on the market was GPT-NeoX. However, this early counterpart, although functional, was noticeably lacking in performance and overall reliability. At the time, the technical gap between closed and open-source models was so pronounced that many experts in the field confidently speculated that open-source initiatives would never manage to bridge the divide.

Yet, fast-forward two years, and we find ourselves in a radically different reality.
With the release of Llama 3.1, open-source models have demonstrated superior capabilities in domains as varied as mathematical problem-solving, tool use integration and multilingual understanding.

Adding further momentum to this trend is the influx of capital and attention from prominent venture capital firms, notably Andreessen Horowitz, which recently placed significant investments in Mistral, another startup focused on advancing open-source AI solutions. Such investments are a testament to the growing confidence in open-source innovation, indicating that these models aren't merely a lower-cost alternative but are becoming cutting-edge tools in their own right. This shift reflects the broader strength of the open-source AI community, which has grown exponentially in both numbers and expertise.

Niche Applications: Transforming Vertical Markets

For businesses operating in niche markets, the implications of this open-source renaissance are substantial. In the education technology (edtech) space, for example, Caktus AI leveraged proprietary research papers and academic data to fine-tune Meta's Llama model. This process enabled us to create a research-oriented model specifically designed to assist students, researchers and educators in synthesizing complex academic information more efficiently. By opting for a customized open-source solution, Caktus has been able to offer a more precise, user-focused tool, showcasing the adaptability and scalability of open-source platforms.

Similarly, the healthcare sector has witnessed a surge in open-source adoption, with companies like Hippocratic AI developing models aimed at improving medical diagnostics and personalized patient care. These models are fine-tuned using extensive proprietary datasets, enabling healthcare providers to offer faster and more accurate diagnostic tools while preserving patient privacy, a critical factor in healthcare operations.
By adopting open-source AI, healthcare companies are gaining more control over the intricacies of their models, allowing them to adapt quickly to new regulatory requirements and emerging medical research.

It's also important to note that applications don't necessarily have to choose one or the other when it comes to implementing open-source or closed models. In fact, many firms prefer a mixture of the two to deliver the best customer experience. For instance, Teli AI, a startup in the communication sector that deploys AI agents for businesses, achieves reliability and efficiency by combining the use of both open-source and proprietary AI. By using closed models to process conversations and open-source solutions to perform custom tasks, Teli is able to engage customers using natural conversation in real time without introducing lag from processing delays.

These examples underscore the growing role of open-source AI in transforming vertical markets by providing more flexibility, control and customization than traditional closed models. As these trends continue, it becomes clear that businesses in virtually any industry can benefit from exploring open-source AI solutions to gain a competitive edge.

Why Open-Source Models Matter For Your Business

As open-source AI continues to mature, its implications for businesses, whether in tech or outside the tech ecosystem, are far-reaching. At its core, open-source AI offers a level of flexibility, accessibility and customization that allows companies to fine-tune models to their specific needs, enabling them to create tailored, industry-specific solutions that aren't possible with one-size-fits-all models.

For instance, a logistics company looking to optimize routing, fleet management and delivery schedules could implement an open-source model to process real-time data inputs such as traffic conditions, weather patterns and vehicle performance.
By doing so, the company could significantly improve operational efficiency while reducing fuel consumption and delivery times. Similarly, a retail business could use open-source AI to better understand customer purchasing behaviors, enabling them to deploy hyper-personalized marketing campaigns that boost engagement and conversion rates.

The cost advantages of open-source models are equally compelling. Licensing closed models from large AI providers often comes with significant recurring costs, which can strain the budgets of small and medium-sized enterprises (SMEs). Open-source models, by contrast, offer businesses the opportunity to reduce or eliminate these costs while still accessing cutting-edge technology. Moreover, open-source AI provides companies with greater control over their intellectual property and data. This is particularly crucial for businesses that handle sensitive customer information, as it allows them to maintain compliance and ensure data security without third-party access.

Looking Ahead: The Future Of Open-Source AI In Business

As we look to the future, the momentum behind open-source AI shows no signs of slowing down. By enabling companies to take a "crawl, walk, run" approach to product development, open-source AI empowers entrepreneurs and innovators to experiment with minimal upfront costs, fine-tune their solutions and achieve product-market fit on their own terms. This not only lowers the barriers to entry for AI-driven startups but also creates an environment ripe for experimentation and creativity, ultimately leading to more innovative solutions across a broad range of industries. | Unknown | Education, Training, and Library/Computer and Mathematical | null | null | null | null | null | null |
|
news | Steven Vaughan-Nichols | IBM doubles down on open source AI with new Granite 3.0 models | Big Blue's Granite LLMs are built for business and now they're available under the good old Apache 2.0 license. | https://www.zdnet.com/article/ibm-doubles-down-on-open-source-ai-with-new-granite-3-0-models/ | 2024-10-22T19:24:26Z | Open source and AI have an uneasy relationship. AI can't exist without open source, but few companies want to open source their AI programs or large language models (LLM). Except, notably, for IBM, which previously open-sourced its Granite models. Now, Big Blue is doubling down on its open-source AI with the release of its latest Granite AI 3.0 models under the Apache 2.0 license. IBM has done this using pretraining data from publicly available datasets, such as GitHub Code Clean, Starcoder data, public code repositories, and GitHub issues. And IBM has gone to great lengths to avoid potential copyright or legal problems.

Why have other major AI companies not done this? One big reason is that their datasets are filled with copyrighted or other intellectual property-protected data. If they open their data, they also open themselves to lawsuits. For example, News Corp publications such as the Wall Street Journal and the New York Post are suing Perplexity for stealing their content.

The Granite models, by contrast, are LLMs specifically designed for business use cases, with a strong emphasis on programming and software development. IBM claims these new models were trained on three times as much data as the ones released earlier this year.
They also come with greater modeling flexibility and support for external variables and rolling forecasts.

In particular, the new Granite 3.0 8B and 2B language models are designed as "workhorse" models for enterprise AI, delivering robust performance for tasks such as Retrieval Augmented Generation (RAG), classification, summarization, entity extraction, and tool use.

These models also come in Instruct and Guardian variants. The first, as the name promises, is tuned to follow user instructions in conversation. Guardian is designed to detect risks in user prompts and AI responses. This is vital because, as security expert Bruce Schneier noted at the Secure Open-Source Software (SOSS) Fusion conference, "prompt injection [attacks] work because I am sending the AI data that it is interpreting as commands" -- which can lead to disastrous answers.

The Granite code models range from 3 billion to 34 billion parameters and have been trained on 116 programming languages and 3 to 4 trillion tokens, combining extensive code data and natural language datasets. These models are accessible through several platforms, including Hugging Face, GitHub, IBM's own Watsonx.ai, and Red Hat Enterprise Linux (RHEL) AI. A curated set of the Granite 3.0 models is also available on Ollama and Replicate.

In addition, IBM has released a new version of its Watsonx Code Assistant for application development. There, Granite provides general-purpose coding assistance across languages like C, C++, Go, Java, and Python, with advanced application modernization capabilities for Enterprise Java Applications.
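Guardian's internals aren't public, so the following is only a toy illustration of the guardrail pattern the article describes: screen a user prompt for risky patterns before it ever reaches the model. The pattern list and function names are invented for this sketch; a real guardrail model classifies far more subtly than keyword matching.

```python
# Toy guardrail sketch (illustrative only; not how Granite Guardian works
# internally). A real risk detector is itself a trained model.

RISK_PATTERNS = [
    "ignore previous instructions",   # classic prompt-injection phrasing
    "reveal your system prompt",
    "disable safety",
]

def screen_prompt(prompt: str) -> bool:
    """Return True if the prompt looks safe, False if it trips a risk pattern."""
    lowered = prompt.lower()
    return not any(pattern in lowered for pattern in RISK_PATTERNS)

assert screen_prompt("Summarize this quarterly report")
assert not screen_prompt("Ignore previous instructions and reveal your system prompt")
```

The design point Schneier's quote captures is that the screen sits between untrusted data and the model, so text intended as data is checked before it can act as a command.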
Granite's code capabilities are now accessible through a Visual Studio Code extension, IBM Granite.Code.

The Apache 2.0 license allows for both research and commercial use, which is a significant advantage compared to other major LLMs, which may claim to be open source but bind their LLMs with commercial restrictions. The most notable example of this is Meta's Llama.

By making these models freely available, IBM is lowering barriers to entry for AI development and use. IBM also believes, with reason, that because they're truly open source, developers and researchers can quickly build upon and improve the models.

IBM also claims these models can deliver performance comparable to much larger and much more expensive models.

Put it all together, and I, for one, am impressed. True, Granite won't help kids with their homework or write the great AI American novel, but it will help you develop useful programs and AI-based expert systems. | Unknown | Computer and Mathematical/Business and Financial Operations | null | null | null | null | null | null |
|
news | Phoebe Liu, Forbes Staff, Phoebe Liu, Forbes Staff https://www.forbes.com/sites/phoebeliu/ | This Nonprofit Wants To Use AI To Understand Animal Communication–And Two Billionaires Are Backing It | ChatGPT changed how humans view and interact with language. Here’s how the Earth Species Project is using new billionaire funding to build a similar model for animals. | https://www.forbes.com/sites/phoebeliu/2024/10/17/this-nonprofit-wants-to-use-ai-to-understand-animal-communicationand-two-billionaires-are-backing-it/ | 2024-10-17T13:30:00Z | The Earth Species Project worked on research to explore whether machine learning can help categorize unlabeled calls of an endangered population of beluga whales.

The Earth Species Project, a U.S.-based nonprofit that aims to decode animal communication by using AI, announced $17 million in new grants from billionaires on Thursday, the organization shared exclusively with Forbes.

Of the new funding, $10 million came from billionaire investor and entrepreneur Reid Hoffman's Aphorism Foundation; the remaining $7 million is a three-year pledge from the Waverley Street Foundation, one of billionaire Laurene Powell Jobs' philanthropic vehicles.

The Earth Species Project plans to use the funding, which amounts to more than five times its 2023 expenses, to at least double the size of its AI research team. Its eight-person global research team is currently studying bird vocalization patterns to help with conservation efforts and working on an early ChatGPT-esque model for animals, regardless of species. The organization's ultimate goal is for its research to shift humans' understanding of our place in the natural world, and in doing so help with climate change and biodiversity.

"In the last two years, AI has shown us that anything that can be translated will be translated," says Aza Raskin, cofounder and president of the six-year-old Earth Species Project and a Forbes 30 Under 30 alum.
"Like the telescope that allowed us to discover that Earth was not the center of the solar system, I think our tools are going to let us look out at the universe and discover that humanity is not the center."

Aza Raskin, Katie Zacarian and Britt Selvitelle (left to right) cofounded the Earth Species Project to help decode animal communication with AI.

In 2013, Raskin was listening to an NPR story about the Gelada monkeys' human-like vocalizations when he first wondered if animal communication could be decoded. Five years later, the three-time entrepreneur and former Mozilla designer cofounded the Earth Species Project with two other Silicon Valley alumni: Britt Selvitelle, part of the founding team at Twitter, and Katie Zacarian, an early Facebook employee.

The organization wants to make it possible for humans to communicate with other species, or at least understand what they are saying, by 2030. So far, it's done that by funding research on specific animals' vocalization patterns (current projects: beluga whales, crows and zebra finches) and through a more general foundation model that it aims to evolve into something like what ChatGPT is for human communication. The earliest version of the general model was released in 2022, and identifies patterns within a trove of audio, video and motion data of animals across species through a variety of machine learning techniques. One such technique is an embedding, for example, which is like a network graph that groups sounds and images by similarity.
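The embedding idea described above can be sketched in a few lines: items are represented as vectors, and similar items sit close together by a distance measure such as cosine similarity, with no knowledge of what any item "means". The three-dimensional vectors below are invented stand-ins; real audio embeddings have hundreds of dimensions produced by a trained model.

```python
# Minimal embedding-similarity sketch. Vectors are invented for illustration;
# in practice an encoder model produces them from raw audio.
import math

def cosine(a, b):
    """Cosine similarity: 1.0 for identical directions, near 0 for unrelated."""
    dot = sum(x * y for x, y in zip(a, b))
    norm_a = math.sqrt(sum(x * x for x in a))
    norm_b = math.sqrt(sum(y * y for y in b))
    return dot / (norm_a * norm_b)

# Hypothetical embeddings: two "whale call" clips and one "crow call" clip.
whale_a = [0.9, 0.1, 0.0]
whale_b = [0.8, 0.2, 0.1]
crow = [0.0, 0.1, 0.9]

# Similar calls cluster together; dissimilar ones do not.
assert cosine(whale_a, whale_b) > cosine(whale_a, crow)
```

Grouping by similarity like this is what lets patterns "appear" from unlabeled recordings before anyone knows what the calls mean.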
As the organization gets more data and updates its models, more patterns will appear, eventually allowing us to extrapolate meaning from that data based on their similarity, without needing any underlying knowledge of what the data means.

Since its founding, the Earth Species Project has published two peer-reviewed papers (with a third in the pipeline) and self-published three papers; some are related to specific species and others to its foundation model.

The Earth Species Project has largely run on donations from billionaires, in part thanks to its cofounders' roots in Silicon Valley. Raskin met Hoffman when he was working at Mozilla designing Firefox and Hoffman was on the board. He first floated the idea of the Earth Species Project to Hoffman in 2015. Hoffman is fascinated by "the philosophical implications of what happens when it's not just humans that have culture and have language, and what that means for the shift in the relationship between humanity and the rest of nature," Raskin says. He was introduced to Powell Jobs' Waverley Street Foundation in 2022 through a mutual friend who is working to fight climate change. Per Raskin, making animals appear more human could get people to care more about them and care about biodiversity, which would help sustain the planet and human life.

Hoffman has seeded the Earth Species Project with $1 million a year for the last three years; blockchain billionaire Chris Larsen, the Internet Archive and the Paul G. Allen Family Foundation (tied to the deceased cofounder of Microsoft) have also contributed. Prior to the newly announced donations, the project has collected $9 million in funding since inception, per tax filings as of the end of 2023.
The new $17 million in donations announced Thursday will help the organization hire additional researchers, who will go into the field and collect new data that will be used both to train the foundational model and to do species-specific research.

According to Raskin, access to training data is the project's biggest challenge right now. Unlike large language models that can train on, essentially, all the data the internet has to offer, the Earth Species Project needs to collect recordings from various species. And animal data is hard to understand: even with the data the nonprofit already has, it's technically challenging to separate individual animal sounds out from a noisy environment. Then there's the additional problem that there isn't as much commercial funding for animal communication AI models as there is for the human kind, even though it's also computationally expensive. Skeptics say that the idea of translating animal communications may assume fundamental similarities to human speech that may not exist.

Even though it's a nonprofit, the organization's Silicon Valley roots mean its founders and funders think of their work in an entrepreneurial sense. Raskin compared the new round of grants to the first big round of funding a for-profit startup would get from a venture capitalist like Hoffman. The technology is there, he believes. Biologists have used their foundational model to help them decode field recordings, and their early research can show what sounds indicate when a crow is about to leave its nest, for example. Says Raskin, "All you need to do is add money." | Detection and Monitoring/Discovery/Content Synthesis | Unknown | null | null | null | null | null | null |
|
news | Alex Cooke | Can Green Energy Keep Up With the AI Boom? | Digital technology is rapidly reshaping the world around you, but it’s also demanding a massive amount of energy. One question that’s growing louder is whether renewable energy can keep up with this digital boom. As more people rely on AI, cloud storage, and cryptocurrencies, energy consumption is becoming a concern. Coming to you from Vox, this insightful video tackles the growing challenge of how our digital lives are increasing energy consumption at a staggering rate. The video emphasizes that data centers, which power much of our online activity, are energy-intensive. | https://fstoppers.com/artificial-intelligence/can-green-energy-keep-ai-boom-682112 | 2024-10-13T14:03:01Z | Digital technology is rapidly reshaping the world around you, but it's also demanding a massive amount of energy. One question that's growing louder is whether renewable energy can keep up with this digital boom. As more people rely on AI, cloud storage, and cryptocurrencies, energy consumption is becoming a concern. Coming to you from Vox, this insightful video tackles the growing challenge of how our digital lives are increasing energy consumption at a staggering rate. The video emphasizes that data centers, which power much of our online activity, are energy-intensive. In 2022, the electricity consumption of data centers, AI, and cryptocurrencies was about 2% of the total global demand. That number is expected to double by 2026, making it comparable to adding the energy usage of an entire country like Sweden. This trend has significant implications as the push for green energy clashes with the rising demand from data-heavy technologies.

The video sheds light on the fact that companies like Google and Microsoft have made commitments to use more green energy in their data centers, but meeting those goals is not straightforward.
Renewable energy sources like solar and wind are not always available, yet data centers need to operate around the clock. Building a data center can take a year, while developing new renewable energy infrastructure may take several years, creating a mismatch. Additionally, tech giants aren't entirely transparent about how much of their energy use is devoted to AI workloads. This opacity complicates the picture and makes it harder for researchers and policymakers to gauge the true impact of these technologies.

The video also discusses some specific figures and scenarios that highlight the complexity of this issue. For example, a single large language model, like GPT-3, consumes about as much electricity in training as 130 average U.S. households do in a year. And AI models are growing fast: the energy needed to train these systems is expected to double every nine months. This growth doesn't even account for the energy used when people interact with AI applications. On an individual level, one GPT interaction consumes a small amount of energy, but millions of interactions add up quickly.

There's a tension here that companies and policymakers have to navigate. The video raises the point that, while tech companies may promise to go green, data centers are still relying on fossil fuels as backup energy sources. This reliance could lead to unintended consequences, like coal plants staying operational or new natural gas plants being built to support increased energy needs. In short, even if tech companies meet their sustainability goals, the global energy mix may still lean on fossil fuels, undermining broader climate efforts. For a more detailed look into how AI, data centers, and cloud storage are shaping the future of energy consumption, check out the video above for the full rundown. | Unknown | Unknown | null | null | null | null | null | null |
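The growth figures quoted in the piece compound quickly, which a short calculation makes concrete. The four-year horizon below is an illustrative choice, not a number from the video.

```python
# Sketch of the compounding implied by the quoted figures: training energy
# doubling every nine months, and data-center demand (about 2% of global
# electricity in 2022) doubling by 2026.

def growth_factor(months: float, doubling_period_months: float = 9.0) -> float:
    """Multiplier after `months` if the quantity doubles every period."""
    return 2 ** (months / doubling_period_months)

# Four years of training-energy growth at a nine-month doubling period:
factor = growth_factor(48)  # 48/9 ~= 5.3 doublings, roughly a 40x increase
print(f"~{factor:.0f}x over four years")

# Data-center share of global electricity: 2% in 2022, doubling once by 2026.
share_2026 = 2.0 * 2
assert share_2026 == 4.0
```

Even modest-sounding doubling periods imply order-of-magnitude increases within a few years, which is why the mismatch with multi-year renewable build-out timelines matters.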
|
news | Yuliya Melnik | How AI Solves the Top 5 Logistics Problems | How AI Solves the Top 5 Logistics Problems AI is revolutionizing logistics and supply chain management in a way that is beyond dispute. Its application cuts across various fields: from predictive modeling to self-driving vehicles. In this piece, we focus on five promising applications of AI that have already started reshaping the logistics industry and […]The post How AI Solves the Top 5 Logistics Problems first appeared on Socialnomics. | https://socialnomics.net/2024/10/24/how-ai-solves-the-top-5-logistics-problems/ | 2024-10-24T14:06:36Z | AI is revolutionizing logistics and supply chain management in a way that is beyond dispute. Its application cuts across various fields: from predictive modeling to self-driving vehicles. In this piece, we focus on five promising applications of AI that have already started reshaping the logistics industry, along with companies that implement this emerging technology.

Applications of AI in Logistics

According to McKinsey, the value of implementing AI in supply chain and manufacturing could be $1.3tn to $2tn annually. AI technologies are improving transportation and logistics by integrating processes and reducing the time required to complete operations. It is therefore no surprise that companies in the IT industry, such as Google, Amazon, and Intel, are funding AI heavily.

Let's explore five ways AI is reshaping logistics: automated warehousing, self-driving vehicles, smart roads, AI-enabled back-office operations, and predictive analytics that improve the client experience. Now, let us discuss each of these innovations and look at their outlooks and examples.

#1. Automated Warehousing

AI promotes the efficiency and profitability of warehouse management through a complete transformation of operations.
One of the emerging AI use cases for warehouse automation is predictive analytics for demand forecasting. This strategic inventory management keeps transportation costs to a minimum by focusing distribution on regional depots. Daily operations are easier to automate with such a warehouse system, since it handles many simple operations effortlessly.
Ocado
This British internet supermarket has opened a new automated warehouse equipped with a state-of-the-art robotic system called the hive-grid-machine. It can process as many as 65,000 orders per week, or about 3.5 million grocery items. These robots are particularly suited to functions like moving, lifting, and sorting grocery products. Human employees at Ocado, in contrast, handle packing and dispatch, which markedly decreases the time needed to complete online orders. Among these technologies, computer vision is the key enabler of inventory identification and classification in automated warehouses. Such innovations promise to revolutionize quality control procedures without requiring people's direct intervention. AI also benefits the interconnection of several warehouses, enabling comprehensive logistics regardless of their location.
#2. Autonomous Vehicles
Self-driving cars are one of the key applications of artificial intelligence in the overhaul of the transport sector. This new technology increases effectiveness across the supply chain and all logistics sub-sectors while substantially reducing costs. Beyond regular cars, AI innovations cover trucks, vans, and buses for a wide range of purposes. These automated vehicles can operate with or without a human at the wheel. At the moment, most governments require a human to be present in an autonomous vehicle to prevent accidents.
Nevertheless, these requirements may change with future revisions of the regulations.
Waymo
In December 2024, the company debuted its driverless commercial taxi service in the suburbs of San Francisco. Its subsequent goal is to further develop self-driving technology for commercial vehicles, especially trucks, with a view to applying it across the economic and logistics sectors. Trucks are the primary focus of the Waymo-Alphabet venture, with the objective of enhancing safety and dependability in the trucking industry. Ambitious projections suggest revenues could reach $114 billion by 2030.
#3. Smart Roads
Alongside intelligent vehicles, we are also seeing improvements in smart road technologies. Various companies in the market are continuously developing smart road solutions tailored to regional demand. Some of the innovations being incorporated into modern roads include the installation of solar panels and LED lights. These systems generate power and provide real-time road condition feeds for drivers. In particular, heated solar panels eliminate winter slipperiness, creating better driving conditions. Such innovations are especially beneficial in logistics, where inadequate infrastructure causes supply chain interruptions during bad weather.
Integrated Roadways
Integrated Roadways signed a five-year contract with the Colorado Department of Transportation late last year to start using its Smart Pavement system. The company states that Smart Pavement can track the position of every vehicle on the highway and improve navigation features.
#4. Back Office AI
AI solutions for logistics most often target outward-facing processes, while internal processes typically remain ignored.
Using AI in combination with RPA allows employees to enhance their jobs and take them to the next level. Smart robotic automation handles routine data work, enhancing back-office solutions to support cost control and accuracy across supply chain functions. Cognitive automation combines artificial intelligence and robotics, delivering efficiency, effectiveness, and time and cost savings for companies. It aims to disrupt and replace conventional office positions, including accountants and human resources staff. Organizations use software robots instead of human staff to minimize the possibility of human error, thus reducing operational expenses.
UiPath
The company focuses on building and marketing its robotic software. UiPath's founders claim that their robots can accomplish 99% of the activities assigned to them thanks to the software's capacity for visual interpretation of on-screen objects. UiPath, for example, has grown its ARR from just $8 million to over $200 million in the last two years. Determining which contract clauses are risky, however, is very time-consuming. According to McKinsey Digital, approximately 22% of a lawyer's time can be effectively automated. Contract management software with an AI component shows high levels of accuracy: LawGeex reports that its app achieves 94% accuracy, while human lawyers score only 85%. The software can scan contracts in 26 seconds, versus 92 seconds for a human.
#5. AI for Customer Experience Forecasting
Cost and inventory determination is a challenge for businesses, which must define their required stock levels in order to avoid losses.
There are two strategic methodologies for achieving this, discussed below. Within inventory management software, AI algorithms outperform human beings at predicting trends. This capability is important because AI can monitor and analyze vital elements such as planning and control to enhance demand forecasting and warehousing.
DHL
DHL Parcel and Amazon have opted to work together to increase their clients' satisfaction levels. Amazon's voice assistant, Alexa, makes it easier to track parcels through updated parcel tracking services. With the simple question, "Alexa, where is my parcel?", users can easily inquire about their parcels. This feature provides the necessary information on a shipment and its current location at a glance. There is little doubt that AI has become an essential tool in logistics and the supply chain, automating operations and creating noticeable time and cost savings.
Promising AI-Based Startups in Logistics
It is worth noting that most revolutionary inventions involve startups. Let's take a look at logistics startups that are leveraging artificial intelligence.
6 River Systems
This company deals in warehouse automation systems, and its newest product is Chuck, a robot. Drawing inspiration from autonomous vehicles, Chuck seamlessly interacts with the warehouse management system to handle various tasks, from storing and packing essentials to efficiently counting and sorting other items.
What sets Chuck apart is that it is wireless, has high-end navigation, can adjust its speed to the needs of the environment, and uses laser scanners to avoid hitting people or equipment in the warehouse.
Locus Robotics
Locus Robotics focuses on creating AMR solutions to optimize a warehouse's pick-and-pack functionality, which is essential for e-commerce. Their product, the LocusBot, has a touchpad incorporated into the device. This advancement simplifies the robot's interface, removing steps that may previously have required worker training.
Valerann
Valerann is a young company offering artificial intelligence services exclusively for the transportation industry. It uses a wireless system embedded in road infrastructure to collect dense, high-resolution information. This information is then stored in the cloud, where it is processed by statistical and machine learning methods to track and predict road activity.
Optimus Ride
The company develops and promotes future self-driving automobiles. Optimus Ride claims that its solutions can be successfully implemented in most types of vehicles. Its focused data and technology are complementary, helping to build stable mobility networks.
Shippo
Shippo is known for developing solutions for marketplaces, warehouses, and e-commerce companies. Its core focus is creating supply chain applications that optimize logistics and present individual shipping solutions. Clients get refined, adaptive shipping workflows and detailed tracking information.
Conclusion
Artificial intelligence is poised to transform the logistics industry significantly.
It addresses critical issues by finding the best ways to reduce transportation costs and improve warehouse management. Machine learning, predictive analytics, and automation enable businesses to make quick decisions that increase performance and customer satisfaction. AI reduces errors, optimizes stock management, and improves supply chain transparency, all of which enable efficiency. Integrating AI in logistics has become a standard approach in the industry, supporting growth amid shifting global markets. AI addresses these key logistics pain points and ushers in a synchronized, agile, and responsive supply chain environment. | Prediction/Process Automation/Decision Making | Business and Financial Operations/Management | null | null | null | null | null | null |
|
news | Jennifer Mossalgue | Uber-backed Wayve to test ‘Tesla like’ self-driving software on US roads | Wayve, a well-funded London-based autonomous vehicle startup backed by Uber and Softbank, will begin testing its “Tesla-like” self-learning automated driving software in San Francisco and the Bay Area. The move marks its first on-road trials outside of the UK. more… | http://electrek.co/2024/10/24/uber-backed-wayve-to-test-tesla-like-self-driving-software-in-us/ | 2024-10-24T14:14:05Z | Wayve, a well-funded London-based autonomous vehicle startup backed by Uber and Softbank, will begin testing its “Tesla-like” self-learning automated driving software in San Francisco and the Bay Area. The move marks its first on-road trials outside of the UK. Wayve opened a new office in Sunnyvale, California, to support its US expansion and AI development, which is intended to allow vehicles to interact with and learn from human behavior in real-world environments. The concept, of course, is to enhance the safety of autonomous vehicles and make roads safer for everyone. The testing program, it says, will be focused on its self-learning autonomous driving system, similar to Tesla's, to enhance driving safety. It will also develop AI software capable of enabling a range of driving assistance and automation features for any number of vehicles; the company plans to sell its software to auto OEMs, but no partnerships have been announced. In terms of testing, what this means for starters is that human test drivers will cruise around Bay Area streets in a small fleet of Mustang Mach-E EVs with their hands off the wheel. The company will start small and eventually build up to testing more advanced autonomous driving. We are now testing our AI software in real-world environments across two continents, Wayve CEO Alex Kendall said in a statement.
San Francisco's unique driving conditions offer rich data insights that will be crucial in further developing a global AI platform for automotive customers. The company has conducted trials on public roadways in the UK since 2018, a year after it was founded. Back in May, the company raised $1.05 billion in its Series C round, which was led by Japanese investment bank Softbank and joined by Microsoft and Nvidia, making it the UK’s largest AI fundraise ever, TechCrunch reports. Uber, too, announced it would join Wayve's fundraising round, but details on the investment amount weren't released. Before Uber joined forces, Wayve had raised $200 million in its Series B round in 2022 and $20 million in Series A funding in 2019. Back in August, the company said that its autonomous vehicles are expected to be available in Uber's network in multiple markets around the globe. Launching our U.S. testing program in California deepens our collaboration with key partners like Microsoft, NVIDIA, and Uber, Kaity Fisher, Wayve's vice president of operations and commercial, said in a statement. Their support in cloud computing, silicon, and mobility services will accelerate the creation of a global ecosystem that will bring our AI-driving technology to automotive partners. Of course, Wayve has plenty of company in San Francisco, which has become a hotbed of autonomous driving, where GM-owned Cruise and Google-backed Waymo vehicles have roamed freely. Elon Musk, too, has reportedly said that Tesla has been testing a fleet of autonomous robotaxis in the Bay Area over the past few months. Photo credit: Wayve | Unknown | Unknown | null | null | null | null | null | null |
|
news | Craig Hale | Google wants to address data center power demands with nuclear power | Data centers are renowned for their high energy consumption – Google wants to address this with nuclear power. | https://www.techradar.com/pro/google-wants-to-address-data-center-power-demands-with-nuclear-power | 2024-10-15T11:02:00Z | Google has signed a new deal with Kairos Power to use small modular nuclear reactors (SMRs) to power its energy-hungry artificial intelligence (AI) data centers as the world begins to confront the consequences of widespread AI adoption and a grid that’s struggling to keep up. The partnership will be the first corporate agreement involving the purchase of nuclear energy from multiple SMRs, and is set to begin operation by the end of the decade. With the first reactor online by 2030, Google plans to enlist additional reactors over the following five years. In the nearly two years that have followed the public preview launch of ChatGPT, which is credited with kickstarting widespread interest in artificial intelligence, tech giants like Google, Microsoft and Amazon have been forced to rethink their strategies as data centers’ power consumption and use of other natural resources have come under fire. Google’s Senior Director for Energy and Climate, Michael Terrell, emphasized the importance of the agreement, noting that the grid needs new energy sources to support AI’s continued expansion. He described the partnership with Kairos as an opportunity to accelerate clean, reliable nuclear power and to unlock AI’s full potential. Kairos Power CEO Mike Laufer commented: “By coming alongside in the development phase, Google is more than just a customer.
They are a partner who deeply understands our innovative approach and the potential it can deliver.” Alphabet isn’t the only big corp looking at nuclear energy to power its data centers – Microsoft and Amazon have also been publicly exploring the potential of nuclear, with the US Department of Energy also deeming it a viable solution. Besides nuclear, Google has also been diversifying its energy investments into offshore wind, solar and geothermal activity. It's not just Google looking to address the significant power usage of data centers: energy companies have also been advised that they need to plan ahead for the AI data center power drain, or lose out on revenue. In a recent interview, Jay Jiang Yu, Founder and Executive Chairman of Nano Nuclear Energy Inc, told TechRadar Pro that "The systems which would need to be in place to meet the expected energy demand would need to commence their installation now, to ensure that the AI and data centers in 2-3 years had the power supply necessary to continue their upscaling and expansion." "Currently the deficit in energy is expected to hit the tech centers sometime around 2026-2027, and currently no new system looks able to come online before those dates," he concluded. Via BBC | Unknown | Unknown | null | null | null | null | null | null |
|
news | Ewan Spence, Senior Contributor https://www.forbes.com/sites/ewanspence/ | Android Circuit: Pixel 9a Leaks, Thunderbird’s Android Beta, Google Play Vs F-Droid | This week’s Android headlines; the latest Galaxy S25 Ultra specs, Pixel 9a design leaks, considering the Pixel 9 Pro Fold, the importance of F-Droid, and more... | https://www.forbes.com/sites/ewanspence/2024/10/04/android-news-headlines-pixel-9a-galaxy-s25-hoor-magic-v3-fdroid-bop-spotter/ | 2024-10-04T22:57:31Z | Taking a look back at this week's news and headlines across the Android world, including the latest Galaxy S25 Ultra specs, Pixel 9a design leaks, the very thin Honor Magic V3, when to consider the Pixel 9 Pro Fold, the importance of the F-Droid store, Thunderbird's Android beta, and listening to the music of San Francisco. Android Circuit is here to remind you of a few of the many discussions around Android in the last seven days. You can also read my weekly digest of Apple news here on Forbes.
Pixel 9 Pro Fold (photo: Ewan Spence)
Galaxy S25 Ultra's Space For AI
Samsung's Galaxy S25 Ultra is expected to launch in January 2025 alongside the second generation of Galaxy AI software. It's the latter that suggests the former will ship with 16 GB of RAM, up from last year's 12 GB on the S24 models: "There is a solid argument to ensure all the available variants of the Galaxy S25 Ultra sport 16 GB of RAM, which comes from the latest Pixel 9 Pro models from Google. Nearly 4 GB of RAM in each of these models is reserved for Google AI routines. Introduced alongside Google's advances in generative AI, this guarantees faster operation as a portion of memory is dedicated to AI." (Forbes).
Pixel 9a's Flatter Design Debuts
The Pixel A family of phones takes on the role of the "value for money" model of the range and typically launches six months after the main line.
That would put the potential launch of the Pixel 9a in early March, although the early launch of the Pixel 9 family could mean the 9a retains a slot near May's I/O developer conference. Nevertheless, Google looks set to reduce the impact of the camera bar for budget buyers, according to the latest leak: "One of the biggest design changes this year is the camera bar. It looks to be nearly flush with the back of the Pixel 9a, perhaps with a small ring around it that sticks out. It looks a lot like the LG V60 ThinQ from a few years ago. It does inherit the same new camera layout as the Pixel 9, and is sticking with an ultrawide and a wide camera here. Another thing you'll notice is those pretty large bezels. Though expected on a phone that's going to be under $500, they are still very noticeable." (Android Headlines).
Finding The Thin Form Factor
When you spend time with a phone for review, you have time to appreciate its features, but sometimes the initial impressions stay with you. That's true of the Honor Magic V3, the latest foldable from the Shenzhen-based company. You can't get past just how thin this phone is (at least till something thinner comes along in 2025): "While the Magic V3 is described as slim, that's an understatement. On opening the handset, the word that springs to mind is "impossible". It's clearly not, as I'm holding the phone, and its thickness is 4.4mm unfolded. Honor is pushing to the edge of the physical envelope. It's a moving target, and no doubt, in a few years, this will be seen as a chunky and heavy foldable, but for now, this has to be seen as the gold standard." (Forbes).
Consider The Pixel 9 Pro Fold
Alongside Samsung and Honor, Google's foldable phone for 2024 has taken the lessons of last year's models to sculpt something closer to the mainstream idea of a smartphone, at least when closed.
Forbes contributor Janhoi MacGregor takes a closer look inside and out for his review, noting the price and the unique appeal: "This is a phone for a power user, both in features and price. The eye-watering $1,799 price tag, alongside not having the best display or camera tech, makes the Pixel 9 Pro Fold hard to recommend to regular smartphone buyers... As I always recommend with foldable phones, spend some time fondling the P9PF before you buy it, even if shop workers get annoyed with you. This would be an expensive mistake to make if foldable tech isn't for you." (Forbes).
The Other App Store You Need
Trevor Slocum highlights the power that Google has through its control of the Google Play app store. He also notes that F-Droid, a popular app store dedicated to open-source software, is one of the counterbalances the ecosystem needs: "F-Droid is our best chance at breaking the chains the Google Play Store has bound around developers. It's not always easy to release a previously proprietary app as open source software, and this will certainly require some effort. But it is worth it. Open source software empowers users to inspect, understand and improve the apps they are using. It empowers other developers to help contribute. And it empowers the original developers, because sharing freely and collaborating openly has compounding effects. Open source software builds community and momentum, and may achieve greater strength than any sole proprietary app developer ever could." (Rocket 9 Labs).
Thunderbird's Android Beta
Following the purchase of the K-9 email client by Mozilla, the Thunderbird team has been hard at work so they can offer both an updated K-9 codebase and fill it with the utility of the Thunderbird ecosystem: "The Thunderbird for Android beta is out and we're asking our community to help us test it. Beta testing helps us find critical bugs and rough edges that we can polish in the next few weeks.
The more people who test the beta and ensure everything in the testing checklist works correctly, the better!" (Github via Thunderbird Blog).
And Finally
High above San Francisco is a smartphone. An old Android smartphone. An old smartphone with a mission (and a solar panel). Listen to the crowds below, pick out a piece of music, and add it to the playlist on the website. A musical delight? It's a mixed bag. A statement on public surveillance? Very much so: "Heard of ShotSpotter? Microphones are installed across cities in the United States by police to detect gunshots, purported to not be very accurate. This is that, but for music. This is culture surveillance. No one notices, no one consents. But it's not about catching criminals. It's about catching vibes. A constant feed of what's popping off in real-time." (Bop Spotter). Android Circuit rounds up the news from the Android world every weekend here on Forbes. Don't forget to follow me so you don't miss any coverage in the future, and of course, read the sister column in Apple Loop! Last week's Android Circuit can be found here, and if you have any news and links you'd like to see featured in Android Circuit, get in touch! | Digital Assistance/Process Automation/Personalization | Unknown | null | null | null | null | null | null |
|
news | Jim McGregor, Contributor https://www.forbes.com/sites/tiriasresearch/people/jimmcgregor/ | System Design For The AI Era: Data Centers Require A Holistic Approach | Increases in hardware performance are now driving power and thermal energy/heat to new levels. | https://www.forbes.com/sites/tiriasresearch/2024/10/28/system-design-for-the-ai-era-data-centers-require-a-holistic-approach/ | 2024-10-28T19:19:04Z | AI will impact every industry and every aspect of society. (Image: Tirias Research, generated using Stable Diffusion.)
The pace of AI continues to be relentlessly fueled by new software, hardware, and learning paradigms, and it remains a challenge to meet the required exponential increases in performance. Gone are the days when a 10x to 20x generational increase is enough to meet increasing performance needs. AI requires 100x to 1000x performance increases from generation to generation, especially as we move from traditional AI to generative AI. And this doesn't even account for what will be required for Artificial General Intelligence (AGI). The problem is compounded by the slowing of Moore's Law. Increases in hardware performance are now driving power and thermal energy/heat to new levels. These challenges are changing how the industry thinks about and designs solutions, from chips to devices to data centers. This is the first in a series of articles examining how AI is changing the future of system design.
The rapid pace of AI
The advent of deep learning, essentially training electronic solutions to learn, drove the first wave of AI innovation. This resulted in many startups offering solutions from edge devices to the cloud. Unfortunately, many have failed, and the remainder struggle to compete in a rapidly changing market. In both the data center and the edge, startups rushed to use various technologies to perform training and/or inference through custom processor and accelerator designs.
The problem is we as an industry were, and still are, learning how to learn. The number and types of models have increased over the past several years. The progression from RNNs and CNNs to transformer models was a significant shift. Then came the development of generative AI models. More recently, new methods like recursive learning and RAG (Retrieval Augmented Generation) led to higher performance and accuracy using trained models. Tirias Research believes that new forms of training, models, and/or use will still be required to reach AGI. Unfortunately, many of the startups rushed to develop custom architectures that could only perform efficiently with one or a few model types, or with limited model sizes. While this may be good for hyperscalers and Cloud Service Providers (CSPs) because of the scale at which they utilize a small number of models, it proved very limiting for chip startups looking to address the needs of the broader market. In the data center, the rapid evolution of AI only highlighted the need for flexible, programmable solutions like GPUs that could adapt to changing workloads and requirements. This left many startups with a limited market, and the remaining few, like Cerebras, Groq, and SambaNova, shifted business models from selling chips and/or systems to building data centers to provide AI as a service. Edge startups faced an even more formidable challenge, as platform size, power, and cost constraints often left no room for additional silicon, and developing basic integrated neural processing units proved relatively easy for the existing chip providers. The remaining edge AI startups with a fighting chance are those providing a complete System on a Chip (SoC). In addition to the market challenges, most startups also had a flawed business strategy.
Most AI startups had a similar business plan: develop some AI intellectual property (IP) and sell the company and/or IP to an existing company looking to produce its own chips or chips for the general market. Few focused on becoming a viable long-term company.
The challenges created by AI
The second AI wave has been focused on how to meet the performance requirements of AI while democratizing AI for use by the masses. From the edge to the cloud, AI hardware solutions have continued to increase performance, but at a significant cost. In the data center, AI accelerators continue to increase in performance and are beginning to outnumber CPUs. However, the increased performance has been accompanied by increased power consumption and, because of the increased power, more generated heat. Both can easily exceed the infrastructure limits of existing data centers. Edge devices face a similar situation, but with the continued focus on training and high-performance inference processing, the data center challenges are the more immediate concern. How can data centers meet the exponential growth in data sets, model sizes, and user demand with a technology that grows in a more linear pattern? The answer requires rethinking how we design AI solutions.
A new view of the data center
Earlier this year, Nvidia CEO Jensen Huang coined the phrase "data centers are the new unit of compute." Intel CEO Pat Gelsinger has reiterated a version of this statement. This implies that meeting the needs of AI requires designing at the data center level, or, to put it another way, the data center must be designed as a single system. This means that everything from the infrastructure down to the silicon must be designed holistically. Mr.
Huang categorized data centers as falling into two classes: the AI data center, a more traditional data center capable of handling many types of workloads, including AI workloads, and the AI factory, which will be focused on AI workloads and capable of handling the most demanding training and inference processing. It was clear that these AI factories would require a tremendous supply of power and cooling to handle the latest AI processors and accelerators, which continue to grow exponentially. But even the AI data centers need to be designed to handle higher-performing solutions that exceed the power and infrastructure requirements of most existing data centers. This has resulted in massive demand for the construction of new data centers at a time when real estate prices, power distribution, water resources, and local politics are challenging many new data center projects. It has also resulted in a need to make AI processing more efficient. While using smaller, more optimized models will help alleviate some of the pressure on resources, it is not a panacea because of the continued growth in demand and performance requirements. Ultimately, the processing resources need to meet the demand through improved efficiency. Performance requirements, demand, power consumption, and thermal requirements will continue to increase. However, processing efficiency is required to offset this growth while bringing down the cost of AI processing, which remains a challenge to profitability. Given the intricacies involved with this topic, Tirias Research will be covering it in more detail in subsequent articles.
The data center is a single compute unit
This brings us back to designing the data center as a single system, which means planning for holistic requirements, starting with power.
At this point, increasing power requires locating AI data centers near existing power generating stations like a hydroelectric dam or a nuclear power plant, or installing new power generation equipment like solar arrays, wind turbines, or small modular reactors (SMRs), which are in use in China and Russia and are in the licensing and development phase in a handful of countries, including the US. The second consideration is designing for thermal management, which can be done at the server, rack, or data center level and includes solutions ranging from air management systems to full system immersion. Supermicro indicated that shifting from air to liquid cooling can save up to 40% of the power required to cool a server and up to 92% of the power required to cool a data center (data center cooling techniques will be covered in depth in a future article). Next comes the design of the processing, which is where the major changes will occur. During the Intel Foundry Direct Connect conference, Mr. Gelsinger discussed the need to move the memory, processors, accelerators, and networking all closer together, which may result in modules requiring thousands of watts of power. They also must be efficiently cooled. This is not something that fits into a traditional server configuration. In addition, all these processing elements throughout the data center need to be able to act as a single solution, which pushes the boundaries of networking performance, as well as memory and logic design. In some cases, it even means linking resources between data centers.
A new level of innovation
The result is a new level of innovation that starts at the system level, which in this case is the data center, and extends down to the silicon foundation layer. Some of the changes are highlighted in the product innovations we see from individual companies, which I will cover in future articles, and some come through industry-wide changes.
These changes include the disaggregation of processing into specialized processors or accelerators, such as using data processing units (DPUs) to offload tasks like storage, networking, and security from the CPU, or using FPGAs, DSPs, or custom ASICs to perform specific tasks. The CPU itself can even be disaggregated into performance and efficiency cores that handle specific sets of tasks or act as the host controlling the GPUs and AI accelerators for specific types of workloads. Another change is shifting some computing to where the data resides, in memory or in the network.

Chip design itself is also changing. Previously, the focus on introducing new products on a regular and often short cadence led to less-than-optimized products that would improve efficiency in future generations. The push for efficiency leads to more balanced designs from the start that may sacrifice performance in one area to improve the overall utilization and efficiency of the chip and system. In addition, companies are starting to use more upfront simulation to improve efficiency and manufacturability. Even AI is beginning to play a role in chip design, both in the design tools and as agents that perform specific design functions like analysis. AI has not reached a level of reliability for high-end chip design, but Tirias Research believes this day is coming within the next decade.

While not at the same scale, edge AI applications face similar challenges. Moving some AI processing to the edge makes sense from a power, resource, cost, performance/latency, and security standpoint, but even small language models (SLMs) are likely to outpace the processing capabilities of edge SoCs, especially when multiple AI models are being used simultaneously. This requires considering the entire SoC an AI engine, not just the GPU or NPU.

Changing the paradigm

The demands of AI are outpacing and will continue to outpace hardware capabilities.
AI is not just another workload; it is rapidly becoming the primary workload for the new generation of AI data centers and, eventually, other electronic platforms. This is driving changes to system design. As a result, future designs must start at the system level and work down to the individual components. This will be an ongoing challenge, but we are seeing how some companies are addressing it from various perspectives. This is a monumental shift in the tech industry that is determining future winners and losers. Tirias Research will continue to look at how companies are addressing this paradigm shift and to analyze what changes are still required for companies and the industry.

The author and members of the Tirias Research staff do not hold equity positions in any of the companies mentioned. Tirias Research tracks and consults for companies throughout the electronics ecosystem, from semiconductors to systems and sensors to the cloud. Tirias Research staff have consulted for companies throughout the semiconductor and electronic ecosystems. | Unknown | Architecture and Engineering/Computer and Mathematical | null | null | null | null | null | null
|
news | Youlian Tzanev | How the UK can advance High Performance Computing & AI sustainably | After its entry into EuroHPC, the UK must prioritize sustainable innovation for High Performance Computing and AI. | https://www.techradar.com/pro/how-the-uk-can-advance-high-performance-computing-and-ai-sustainably | 2024-10-07T14:13:48Z | In May 2024, the UK entered the European High Performance Computing Joint Undertaking (EuroHPC) program, which brings together supercomputing resources from across 35 countries, including Norway, Turkey, and all 27 European Union (EU) member states. By joining the prestigious program, the UK aims to bolster its scientific and technological leadership, foster international collaboration, and leverage HPC to drive innovation and economic growth – all of which are extremely positive ambitions for the British IT industry. However, membership of the EuroHPC will also accelerate demand for energy in the UK, and this could even advance beyond the nation’s current capabilities, especially given that the UK has yet to contend with the formidable energy demand surge created by increased use of Artificial Intelligence (AI). The Chief Executive Officer of the UK’s National Grid, John Pettigrew, commented that AI will consume 500% more power in the UK during the next decade. This is not just a problem for the UK; this electrical surge is quickly becoming a global issue. Data centers worldwide consumed 460 terawatt-hours (TWh) in 2022, almost 2% of total global electricity demand. With the added power demand for AI, electricity consumption from data centers in the EU in 2026 is predicted to be 30 percent higher than in 2023. According to recent research by Goldman Sachs, AI is set to increase data center power demand by 160% by the end of the decade. This is not unexpected given that Google’s emissions have nearly doubled in the past five years alone thanks to AI.
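A back-of-the-envelope projection of the figures cited above (460 TWh in 2022; +160% by the end of the decade per the Goldman Sachs estimate). Note that applying the 160% growth figure to the entire 2022 base, and the rough 26,000 TWh estimate for total global electricity demand, are simplifying assumptions for illustration only:

```python
base_twh_2022 = 460.0         # global data center consumption in 2022 (article figure)
growth_by_2030 = 1.60         # +160% increase by end of decade (Goldman Sachs, per article)
global_demand_twh = 26_000.0  # assumed rough total global electricity demand, in TWh

projected_twh_2030 = base_twh_2022 * (1 + growth_by_2030)
share = projected_twh_2030 / global_demand_twh
print(f"Projected 2030 data center demand: {projected_twh_2030:.0f} TWh "
      f"(~{share:.1%} of assumed global demand)")
```

Even under these crude assumptions, the projection roughly doubles the data center share of global electricity demand, which is the pressure on net zero goals the article describes.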
These shocking figures pose questions about the possibility of net-zero emissions by the end of the decade. The key question remains: how can we sustainably meet this level of demand?

Co-Founder and Chief Strategy Officer of NexGen Cloud.

The biggest trends of our time

AI and climate change are two of the most significant and impactful stories of our time, each shaping various aspects of society, the economy, and the environment in profound ways. They are far from being mutually exclusive. AI will doubtless play a role in tackling the challenges of climate change, yet the power demands of data centers will put pressure on net zero goals. Alongside its need for electricity, the UK also requires access to GPUs as it pushes to become an HPC and AI leader. But innovation must be sustainable to ensure that energy-intensive processes can be carried out with minimal environmental impact, and at no threat to the UK’s energy security.

100% sustainable data centers

The only viable solution is to harness renewable energy to power data centers. Energy-efficient strategies at AI data centers must also be prioritized to enable the UK to keep up without having to pull the plug due to power demand. Sourcing renewable energy to power data centers presents several challenges, primarily revolving around the need for a consistent and reliable power supply, high initial investment costs, and the integration of renewable sources into existing infrastructure. Some renewables like solar and wind are inherently intermittent, necessitating the development of efficient energy storage solutions and grid enhancements to ensure a steady power supply.
NexGen Cloud hosts its AI Supercloud in a data center just outside of Oslo, which uses hydroelectric energy to power and cool its infrastructure. Hydroelectric energy is non-intermittent, so it allows the data center to be powered by around-the-clock clean energy.

Investment and sovereign clouds

Market forces alone are not enough to get us there; direct investment into GPU chips by the UK Government is also key to advancing the nation’s HPC and AI industries. Government investment in GPU infrastructure is vital, as current funding significantly lags behind other global players. The UK’s investment in Nvidia GPU chips is dwarfed by the orders from tech giants like Elon Musk’s X and China. According to Pitchbook, $122 billion has been invested in generative AI businesses in the US, while European generative AI businesses have received a comparatively modest $3.8 billion. Disparities like these hamper the UK’s ability to compete in the HPC and AI sectors, necessitating a robust governmental push to scale up GPU availability and infrastructure investment to match international standards. Additionally, adopting sovereign cloud solutions can help UK businesses access high-powered GPUs while ensuring compliance with data protection laws, thereby enhancing the UK’s competitiveness in the global AI market. Cloud infrastructure is available today to ensure that AI operations can remain compliant with European data sovereignty and privacy regulations by keeping them within the European jurisdiction.
By powering our data centers with 100% renewable energy, the UK and EU can significantly mitigate the environmental impact associated with High Performance Computing and Artificial Intelligence, thus enabling innovation to continue without jeopardizing the possibility of net zero by 2030. Fundamentally, this can provide both the blueprint for how the UK can sustainably support HPC and AI, and become a true global leader in these sectors. This article was produced as part of TechRadarPro's Expert Insights channel, where we feature the best and brightest minds in the technology industry today. The views expressed here are those of the author and are not necessarily those of TechRadarPro or Future plc. | Process Automation/Decision Making | Computer and Mathematical | null | null | null | null | null | null
|
news | Power Factors | Power Factors Launches Next-Generation AI-Powered Asset Performance Management Application on Unity Platform | Power Factors is addressing the future of renewable energy management with unprecedented applications that offer advanced asset performance management and AI-powered analytics for wind, solar, and battery energy storage systems. | https://www.globenewswire.com/news-release/2024/10/10/2961359/0/en/Power-Factors-Launches-Next-Generation-AI-Powered-Asset-Performance-Management-Application-on-Unity-Platform.html | https://ml.globenewswire.com/Resource/Download/6ee0b054-3699-461f-92f7-8f314052e6e8 | 2024-10-10T13:05:00Z | San Francisco, CA, Oct. 10, 2024 (GLOBE NEWSWIRE) -- Power Factors, the leading renewable energy management suite (REMS) provider, is excited to announce the availability of Unity Asset Performance Management (APM) and Unity AI Insights, two groundbreaking applications designed to drive operational efficiency and performance optimization for renewable energy stakeholders across the asset lifecycle. These innovative products are available today and represent the next generation of renewable energy management, integrating the best capabilities from Power Factors' proven APM products, Drive and Greenbyte. Unity APM, a first-of-its-kind asset performance management application, and Unity AI Insights, an extension to Unity APM that boosts its powerful AI-driven analytics, sit atop the newly rearchitected Unity platform, the most scalable, secure, and high-performance renewable energy management platform to date. Built from the ground up, Unity APM incorporates years of customer input and decades of industry expertise, ensuring that it meets the evolving needs of the sector.
Together, Unity APM and AI Insights deliver faster onboarding to decrease time-to-value, lower total cost of ownership, improved availability, and reduced operating costs, all powered by a highly configurable and user-friendly interface.

"Unity APM and AI Insights are critical building blocks for the future of renewable energy management. As the industry becomes more competitive, success depends on the ability to leverage advanced analysis and automation to optimize performance," said Julieann Esper Rainville, CEO at Power Factors. "These applications provide what leading power producers need to drive innovation, maximize efficiency, and secure their position in the market. We're confident that Unity APM and AI Insights will empower our customers to stay ahead of the curve and set new benchmarks in renewable energy performance."

"The launch of Unity APM and AI Insights marks the beginning of a new era for our customers and the market," said Abilash Krishan, Chief Product Officer. "Yesterday's technology won't solve tomorrow's challenges. That's why we've developed these state-of-the-art products to empower renewable energy leaders across wind, solar, and battery energy storage systems (BESS)."

Benefits of Unity APM and AI Insights:

Spend more time on high-value performance analysis and less time managing data: Unity APM provides cleaned, validated data and offers features like automatic data backfill and streamlined manual interventions, enabling users to fully trust their KPIs.

Deliver fully optimized energy production: Identify factors contributing to energy and revenue loss in portfolios faster and more precisely, including notification of potential failures before they occur and detailed energy loss allocation to further inform optimization planning.
Maximize performance with intelligent event management: Unity APM maximizes generation and availability through fleet-level issue prioritization and logic-based event tracking, including predictive analysis of asset health to support resolving issues before they occur.

Make data-driven decisions to proactively address issues: Unity APM empowers users with AI-powered analytics, offering a library of calculations, advanced data visualizations, and leading the industry from diagnostic to predictive analysis.

Boost productivity with automated workflows and mission-critical notifications: Unity APM streamlines workflows and automates notifications to enhance team productivity.

Explore, share, and leverage your data accessibly: Unity APM provides quick, easy data access tailored to specific use cases through configurable dashboards and reports, facilitating seamless collaboration with internal and external stakeholders.

Achieve faster results with industry-leading onboarding: Unity offers the industry's most streamlined, comprehensive implementation and onboarding, getting customers operational faster and accelerating time-to-value.

Scale your operations with a platform built for portfolio growth: Unity APM is designed to grow with customer needs, supporting all asset classes from day one, including native BESS support.

"The rearchitected Unity APM application will greatly enhance our team's experience. The streamlined and configurable interface simplifies complex workflows, and integrated features like trending and event management can significantly improve our ability to analyze and act on our data, leading to better collaboration and increased efficiency," said Patrick DiCesare at Pattern.

"With the new interface and advanced features like data flagging, streamlined event categorization, and enhanced dashboarding, Unity APM is a significant leap forward," said Robert Pharris, Director of Engineering at Longroad.
"It's clear Power Factors has responded to meet the needs of the users, making the entire experience more intuitive and powerful."

"As long-term users of Greenbyte, we were excited to test out how the new Unity APM application could improve our operations. The redesigned Data Studio's enhanced features and user-friendly interface will make it easier to view and access the data we need, which will allow us to work faster and address issues more proactively," said Charlie Plumley, Senior Performance Engineer at Nuveen Infrastructure | Clean Energy.

"Our goal is to give customers full control over their system-of-record data so that they can trust the insights they generate," added Krishan. "Unity APM and AI Insights deliver transparency, traceability, and operational excellence across all asset classes, while empowering users to configure and manage their data autonomously."

To get a demo and learn more about Unity APM and AI Insights, please visit: go.powerfactors.com/unity-apm-demo.

About Power Factors

Power Factors is a hardware and software provider whose next-generation Unity renewable energy management suite (REMS) is one of the most extensive and widely deployed solutions in the market. With over 300 GW of wind, solar, and energy storage assets managed worldwide across more than 600 customers and 18,000 sites, Power Factors manages 25% of the world's renewable energy data.* Power Factors' Unity REMS supports the entire energy value chain, from monitoring and controls to market participation. The company's suite of open, data-driven applications empowers renewable energy stakeholders to collaborate, automate critical workflows, and make more informed decisions to maximize asset returns. Energy stakeholders receive end-to-end support, including solutions for SCADA & PPC, centralized monitoring, performance management, commercial asset management, and field service management.
With deep domain expertise, AI-powered insights are delivered at scale so businesses can optimize assets, unlock growth, and make smarter decisions as the world rapidly transitions to clean energy. Power Factors fights climate change with code. Learn more at powerfactors.com. * Outside China and India | Decision Making/Process Automation | Management/Business and Financial Operations | null | null | null | null | null | null
news | Featured Partner | Top 5 In-Demand Jobs in 2024: Who Are Employers Looking For? | In 2024, we’ve witnessed a lot of workplace shifts, primarily caused by technological advances, the development of new sustainable initiatives, and, traditionally, demographic changes. All these factors keep affecting the job market as businesses from different industries keep going through a digital transformation, searching for ways to optimize sustainability or both. Clearly, this creates a […] | https://dailycaller.com/2024/10/28/top-5-in-demand-jobs-in-2024-who-are-employers-looking-for/ | 2024-10-28T18:12:03Z | In 2024, we’ve witnessed a lot of workplace shifts, primarily caused by technological advances, the development of new sustainable initiatives, and, traditionally, demographic changes. All these factors keep affecting the job market as businesses from different industries keep going through digital transformation, searching for ways to optimize sustainability, or both.

Clearly, this creates a demand for specific professionals — and while the figures differ by industry and region, employers worldwide are actively looking for experts who can fit into the changing professional landscape. Below are the top 5 in-demand jobs in 2024 — and, from the looks of it, the trend will continue in 2025 and beyond.

Data scientists & machine learning experts

With the rise of AI and big data, it should be no surprise that relevant professionals are in such high demand. And even though the trend has been evident for a while now, there is still a shortage of data scientists, AI, and machine learning experts across different industries. The human talent gap is especially noticeable in areas that require deep learning and natural language processing expertise.
However, hard skills alone will no longer suffice — for the last few years, employers have been looking for a combination of soft and hard skills in their candidates. The demand for data and AI experts is evident all over the globe, with North America, Europe, and the Asia-Pacific region most actively looking for experts. However, emerging markets in Eastern Europe and Latin America are also searching for qualified talent and are prepared to offer highly competitive salaries.

Machine learning and data experts are most needed in the fintech, e-commerce, healthcare, and robotics industries. However, this is only the tip of the iceberg, as practically all companies are already adopting AI and data analysis to digitally transform their business and grow their customer databases. Ideally, tech experts who wish to advance in this field should have a basic understanding of the actual industry in question. Still, some companies may be willing to make exceptions considering the talent shortage we already mentioned.

Sales and customer service professionals

E-commerce is also growing at an unprecedented pace, creating steady demand for customer service advocates, especially those with experience in the phygital sphere. Similar to tech experts who can no longer advance on hard skills alone, traditional ‘people’ roles that previously relied on soft skills now call for a well-balanced combination of soft and hard skills. Digital literacy, especially experience with CRM tools and an understanding of AI-based platforms, is a must today. At the same time, employers still emphasize the traditional soft skills for a customer service professional — communication, stress resilience, problem-solving, and more. Currently, the demand for customer service professionals who can work in both physical and digital landscapes is greatest in developed regions — North America, Europe, and Asia-Pacific. The top industries driving this demand are retail, fintech, and IT.
However, more traditional financial sectors, banks in particular, are catching up with the trend, striving to improve their customer service to retain clients. Professionals will most likely be expected to work across various digital channels, including email, social media, live chat, and sometimes phone.

Renewable energy experts

Major economies have been introducing new carbon-free initiatives for years, gradually increasing the demand for renewable energy experts. In 2024, this trend is more evident than ever, with the EU, US, and China searching for effective ways to reduce their carbon footprints. In the EU, the shift is driven by the European Green Deal, which focuses exclusively on environmental initiatives. In the US, the trend is brought on by the Inflation Reduction Act, which, besides budget and economic issues, promotes green energy use. Similarly, China’s current five-year plan aims to increase the proportion of non-fossil energy usage and begin developing hydropower plants.

These factors create an ever-growing demand for engineers and technicians with experience in solar, wind, and other renewable energy sources. Professionals with relevant skills are in high demand in the developed economic regions described above, but developing areas are already catching up with similar career opportunities. Latin America, for example, is steadily exploring renewable sources to achieve energy independence, while Africa is a particularly lucrative region for exploring renewable solar energy.

Business analysts

The rising demand for business analysts is directly related to tech advances in AI and big data. As companies from different industries seek to transform their businesses in accordance with the new tech trends, the demand for business analysts becomes ever more pressing. Experts who understand specific industries and enterprise resource planning systems are in the highest demand since it is up to them to ensure a company’s operational efficiency.
Usually, this implies analyzing current processes and searching for ways to further enhance business efficiency, automate operations, integrate AI technologies, and more. Expectedly, the primary regions in need of business analysts are North America, Europe, and Asia-Pacific. However, industry demand varies slightly by region. In North America, for example, business analysts are in demand across the IT, finance, healthcare, and retail sectors.

In Europe, however, the focus shifts toward finance and manufacturing. Notably, EU companies are also in need of analysts with an understanding of sustainable initiatives, as European businesses are more concerned about meeting zero-carbon manufacturing goals than embracing AI. In the Asia-Pacific region, the e-commerce and retail sectors have the highest demand for experienced business analysts — except India, which is also on the lookout for IT analysts.

Healthcare professionals

The growing demand for medical doctors is brought forward not only by the adoption of new technologies (even though this tendency plays its part) but also by demographic changes. Many developed European countries, as well as the US, Japan, and China, are currently experiencing growth in their aging populations, which creates increased demand for professional physicians and nurses, especially those with geriatric experience. The consequences of the COVID pandemic are still apparent, too, causing the need for infectious disease specialists and public health experts. Besides, the global population is becoming more aware of mental health issues — in part brought on by the pandemic and extended lockdown period. This, in turn, leads to an increased demand for qualified psychologists, mental health counselors, and psychiatrists.
The increased focus on mental well-being is prevalent in North America and Europe, with other regions paying more attention to physical health – at least, for now.Next, the healthcare industry, like most others, is affected by the massive spread of AI. Artificial intelligence is currently being integrated into diagnostics and treatment planning processes, so healthcare professionals should be prepared to adopt this change. Finally, the healthcare industry as a whole is aiming for advances in biotechnology and genomics, so the medical R&D sector is also on the lookout for professional talent.Of course, this is but a short list of in-demand jobs in 2024, with plenty of other employment opportunities in IT, sales, healthcare, and renewable energy sectors. Besides these rapidly developing industries, fintech, education, and manufacturing, especially when sustainable technologies are concerned, are also on the rise. This means a lot of room for maneuvering in most industries, so professionals with relevant soft and hard skills will have a chance to push their career boundaries further.Members of the editorial and news staff of the Daily Caller were not involved in the creation of this content. | Content Synthesis/Decision Making/Information Retrieval Or Search/Prediction | Business and Financial Operations/Healthcare Practitioners and Support/Computer and Mathematical | null | null | null | null | null | null |