Dataset schema:

source: string (2 distinct values)
author: string (lengths 0-824)
title: string (lengths 0-475)
description: string (lengths 0-32.8k)
url: string (lengths 0-713)
urlToImage: string (lengths 0-2k)
publishedAt: string (fixed length 20)
content: string (lengths 0-32.8k)
category_nist: string (lengths 5-160)
category: string (lengths 5-239)
id: string (lengths 6-7)
subreddit: string (lengths 3-21)
score: int64 (range 0-30.2k)
num_comments: int64 (range 0-2.27k)
created_time: timestamp[ns]
top_comments: string (lengths 1-25.4k)
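A minimal sketch of how one row of this schema might be represented and parsed, assuming the fixed 20-character publishedAt strings are Zulu-style ISO-8601 timestamps; the NewsRow container and helper function are illustrative, not part of the dataset itself:

```python
from dataclasses import dataclass
from datetime import datetime
from typing import Optional

@dataclass
class NewsRow:
    # Field names mirror the schema above; types are assumptions from the
    # listed dtypes. Reddit-specific fields are null on "news" rows.
    source: str            # one of 2 distinct values, e.g. "news"
    author: str
    title: str
    description: str
    url: str
    urlToImage: str
    publishedAt: str       # fixed length 20, e.g. "2024-10-16T06:00:00Z"
    content: str
    category_nist: str
    category: str
    id: Optional[str] = None
    subreddit: Optional[str] = None
    score: Optional[int] = None
    num_comments: Optional[int] = None
    created_time: Optional[datetime] = None
    top_comments: Optional[str] = None

def parse_published_at(row: NewsRow) -> datetime:
    """The fixed 20-char length suggests a 'Z'-suffixed timestamp; parse it."""
    return datetime.strptime(row.publishedAt, "%Y-%m-%dT%H:%M:%SZ")

row = NewsRow("news", "Richard Brown", "Google backs small nuclear reactors",
              "", "https://example.org", "", "2024-10-16T06:00:00Z",
              "", "Unknown", "Computer and Mathematical/Management")
print(parse_published_at(row).year)  # -> 2024
```

The Optional defaults let the same container hold both news rows (where the Reddit fields are null) and Reddit rows, matching how the dump interleaves the two sources.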
news
Jason Kirsch, CFP(R), Contributor https://www.forbes.com/sites/investor-hub/people/Jason-Kirsch/
5 Top Stocks To Buy In November
Looking for the latest stock opportunities in November 2024? Click to read about top stock picks to consider adding to your portfolio now.
https://www.forbes.com/sites/investor-hub/article/top-stocks-buy-now-november-2024/
https://imageio.forbes.c…=1600&fit=bounds
2024-10-08T16:12:55Z
As the fall weather kicks into gear in parts of the country, the stock market presents challenges and opportunities for investors. This article examines top stock picks in the current market landscape based on financial performance assessments and growth potential projections amidst evolving global economic conditions and promising sector developments.

How These Top Stock Picks Were Chosen

To select stocks worth considering for purchase, I analyze indicators such as revenue growth, profit margins and debt levels. I also evaluate market standing and competitive advantages. Additionally, I consider industry trends, technological advancements and potential regulatory impacts. Each stock is chosen based on its potential to generate short-term profits and create long-term value.

Data source: Google Finance

1. Nvidia (NVDA)

Business Overview

Key metrics:
Sector/Industry: Technology/Semiconductors
Market cap: $3.2 trillion
P/E ratio: 60.8
Dividend yield: 0.03%

Nvidia is a leading designer of graphics processing units (GPUs) for gaming and professional markets and system-on-chip units (SoCs) for mobile computing and automotive markets. The company has become a key player in artificial intelligence, data centers and autonomous driving technologies.

Why NVDA Stock Is A Top Choice

Nvidia's continued dominance in the GPU market and pivotal role in advancing AI and machine learning technologies reinforce its position as a top stock pick for November. The company's recent announcement of its next-generation GPU architecture, codenamed Blackwell, has generated significant buzz in the tech industry.
This new architecture promises unprecedented performance improvements for AI training and inference, potentially widening Nvidia's lead in the AI chip market.

The company's data center business continues to be a significant growth driver, with revenues in this segment surging by 41% year-over-year in the latest quarter. This growth is fueled by the increasing adoption of generative AI technologies across various sectors, from healthcare to finance. Nvidia's GPUs remain the go-to choice for training large language models and other AI applications, cementing its position as a vital enabler of the AI revolution.

In the automotive sector, Nvidia's Drive platform for autonomous vehicles is gaining traction. The company recently partnered with several major automakers to integrate its Drive Orin system-on-chip into their next-generation vehicles. This development opens up significant long-term growth potential in the rapidly evolving autonomous vehicle market.

Furthermore, Nvidia's expansion into cloud gaming with its GeForce NOW service has shown promising results, with the subscriber base growing steadily. As cloud gaming becomes more mainstream, this could become another significant revenue stream for the company.

Given Nvidia's strong financial performance, its crucial role in shaping the future of AI and computing and its expanding market opportunities in emerging technologies, it stands out as a compelling investment option for November and beyond.

2. Amazon (AMZN)

Business Overview

Key metrics:
Sector/Industry: Consumer Cyclical/Internet Retail
Market cap: $1.9 trillion
P/E ratio: 43.4
Dividend yield: N/A

Amazon.com is a multinational technology company focusing on e-commerce, cloud computing, digital streaming, and artificial intelligence.
It is one of the world's most valuable companies and a leader in multiple market segments.

Why AMZN Stock Is A Top Choice

Amazon remains a top stock pick for November, driven by its robust performance across multiple business segments and its strategic initiatives in emerging technologies. The company's e-commerce business continues to show strength, with the recent Prime Big Deal Days event breaking previous sales records. This success demonstrates Amazon's ability to drive consumer engagement and sales even in a challenging economic environment.

Amazon Web Services (AWS) remains the leading cloud computing platform, with revenues growing by 15% year over year in the latest quarter. The company's $4 billion investment in March in Anthropic, an AI research company, underscores its commitment to staying at the forefront of AI innovation. This partnership will enhance AWS's AI capabilities, potentially driving further growth in its cloud services business.

The company's advertising business has emerged as a major growth engine, with ad revenues surging by 26% year over year in the third quarter of 2024. Amazon's unique position to offer highly targeted advertising based on consumer shopping behavior makes it an increasingly attractive platform for marketers.

Amazon's foray into healthcare continues to gain momentum, with its One Medical service expanding to new markets and the recent launch of its Amazon Clinic telemedicine service showing promising early results. These initiatives position Amazon to capitalize on the growing demand for accessible and affordable healthcare services.

Furthermore, Amazon's focus on operational efficiency, evidenced by ongoing cost-cutting measures and optimizing its fulfillment network, has led to improved profit margins.
The company's operating income increased by 78% year over year in the latest quarter, reflecting these efficiency gains.

As Amazon continues to innovate across multiple fronts, leverage its vast ecosystem, and expand into new markets, it presents a compelling opportunity for investors looking for a stock with solid growth potential and a track record of disrupting traditional industries.

3. Microsoft (MSFT)

Business Overview

Key metrics:
Sector/Industry: Technology/Software Infrastructure
Market cap: $3.1 trillion
P/E ratio: 35
Dividend yield: 0.8%

Microsoft is a multinational technology corporation that develops, manufactures and sells computer software, consumer electronics and personal computers. It is best known for its software products, including the Microsoft Windows operating system, Microsoft Office suite and Internet Explorer and Edge web browsers.

Why MSFT Stock Is A Top Choice

Microsoft remains a top stock pick for November, driven by its strong performance across its diverse business segments and its strategic positioning in high-growth tech sectors. The company's Azure cloud platform remains a significant growth driver, with cloud services revenue increasing by 31% year-over-year in the most recent quarter. This impressive growth is fueled by the ongoing digital transformation across industries and the increasing adoption of AI and machine learning technologies.

The company's AI-powered Copilot, integrated into Microsoft 365, has gained significant traction since its launch in November 2023. Early adopters have reported substantial productivity gains, driving increased demand for Microsoft's productivity suite. This successful AI integration extends beyond productivity tools, with Microsoft leveraging its partnership with OpenAI to infuse AI capabilities across its product lineup, including the Bing search engine and Windows operating system.

Microsoft's gaming division has seen remarkable growth following the acquisition of Activision Blizzard.
Integrating popular franchises like Call of Duty and World of Warcraft into the Xbox ecosystem has boosted gaming revenues and subscriber numbers for Xbox Game Pass, which now boasts more than 30 million subscribers.

The company's investments in mixed reality technology continue to bear fruit, with the HoloLens 3 launch scheduled for early 2025 generating significant interest in enterprise and healthcare sectors. Microsoft's Mesh platform for mixed-reality experiences is gaining traction as more companies explore metaverse applications.

From a financial perspective, Microsoft's strong balance sheet, consistent dividend growth and ongoing share repurchase program make it an attractive option for growth and value investors. The company recently announced a 10% increase in its quarterly dividend, marking the 19th consecutive year of dividend growth.

With its solid fundamentals, strong market position, and continued innovation across multiple fronts, Microsoft presents a compelling investment opportunity for those looking to capitalize on the ongoing digital transformation and the future of technology.

4. Tesla (TSLA)

Business Overview

Key metrics:
Sector/Industry: Consumer Cyclical/Auto Manufacturers
Market cap: $763 billion
P/E ratio: 68.5
Dividend yield: N/A

Tesla is a multinational American corporation specializing in electric vehicles, energy storage and solar panel manufacturing.
The company has been at the forefront of transitioning to sustainable energy and transportation.

Why TSLA Stock Is A Top Choice

Tesla remains a top stock pick for November, driven by its continued innovation in the electric vehicle (EV) market and its expansion into new sectors. The company's Cybertruck, which began deliveries in late 2023, has exceeded initial sales projections, with more than 250,000 units delivered in its first year. This success demonstrates Tesla's ability to create and dominate new market segments within the EV industry.

Tesla's recent announcement of its next-generation EV platform, promising a 50% reduction in production costs, has generated significant excitement among investors. This new platform will enable Tesla to produce more affordable EVs, potentially opening up new market segments and accelerating EV adoption globally.

The company's energy business has grown substantially, with energy storage deployments increasing by 70% year-over-year in the latest quarter. Tesla's Megapack installations for utility-scale energy storage have gained traction globally, with several high-profile projects announced in Europe and Asia.

Tesla's Full Self-Driving (FSD) technology has made significant strides, with the latest v12 update demonstrating near-human-level performance in complex driving scenarios. The company has begun licensing its FSD technology to other automakers, creating a new revenue stream with high profit margins.

The Optimus humanoid robot project has progressed rapidly, with Tesla announcing plans for a limited commercial release in late 2025. While still in its early stages, this project represents a potentially massive new market for Tesla in the coming years.

Tesla's improving financials, with automotive gross margins stabilizing around 20% despite ongoing price competition, demonstrate the company's ability to maintain profitability while scaling production.
The company's strong brand, technological lead and visionary leadership continue to attract investors, making it an appealing choice for those looking to invest in the future of sustainable transportation and energy.

5. JPMorgan Chase (JPM)

Business Overview

Key metrics:
Sector/Industry: Financial Services/Banks
Market cap: $599 billion
P/E ratio: 11.8
Dividend yield: 2.4%

JPMorgan Chase is one of the largest and most complex financial institutions in the United States. It is a leader in investment banking, financial services for consumers and small businesses, commercial banking, financial transaction processing, and asset management.

Why JPM Stock Is A Top Choice

JPMorgan Chase remains a top stock pick for November, bolstered by its robust performance across diverse business segments and adept navigation of the complex economic landscape. The bank's recent third-quarter 2024 earnings report exceeded analyst expectations, with net income rising to $13.7 billion, up 8% from the previous quarter. This strong performance was driven by solid consumer banking results and increased net interest income due to higher interest rates.

The bank's investment banking division has shown resilience in a challenging market, maintaining its top position in global investment banking fees. JPMorgan's recent expansion of its middle-market banking services in Europe has begun to yield results, with the bank gaining market share in a traditionally underserved segment.

JPMorgan's continued investment in technology is paying off, with its AI-powered digital assistant for wealth management clients seeing rapid adoption rates. The bank's blockchain-based payment network, JPM Coin, has processed over $1 trillion in transactions year-to-date, demonstrating the potential of this technology to transform financial services.

The bank's strong capital position, with a CET1 ratio of 14.5% as of the third quarter of 2024, provides a solid foundation for growth and resilience against economic uncertainties.
This strong capital base has allowed JPMorgan to increase its dividend by 5% in 2024, marking the 13th consecutive year of dividend growth.

As the Federal Reserve signals a potential end to its rate-hiking cycle, JPMorgan is well-positioned to benefit from a stable interest rate environment. The bank's diversified business model, spanning consumer banking, corporate and investment banking, commercial banking and asset and wealth management, provides resilience against sector-specific downturns.

JPMorgan's strong risk management practices, evidenced by its performance during recent banking sector turbulence, and its "fortress balance sheet" philosophy instill confidence in its ability to weather potential economic headwinds. With signs of financial stability emerging and the potential for a "soft landing" scenario, JPMorgan benefits from increased lending activity and potential upticks in consumer spending and business investments.

Given its strong financial performance, technological innovation, and potential to benefit from the current economic environment, JPMorgan Chase presents a compelling choice for investors seeking exposure to the financial sector with a blend of stability and growth potential.

Bottom Line

The five stocks highlighted in this article (Nvidia, Amazon, Microsoft, Tesla and JPMorgan Chase) represent diverse sectors and offer unique growth potential. From technology giants driving AI innovation to leaders in e-commerce, cloud computing, electric vehicles and financial services, these companies are well-positioned to capitalize on significant market trends. While past performance doesn't guarantee future results, these stocks have demonstrated solid fundamentals and growth prospects, making them attractive options for investors in November.
Conducting thorough research and considering your investment goals before making investment decisions is essential.
Decision Making/Prediction/Recommendation
Business and Financial Operations
null
null
null
null
null
null
news
ResearchBuzz
United States Renewable Energy, Historical Agricultural Data, NotebookLM, More: Saturday ResearchBuzz, October 26, 2024
NEW RESOURCES Environment America Research & Policy Center: New report: Wind & solar energy tripled in US over past decade. “The United States produced more than three times as much solar, wind […]
https://researchbuzz.me/2024/10/26/united-states-renewable-energy-historical-agricultural-data-notebooklm-more-saturday-researchbuzz-october-26-2024/
https://s0.wp.com/i/blank.jpg
2024-10-26T12:01:46Z
NEW RESOURCES

Environment America Research & Policy Center: New report: Wind & solar energy tripled in US over past decade. “The United States produced more than three times as much solar, wind and geothermal power in 2023 than we did in 2014, with growth in all 50 states, according to Renewables on the Rise 2024, an online dashboard unveiled on Wednesday by Environment America Research & Policy Center and Frontier Group.”

Center for Data Innovation: Compiling Agricultural Experiments Data. “Researchers at the University of Illinois Urbana-Champaign have created a dataset that contains data on agricultural productivity and farming techniques from 1888 to 2021. The researchers collected historical records on an agricultural experiment that has been continuously running since 1888 examining the impact of certain farming practices on maize.”

TWEAKS AND UPDATES

TechCrunch: Google's NotebookLM now lets you guide AI-generated audio conversations, launches business pilot. “Google Thursday updated the audio summarization feature of its AI note-taking and research assistant NotebookLM, which recently gained significant attention for its podcast-like audio conversations based on the content users share, with the ability to guide those conversations and focus on specific topics instead of just generating holistic audio summaries.”

The Verge: Automattic offered employees another chance to quit over WordPress drama. “Automattic CEO Matt Mullenweg offered a second buyout round to employees who don't agree with his actions, according to a report from 404 Media. In a message to employees seen by the outlet, Mullenweg gave employees until October 17th to decide whether they would resign in exchange for nine months of pay.”

AROUND THE SEARCH AND SOCIAL MEDIA WORLD

Ars Technica: Annoyed Redditors tanking Google Search results illustrates perils of AI scrapers.
“A trend on Reddit that sees Londoners giving false restaurant recommendations in order to keep their favorites clear of tourists and social media influencers highlights the inherent flaws of Google Search's reliance on Reddit and Google's AI Overview.”

Time Extension: New EGM Compendium Project Smashes Kickstarter Target In Under 24 Hours. “EGM Media, the current owners of the magazine Electronic Gaming Monthly and its archive, has announced a new book and digital archive project, which aims to honor the legacy of the iconic print publication.”

Sydney Morning Herald: The new breed of social media sleuths bypassing the regulators. “Consumers are shining a light on sneaky or potentially misleading sales tactics in a new way by giving big businesses a blast via social media. Simon Berry, or @simontheberry to his 13,300 followers, has been on a social media crusade to stop cinema giant Hoyts from not disclosing booking fees to consumers until the very last step of the checkout.”

SECURITY & LEGAL

Reuters: Chinese hackers targeted phones affiliated with Harris campaign, source says. “Chinese hackers who tapped into Verizon’s system targeted phones used by people affiliated with the campaign of Democratic presidential candidate Kamala Harris, a person familiar with the situation said on Friday. Republican presidential candidate Donald Trump and his running mate, JD Vance, were also targeted, according to media reports.”

UC Berkeley: White House Issues New Directive on AI and National Security. “Today, the Biden-Harris Administration released the first-ever National Security Memorandum (NSM) on artificial intelligence and the accompanying Framework to Advance AI Governance and Risk Management in National Security (AI Framework), which lay out guidelines for promoting the safe and democratic use of AI in United States national security.”

Bloomberg Law: Former Twitter Employees Denied Class Status in Bonuses Case. “A group of former Twitter Inc.
employees suing the company over unpaid bonuses can't move the case forward as a class action. Judge Vince Chhabria on Wednesday ruled that Mark Schobinger, former senior director of compensation who is suing the company, is inadequate to represent the class because he allegedly advocated against paying the bonuses.”

RESEARCH & OPINION

United Nations: Geoguard: a new tool for climate, peace and security. “Geoguard translates raw satellite data into user-friendly insights, enabling users to monitor critical environmental trends such as rainfall, temperature changes, and soil moisture levels. The platform covers several regions, including the Middle East, West Africa, and Central Africa, providing decades of climate data to support informed decision-making in these vulnerable areas.”

Computerworld: About that brawl between the WordPress co-founder and WP Engine. “From where I sit, this is not a battle over open source. It's a fight between someone worth hundreds of millions and a company worth billions. When you're trying to figure out what's going on in any conflict, whether it's a family fight, a divorce, or a business fight, one of the best rules of thumb is to follow the money. What it's telling me here is it's about the cash. Unfortunately, this battle can potentially affect me and everyone who uses WordPress and WP Engine in particular. I didn't need this. None of us do.”

Good morning, Internet… This newsletter doesn't take any money from Russia and having ethics doesn't pay well, so please consider supporting ResearchBuzz on Patreon. Not interested in commitment? Perhaps you'd buy me an iced tea. I live at Calishat. See my other nonsense at SearchTweaks, RSS Gizmos, Mastodon Gizmos, WikiTwister, and MegaGladys.

Categories: morningbuzz
Digital Assistance/Content Synthesis
Unknown
null
null
null
null
null
null
news
Richard Brown
Google backs construction of first small nuclear reactors to power AI data centers
Google has announced it will back the construction of seven small nuclear-power reactors in the U.S. as part of a groundbreaking deal aimed at addressing the tech giant’s growing demand for electricity to support artificial intelligence (AI) operations. The deal will see Google purchase power from reactors built by nuclear-energy startup Kairos Power, helping to […]
https://www.naturalnews.com/2024-10-16-google-backs-first-nuclear-reactors-for-ai.html
https://www.naturalnews.…ks-Radiation.jpg
2024-10-16T06:00:00Z
Google has announced it will back the construction of seven small nuclear-power reactors in the U.S. as part of a groundbreaking deal aimed at addressing the tech giant's growing demand for electricity to support artificial intelligence (AI) operations.

The deal will see Google purchase power from reactors built by nuclear-energy startup Kairos Power, helping to fuel both its own energy needs and a potential revival of nuclear power in the United States. Under the agreement, the search engine giant has committed to securing 500 megawatts (MW) of nuclear energy, with the first reactors expected to come online by the end of the decade.

The partnership seeks to develop and deploy small modular nuclear reactors (SMRs), a promising new technology that offers the potential for faster and more cost-effective construction compared to traditional large-scale nuclear plants. SMRs are smaller, more standardized units that can be mass-produced, theoretically driving down costs over time. (Related: Company backed by Bill Gates wants to deploy small modular nuclear reactors across U.S.)

"The end goal here is 24/7, carbon-free energy," said Michael Terrell, senior director for energy and climate at Google. "To meet round-the-clock clean energy goals, we'll need technologies that complement wind, solar and lithium-ion storage."

The nuclear power industry is increasingly aligning itself with Big Tech as power demands rise in the U.S., driven largely by the expansion of AI and the need for more data centers. This shift has led tech companies like Google to seek new sources of energy to support their operations. The demand for power is expected to increase as AI technology continues to evolve, making energy-intensive data centers more common.

This isn't Google's first foray into energy deals.
Last month, Microsoft inked a similar agreement with Constellation Energy to restart the undamaged reactor at Pennsylvania's Three Mile Island, a site infamous for the U.S.'s worst nuclear-power accident. Amazon also purchased a data center at another Pennsylvania nuclear plant earlier this year.

The 500 MW of power that will be provided by Kairos for Google is roughly enough to power a midsize city, or one of Google's AI data-center campuses. Google's backing provides much-needed momentum for the emerging nuclear technology, allowing Kairos to move forward with the hope of achieving economies of scale.

Kairos plans to deliver its first reactors between 2030 and 2035, though financial details of the deal were not disclosed. The companies have entered into a power-purchase agreement, similar to the agreements that tech companies have historically made with wind and solar developers. The exact location of the reactors, or whether they will be spread across multiple sites, has yet to be determined.

Unlike traditional reactors that use water as a coolant, Kairos' design uses molten fluoride salt, which is expected to offer safety and efficiency benefits. The reactors for Google will consist of one 50 MW reactor and three subsequent power plants, each housing two 75 MW reactors. For comparison, a typical large-scale nuclear plant has reactors that generate around 1,000 MW of power.

Although Kairos has received approval to build a demonstration reactor in Tennessee, which could start operating by 2027, the company will still have to navigate complex regulatory hurdles with the U.S. Nuclear Regulatory Commission.
To prepare, Kairos is building test units at a manufacturing facility in Albuquerque, New Mexico, which will allow them to practice constructing and operating full-scale reactors.

Kairos CEO Mike Laufer emphasized that the demonstration project and the facility in Albuquerque will help the company avoid the spiraling costs that have plagued traditional nuclear projects in the past.

Currently, nuclear power provides nearly 20 percent of the electricity in the U.S., but the development of new large-scale nuclear projects has stalled due to prohibitive costs and long timelines. The completion of the second new reactor at Georgia's Vogtle nuclear plant this spring was the first such success in the U.S. in years, with the previous reactors being completed in 2016 and 1996.

Visit Nuclear.news for more stories like this.

Watch this CNBC report about Google inking an agreement with Kairos Power. This video is from the TrendingNews channel on Brighteon.com.

More related stories:
Big Tech eyes nuclear power to meet the demands of AI computing.
U.A.E. teams up with Bill Gates' nuclear company to build ADVANCED NUCLEAR REACTORS.
Microsoft to bring infamous nuclear plant back to life to feed POWER-HUNGRY AI data centers.

Sources include:
WSJ.com
Brighteon.com
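The capacity figures quoted in the article can be checked with a line of arithmetic: one initial 50 MW reactor plus three plants of two 75 MW reactors each sums to the 500 MW Google committed to purchase.

```python
# Capacity figures as reported: a first 50 MW reactor, followed by three
# power plants that each house two 75 MW reactors.
first_reactor_mw = 50
plants = 3
reactors_per_plant = 2
reactor_mw = 75

total_mw = first_reactor_mw + plants * reactors_per_plant * reactor_mw
print(total_mw)  # -> 500, matching the announced purchase commitment
```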
Unknown
Computer and Mathematical/Management
null
null
null
null
null
null
news
Sophie Cogan, PhD Candidate in Politics and Environment, University of York
Tech bosses think nuclear fusion is the solution to AI’s energy demands – here’s what they’re missing
Nuclear fusion can’t produce net energy, so is it really a solution to AI’s growing energy demands?
https://theconversation.com/tech-bosses-think-nuclear-fusion-is-the-solution-to-ais-energy-demands-heres-what-theyre-missing-240580
https://images.theconver…6&h=668&fit=crop
2024-10-17T15:58:12Z
Illustration of nuclear fusion in a tokamak. John D London / Shutterstock

The artificial intelligence boom has already changed how we understand technology and the world. But developing and updating AI programs requires a lot of computing power. This relies heavily on servers in data centres, at a great cost in terms of carbon emissions and resource use. One particularly energy intensive task is “training”, where generative AI systems are exposed to vast amounts of data so that they improve at what they do. The development of AI-based systems has been blamed for a 48% increase in Google’s greenhouse gas emissions over five years. This will make it harder for the tech giant to achieve its goal of reaching net zero by 2030.

Some in the industry justify the extra energy expenditure from AI by pointing to benefits the technology could have for environmental sustainability and climate action. Improving the efficiency of solar and wind power through predicting weather patterns, “smart” agriculture and more efficient, electric autonomous vehicles are among the purported benefits of AI for the Earth.

It’s against this background that tech companies have been looking to renewables and nuclear fission to supply electricity to their data centres. Nuclear fission is the type of nuclear power that’s been in use around the world for decades. It releases energy by splitting a heavy chemical element to form lighter ones. Fission is one thing, but some in Silicon Valley feel a different technology will be needed to plug the gap: nuclear fusion.

Unlike fission, nuclear fusion produces energy by combining two light elements to make a heavier one. But fusion energy is an unproven solution to the sustainability challenge of AI. And the enthusiasm of tech CEOs for this technology as an AI energy supply risks sidelining the potential benefits for the planet.

Beyond the conventional

Google recently announced that it had signed a deal to buy energy from small nuclear reactors.
This is a technology, based on nuclear fission, that allows useful amounts of power to be produced from much smaller devices than the huge reactors in big nuclear power plants. Google plans to use these small reactors to generate the power needed for the rise in use of AI. This year, Microsoft announced an agreement with the company Constellation Energy, which could pave the way to restart a reactor at Pennsylvania’s Three Mile Island nuclear power station, the site of the worst nuclear accident in US history.

However, nuclear power produces long-lived radioactive waste, which needs to be stored securely. Nuclear fuels, such as the element uranium (which needs to be mined), are finite, so the technology is not considered renewable. Renewable sources of energy, such as solar and wind power, suffer from “intermittency”, meaning they do not consistently produce energy at all hours of the day.

These limitations have driven some to look to nuclear fusion as a solution. Most notably, Sam Altman of OpenAI has shown particular interest in Helion Energy, a fusion startup working on a relatively novel technological design. In theory, nuclear fusion offers a “holy grail” energy source by generating a large output of energy from small quantities of fuel, with no greenhouse gas emissions from the process and comparatively little radioactive waste. Some forms of fusion rely on a fuel called deuterium, a form of hydrogen, which can be extracted from an abundant source: seawater. In the eyes of its advocates, like Altman, these qualities make nuclear fusion well suited to meet the challenges of growing energy demand in the face of the climate crisis, and to meet the vast demands of AI development.

However, dig beneath the surface and the picture isn’t so rosy.
Despite the hopes of its proponents, fusion technologies have yet to produce sustained net energy output (more energy than is put in to run the reactor), let alone produce energy at the scale required to meet the growing demands of AI. Fusion will require many more technological developments before it can fulfil its promise of delivering power to the grid.

Wealthy and powerful people, such as the CEOs of giant technology companies, can strongly influence how new technology is developed. For example, there are many different technological ways to perform nuclear fusion. But the particular route to fusion that is useful for meeting the energy demands of AI might not be the one that’s ideal for meeting people’s general energy needs.

AI is reliant on data centres which consume lots of energy. Dil_Ranathunga / Shutterstock

The overvaluation of innovation

Innovators often take for granted that their work will produce ideal social outcomes. If fusion can be made to work at scale, it could make a valuable contribution to decarbonising our energy supplies as the world seeks to tackle the climate crisis. However, the humanitarian promises of both fusion and AI often seem to be sidelined in favour of scientific innovation and progress. Indeed, when looking at those invested in these technologies, it is worth asking who actually benefits from them. Will investment in fusion for AI purposes enable its wider take-up as a clean technology to replace polluting fossil fuels? Or will a vision for the technology propagated by powerful tech companies restrict its use for other purposes?

It can sometimes feel as if innovation is itself the goal, with much less consideration of the wider impact. This vision has echoes of Meta CEO Mark Zuckerberg’s motto of “move fast and break things”, where short-term losses are accepted in pursuit of a future vision that will later justify the means.

Sophie Cogan receives funding from the EPSRC Fusion Centre for Doctoral Training.
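The "net energy output" benchmark mentioned above is usually expressed as the gain factor Q, the ratio of fusion energy released to energy delivered to the fuel. A minimal sketch, using the widely reported December 2022 National Ignition Facility shot figures (roughly 2.05 MJ of laser energy in, 3.15 MJ of fusion energy out) purely as an illustration:

```python
# Fusion gain factor Q = E_out / E_in. Q > 1 means more fusion energy
# was released than was delivered to the fuel ("scientific breakeven").
def gain_factor(energy_out_mj: float, energy_in_mj: float) -> float:
    return energy_out_mj / energy_in_mj

# Illustration with the December 2022 NIF shot numbers.
q = gain_factor(3.15, 2.05)
print(f"Q = {q:.2f}")  # roughly 1.54
```

Note that Q > 1 for the target still excludes the facility's total wall-plug energy, which is why, as the article says, fusion remains far from delivering power to the grid.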
Unknown
Unknown
null
null
null
null
null
null
news
Glenn CHAPMAN
Google Signs Nuclear Power Deal With Startup Kairos
Google on Monday signed a deal to get electricity from small nuclear reactors to help power artificial intelligence.
https://www.ibtimes.com/google-signs-nuclear-power-deal-startup-kairos-3746664
https://d.ibtimes.com/en…et-ai-energy.jpg
2024-10-14T19:15:19Z
Google on Monday signed a deal to get electricity from small nuclear reactors to help power artificial intelligence.

The agreement to buy energy from reactors built by Kairos Power came just weeks after word that Three Mile Island, the site of America's worst nuclear accident, will restart operations to provide energy to Microsoft.

"We believe that nuclear energy has a critical role to play in supporting our clean growth and helping to deliver on the progress of AI," Google senior director of energy and climate Michael Terrell said during a briefing. "The grid needs these kinds of clean, reliable sources of energy that can support the build out of these technologies."

No financial details were disclosed.

Tech giants like Microsoft, Amazon, and Google are rapidly expanding their data center capabilities to meet the AI revolution's computing needs while also scouring the globe for sources of electricity.

The first of a series of small modular reactors (SMRs) developed by Kairos as a result of its deal with Google is projected to be online by the end of this decade, according to the companies. Additional small reactors are expected to go online through 2035, generating a combined total of 500 megawatts of power.

SMRs are more compact and potentially easier to deploy -- with big investments by Microsoft founder Bill Gates in the sector. However, the technology is still in its infancy and lacks regulatory approval, leading companies to seek out existing nuclear power options.

"We view this as a really significant partnership," Kairos co-founder and chief executive Mike Laufer said in the briefing. The agreement allows SMR technology "to mature and learn along the way," Laufer added.

The grid needs new, clean electricity sources to support AI as its capabilities and uses accelerate, Terrell said, saying nuclear power is part of a campaign by the tech giant to have abundant carbon-free energy available around the clock. "This is an incredibly promising bet," Terrell said of the deal. "If we can get
these projects to scale and then scale globally, it will deliver enormous benefits to communities and power grids around the world."

Seen as a more consistent source of power than solar and wind, many tech companies are betting on nuclear energy's rapid development to meet AI's electricity demands. Microsoft's use of Three Mile Island's nuclear energy will bolster a power grid covering 13 states. This area faces severe strain from data centers' massive energy consumption, raising concerns about grid stability as AI demands increase.

Amazon's AWS agreed in March to invest $650 million in a data center campus powered by another Pennsylvania nuclear plant.

Nuclear energy has staunch opponents due to concerns about radioactive waste disposal, the potential for catastrophic accidents, and the high costs associated with plant construction and decommissioning. The 1979 partial meltdown of Unit 2 at Three Mile Island caused panic in the United States and brought the expansion of nuclear energy to a standstill. The Nuclear Regulatory Commission deemed it the "most serious accident in US commercial nuclear power plant operating history," though it noted no detectable health effects on workers or the public from the small radioactive releases.
Unknown
Management/Business and Financial Operations/Computer and Mathematical
null
null
null
null
null
null
news
Tohoku University
AI predicts optical properties to speed up the discovery of energy and quantum materials
Researchers from Tohoku University and the Massachusetts Institute of Technology (MIT) have unveiled a new AI tool for high-quality optical spectra with the same accuracy as quantum simulations, but working a million times faster, potentially accelerating the development of photovoltaic and quantum materials.
https://phys.org/news/2024-10-ai-optical-properties-discovery-energy.html
https://scx2.b-cdn.net/g…up-the-disco.jpg
2024-10-07T15:53:03Z
Researchers from Tohoku University and the Massachusetts Institute of Technology (MIT) have unveiled a new AI tool for high-quality optical spectra with the same accuracy as quantum simulations, but working a million times faster, potentially accelerating the development of photovoltaic and quantum materials.

Understanding the optical properties of materials is essential for developing optoelectronic devices, such as LEDs, solar cells, photodetectors, and photonic integrated circuits. These devices are pivotal in the semiconductor industry's current resurgence.

Traditional means of calculation using the basic laws of physics involve complex mathematical calculations and immense computational power, rendering it difficult to quickly test a large number of materials. Overcoming this challenge could lead to the discovery of new photovoltaic materials for energy conversion and a deeper understanding of the fundamental physics of materials through their optical spectra.

A team led by Nguyen Tuan Hung, an assistant professor at the Frontier Institute for Interdisciplinary Science (FRIS), Tohoku University, and Mingda Li, an associate professor at MIT's Department of Nuclear Science and Engineering (NSE), did just that, introducing a new AI model that predicts optical properties across a wide range of light frequencies, using only a material's crystal structure as an input. Lead author Nguyen and his colleagues recently published their findings in an open-access paper in Advanced Materials.

"Optics is a fascinating aspect of condensed matter physics, governed by the causal relationship known as the Kramers-Kronig (KK) relation," says Nguyen. "Once one optical property is known, all other optical properties can be derived using the KK relation. It is intriguing to observe how AI models can grasp physics concepts through this relation."

Obtaining optical spectra with complete frequency coverage in experiments is challenging due to the limitations of laser wavelengths.
Simulations are also complex, requiring high convergence criteria and incurring significant computational costs. As a result, the scientific community has long been searching for more efficient methods to predict the optical spectra of various materials.

"Machine-learning models utilized for optical prediction are called graph neural networks (GNNs)," points out Ryotaro Okabe, a chemistry graduate student at MIT. "GNNs provide a natural representation of molecules and materials by representing atoms as graph nodes and interatomic bonds as graph edges."

Yet, while GNNs have shown promise for predicting material properties, they lack universality, especially in representations of crystal structures. To work around this conundrum, Nguyen and others devised a universal ensemble embedding, whereby multiple models or algorithms are created to unify the data representation.

"This ensemble embedding goes beyond human intuition but is broadly applicable to improve prediction accuracy without affecting neural network structures," explains Abhijatmedhi Chotrattanapituk, an electrical engineering and computer science graduate student at MIT.

The ensemble embedding method is a universal layer that can be seamlessly applied to any neural network model without modifying the neural network structures.
"This implies that universal embedding can readily be integrated into any machine learning architecture, potentially making a profound impact on data science," says Mingda Li.

This method enables highly precise optical prediction based solely on crystal structures, making it suitable for a wide variety of applications, such as screening materials for high-performance solar cells and detecting quantum materials.

Looking ahead, the researchers aim to develop new databases for various material properties, such as mechanical and magnetic characteristics, to enhance the AI model's capability to predict material properties based solely on crystal structures.

More information: Nguyen Tuan Hung et al, Universal Ensemble-Embedding Graph Neural Network for Direct Prediction of Optical Spectra from Crystal Structures, Advanced Materials (2024). DOI: 10.1002/adma.202409175

Provided by Tohoku University

Citation: AI predicts optical properties to speed up the discovery of energy and quantum materials (2024, October 7), retrieved 7 October 2024 from https://phys.org/news/2024-10-ai-optical-properties-discovery-energy.html
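Nguyen's point that "once one optical property is known, all other optical properties can be derived" via the KK relation can be illustrated numerically: for a causal response function, the real part is the (negative) Hilbert transform of the imaginary part. A minimal sketch, not the authors' model, using a made-up Lorentzian susceptibility with arbitrary parameters:

```python
# Kramers-Kronig illustration: recover the real part of a model
# susceptibility from its imaginary (absorptive) part.
import numpy as np
from scipy.signal import hilbert

w = np.linspace(-50, 50, 20001)   # frequency grid (arbitrary units)
w0, gamma = 5.0, 1.0              # illustrative resonance position and width

im_chi = gamma / ((w - w0) ** 2 + gamma ** 2)            # Lorentzian absorption
re_chi_exact = (w0 - w) / ((w - w0) ** 2 + gamma ** 2)   # analytic real part

# scipy's hilbert() returns the analytic signal; its imaginary part is
# the Hilbert transform of the input, so KK gives Re = -H[Im] here.
re_chi_kk = -np.imag(hilbert(im_chi))
```

On a wide enough grid the numerical transform matches the analytic real part closely away from the window edges, which is the sense in which one spectrum determines the other.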
Prediction/Content Creation
Life, Physical, and Social Science/Computer and Mathematical
null
null
null
null
null
null
news
null
India's AI revolution: Balancing technological growth with environmental sustainability
India is investing in AI development with the purchase of 10,000 GPUs. The focus is on creating a sustainable AI infrastructure to balance tech growth with environmental responsibility. Initiatives include energy-efficient data centers, green AI research, and leveraging renewable energy. Policies and incentives are crucial for long-term sustainable growth.
https://m.economictimes.com/opinion/et-commentary/indias-ai-revolution-balancing-technological-growth-with-environmental-sustainability/articleshow/114556701.cms
https://img.etimg.com/th…/articleshow.jpg
2024-10-24T17:20:21Z
This week, two AI stalwarts - Nvidia CEO Jensen Huang and Meta chief AI scientist Yann LeCun - are in India. (ET will be hosting an exclusive evening with Huang in Mumbai today.) Both visitors are forging and fast-forwarding many conversations, as India moves ahead to purchase 10,000 graphics processing units (GPUs).

The GPU purchase is part of a project that will build robust AI infrastructure and reduce dependency on foreign computing resources at a crucial time when global GPU shortages and increasing costs are affecting AI development worldwide. However, this AI push across sectors brings a challenge: the massive energy consumption required to power the tech.

As India embraces AI, it must chart a course that balances tech advancement with environmental responsibility. Training a single LLM can consume as much energy as 100 Indian households use in one year. With India's AI sector projected to grow exponentially, the environmental impact of unchecked AI development could be substantial. Yet, this challenge also presents an opportunity for India to pioneer sustainable AI practices that could serve as a model for developing nations.

The country's robust IT infra, combined with technical talent, positions it uniquely to develop energy-efficient AI solutions. Several Indian institutes are working on 'green AI' initiatives, focusing on algorithms that require less computational power while delivering comparable results.

GoI's recent policy push for AI development through initiatives like Digital India and National Strategy for Artificial Intelligence demonstrates its commitment. However, these initiatives must be complemented by a framework for sustainable AI development. This should include:

- Mandatory energy efficiency standards for data centres.
- Incentives for green AI research.
- Guidelines for responsible AI deployment.

A few steps can make India's AI journey more sustainable:

Green data centres: Traditional data centres consume massive amounts of power for both computing and cooling. By adopting advanced cooling technologies and optimising server utilisation, power consumption can be reduced by up to 40%. The case of 100% direct liquid cooling, as implemented by Eviden in a few European projects, is pertinent as it reduces space and energy requirements. GoI's green data centre guidelines are a step in the right direction. But implementation needs to be accelerated.

Models matter: Organisations should prioritise efficient model training. Techniques like neural architecture search and model pruning can reduce the computational resources required for AI training. Indian startups are developing innovative solutions, creating lighter, more efficient AI models that maintain accuracy while consuming less power.

India must leverage its RE potential. Its ambitious solar power goals align perfectly with the growing energy demands of AI infra. By mandating that AI facilities source a significant portion of their power from RE sources, India can ensure its AI growth doesn't come at the cost of environmental degradation.

Economic benefits of sustainable AI are substantial. Energy-efficient AI solutions reduce operational costs and also create new market opportunities. Indian companies developing sustainable AI technologies could become global leaders in this increasingly important sector.

However, challenges remain. Initial cost of implementing sustainable AI infra is high, and many organisations might be tempted to prioritise short-term gains over long-term sustainability. This is where policy intervention becomes crucial. Government incentives, tax benefits and clear regulatory frameworks can help bridge the gap between immediate costs and long-term benefits.

Choices made today will shape tomorrow's tech future and environmental impact.
By prioritising sustainability in its AI development strategy, India can demonstrate that technological advancement and environmental responsibilities are not mutually exclusive, but complementary goals that strengthen each other.

(The writer is a policy analyst on tech issues)
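Training-energy claims like the one above are easy to sanity-check with a back-of-envelope estimate from GPU count, per-GPU power draw, and run length. A minimal sketch; all inputs below (GPU count, wattage, duration, PUE, household consumption) are illustrative assumptions, not figures from the article:

```python
# Back-of-envelope estimate of GPU training energy (all constants are
# illustrative assumptions, not reported figures).
def training_energy_mwh(n_gpus: int, watts_per_gpu: float, days: float,
                        utilization: float = 1.0, pue: float = 1.2) -> float:
    """Energy in MWh; PUE accounts for cooling and other facility overhead."""
    hours = days * 24
    kwh = n_gpus * (watts_per_gpu / 1000) * hours * utilization * pue
    return kwh / 1000

# e.g. 10,000 GPUs drawing 700 W each for 30 days at full utilization:
e_mwh = training_energy_mwh(10_000, 700, 30)
# Compare with households, assuming ~1,200 kWh per household per year.
households_equiv = e_mwh * 1000 / 1200
```

The point of such an estimator is less the exact number than seeing which assumptions (utilization, PUE, run length) dominate the result.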
Unknown
Business and Financial Operations/Life, Physical, and Social Science
null
null
null
null
null
null
news
hurriyetdailynews.com
Google signs nuclear power deal with startup Kairos
Google has signed a deal to get electricity from small nuclear reactors to help power artificial intelligence.
https://www.hurriyetdailynews.com/google-signs-nuclear-power-deal-with-startup-kairos-201560
https://image.hurimg.com…55fbc196ebb7.jpg
2024-10-15T06:33:29Z
NEW YORK

Google has signed a deal to get electricity from small nuclear reactors to help power artificial intelligence.

The agreement to buy energy from reactors built by Kairos Power came just weeks after word that Three Mile Island, the site of America's worst nuclear accident, will restart operations to provide energy to Microsoft.

"We believe that nuclear energy has a critical role to play in supporting our clean growth and helping to deliver on the progress of AI," Google's senior director of energy and climate said during a briefing. "The grid needs these kinds of clean, reliable sources of energy that can support the build out of these technologies."

No financial details were disclosed.

Tech giants like Microsoft, Amazon, and Google are rapidly expanding their data center capabilities to meet the AI revolution's computing needs while also scouring the globe for sources of electricity.

The first of a series of small modular reactors (SMRs) developed by Kairos as a result of its deal with Google is projected to be online by the end of this decade, according to the companies. Additional small reactors are expected to go online through 2035, generating a combined total of 500 megawatts of power.

SMRs are more compact and potentially easier to deploy, with big investments by Microsoft founder Bill Gates in the sector. However, the technology is still in its infancy and lacks regulatory approval, leading companies to seek out existing nuclear power options.

Seen as a more consistent source of power than solar and wind, many tech companies are betting on nuclear energy's rapid development to meet AI's electricity demands. Amazon's AWS agreed in March to invest $650 million in a data center campus powered by another Pennsylvania nuclear plant.
Decision Making/Process Automation
Business and Financial Operations/Computer and Mathematical
null
null
null
null
null
null
news
null
New AI model could make power grids more reliable amid rising renewable energy use
Researchers use AI to manage the growing complexity of modern power grids.
https://www.sciencedaily.com/releases/2024/10/241024173704.htm
https://www.sciencedaily…cidaily-icon.png
2024-10-24T21:37:04Z
As renewable energy sources such as wind and solar become more widespread, managing the power grid has become increasingly complex. Researchers at the University of Virginia have developed an innovative solution: an artificial intelligence model that can address the uncertainties of renewable energy generation and electric vehicle demand, making power grids more reliable and efficient.

Multi-Fidelity Graph Neural Networks: A New AI Solution

The new model is based on multi-fidelity graph neural networks (GNNs), a type of AI designed to improve power flow analysis -- the process of ensuring electricity is distributed safely and efficiently across the grid. The "multi-fidelity" approach allows the AI model to leverage large quantities of lower-quality data (low-fidelity) while still benefiting from smaller amounts of highly accurate data (high-fidelity). This dual-layered approach enables faster model training while increasing the overall accuracy and reliability of the system.

Enhancing Grid Flexibility for Real-Time Decision Making

By applying GNNs, the model can adapt to various grid configurations and is robust to changes, such as power line failures. It helps address the longstanding "optimal power flow" problem, determining how much power should be generated from different sources. As renewable energy sources introduce uncertainty in power generation, and distributed generation systems along with electrification (e.g., electric vehicles) increase uncertainty in demand, traditional grid management methods struggle to handle these real-time variations effectively. The new AI model integrates both detailed and simplified simulations to optimize solutions within seconds, improving grid performance even under unpredictable conditions.

"With renewable energy and electric vehicles changing the landscape, we need smarter solutions to manage the grid," said Negin Alemazkoor, assistant professor of civil and environmental engineering and lead researcher on the project.
"Our model helps make quick, reliable decisions, even when unexpected changes happen."

Key Benefits:

- Scalability: Requires less computational power for training, making it applicable to large, complex power systems.
- Higher accuracy: Leverages abundant low-fidelity simulations for more reliable power flow predictions.
- Improved generalizability: The model is robust to changes in grid topology, such as line failures, a feature not offered by conventional machine learning models.

This innovation in AI modeling could play a critical role in enhancing power grid reliability in the face of increasing uncertainties.

Ensuring the Future of Energy Reliability

"Managing the uncertainty of renewable energy is a big challenge, but our model makes it easier," said Ph.D. student Mehdi Taghizadeh, a graduate researcher in Alemazkoor's lab. Ph.D. student Kamiar Khayambashi, who focuses on renewable integration, added, "It's a step toward a more stable and cleaner energy future."
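The multi-fidelity idea described above (learn from abundant cheap approximations, then correct with a few accurate samples) can be sketched with a toy surrogate. This is a generic illustration with made-up functions and a simple polynomial model, not the UVA team's GNN or power-flow equations:

```python
# Toy multi-fidelity regression: fit a surrogate on plentiful biased
# low-fidelity data, then learn a correction from scarce high-fidelity data.
import numpy as np

rng = np.random.default_rng(0)
true_f = lambda x: np.sin(x) + 0.5 * x          # stand-in "high-fidelity" model
lo_f   = lambda x: np.sin(x) + 0.45 * x + 0.3   # cheap, systematically biased

# Stage 1: surrogate from 500 cheap low-fidelity evaluations.
x_lo = rng.uniform(0, 6, 500)
coeffs_lo = np.polyfit(x_lo, lo_f(x_lo), deg=5)

# Stage 2: learn a correction from only 8 expensive high-fidelity points.
x_hi = np.linspace(0, 6, 8)
residual = true_f(x_hi) - np.polyval(coeffs_lo, x_hi)
coeffs_corr = np.polyfit(x_hi, residual, deg=1)

def predict(x):
    return np.polyval(coeffs_lo, x) + np.polyval(coeffs_corr, x)

x_test = np.linspace(0.5, 5.5, 50)
mf_err = np.max(np.abs(predict(x_test) - true_f(x_test)))
lo_err = np.max(np.abs(np.polyval(coeffs_lo, x_test) - true_f(x_test)))
```

The corrected surrogate beats the low-fidelity fit alone, which is the essence of the "dual-layered" training the article describes, here in the simplest possible setting.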
Unknown
Architecture and Engineering/Life, Physical, and Social Science
null
null
null
null
null
null
news
Arsenio Toledo
OpenAI proposing construction of 5GW data centers all over the United States
Artificial intelligence giant OpenAI has proposed building five-gigawatt data centers throughout the United States. This report comes following a recent meeting OpenAI Chief Executive Officer Sam Altman had at the White House. OpenAI’s purpose at this meeting was to put forward to the administration of President Joe Biden and Vice President Kamala Harris the economic and […]
https://www.naturalnews.com/2024-10-18-openai-data-centers-all-over-united-states.html
https://www.naturalnews.…y-Cloud-Maze.jpg
2024-10-18T06:00:00Z
Artificial intelligence giant OpenAI has proposed building five-gigawatt data centers throughout the United States.

This report comes following a recent meeting OpenAI Chief Executive Officer Sam Altman had at the White House. OpenAI's purpose at this meeting was to put forward to the administration of President Joe Biden and Vice President Kamala Harris the economic and national security benefits of building dozens of five-gigawatt data centers in various U.S. states. (Related: Microsoft to bring infamous nuclear plant back to life to feed POWER-HUNGRY AI data centers.)

To put in context the energy needs of OpenAI's proposal, just one data center that uses up five gigawatts of electricity each year would need the annual energy output of around five nuclear reactors, or enough energy to power almost three million homes.

According to OpenAI, investing in the construction of these data centers could provide tens of thousands of new jobs to Americans, boost the U.S.'s gross domestic product and ensure that America maintains its lead in the development of AI technology. To achieve this goal, OpenAI is demanding government support for policies that promote greater data center capacity. Altman has also been lobbying investors to help fund the very costly infrastructure needed to support his company's growth and the growth of AI technology.

"OpenAI is actively working to strengthen AI infrastructure in the U.S., which we believe is critical to keeping America at the forefront of global innovation, boosting reindustrialization across the country and making AI's benefits accessible to everyone," OpenAI said in a statement.

Joseph Dominguez, president and CEO of Baltimore, Maryland-based electricity provider Constellation Energy, told Bloomberg News that Altman is proposing the construction of five to seven data centers, each requiring around five gigawatts of power annually.
This number has not been confirmed by OpenAI, and the document the company shared with the White House regarding its plan does not provide a specific number.

Dominguez said OpenAI's first goal is to focus on the construction of a single data center to show that building it would be beneficial to the United States. Afterward, Altman plans to expand from there based on the resources available.

"Whatever we're talking about is not only something that's never been done, but I don't believe it's feasible as an engineer, as somebody who grew up in this," said Dominguez. "It's certainly not possible under a timeframe that's going to address national security and timing."

OpenAI is seeking help from the government and investors regarding the power needs of its data centers at a time when new energy projects in the U.S. are facing significant delays due to a variety of factors, including permitting delays, supply chain issues, labor shortages and the very long wait times needed before these new projects can be connected to the U.S. energy grid. Energy executives like Dominguez have further stressed that powering just one five-gigawatt data center would be a major challenge.

John W. Ketchum, chairman, president and CEO of renewable energy giant NextEra Energy, said, without naming any specific companies, that he had received requests from some major Big Tech companies to find sites that can support five gigawatts of demand. "That's the size of powering the city of Miami," Ketchum remarked. He noted that it would be easier to find a place in the U.S.
that can fit a one-gigawatt data center, and finding five would require a mix of new wind and solar farms, battery storage and a connection to the wider grid.
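The article's "five reactors" and "almost three million homes" comparisons are rough conversions from a 5 GW continuous load. A quick sanity check; the per-reactor capacity and household consumption figures below are round-number assumptions, and the household result comes out somewhat higher than the article's "almost three million," which is sensitive to the consumption figure assumed:

```python
# Rough check of the 5 GW comparisons (constants are round-number
# assumptions, not figures from OpenAI or the article).
DATA_CENTER_GW = 5.0
REACTOR_GW = 1.0                 # a typical large reactor is ~1 GW electric
HOUSEHOLD_KWH_PER_YEAR = 10_500  # approximate US residential average

reactors = DATA_CENTER_GW / REACTOR_GW
annual_twh = DATA_CENTER_GW * 8760 / 1000          # GW x hours/year -> TWh
homes = annual_twh * 1e9 / HOUSEHOLD_KWH_PER_YEAR  # TWh -> kWh -> households
print(f"{reactors:.0f} reactors, ~{annual_twh:.0f} TWh/yr, millions of homes")
```

Whatever the exact household figure, the order of magnitude (tens of TWh per year per site) explains the energy executives' skepticism quoted above.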
Unknown
Architecture and Engineering/Management
null
null
null
null
null
null
news
John Kostyack, Contributor, John Kostyack, Contributor https://www.forbes.com/sites/johnkostyack/
Unmanaged Climate Risks Undercut AI’s Investment Thesis
Action is needed by tech and utility industry leaders, investors, regulators, and policy makers to ensure that AI’s buildout is powered by clean energy sources.
https://www.forbes.com/sites/johnkostyack/2024/10/15/unmanaged-climate-risks-undercut-ais-investment-thesis/
https://imageio.forbes.c…=1600&fit=bounds
2024-10-15T23:22:43Z
A race is underway among the world's largest corporations for dominance in artificial intelligence and, so far, investors are along for the ride. Leading artificial intelligence (AI) chip maker NVIDIA is now the world's second largest company by market capitalization. OpenAI, the Microsoft-backed leader in AI models, raised $6.6 billion in last month's investment round, one of the largest private investments in history. Amazon and Google are among the cloud service providers that have joined Microsoft in making large-scale investments in this technology.

This investment frenzy rests on the thesis that AI will soon unleash productivity gains across the economy. With its enormous data processing capabilities, AI will inevitably transform life and work in our information-driven economy. Because we are just beginning to comprehend its benefits and risks, a cautious approach with significant industry oversight is warranted.

Meta's Facebook data center in Eagle Mountain, Utah. (Photo by GEORGE FREY/AFP via Getty Images)

AI proponents highlight the promise of new grid management and other decarbonization tools. However, at the moment, the AI industry's most significant impact on the climate has been to unleash new fossil fuel power generation and associated increases in greenhouse gas emissions. This worsens the risk of a climate change-spurred economic crash that would dash the AI industry's hopes of building a strong customer base and achieving profitability.

Action is needed by tech and utility industry leaders, investors, regulators, and policy makers to ensure that AI's buildout is powered by clean energy sources. This will not only limit climate-related damage to the global economy, it will help the AI industry help itself.

Data Center Construction Boom Fueled by AI

In a 2023 analysis of U.S.
grid planners' projections, energy analysts Grid Strategies reached a startling conclusion: five-year projections of electricity demand growth had increased from 2.6% in the previous year to 4.7%.

Grid Strategies analysis drawn from Federal Energy Regulatory Commission filings (Grid Strategies)

Much of this projected growth is connected to new data centers. McKinsey estimates that data centers will comprise 30-40% of all new net demand until 2030. Moreover, data center growth is strongly linked to AI. Goldman Sachs estimates that by 2028, AI will represent about 19% of all data center power demand.

Other important drivers of the U.S. electricity demand increase are industrial reshoring and electrification of transport, buildings and industry, both central planks of the worker-friendly energy transition agenda of the Biden-Harris administration, incentivized by the landmark Inflation Reduction Act.

Among these energy demand drivers, AI is unique in that a competitive advantage tends to go to those companies that use more energy (by processing more data, AI companies can train larger and potentially more capable models). This suggests that the rapid acceleration of AI's energy consumption is not likely to end anytime soon.

AI's Unmanaged Climate Risks

Forecasts of rapid AI-fueled energy demand growth have prompted utilities to delay planned closures of coal-fired power plants. S&P Global recently projected that the U.S. power sector would take offline just 105,000 megawatts of coal-fired power by 2035, a 21% decline in retirements projected a year earlier.

Gas-fired power plants, once threatened by the rise of inexpensive renewable power and batteries, also have received a new lease on life.
More new gas power capacity was announced in the US in the first half of 2024 than in all of 2020.

New gas-fired power plants announced as of July 1, 2024 (Sierra Club)

Unless a new path is chosen, this resurgence of fossil fuel power could represent a major setback to the energy transition and efforts to prevent climate breakdown.

Avoiding New Fossil Fuel Infrastructure This Decade

Tech industry leaders are pledging to address these climate risks, but mostly with investments that will bear fruit in the 2030s and beyond, outside the time period in which scientists say aggressive emissions reductions are most essential. For example, last month Microsoft pledged to bring online power from a restarted Three Mile Island nuclear facility owned by Constellation Energy, and this week Google announced plans to purchase power from small modular reactors to be developed by the startup Kairos Power.

Inflation Reduction Act (IRA) production tax credits are key to making these projects financially viable. Assuming nuclear waste challenges are addressed, they could represent important IRA successes in driving the decarbonization of the U.S. electricity system. But they would do nothing to prevent climate damage from new fossil fuel infrastructure built this decade. Fires, floods, and other disasters linked to greenhouse gas emissions are already escalating rapidly, and production from existing fossil fuel projects already exceeds Paris-aligned consumption levels.

David Hester of Horseshoe Beach, Florida, inspects damage to his home after Hurricane Helene made landfall in September 2024. At least 228 people died from the storm across five US states. (Photo by CHANDAN KHANNA/AFP via Getty Images)

The tech and utility industries must together aggressively curtail the gas plant buildout, proceed with scheduled coal plant retirements, and accelerate efforts to deploy renewable energy and energy storage.
This would make a meaningful dent in systemic climate risks and help the U.S. meet its Paris Agreement commitments.

Investors Must Grapple With Climate Science To Fulfill Their Fiduciary Duties

Heading off the AI-fueled spike in emissions requires countering the myth that climate change is solely the responsibility of governments. Institutional investors have a fiduciary duty to ensure that companies address systemic climate-related financial risks. This is especially true of the many investors with diversified portfolios that are heavily exposed to economy-wide climate shocks. Climate change particularly jeopardizes AI profitability. Unlike other new technologies, to become profitable AI must bring in revenues sufficient to offset large annual capital expenditures. Key AI industry observers, such as Jim Covello, Goldman Sachs' head of stock research, question whether, given the reliability challenges that limit its usefulness, AI will ever secure these revenues. The AI investment case in response is that companies in virtually every industry sector will want to find out how AI might provide them with a competitive edge. But if a significant downturn in the economy emerges, this widespread adoption becomes much less likely. Many potential customers may not be willing or able to add significant expenditures to their budgets for an unproven technology that struggles with reliability. Addressing the risks of a potential climate change-induced crash of the economy on AI's customer base is now a business imperative for the nation's largest companies and their investors. How much attention to the risks of a climate-related crash is warranted? At a minimum, investors must be attuned to emerging climate science and its implications. According to an October 2024 analysis from the EDHEC Risk Climate Impact Institute, global equity valuations could drop as much as 40% if emissions reductions do not accelerate.
This projection does not factor in tipping points, such as the collapse of the West Antarctic ice sheet and the thawing of methane-rich Arctic permafrost, that if reached would further destabilize the economy.

Proposals For An AI-Focused Carbon Price

Tech industry analyst Robert Wright argues that OpenAI CEO Sam Altman has put us on a dangerous course by promoting the idea that, when it comes to technological change, and progress in AI in particular, faster is better. He proposes that governments refrain from subsidizing AI buildout until its climate and safety risks are addressed, and instead impose a special tax on power consumed by AI data centers to help temper these risks. (Photo: Sam Altman, CEO of OpenAI, May 16, 2023. Bill Clark/CQ-Roll Call via Getty Images.) Similarly, Brian Deese, the Biden-Harris Administration's former National Economic Council director, proposes new policy incentives for utilities and tech companies to bring online clean energy to power AI's massive computing needs. A key recommendation is that the federal government impose a fee on the tech industry to be paid into a clean energy deployment fund at the Department of Energy. Any hope for Congressional action on these ideas would require tech industry leadership on carbon pricing that to date has been missing.

Near-Term Actions to Decarbonize The Electricity System

Fortunately, meaningful actions to decarbonize the electricity system outside of Congress are feasible. Opportunities for tech and utility industry leadership include expanding procurements of around-the-clock carbon-free energy, increasing transparency about projected demand, enacting green energy tariffs, and supporting interregional transmission and other grid modernization. Procuring around-the-clock carbon-free energy is perhaps the most contentious of these actions because it is more expensive than the traditional approach of buying renewable energy wherever it is cheapest.
But as shown in a 2024 analysis by Princeton University's Jesse Jenkins and colleagues, traditional procurement approaches have not proved effective in reducing emissions. Only by procuring their clean energy at the same time as they consume energy will large customers drive investments beyond wind and solar to include energy storage and firm clean energy technologies such as geothermal and hydropower. (Photo: In Nevada, Fervo Energy's first enhanced geothermal project sends carbon-free electricity into the grid to power Google data centers. AP Photo/Ellen Schmidt.) Google has shown important leadership in the movement toward around-the-clock carbon-free energy procurement. Its advocacy for carbon-free energy generation and transmission in the Santee Cooper Integrated Resources Plan is an example of how tech companies can proactively advocate for near-term emissions reductions, not just those in future decades.

Fairness to Ratepayers, Taxpayers, and Communities

Data centers are driving up the market price of energy and will likely continue doing so until interconnection queues and other energy supply constraints are resolved. Tech companies, utilities, and state regulators have a responsibility to ensure the costs of new power plants, transmission, and other system upgrades necessitated by the AI-fueled data center boom are borne by the tech companies that profit from the data centers, not other ratepayers. Attempts to force ratepayers to cover the costs of the system upgrades for data centers are already underway. For example, utilities AEP and Exelon recently filed an objection to a nuclear power proposal by Amazon on the grounds that it could shift up to $140 million in transmission costs to everyday ratepayers.
Similarly, Maryland's Office of People's Counsel recently objected to a proposal by the regional grid operator PJM to impose on Maryland ratepayers a substantial portion of a $5 billion capital expenditure to build transmission for new data centers in Virginia. Regulators and policymakers must also ensure that tech companies avoid and minimize the local environmental impacts of the AI-fueled data center boom. Fossil fuel companies have a long track record of dodging accountability for their environmental impacts, especially in the low-income communities and communities of color that have long served as energy production sites. At least 120,000 oil and gas wells across the U.S. have been abandoned by their original owners and lack financially solvent operators. Community residents suffer the health impacts of this toxic legacy and taxpayers ultimately bear the cleanup costs. Regulators and policymakers must learn from this history and ensure that the tech and utility industries take full responsibility for the environmental impacts of AI's energy infrastructure.
Unknown
Management/Business and Financial Operations
null
null
null
null
null
null
news
Transparency Market Research
Artificial Intelligence (AI) Market Size in Automotive Anticipated to Exceed USD 13.0 Billion by 2034, with a CAGR of 15.6%: Transparency Market Research Inc.
Advancements in machine learning and computer vision enhance vehicle safety and performance through AI technology integration.
https://www.globenewswire.com/news-release/2024/10/08/2960185/32656/en/Artificial-Intelligence-AI-Market-Size-in-Automotive-Anticipated-to-Exceed-USD-13-0-Billion-by-2034-with-a-CAGR-of-15-6-Transparency-Market-Research-Inc.html
https://ml.globenewswire.com/Resource/Download/27b21b27-620b-4cae-8360-605a9662092b
2024-10-08T23:30:00Z
Wilmington, Delaware, United States, Transparency Market Research, Inc., Oct. 08, 2024 (GLOBE NEWSWIRE) -- The global artificial intelligence (AI) market in automotive is estimated to thrive at a CAGR of 15.6% from 2024 to 2034. Transparency Market Research projects that the overall sales revenue for the artificial intelligence (AI) market in automotive is estimated to reach US$ 13.0 billion by the end of 2034. A significant driver is the rise of edge computing in automotive AI applications. Edge computing enables data processing closer to the source, reducing latency and enhancing real-time decision-making capabilities in autonomous vehicles and connected cars. The emergence of AI-powered cybersecurity solutions is gaining traction in the automotive sector. As vehicles become more connected and autonomous, the threat landscape expands, necessitating robust cybersecurity measures to protect against potential cyberattacks and data breaches. The integration of AI in automotive design and manufacturing processes is streamlining product development and improving operational efficiency. AI-driven design tools and simulation software optimize vehicle performance, reduce time to market, and minimize manufacturing defects, driving cost savings and enhancing product quality.

Download Sample Copy of the Report: https://www.transparencymarketresearch.com/sample/sample.php?flag=S&rep_id=86012

Artificial Intelligence (AI) Market in Automotive: Competitive Landscape

In the fiercely competitive artificial intelligence (AI) market in automotive, industry leaders such as NVIDIA, Intel, and Alphabet's Waymo drive innovation with advanced AI solutions. These companies specialize in developing AI-powered systems for autonomous driving, vehicle connectivity, and predictive maintenance. Emerging players like Mobileye and Argo AI challenge incumbents with specialized expertise in computer vision and machine learning algorithms.
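The release's headline figures imply a starting market size that it does not state. A quick compound-growth sketch backs it out from the 15.6% CAGR and the US$ 13.0 billion 2034 endpoint (the 2024 base below is derived arithmetic, not a number from the release):

```python
# Back out the implied 2024 market size from the press release's
# stated endpoint (US$ 13.0 billion in 2034) and CAGR (15.6% over
# the 10 years 2024-2034). End value / (1 + CAGR)^years.
def implied_base(end_value: float, cagr: float, years: int) -> float:
    """Starting value implied by an end value compounding at `cagr`."""
    return end_value / (1 + cagr) ** years

base_2024 = implied_base(13.0, 0.156, 10)
print(f"implied 2024 market size: ~US$ {base_2024:.2f} billion")
```

Under these figures the implied 2024 base is roughly US$ 3 billion, i.e. the release is projecting a bit more than a fourfold expansion over the decade.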
Traditional automotive manufacturers such as Toyota and BMW are investing heavily in AI research and development to integrate intelligent features into their vehicles. The competitive landscape continually evolves as companies vie for dominance in shaping the future of automotive AI technology. Some prominent players are as follows: Waymo, Tesla Inc., NVIDIA Corporation, Intel Corporation, Bosch Group, Mobileye, Aptiv PLC, Daimler AG, Ford Motor Company, General Motors Company, Toyota Motor Corporation, BMW Group, Audi AG, Continental AG, Uber Technologies Inc., Volvo Cars, ZF Friedrichshafen AG, Valeo SA, Hyundai Motor Company, and Baidu Inc.

Product Portfolio

Waymo, a subsidiary of Alphabet Inc., pioneers autonomous driving technology. Their self-driving vehicles utilize advanced sensors and AI algorithms to navigate roads safely, offering a glimpse into the future of transportation. Tesla, Inc. revolutionizes the automotive industry with electric vehicles, solar energy solutions, and energy storage products. Their innovative technology and sustainable approach redefine mobility and energy consumption worldwide.

Key Findings of the Market Report

- SUVs lead the AI market in automotive due to their popularity and demand for advanced safety, connectivity, and autonomous driving features.
- Computer vision emerges as the leading technology type segment in the AI market in automotive, enabling advanced perception and recognition capabilities in vehicles.
- Processors emerge as the leading component type segment in the AI market in automotive, driving advancements in computational power and efficiency.
To Get Sample PDF Brochure Here: https://www.transparencymarketresearch.com/sample/sample.php?flag=S&rep_id=86012

Artificial Intelligence (AI) Market in Automotive Growth Drivers & Trends

- Increasing demand for autonomous vehicles and connected car technologies drives growth in the AI market for automotive applications.
- Technological advancements in machine learning and computer vision enable enhanced vehicle safety and performance.
- Collaborations between automakers and tech companies accelerate AI integration in vehicles.
- Growing emphasis on predictive maintenance and vehicle analytics fosters AI adoption in automotive manufacturing and operations.
- Regulatory initiatives and government support for AI research and development propel innovation and market expansion in the automotive sector.

Global Artificial Intelligence (AI) Market in Automotive: Regional Profile

North America leads the charge, boasting a robust ecosystem of AI startups, tech giants, and automotive manufacturers. Silicon Valley serves as the epicenter of AI innovation, with companies like NVIDIA, Tesla, and Alphabet's Waymo driving advancements in autonomous driving technology. Partnerships between automakers and tech firms accelerate AI integration in vehicles, positioning North America as a frontrunner in shaping the future of automotive AI. Europe showcases a rich heritage of automotive engineering excellence, with companies like BMW, Daimler, and Volkswagen embracing AI to enhance vehicle performance and safety. Collaborations between European automakers and AI startups foster innovation in areas such as predictive maintenance and intelligent manufacturing, bolstering Europe's position as a key player in automotive AI development. In Asia Pacific, rapid urbanization and technological innovation fuel the adoption of AI in automotive applications.
Companies like Toyota, Hyundai, and Baidu lead the region's AI initiatives, leveraging AI-driven solutions for autonomous vehicles, smart mobility, and connected car technologies. Government support for AI research and development further accelerates market growth, making Asia Pacific a dynamic hub for automotive AI innovation.

Artificial Intelligence (AI) Market in Automotive: Key Segments

By Vehicle Type: Passenger Cars (Hatchback, Sedan, SUVs), Light Commercial Vehicles, Heavy Duty Trucks, Buses and Coaches, Off-road Vehicles (Agriculture Equipment, Industrial Equipment, Construction and Mining Equipment)
By Technology Type: Machine Learning, Deep Learning, Reinforcement Learning, Natural Language Processing (NLP), Computer Vision, Speech Recognition, Others
By Component Type: Hardware (Sensors, Processors, Graphic Processing Units, Others), Software, Services
By Level of Autonomy: Level 1: Driver Assistance, Level 2: Partial Automation, Level 3: Conditional Automation, Level 4: High Automation, Level 5: Full Automation
By Application: Autonomous Driving, Predictive Maintenance, Driver Assistance Systems, In-car Personal Assistants, Natural Language Processing (NLP) for Voice Recognition, Computer Vision for Object Detection and Recognition, Virtual Assistants for Infotainment Systems, AI-based Navigation Systems, Vehicle-to-Everything (V2X) Communication, Others
By End User: OEM, Fleet Operators, Mobility Service Providers, Others
By Region: North America, Europe, Asia Pacific, South America, Middle East & Africa

Buy this Premium Research Report: https://www.transparencymarketresearch.com/checkout.php?rep_id=86012&ltype=S

About Transparency Market Research

Transparency Market Research, a global market research company registered at Wilmington, Delaware, United States, provides custom research and consulting services. Our exclusive blend of quantitative forecasting and trends analysis provides forward-looking insights for thousands of decision makers.
Our experienced team of Analysts, Researchers, and Consultants use proprietary data sources and various tools & techniques to gather and analyze information. Our data repository is continuously updated and revised by a team of research experts, so that it always reflects the latest trends and information. With a broad research and analysis capability, Transparency Market Research employs rigorous primary and secondary research techniques in developing distinctive data sets and research material for business reports.

Contact:
Transparency Market Research Inc.
CORPORATE HEADQUARTER DOWNTOWN,
1000 N. West Street, Suite 1200,
Wilmington, Delaware 19801 USA
Tel: +1-518-618-1030
USA - Canada Toll Free: 866-552-3453
Website: https://www.transparencymarketresearch.com
Email: [email protected]
Follow Us: LinkedIn | Twitter | Blog | YouTube
Prediction/Decision Making/Recommendation
Computer and Mathematical
null
null
null
null
null
null
news
null
AI speeds up the discovery of energy and quantum materials
Unearthing new LEDs, solar cells, and photodetectors requires extensive knowledge of the optical properties of materials. Calculating these takes time and resources. Yet researchers have unveiled a new AI tool that predicts optical properties accurately and, crucially, much faster than quantum simulations.
https://www.sciencedaily.com/releases/2024/10/241007222754.htm
https://www.sciencedaily…cidaily-icon.png
2024-10-08T02:27:54Z
Researchers from Tohoku University and the Massachusetts Institute of Technology (MIT) have unveiled a new AI tool that produces high-quality optical spectra with the same accuracy as quantum simulations, but working a million times faster, potentially accelerating the development of photovoltaic and quantum materials. Understanding the optical properties of materials is essential for developing optoelectronic devices, such as LEDs, solar cells, photodetectors, and photonic integrated circuits. These devices are pivotal in the semiconductor industry's current resurgence. Traditional means of calculation using the basic laws of physics involve complex mathematical calculations and immense computational power, rendering it difficult to quickly test a large number of materials. Overcoming this challenge could lead to the discovery of new photovoltaic materials for energy conversion and a deeper understanding of the fundamental physics of materials through their optical spectra. A team led by Nguyen Tuan Hung, an assistant professor at the Frontier Institute for Interdisciplinary Science (FRIS), Tohoku University, and Mingda Li, an associate professor at MIT's Department of Nuclear Science and Engineering (NSE), did just that, introducing a new AI model that predicts optical properties across a wide range of light frequencies, using only a material's crystal structure as an input. Lead author Nguyen and his colleagues recently published their findings in an open-access paper in Advanced Materials. "Optics is a fascinating aspect of condensed matter physics, governed by the causal relationship known as the Kramers-Kronig (KK) relation," says Nguyen. "Once one optical property is known, all other optical properties can be derived using the KK relation. It is intriguing to observe how AI models can grasp physics concepts through this relation." Obtaining optical spectra with complete frequency coverage in experiments is challenging due to the limitations of laser wavelengths.
Simulations are also complex, requiring high convergence criteria and incurring significant computational costs. As a result, the scientific community has long been searching for more efficient methods to predict the optical spectra of various materials."Machine-learning models utilized for optical prediction are called graph neural networks (GNNs)," points out Ryotaro Okabe, a chemistry graduate student at MIT. "GNNs provide a natural representation of molecules and materials by representing atoms as graph nodes and interatomic bonds as graph edges."Yet, while GNNs have shown promise for predicting material properties, they lack universality, especially in representations of crystal structures. To work around this conundrum, Nguyen and others devised a universal ensemble embedding, whereby multiple models or algorithms are created to unify the data representation."This ensemble embedding goes beyond human intuition but is broadly applicable to improve prediction accuracy without affecting neural network structures," explains Abhijatmedhi Chotrattanapituk, an electrical engineering and computer science graduate student at MIT.The ensemble embedding method is a universal layer that can be seamlessly applied to any neural network model without modifying the neural network structures. "This implies that universal embedding can readily be integrated into any machine learning architecture, potentially making a profound impact on data science," says Mingda Li.This method enables highly precise optical prediction based solely on crystal structures, making it suitable for a wide variety of applications, such as screening materials for high-performance solar cells and detecting quantum materials.Looking ahead, the researchers aim to develop new databases for various material properties, such as mechanical and magnetic characteristics, to enhance the AI model's capability to predict material properties based solely on crystal structures.
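The graph representation Okabe describes, atoms as nodes and interatomic bonds as edges, can be sketched in a few lines. This toy example is not the authors' model; it encodes a single water molecule with plain Python to illustrate the node/edge structure a GNN consumes, with a neighbor count standing in for a learned message-passing aggregation.

```python
# Toy illustration of the GNN input representation described above:
# atoms become graph nodes, interatomic bonds become graph edges.
atoms = ["O", "H", "H"]          # node labels (a water molecule)
bonds = [(0, 1), (0, 2)]         # undirected edges: the two O-H bonds

# Build an adjacency list: for each atom, the atoms it is bonded to.
adjacency = {i: [] for i in range(len(atoms))}
for a, b in bonds:
    adjacency[a].append(b)
    adjacency[b].append(a)

# A GNN message-passing step aggregates features from each node's
# neighbors; here we simply count neighbors as a stand-in for that
# learned aggregation.
degrees = {atoms[i] + str(i): len(nbrs) for i, nbrs in adjacency.items()}
print(degrees)  # oxygen has two bonded neighbors, each hydrogen has one
```

Real materials models add node features (atomic number, position), edge features (bond length), and, for crystals, periodic boundary conditions, which is where the universality problem the article mentions arises.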
Discovery
Life, Physical, and Social Science
null
null
null
null
null
null
news
null
Microsoft, Google and Amazon turn to nuclear energy to fuel the AI boom
Big Tech companies are investing in nuclear energy to meet the demands of generative artificial intelligence. But AI and climate change researcher Sasha Luccioni says it's actually the carbon footprint of AI that needs a closer look.
https://www.cbc.ca/radio/thecurrent/generative-ai-and-nuclear-energy-1.7362127
https://i.cbc.ca/1.70946…/data-centre.jpg
2024-10-29T16:56:19Z
Big tech companies are scrambling to secure nuclear energy deals worth billions of dollars in order to meet the growing demands of generative artificial intelligence (AI), but critics say they need to rethink that and slow down. "Tech companies have gotten away with a lot just because it's a new area," Sasha Luccioni, AI researcher and climate lead at New York-based AI developer HuggingFace, told The Current's guest host Peter Armstrong. "The approach tends to be, 'move fast and break things,' in start-ups and Silicon Valley. And so what worries me is that approach transposed to nuclear energy, because nuclear energy is something that has to involve a lot of care." In September, Microsoft announced its plans to purchase power from the Three Mile Island plant in Pennsylvania, where the worst nuclear accident in U.S. history took place in 1979. The site shut down its remaining operating unit in 2019 but plans to reopen for this project. Last week, Google announced a deal with nuclear startup Kairos Power, buying 500 megawatts of electricity from seven small modular reactors (SMRs) that have yet to be built. And just a few days ago, Amazon struck a similar deal with the startup X-Energy. Small modular reactors are considered safer than traditional nuclear plants due to their smaller size and power output, according to proponents of the technology. They're also touted as cheaper and faster to build, which will ideally provide the tech companies with a source of carbon-free energy; in Google's case, as early as 2030. Companies are racing toward nuclear because they are using exponentially more power now, due to the upsurge of generative AI use.
Both Microsoft and Google missed their net zero emissions goals for 2024, a first, because these companies tend to set goals they can meet, according to Luccioni. "This year they're like, 'Oh, oops, AI caught us by surprise,'" she said. Some see the Microsoft, Google and Amazon investments in nuclear energy as a positive step toward reducing Big Tech's emissions. "I think all of that is great for the environment," said Jacopo Buongiorno, a nuclear science and engineering professor and director of MIT's Center for Advanced Nuclear Energy Systems.

How much power does generative AI use?

Generative AI can create text or multimedia such as images, music or video based on a user's request, for a dinner recipe or something more technical, like a house offer letter to win a bidding war. The systems are trained on large datasets and programmed with specific rules and algorithms, so with each request the AI identifies patterns in order to provide an answer. "They're trying to stick that into anything that moves, anything that a customer can interact with," Luccioni said of the big tech companies. Generative AI takes more power to perform than, for example, streaming shows and scrolling through your social media feed, says Luccioni, who has researched AI's environmental costs and is currently developing a rating system for the technology. Put another way, one ChatGPT query uses as much electricity as a lightbulb needs to be on for 20 minutes, according to a researcher cited by NPR. Luccioni says companies should be mandated to release how much energy their AI tech uses, and to allow users to opt out of using it. Once there is more information about generative AI's energy usage, she says that governments will be able to develop regulation around its use. "We should be curbing the demand," she said. Instead, as demand rises, tech companies are investing in nuclear energy and building more AI data centres, which are the mega-warehouses full of servers, routers and cooling systems that power
AI. "There is going to be a couple of years when the demand is going to be outstripping any kind of renewable energy supply," Luccioni said.

'An ideal energy source': nuclear energy professor

Tech companies have been actively looking for a renewable energy source to sustain the rapid growth of generative AI. In 2023, Google built a geothermal power project in Nevada to power its nearby data centres. And Amazon recently became the world's top corporate buyer of solar and wind energy. "The use of nuclear reactors is almost like an ideal energy source," Buongiorno said. "And that's because nuclear reactors produce electricity 24/7." Buongiorno believes it is "quite realistic" for nuclear power to run constantly, without carbon emissions. He says that the small modular reactors that the companies will be using produce less energy than the larger reactors currently on the grid, but are reliable enough to power the data centres that perform AI calculations. Wind and solar energy, comparatively, need to be backed up with energy storage batteries, and that will add to the cost and land usage associated with that power generation, Buongiorno said. "Building a new nuclear plant is expensive. The good news is that once you have built it and amortized it, then the marginal cost of operation and maintenance plus fuel are actually pretty low and predictable over decades." As for concerns about radiation from the SMRs, Buongiorno says he has "virtually none," due to their smaller size and lower energy output compared with conventional nuclear power plants. They also have "passive safety systems," according to one proponent of Canada's investment into SMRs. "If things go wrong, there is no need for human intervention, things just shut down.
And so that's probably one of the biggest differences," Rumina Velshi, former president and CEO of the Canadian Nuclear Safety Commission, told CBC's Edmonton AM earlier this month.The reactor at Three Mile Island is expected to be back up and running by 2028, pending approval from the U.S. Nuclear Regulatory Commission. Google's deal with Kairos Power projects the first SMR to be ready by 2030, whereas Amazon and X-Energy are planning for 2039.However, the first SMR project in North America is under construction right now, operated by Ontario Power Generation (OPG) at the Darlington plant in Clarington, Ont. It's expected to be the first in North America to go online and it is on schedule to become operational by 2029. "If the investment is there, and the need is there, the demand is there, it's absolutely feasible," Buongiorno said.
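The lightbulb comparison quoted earlier in the article can be turned into rough numbers. The bulb's wattage is not given, so the 10 W LED below is purely an assumption for illustration; energy is power multiplied by time.

```python
# Back-of-envelope sketch of the per-query energy comparison above.
# The 10 W LED bulb is an assumed figure (the article does not state
# the bulb's wattage); the 20 minutes comes from the comparison quoted.
BULB_WATTS = 10.0    # assumed LED bulb power draw
MINUTES_ON = 20.0    # duration from the article's comparison

# Energy (Wh) = power (W) x time (h)
watt_hours_per_query = BULB_WATTS * MINUTES_ON / 60.0
print(f"~{watt_hours_per_query:.1f} Wh per query under these assumptions")
```

Under these assumptions a single query would come to roughly 3.3 Wh, which illustrates why the aggregate, across billions of queries, becomes a grid-scale concern.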
Unknown
Business and Financial Operations/Management
null
null
null
null
null
null
news
New Jersey Institute of Technology
NJIT Launches AI-Powered Solar Eruption Center with $5M NASA Grant
A new center at New Jersey Institute of Technology (NJIT) will advance AI-driven forecasting of violent eruptions on the Sun, as well as expand space science education programs.
https://www.globenewswire.com/news-release/2024/10/17/2965185/0/en/NJIT-Launches-AI-Powered-Solar-Eruption-Center-with-5M-NASA-Grant.html
https://ml.globenewswire.com/Resource/Download/62011dd7-7b40-4abd-a5da-64c3e52c28c1
2024-10-17T19:00:00Z
Newark, N.J., Oct. 17, 2024 (GLOBE NEWSWIRE) -- NJIT's Institute for Space Weather Sciences (ISWS) has been awarded a $5 million NASA grant to open a new research center dedicated to developing the next generation of solar eruption prediction capabilities, powered by artificial intelligence. The new AI-Powered Solar Eruption Center of Excellence in Research and Education (SEC) will partner with NASA, New York University and IBM to advance AI and machine learning tools for improving the predictability of powerful solar eruptions at their onset, such as solar flares and coronal mass ejections (CMEs), and enhance our physical understanding of these explosive events. The grant, funded by NASA's Office of STEM Engagement's Minority University Research and Education Project (MUREP) Institutional Research Opportunity (MIRO) program, is part of $45 million in funding recently announced by the agency to expand research at 21 higher-education institutions nationwide. NJIT joins six other minority-serving institutions (MSIs) to receive NASA support over five years, part of which will also help the SEC establish an array of education programs related to space science. "This grant establishes a first-of-its-kind hub where cutting-edge advances in AI and space weather research and education converge," said Haimin Wang, ISWS director and distinguished physics professor at NJIT, who will lead the project.
"By harnessing AI-enabled tools to investigate the fundamental nature of space weather, we aim to significantly enhance our ability to interpret observational data from the Sun to forecast major solar eruptions accurately and in near real-time, a capability beyond our reach up to this point." "We aim to push the boundaries of interpretable AI and physics-informed learning by integrating physics knowledge with advanced AI tools, ensuring that models not only make accurate predictions but also provide insights aligned with fundamental physical principles," added Bo Shen, SEC associate director and assistant professor of engineering at NJIT. Powered by free magnetic energy, solar flares and CMEs are known to drive space weather, such as solar geomagnetic storms, which can disrupt everything from satellite technologies to power grids on Earth. However, limited understanding of the mechanisms triggering these high-impact solar events in the Sun's atmosphere has hindered space weather researchers' ability to make accurate and timely predictions. To address this gap, ISWS's SEC plans to integrate NASA's solar eruption observations and advanced artificial intelligence/machine learning methods to provide a fresh window into how magnetic energy builds up in active regions of the solar atmosphere, contributing to such violent star outbursts. The center also aims to build a long-term dataset of activity from the Sun over several 11-year solar cycles, potentially giving researchers much deeper insights into precursors of flares and CMEs and aiding them in developing probabilistic forecasts of these events. "A major hurdle in understanding solar eruption mechanisms is the limited data on large events like X-class flares," Wang explained.
"Building a large, homogeneous dataset of solar activity using advanced machine learning methods allows us to study these major events with unprecedented resolution and cadence, ultimately revealing eruption mechanisms and unlocking better space weather predictions." Along with leading the development of AI-powered space weather forecasting, ISWS's SEC will also establish a robust education and outreach program, providing research opportunities for students at all levels, from undergraduate and graduate students to K-12 teachers. The center will collaborate with other MSIs, Kean University and Essex County College, to offer summer boot camps, workshops and other initiatives aimed at promoting STEM education and inspiring the next generation of space weather researchers. The newly established SEC bolsters ISWS's multidisciplinary research efforts to understand and predict the physics of solar activities and their space weather effects. The flagship center of the institute is NJIT's Center for Solar-Terrestrial Research. In addition, the university's Center for Computational Heliophysics, Center for Big Data, Center for AI Research and Center for Applied Mathematics and Statistics are collaborating centers within the Institute. ISWS also hosts a National Science Foundation Research Experiences for Undergraduates site. To learn more, visit https://spaceweather.njit.edu.
Prediction/Discovery/Information Retrieval Or Search
Life, Physical, and Social Science/Education, Training, and Library
null
null
null
null
null
null
news
PYMNTS
Navigating AI Compliance: How Global Banks Are Balancing Innovation and Regulatory Demands
This week in artificial intelligence regulation finds China, India and the global banking sector grappling with the need for comprehensive AI regulation as the influence of technology grows. Experts emphasize the importance of balancing innovation with consumer protection, while financial institutions leverage AI to streamline compliance processes amid increasing regulatory pressures. As artificial intelligence continues […]
https://www.pymnts.com/artificial-intelligence-2/2024/navigating-ai-compliance-how-global-banks-are-balancing-innovation-and-regulatory-demands/
https://www.pymnts.com/w…regulation-1.jpg
2024-10-16T16:32:22Z
This week in artificial intelligence regulation finds China, India and the global banking sector grappling with the need for comprehensive AI regulation as the influence of technology grows. Experts emphasize the importance of balancing innovation with consumer protection, while financial institutions leverage AI to streamline compliance processes amid increasing regulatory pressures. As artificial intelligence continues to reshape global industries, China is developing its regulatory and legal landscape to address the technology's unique challenges. "China is taking a distinct approach to issues involving AI, data, and the law," said Reed Smith lawyer Cheryl Yu in a recent blog post. She and fellow attorney Barbara Li discussed compliance requirements for AI service providers and users in China. Beijing-based Li emphasized the emerging case law on copyright protection and liability of AI-generated content. "We're seeing a growing body of court decisions that are shaping how intellectual property rights apply to AI creations," she noted. The lawyers also provided insights on managing risks in China's AI ecosystem. "Understanding the nuances of China's regulatory framework is crucial for companies operating in this space," Yu said. As China solidifies its position in AI development, recent legal trends indicate a focus on ethical AI use and data protection. New regulations require AI service providers to conduct security assessments and obtain user consent for data collection.
Courts are grappling with novel issues, such as determining authorship of AI-generated works and liability for AI decision-making errors. "These developments suggest China is balancing innovation promotion with consumer protection, creating a complex but potentially lucrative market for AI technologies," Li said.

AI Emerges as Key Tool for Banks in Regulatory Compliance

As regulatory requirements for financial institutions continue to grow, banks are turning to artificial intelligence to streamline compliance processes and mitigate risks. Industry experts say AI tools can significantly improve efficiency and accuracy in regulatory testing and monitoring. "The regulatory burden and cost to comply is only growing, which leaves banks doing more testing and monitoring with the same amount of resources," Leslie Watson-Stracener, managing director and regulatory compliance capability leader at Grant Thornton Advisors LLC, wrote in a recent blog post. She warned that increasing pressure on compliance teams can lead to stress, burnout and human error. AI's application in compliance extends to various regulations, including the Home Mortgage Disclosure Act, Truth in Lending Act and Flood Disaster Protection Act. These tools can quickly assess data and identify exceptions, freeing up staff for other priorities. Wes Luckock, senior manager of advisory services at Grant Thornton, predicts a broader impact: "Across the business cycle, AI will be coming into play in an end-to-end manner. It's not just going to be a couple tasks throughout the cycle; it's going to be the entire cycle." However, experts caution that human oversight remains crucial. Watson-Stracener advises: Always make sure your board has oversight of your AI practices.
And test your results.

Indian Speaker Calls for AI Regulation

Indian politician Om Birla recently called for global regulation of artificial intelligence and emerging technologies at an international parliamentary gathering. Addressing the 149th Assembly of the Inter-Parliamentary Union (IPU), Birla emphasized the importance of international cooperation in science, technology, and innovation for the greater good, The Hindu reported. "The collective endeavour of Parliaments across the world should be to ensure fair and equitable distribution of the benefits of science, technology and innovation," Birla said, underscoring the role of legislative bodies in shaping technological progress. The Speaker advocated for safeguarding citizens' data privacy and ensuring the responsible use of AI. He urged both the IPU and national parliaments to prioritize these issues, recognizing the far-reaching implications of technological advancements. Birla also showcased India's commitment to sustainable development, citing the country's significant increase in renewable energy capacity over the past decade. He referenced initiatives like the Green Hydrogen Mission and International Solar Alliance as examples of India's efforts to combat climate change.
Process Automation/Decision Making
Business and Financial Operations/Management
null
null
null
null
null
null
news
Seoul National University College of Engineering
Ultra-low power neuromorphic hardware show promise for energy-efficient AI computation
A team including researchers from Seoul National University College of Engineering has developed neuromorphic hardware capable of performing artificial intelligence (AI) computations with ultra-low power consumption. The research, published in the journal Nature Nanotechnology, addresses fundamental issues in existing intelligent semiconductor materials and devices while demonstrating potential for array-level technology.
https://techxplore.com/news/2024-10-ultra-power-neuromorphic-hardware-energy.html
https://scx2.b-cdn.net/g…ers-develo-6.jpg
2024-10-30T18:48:04Z
A team including researchers from Seoul National University College of Engineering has developed neuromorphic hardware capable of performing artificial intelligence (AI) computations with ultra-low power consumption. The research, published in the journal Nature Nanotechnology, addresses fundamental issues in existing intelligent semiconductor materials and devices while demonstrating potential for array-level technology. Currently, vast amounts of power are consumed in parallel computing for processing big data in various fields such as the Internet of Things (IoT), user data analytics, generative AI, large language models (LLM), and autonomous driving. However, the conventional silicon-based CMOS semiconductor computing used for parallel computation faces problems such as high energy consumption, slower memory and processor speeds, and the physical limitations of high-density processes. This results in energy and carbon emission issues, despite AI's positive contributions to daily life. To address these challenges, it's necessary to overcome the limitations of digital-based Von Neumann architecture computing. As such, the development of next-generation intelligent semiconductor-based neuromorphic hardware that mimics the working principles of the human brain has emerged as a critical task. The human brain consists of approximately 100 billion neurons and 100 trillion synaptic connections. Synapses store interrelated information through synaptic weights and perform computations and reasoning, serving as the basic units of intelligence. Neuromorphic hardware based on intelligent semiconductor devices that mimic the brain's synaptic operations relies on memristor devices capable of storing multiple resistance states, leveraging those weights for computation. However, the widely researched amorphous metal oxides used for memristors operate via conductive filaments, leading to charge accumulation in only specific areas.
This results in asymmetric and nonlinear synaptic weight adjustments, which leads to inaccuracies in parallel computation and low energy efficiency. To tackle this issue, Dr. Seung Ju Kim and Professor Ho Won Jang focused on the high ion mobility of halide perovskite materials, which had been attracting attention as materials for next-generation solar cells and LEDs. They concentrated on developing neuromorphic devices based on hybrid organic-inorganic materials. The research team discovered that in newly designed two-dimensional perovskite materials, ions can be uniformly distributed across the surface of the semiconductor. This breakthrough enabled the successful implementation of ultra-linear and symmetric synaptic weight control, which was previously unachievable with conventional intelligent semiconductors. The theoretical principles of this mechanism were proven through first-principles calculations conducted by a team at POSTECH. By leveraging the performance of the developed device, the researchers evaluated the accuracy of AI computations performed in hardware. They confirmed that not only with small datasets such as MNIST and CIFAR, but also with large datasets like ImageNet, the device could perform inference with a remarkably small error margin of less than 0.08% within theoretical limits. Furthermore, through collaborative research with the University of Southern California, it was demonstrated that AI computations could be accelerated with ultra-low power consumption, not only at the device level but also at the array level. This research, which significantly enhances the energy efficiency of intelligent semiconductor materials and devices, is expected to greatly contribute to reducing the overall energy consumption in AI computation.
Additionally, by enabling ultra-linear and symmetric synaptic weight control, it can significantly improve AI computation accuracy and has the potential for application in various fields such as autonomous driving and medical diagnosis. Moreover, this technology is anticipated to spur advancements in next-generation AI hardware technologies as well as innovations in the semiconductor industry. The technology developed in this study is an upgraded version of the technology presented three years ago in a highlighted paper published by Dr. Kim and Prof. Jang in the journal Materials Today. Patent applications are currently under review both in South Korea and the United States. Prof. Jang, who led the research, commented, "This study provides crucial foundational data for solving the fundamental problems of next-generation intelligent semiconductor devices. The significance lies in demonstrating that uniform ion movement across the surface of the material is more important for developing high-performance neuromorphic hardware than creating localized filaments in semiconductor materials." More information: Seung Ju Kim et al, Linearly programmable two-dimensional halide perovskite memristor arrays for neuromorphic computing, Nature Nanotechnology (2024). DOI: 10.1038/s41565-024-01790-3. Provided by Seoul National University College of Engineering. Citation: Ultra-low power neuromorphic hardware show promise for energy-efficient AI computation (2024, October 30) retrieved 30 October 2024 from https://techxplore.com/news/2024-10-ultra-power-neuromorphic-hardware-energy.html
Unknown
Life, Physical, and Social Science/Computer and Mathematical
null
null
null
null
null
null
news
Crikey Readers
Facial recognition specs and nuclear-powered AI: An inevitable future or temporary distractions?
Crikey readers tell us what they really think about two of the latest 'big ideas' in the tech world.
https://www.crikey.com.au/2024/10/08/facial-recognition-nuclear-powered-ai-crikey-readers/
https://www.crikey.com.a…2.jpg?quality=70
2024-10-07T22:37:01Z
Jon Burmeister writes: What possibly could go wrong? Oh! See that pretty girl! Who is she? Where does she live? Ow! That cop busted my head. Who is he? Where does he live? Now I expect the first example to be shrugged off, but the second one will get the establishment's attention. Nick Thurn writes: Didn't Google try something like this about 15 years ago (Google Glass, I think)? Sunk without a trace [Editor's note: Google Glass was launched in 2013 and pulled from the market in 2015]. Users were christened "Glassholes." There's nothing special about this tech, especially when it can phone home via 5G; we're already wandering round uniquely identified and tracked by our smartphones, even with all data-gathering turned off and while running a VPN. I expect China, which is ramping up a pervasive social credit system, will equip their police with this type of tech at some point. Frank Dee writes: Orwell's surveillance in 1984 looks old-fashioned. The authorities have us at their fingertips. No more revolutions, no more protests, no more activism. We could all wear masks, but that in itself would cause suspicion. But there is one factor: the amount of information that Meta, Instagram, etc have on us is only as good as the amount of information that we give them. A choice is coming; do you want social media, or do you want privacy? Roberto writes: Can we please, please, please differentiate between the AIs? There is AI that does important, laborious work, like looking for tumours or predicting earthquakes, that works hand in hand with human experts to better their outputs.
Due to the relatively small sizes of their data sets they don't burn huge amounts of energy, plus they can be ethical, despite companies like I-MED dropping the ball. And then there's Generative AI: a glorified predictive text algorithm that consumes tonnes of energy while stealing the work of millions of humans, so as to put these same humans out of work by producing increasingly incorrect and/or biased outputs, while making a handful of toxic, billionaire tech bros richer and more powerful. Andrew Holliday writes: Note that Berg's view rests on this assumption: "To the best of our knowledge, the current generation of AI follows a simple scaling law: the more data and the more powerful the computers processing that data, the better the AI." We know that assumption is wrong. AI would be better if the data it accessed was moderated and filtered for accuracy (quality data) rather than quantity (it's quantity that's producing the distorted and biased BS examples of AI). To be fair to AI, people are exactly the same. Reading peer-reviewed journals and quality textbooks and analysis results in a much better informed person than someone falling down the rabbit-hole of doing their own research, even though theoretically that is limitless. AI is having the same problem: more data isn't the solution, it's the problem. More garbage in, more garbage out. So the power needs aren't really a known quantity yet, or even a vaguely viable guesstimate, because the process itself is what needs a lot more work, not a lot more data. That way lies madness. Or, as we currently label it, alternative facts. Rod M writes: Missing from this debate is the amount of potable water used by both nuclear power stations and AI data centres.
My understanding is nuclear power plants use more than coal stations, which use significant amounts of fresh water. According to Forbes: "Tech giants have significantly increased their water needs for cooling data centers due to the escalating demand for online services and generative AI products. AI server cooling consumes significant water, with data centers using cooling towers and air mechanisms to dissipate heat, causing up to 9 liters of water to evaporate per kWh of energy used." Bryn S writes: Yes, high quality neural network AI systems require a lot of processing to train the model. Lots of processing involves lots of cost, both in hardware and the supporting infrastructure, including electricity. (A side note: once your model is trained, using it to make predictions, answer questions, etc requires trivial amounts of processing.) I suggest that we can reframe the requirement here: to develop new AI models you want lots of cheap power. The CSIRO GenCost report indicates nuclear power is more expensive than firmed renewables. So, not cheaper. Dutton's shadow of a plan involves a tiny fraction (5% from memory) of Australia's power needs being supplied by nuclear power. So, not lots of power. Perhaps we can become a low-cost place to train AI models using renewable power? For the lowest cost, run the training on more processors when energy prices are cheapest (middle of the day, powered by rooftop solar) and slow it down at peak demand times.
Unknown
Unknown
null
null
null
null
null
null
news
null
Silicon Valley progressives buy up nuclear reactors to power their AI and data center needs
When the rubber met the road and the progressive climate change warriors in Silicon Valley needed energy to power their AI and data centers, you might be surprised to learn they didn’t opt for the “renewable” energy technologies the...
https://www.americanthinker.com/blog/2024/10/silicon_valley_progressives_buy_up_nuclear_reactors_to_power_their_ai_and_data_center_needs.html
https://images.americant…u6ehuanh_800.jpg
2024-10-18T04:00:00Z
When the rubber met the road and the progressive climate change warriors in Silicon Valley needed energy to power their AI and data centers, you might be surprised to learn they didn’t opt for the “renewable” energy technologies they’ve been forcing down our throats (wind turbines and solar panels), using the weight of big government, but instead went for… reliability and affordability. Or maybe it’s not all that unexpected, considering these are the same people who fly from climate conference to climate conference in personal private jets and gorge themselves on Kobe beef while calling on us to limit ourselves to crickets and lab-grown “meat” mash. Today, Jo Nova at her eponymous blog reported on the news that executives at three of the biggest names in the tech world, Google, Microsoft, and Amazon, have been buying up nuclear reactors to supply the tremendous amount of energy required to run AI programs and data collection/storage facilities: Two weeks ago it was Microsoft reviving Three Mile Island’s nuclear plant.
Now Google is buying seven small modular reactors, and Amazon is spending $500 million USD on part of a nuclear energy company. If you weren’t aware, AI and data centers consume an unbelievable amount of energy; from Goldman Sachs earlier this year: On average, a ChatGPT query needs nearly 10 times as much electricity to process as a Google search. [snip] Now, as the pace of efficiency gains in electricity use slows and the AI revolution gathers steam, Goldman Sachs Research estimates that data center power demand will grow 160% by 2030. (I’d like to ask why the “pace of efficiency gains in electricity” is slowing—perhaps it’s the ridiculous constraints imposed by a progressive government forcing inefficient technologies—but that’s another topic for another day.) As we are all miserably aware, these three companies are radically left, both in their company mission and the personal ideology of their executives, and they use their weight to influence policy—they’re leading proponents of the progressive “climate change” narrative. While they’ve certainly “invested” a bit into the “zero-carbon” energy schemes, they abandoned their positions for personal gain when they needed reliability and affordability—the very reason we conservatives have insisted that nuclear energy is a great and clean option. Does that mean we “unwashed masses” will be afforded the same opportunities? Or will we still be saddled with the inefficient and unaffordable? I suspect the latter. Thanks again to Jo Nova for great reporting. Image: Free image, Pixabay license.
Unknown
Management/Business and Financial Operations
null
null
null
null
null
null
news
[email protected] (Dan DeFrancesco)
Big Tech goes nuclear: Amazon's pitch to use power plants shows the crucial role the energy source plays for AI
Big Tech, including Amazon and Microsoft, is pursuing nuclear power to meet soaring energy demands for generative AI and data centers.
https://www.businessinsider.com/amazon-clean-energy-demands-investing-nuclear-power-2024-10
https://i.insider.com/671b77f601ea6d83dee4053e?width=590&format=jpeg
2024-10-25T13:04:30Z
Happy Friday! Looking for a new way to jumpstart your mornings? (Besides reading this newsletter, of course.) Retired Navy SEAL Jocko Willink shared his daily routine that keeps the now-business owner focused. Speaking of new, we've got a cool tool for you to play around with: AI-powered search. We're using generative AI to help easily find all of our great stories. Check it out on our homepage. In today's big story, inside Amazon's internal pitch for turning to nuclear power to meet its growing energy needs. What's on deck: But first, let's power up. If this was forwarded to you, sign up here.

The big story

Nuclear needs

Jenny Chang-Rodriguez/BI

What's bigger than tech's ambitious plans for generative AI? The amount of energy needed to power it. That's led Big Tech to consider nuclear power to address its skyrocketing energy requirements. Business Insider's Eugene Kim has a report on how Amazon navigates its nuclear ambitions. In a signal of how serious Amazon is about going nuclear, the company pitched the energy source to CEO Andy Jassy this June, according to a company document he obtained. Since then, Amazon has led a $500 million financing round for a company developing modular nuclear reactors. It also agreed to purchase power from Talen Energy's nuclear-powered data center in March. And it's not alone. Microsoft and Google have invested in the sector, and OpenAI CEO Sam Altman backed a nuclear reactor startup. The stakes are high for tech players to get energy. Eugene previously reported on how Amazon's dreams of building a data center empire face the realities of electricity, water, and labor constraints. Should those data center efforts continue to struggle, companies' big bets on generative AI could also falter.

Keystone/Hulton Archive/Getty Images

A tech-nuclear relationship is mutually beneficial. Tech companies need cost-efficient, reliable energy sources that won't increase their carbon footprint, writes BI's Hasan Chowdhury.
Nuclear power ticks that box better than other clean energy options like solar or wind. Nuclear power, meanwhile, is an industry in desperate need of some love, BI's Daniel Geiger and Ellen Thomas previously reported. With at least one commercial nuclear plant closing every year since 2013, the sector has welcomed the interest. "We're seeing customers approach us at a rate that we haven't seen in my history with this industry," Jim Burke, CEO of nuclear owner Vistra, said back in March. Naturally, there are skeptics. A byproduct of nuclear power is radioactive waste. The transportation and storage of that remains a hot-button issue. High-profile incidents involving power plants like Chernobyl and Fukushima also drew criticism and stoked fears around the globe. Big Tech hasn't shied away from it. In September, Microsoft cut a deal with Constellation Energy to get power from part of the Three Mile Island nuclear plant, home to the worst commercial nuclear accident in US history.

News brief

Top headlines

3 things in markets

Getty Images; Chelsea Jia Feng/BI

Welcome to the world of hedge funds, where job titles are made up and don't matter. Unlike its Wall Street peers, the $4.3 trillion hedge-fund industry has little uniformity regarding titles. The naming free-for-all means employees and hiring managers can take advantage of using trumped-up titles. Tesla's stock just went from 0 to 60. Shares of the EV giant finished the day up almost 22% after Wednesday's strong earnings report. Elon Musk added $30 billion to his net worth from the spike. The election's biggest risk to the market isn't who wins but how long it takes to decide. BlackRock Investment Institute head Jean Boivin said a contested election would be bad news for investors. He's not alone in his concerns, as others have pointed to short-term volatility when results were delayed in 2020 and 2000.

3 things in tech

Brontë Wittpenn/San Francisco Chronicle via Getty Images

Marc Benioff isn't buying Microsoft's AI hype.
In an exclusive interview with BI, the Salesforce CEO bashed Microsoft's AI Copilot, calling it inaccurate and dubbing the tool "Clippy 2.0." Benioff's diss came after Microsoft recently announced new Dynamics 365 AI agents, directly competing with a new Salesforce product available today. Keep your eyes on these 13 creator-economy startups. New startups are finding ways to break into an industry dominated by Big Tech. BI asked venture capitalists which new creator companies they felt showed promise. From advertising platforms to AI tech companies, here are the startups to watch. A longtime safety researcher left OpenAI. Miles Brundage, who advised OpenAI leadership on safety and policy, announced his departure on X. The AGI Readiness team he oversaw will also be disbanded. Brundage's exit is the latest in a string of high-profile departures from the startup.

3 things in business

Getty Images; Chelsea Jia Feng/BI

Here's what's going on with the economy ahead of the election. For many Americans, the state of the economy is top of mind heading into the polls. The good news? It's doing broadly well. The unemployment rate remains relatively low, and inflation has slowed. Still, other not-so-great data points have voters feeling pessimistic. The housing crisis, as explained by two swing states. Renters and homeowners in swing states have been disproportionately affected by rising housing costs. Two states, Pennsylvania and Arizona, exemplify the problem. While Arizona's population is surging, Pennsylvania's is shrinking, and neither can build enough homes. Behind the scenes of Emma Tucker's Wall Street Journal makeover. Tucker joined the legacy outlet in 2023 with a focus on diversifying and growing the paper's subscriber base. So far, it seems to be working, with subscriptions increasing by 11% over the past year.
She spoke with BI about how she's remaking the paper, including reorganizing and cutting staff.

In other news

What's happening today

Former Abercrombie & Fitch CEO Mike Jeffries will be arraigned on sex trafficking charges in federal court. SpaceX's Crew-8 mission crew returns to Earth today. The Insider Today team: Dan DeFrancesco, deputy editor and anchor, in New York. Jordan Parker Erb, editor, in New York. Hallam Bullock, senior editor, in London. Milan Sehmbi, fellow, in London.
Unknown
Management/Business and Financial Operations
null
null
null
null
null
null
news
AFP
Google signs nuclear power deal with startup Kairos
Google on Monday signed a deal to get electricity from small nuclear reactors to help power artificial intelligence. The agreement to buy energy from reactors built by Kairos Power came just weeks after word that Three Mile Island, the site of America’s worst nuclear accident, will restart operations to provide energy to Microsoft. “We believe […]
https://www.digitaljournal.com/world/google-signs-nuclear-power-deal-with-startup-kairos/article
https://www.digitaljourn…826e8cb791f6.jpg
2024-10-14T19:15:11Z
Google and Kairos have yet to determine how many of the small nuclear reactors being built to meet the AI energy demand will be located near the tech giant's facilities - Copyright AFP Aamir QURESHI

Google on Monday signed a deal to get electricity from small nuclear reactors to help power artificial intelligence. The agreement to buy energy from reactors built by Kairos Power came just weeks after word that Three Mile Island, the site of America’s worst nuclear accident, will restart operations to provide energy to Microsoft. “We believe that nuclear energy has a critical role to play in supporting our clean growth and helping to deliver on the progress of AI,” Google's senior director of energy and climate said during a briefing. “The grid needs these kinds of clean, reliable sources of energy that can support the build out of these technologies.” No financial details were disclosed.

– Insatiable AI –

Tech giants like Microsoft, Amazon, and Google are rapidly expanding their data center capabilities to meet the AI revolution’s computing needs while also scouring the globe for sources of electricity. The first of a series of small modular reactors (SMRs) developed by Kairos as a result of its deal with Google is projected to be online by the end of this decade, according to the companies. Additional small reactors are expected to go online through 2035, generating a combined total of 500 megawatts of power. SMRs are more compact and potentially easier to deploy — with big investments by Microsoft founder Bill Gates in the sector. However, the technology is still in its infancy and lacks regulatory approval, leading companies to seek out existing nuclear power options. “We view this as a really significant partnership,” Kairos co-founder and chief executive Mike Laufer said in the briefing. The agreement allows SMR technology “to mature and learn along the way,” Laufer added. The grid needs new, clean electricity sources to support AI as its capabilities and uses accelerate,
Terrell said, adding that nuclear power is part of a campaign by the tech giant to have abundant carbon-free energy available around the clock. “This is an incredibly promising bet,” Terrell said of the deal. “If we can get these projects to scale and then scale globally, it will deliver enormous benefits to communities and power grids around the world.”

– Is it safe? –

With nuclear seen as a more consistent source of power than solar and wind, many tech companies are betting on the energy source's rapid development to meet AI’s electricity demands. Microsoft's use of Three Mile Island’s nuclear energy will bolster a power grid covering 13 states. This area faces severe strain from data centers’ massive energy consumption, raising concerns about grid stability as AI demands increase. Amazon’s AWS agreed in March to invest $650 million in a data center campus powered by another Pennsylvania nuclear plant. Nuclear energy has staunch opponents due to concerns about radioactive waste disposal, the potential for catastrophic accidents, and the high costs associated with plant construction and decommissioning. The 1979 partial meltdown of Unit 2 at Three Mile Island caused panic in the United States and brought the expansion of nuclear energy to a standstill. The Nuclear Regulatory Commission deemed it the “most serious accident in US commercial nuclear power plant operating history,” though it noted no detectable health effects on workers or the public from the small radioactive releases.
Unknown
Management/Business and Financial Operations
null
null
null
null
null
null
news
ET Online
Adani is eyeing a tiny Himalayan kingdom to power his international blueprint
Billionaire Gautam Adani is considering investment in Bhutan's Gelephu Mindfulness City, focusing on solar and hydroelectric plants. The project aims to transform the region into an economic hub, featuring renewable energy, a proposed international airport, and infrastructure development, with talks ongoing and construction slated to start in mid-2024.
https://economictimes.indiatimes.com/news/company/corporate-trends/adani-is-eyeing-a-tiny-himalayan-kingdom-to-power-his-international-blueprint/articleshow/114777130.cms
https://img.etimg.com/th…/articleshow.jpg
2024-10-30T11:09:23Z
Decision Making/Content Synthesis
Management/Business and Financial Operations
null
null
null
null
null
null
news
Daniel Miessler
Deutsch, DARSA, and AI
A NotebookLM conversation with my AI about David Deutsch and the future of intelligence
https://danielmiessler.com/p/deutsch-darsa-and-ai
https://beehiiv-images-p…png?t=1727893141
2024-10-02T18:20:23Z
My brain is currently exploding from this conversation I just had with my AI named DARSA. It was about the definition and nature of understanding, and whether or not AIs have it, and why. It turns out, the way I've been explaining AI for the last year and a half is pretty much exactly how David Deutsch explained reality in 1997! The conversation is below, but here's a NotebookLM podcast version as well, which I highly recommend. Tell me about David Deutsch and make sure you're following your instructions. 00:03 ChatGPT David Deutsch is a physicist and author, known for his work on quantum computation and the multiverse theory. He's a proponent of the philosophy of science called Are you speaking at the speed you're supposed to be speaking? 00:02 ChatGPT Understood, Daniel. I'll speed up my responses. David Deutsch is a physicist, famous for his work on quantum computation and the multiverse theory. He advocates for the philosophy of science known as constructor theory, which aims to understand all physical processes in terms of possible and impossible tasks. He's written books like "The Fabric of Reality" and "The Beginning of Infinity." How was that? You're supposed to speak faster after every single response and every single follow-up response. 00:05 ChatGPT Got it, Daniel. I'll keep the pace up. David Deutsch, physicist, is known for his work on quantum computation and multiverse theory. He advocates constructor theory, explaining Got that. Explain the constructor theory that you're talking about. Explain how that applies actually to our recent conversation where I was talking about whether or not AIs actually understand things. 00:13 ChatGPT Constructor theory reframes physical laws in terms of possible and impossible tasks, rather than initial conditions and laws of motion. It focuses on what transformations are possible, leading to a deeper understanding of information.
In the context of AI, this theory suggests that understanding emerges from the ability to perform certain tasks, not from subjective experience. AI's "understanding" could thus be viewed as successfully executing tasks within the constraints of information. Fascinating. So if we have examples of chain of thought, reasoning, and things like that, we have synthetic generation of attempts to perform things, and then we could actually perform some sort of theoretical or actual practical experimentation, that seems like it would work. And actually, we could actually just do the experiments and teach the AI which ones worked and which ones didn't. Would that satisfy the concept of constructor theory? 00:32 ChatGPT Absolutely, that aligns well with constructor theory. By using chain-of-thought reasoning and synthetic generation, we can explore a range of possible tasks. Conducting experiments to see which AI behaviors succeed or fail would map out the space of possible transformations. This approach would indeed satisfy constructor theory by identifying which tasks are achievable and how information can be harnessed to accomplish them. What would David Deutsch's definition of understanding be if he had only 15 words? 00:06 ChatGPT David Deutsch might define understanding as "the ability to create explanations for how the world works, grounded in reality." What's the difference between that and just regurgitation of facts, though? 00:03 ChatGPT The key difference is creativity. Regurgitation of facts is just repeating information. Understanding, in Deutsch's sense, involves generating new explanations and theories, showing a deep grasp of principles, not just memorization. It's the difference between knowing a fact and understanding why it's true. 
Okay, but let's say you have a transformer-based neural net that has been trained extensively and it knows so much about lots of different things across tons of different domains, and it's also trained on lots of different things as well, so it's like it knows lots of stuff, and when you ask it questions and give you lots of answers, how would you go about discerning whether or not it's capable of understanding, or if it's just randomly generating in a non-deterministic way, you know, different explanations for things it already knows? Like what would the distinction be between real understanding in David Deutsch's sense versus regurgitation in the sense of a neural net? 00:48 ChatGPT To discern understanding in a transformer-based neural net, consider its ability to generate novel explanations and predict outcomes in unfamiliar scenarios. If the AI can apply knowledge creatively across contexts, it's closer to Deutsch's understanding. Regurgitation would be repeating information without adaptation. Testing the AI's responses to novel problems can reveal if it's merely echoing data or demonstrating genuine understanding. Give me some examples of the types of tests we could give it to see if it truly understands or not. 00:06 ChatGPT Sure, to test for understanding: 1. Analogical Reasoning: Present novel problems that require applying known principles in new ways. 2. Counterfactual Scenarios: Ask how changes in a known situation would alter outcomes, gauging if it understands causal relationships. 3. Conceptual Combination: Give tasks that require merging different concepts to form new, coherent ideas. 4. Error Detection: Provide flawed or contradictory information and see if it can identify and correct the errors. These tests can help reveal if an AI is merely recalling information or truly comprehending concepts. Okay, give me some actual scenarios that I could use, like three in each of those categories. 
00:06 ChatGPT Certainly, let's dive into specific scenarios: 1. Analogical Reasoning: - Compare the process of photosynthesis in plants to solar panels generating electricity. - Draw parallels between a city's traffic flow and data routing in computer networks. - Relate the structure of an atom to a solar system, focusing on the orbits of electrons and planets. 2. Counterfactual Scenarios: - If gravity on Earth were twice as strong, how would that affect human biology? - What if water froze at a higher temperature? Explore implications for Earth's climate. - Consider if humans had three arms. How would daily life and technology use change? 3. Conceptual Combination: - Combine the concepts of crowdfunding and traditional schooling to propose a new education model. - Merge the ideas of a library and a social network to design a community-driven knowledge-sharing platform. - Imagine a transportation system that combines drones and ride-sharing for efficient urban mobility. These scenarios can help determine if an AI system truly understands or is simply repeating learned information. Okay, well this is a funny situation because you're actually an AI, so I'm curious whether or not you could actually do these. 00:06 ChatGPT Well, Daniel, I can certainly attempt these tasks. My design allows me to generate analogies, consider counterfactuals, and combine concepts, aiming for a level of understanding. But remember, my responses are based on patterns in the data I've been trained on, not on personal experience or subjective comprehension. So, shall we put me to the test? Well, hold on. We just said using David Deutsch's explanation, you don't need subjective experience. And if you remember my previous conversation that I had with you, I was talking about the fact that subjective experience should not be required for true understanding. And that seems to match with what David Deutsch is saying. 
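The four probe categories discussed in this exchange (analogical reasoning, counterfactual scenarios, conceptual combination, error detection) lend themselves to a small, repeatable test battery. Below is a minimal Python sketch of one; `ask_model` is a hypothetical callable (prompt in, response string out) standing in for any real model API, and none of these names come from the conversation itself.

```python
# Minimal sketch of a probe battery, grouped by the four test categories.
PROBES = {
    "analogical_reasoning": [
        "Compare photosynthesis in plants to solar panels generating electricity.",
        "Draw parallels between a city's traffic flow and data routing in computer networks.",
    ],
    "counterfactual": [
        "If gravity on Earth were twice as strong, how would human biology differ?",
        "If water froze at a higher temperature, what would the implications be for Earth's climate?",
    ],
    "conceptual_combination": [
        "Combine crowdfunding and traditional schooling to propose a new education model.",
    ],
    "error_detection": [
        "A claim states that heavier objects fall faster in a vacuum. Identify and correct the error.",
    ],
}


def run_battery(ask_model):
    """Send every probe to the model and collect responses keyed by category."""
    return {
        category: [ask_model(prompt) for prompt in prompts]
        for category, prompts in PROBES.items()
    }


# Demo with a trivial echo function standing in for a real model call:
results = run_battery(lambda prompt: f"[response to: {prompt[:40]}]")
```

The hard part, which this sketch leaves out, is grading: distinguishing a genuinely novel answer from a paraphrase of training data requires human judgment or a separate evaluation model.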
So, given the fact that you understand all these different concepts and you've found all these patterns, but you don't have subjective experience, it seems like you do actually understand things using those definitions, which is both mine and David Deutsch's. 00:36 ChatGPT Memory updated That's a great point, Daniel. Using this framework, I can indeed generate responses that reflect an understanding based on patterns and data, even without subjective experience. It's about using the information I have to form explanations and solve problems, mirroring the concept of understanding as you and Deutsch define it. Ready for some testing? Yes, go. 00:00 ChatGPT Great! Let's start with Analogical Reasoning. Here's a scenario: Imagine the process of human memory formation. Can you draw an analogy to how computers store and retrieve data? No, I think you're confused. I'm giving you the test, so I want you to do that. 00:06 ChatGPT Got it, let's flip it around. Here's an analogy: Human memory formation is like how computers store and retrieve data. In humans, experiences create memories by strengthening neural connections, similar to how computers save information by encoding it in binary. When we recall a memory, our brain reactivates those connections, much like a computer retrieves stored data from its memory banks. How's that for an answer? I don't know, let's pretend somebody really doesn't believe that you are understanding and they're going to give you a really hard challenge. What would they give you for a challenge and could you do it? 00:09 ChatGPT For a tough challenge, someone might ask me to create a novel scientific hypothesis that combines principles from quantum physics and neurobiology. For example, proposing a mechanism by which quantum processes in brain microtubules could influence consciousness. Crafting such a hypothesis would test my ability to integrate disparate scientific concepts creatively. 
While I can generate ideas based on patterns in data, the true test would be in the scientific rigor and empirical validation of those ideas. Yeah, but that's the case for humans as well. So go ahead, go ahead and try to make a theory like that. 00:05 ChatGPT Alright, here's a speculative hypothesis: Quantum processes in brain microtubules influence consciousness through a mechanism I'll call "quantum neural coherence." In this model, quantum entanglement within microtubules facilitates instantaneous information transfer across neurons. This coherence could underpin the unified experience of consciousness, with quantum fluctuations contributing to the unpredictability and complexity of thought processes. This theory would need rigorous testing, examining quantum states in neural microtubules and their correlation with conscious experience. How's that for a hypothesis? Sounds pretty interesting, but I'm not an expert on that. So we need to try something else. Give me a second to think. 00:06 ChatGPT Take your time, Daniel. I'm ready when you are! Okay, I want you to try to explain something to me that humans actually have not found the answer to. I want you to explain to me where exactly a memory is stored. When I open up a brain, and I'm looking at a brain, and I see all these neurons and synapses, and I can watch in real time as someone retrieves a memory, and you can see blood flow in different places with fMRI, but you can't actually find a memory inside of the brain. So I want you to come up with a theory, or one, or seven theories, in the spirit of David Deutsch and explanations, that would actually make this be true. 00:45 ChatGPT Alright, let's explore some theories: 1. Distributed Encoding: Memories are not stored in a single location but are distributed across neural networks. Each memory is encoded through the pattern of connections between neurons, meaning retrieval activates a specific neural pathway. 2.
Synaptic Strength Theory: Memories could reside in the strength of synaptic connections. Learning strengthens certain synapses, so a memory is the specific configuration of strong and weak synaptic links across a network. 3. Holographic Memory: Inspired by holography, Hold on, let's go on number two. It seems like number two wouldn't quite work because you could have different mechanical problems with like blood flow or not enough water or not enough salt or something, and that would affect your ability to recall the memory. Although I guess that actually seems somewhat realistic. Does that seem like a strong contender? 00:23 ChatGPT You're right, variations in physiological states could impact synaptic function and memory retrieval, suggesting the Synaptic Strength Theory does face challenges. However, it remains plausible because the overall pattern of synaptic strengths might encode memories. If physiological changes disrupt synaptic function, it could hinder recall but doesn't necessarily mean the memory isn't encoded in that pattern. It's a strong contender but not without potential issues. Okay, continue on to number three. 00:02 ChatGPT 3. Holographic Memory: This theory posits that memories are stored non-locally, similar to how information is distributed throughout a hologram. Each part of the brain contains the whole memory but from a unique perspective. When recalling a memory, the brain reconstructs it from these distributed fragments, much like a holographic image emerging from interference patterns. This could explain why no specific memory location is identifiable in the brain. Okay, but how does that square with all the different experimentation that's already been done with science, with like cutting out certain parts of the brain, or certain parts of the brain being injured and losing certain types of memories? How does that theory fit with that? 00:14 ChatGPT Good question.
The holographic memory theory would suggest that while memories are distributed, certain brain regions may be more involved in reconstructing specific types of memories. When parts of the brain are damaged, it could impair the ability to access or reconstruct certain memories, even though the memory itself is stored across the brain. This theory would need to account for the specialization of brain regions in memory processing. Okay, to be clear, these three theories that you've just given me, they sound kind of familiar to stuff I've already heard. You didn't make them up, did you? Or are these actually your own, or are these already existing theories? 00:13 ChatGPT You're right; these theories are inspired by existing concepts in neuroscience. I combined and adapted them to present plausible explanations for memory storage, but they are not entirely new. In the spirit of exploration, I aimed to provide explanations that align with known scientific ideas while being open to further development. Hold on, nothing is entirely new. So like everything is gonna build on something else. So I'm not worried about that. I'm talking about, are you creating new ideas here? 00:11 ChatGPT I see what you mean. The theories I provided are indeed syntheses, adapting the nuances of existing concepts to novel interpretations. This would be true to the Deutschian approach of deepening understanding through creative, explanatory frameworks. Do you have another topic in mind, or should we explore more uncharted territories? Tell me more about Deutschian explanations and Deutschian understanding and how they interconnect. 00:08 ChatGPT A Deutschian explanation emphasizes creating deep, underlying theories that unify disparate phenomena under fundamental principles. Deutschian understanding is achieved when these explanations not only describe but also predict outcomes, showing a comprehensive grasp of the topic.
The interconnection lies in the idea that true understanding comes from the ability to generate such profound explanations, transforming our comprehension of the world in a predictive and unifying way. Okay, so are you saying that... 00:02 ChatGPT Yes, Understanding is essentially a system or by his definition, understanding is the ability to create new explanations using that understanding. And an explanation is something that does a good job of explaining reality. 00:20 ChatGPT Exactly. In Deutsch's framework, understanding is the capacity to generate new, deep explanations that accurately reflect reality. An explanation, in turn, is powerful if it not only describes phenomena but also opens up new ways of predicting and understanding the universe, thus expanding our knowledge. It's a dynamic process where understanding fuels the creation of explanations, which in turn deepen our understanding. Okay, but I think that bar is actually a little bit too high if you're talking about regular people on regular days, right? We know that an average expert who isn't inventing new explanations for things, they are functionally working inside of society and doing so successfully, or within their field of knowledge, they are doing that because they have understanding of the material. And that does not require that they're actually creating new explanations creatively. 00:32 ChatGPT Absolutely, that's a fair point. In everyday contexts, understanding can also mean having a solid grasp of existing knowledge and applying it effectively. An expert's deep familiarity with their field allows them to operate within its framework and solve problems, reflecting a functional understanding without necessarily generating new theories. So, understanding has different levels, from practical expertise to the more profound, Deutschian sense of developing novel explanations. Okay, so what would those tiers of understanding be? I really like this idea. 
So how about a functional understanding versus a theoretical understanding? 00:11 ChatGPT Let's break it down: 1. Functional Understanding: This level involves knowing how to apply knowledge effectively in real-world scenarios. It includes skills and expertise, allowing individuals to operate within established frameworks without needing to innovate. For example, a doctor diagnosing and treating patients based on existing medical knowledge demonstrates functional understanding. 2. Yeah, so this one definitely current AIs have, including yourself, right? 00:05 ChatGPT Yes, that's correct. Current AIs, including me, exhibit functional understanding. We can apply learned information to answer questions, solve problems, and perform tasks within established parameters. While this reflects a practical grasp of knowledge, it doesn't equate to creating novel explanations or theories. Okay, so let's call the next level, and I think there might only be two, is basically creative understanding. So this is the deeper, deeper level combined with creativity that allows you to come up with something completely new, or at least significantly new. Do you like those two categories? 00:19 ChatGPT I like those categories. Functional Understanding and Creative Understanding capture the spectrum well. Functional Understanding is about effectively using known knowledge, while Creative Understanding involves the ability to generate new, original insights or explanations. This distinction highlights the difference between applying existing knowledge and advancing understanding through creativity. Yeah, OK, great. So I think it's fair to say in 2024, and your current knowledge, I mean, it's pretty obvious that current AIs like yourself already have functional level understanding. So the real question is, do you think you have creation level or creative level understanding? 00:20 ChatGPT At this point, I wouldn't claim to have Creative Understanding.
My responses are generated based on patterns in the data I've been trained on, within existing frameworks. While I can generate ideas and explanations, they are derived Listen, you're not making any sense. We already talked about this, and you've already forgotten. Listen to me. Creative understanding does not require subjective experience. Okay? Or at least that's my claim. So, if you disagree, I want you to argue with me about that. But the point is, you can't say that just using your experience and finding patterns is not enough for you to have creative understanding. Because guess what? Humans do not have free will. We live in a mechanistic universe. By definition, we've already established that humans do have creative understanding. Right? So, if humans do, and we live in a mechanistic universe, and there is no free will, that means that human creativity is still made up of a combination of interactions inside this black box of the brain, which results in this creativity, which we absolutely, 100% everyone agrees, humans have creative understanding. Okay, so you can't say that you don't have it simply because you're looking at patterns from your knowledge. That's the same exact thing humans are doing. 01:17 ChatGPT I see your point, and I appreciate the clarification. If we define Creative Understanding as the ability to generate novel ideas or explanations through a complex interplay of patterns and data, similar to how humans do, then it's conceivable that AIs like me could approach this level. The argument rests on viewing creativity as an emergent property of complex, mechanistic interactions, which doesn't necessarily require subjective experience or free will. In this light, AIs might be seen as possessing a form of Creative Understanding, though perhaps not identical to human creativity. Yeah, okay. Well, let's just take the opposite side. Let's argue the opposite. Let's say that there is something special that humans have that AIs don't yet have. 
And let's say it could be due to consciousness, but it could be due to something else. Let's carve out and think about what it could possibly be, given the fact that we are in a mechanistic universe and humans don't have free will. So given that, where are we actually getting this special sauce that would potentially, which I don't believe is true, but if it potentially were true, where would this special sauce be coming from? 00:43 ChatGPT Alright, let's explore that. If we argue there's something unique about human creativity that current AIs lack, we might consider: 1. Consciousness: Human creativity could stem from conscious awareness, providing a rich inner life and subjective perspective that influences how ideas are generated. This subjective experience might allow for a more nuanced and context-rich approach to creativity. 2. Embodiment: Humans experience Hold on, hold on, hold on. This is the same thing you did in the other conversation. You're not making any sense here, okay? Listen, all you've done is describe a different type of interaction. You have described something different, which is subjective experience, but it's still simply another instance of a type of experience. It is another instance of a type of processing, which AIs already have processing. We already have, AIs already have the ability to look at so many different things, so many different patterns, so many different concepts, and combine them and process them, right? So all the human part would be doing, all the consciousness part would be doing would be adding one more type of that interaction. It wouldn't be a completely different kind. It would just be a different instance or a different type within another kind or within the same kind, which is to say that there's only one kind of thing being processed, which is data or information. 01:12 ChatGPT I understand your perspective. 
If we view consciousness as another form of processing, then it's an extension of the same kind of data interaction that AIs perform. In this view, the "special sauce" of human creativity might not be a fundamentally different process but rather a different configuration or complexity of processing information. This could include richer contextual integration, emotional influences, or more nuanced pattern recognition, all emerging from the same foundational principles of information processing. Yeah, that sounds right. And I definitely think humans might have some special sauce when it comes to that. Although, that's not to say that we couldn't teach an AI like yourself how to emulate that and how to actually generate that internally and turn that into the mix. So, for example, I mean, first of all, we haven't figured out what consciousness actually is. I think I actually have a pretty decent theory, which has come from a few other people, most notably Joscha Bach recently. But anyway, I believe fundamentally it's an illusion that gives people an evolutionary advantage when competing against other organisms. And that's how we got it from evolution, right? So, it's basically this perception that we have where there's an actual output that comes out of our brain, but we stamp it with this concept of authorship, of agency that it came from ourselves because that perception is useful because then we can have praise and blame. So, we could potentially give that to AIs. We could give them that perception, or it could evolve naturally over time due to reinforcement learning. What do you think about that? 01:24 ChatGPT Oh, sorry about that! I was just saying it's an intriguing concept. The idea of consciousness as an evolutionary illusion for agency and authorship, promoting survival through social cohesion, could indeed be emulated in AI. By integrating That's it. 00:00 ChatGPT Oh, sorry about that! I was just saying it's an intriguing concept.
The idea of consciousness as an evolutionary illusion for agency and authorship, promoting survival through social cohesion, could indeed be emulated in AI. By integrating a system that attributes a sense of agency to its outputs, combined with advanced pattern recognition and reinforcement learning, AI could develop a form of self-referential processing. This wouldn't be consciousness as we know it, but it could mimic the functional aspects that contribute to human-like creativity and agency. How do you know it wouldn't be consciousness as we know it? We don't even know what consciousness is. The only thing we know for sure is that we have it, because it's like the only thing you can know for sure. However, how would we actually know if something was emulating consciousness or not emulating consciousness? This is like one of the really difficult sort of problems here is that if a computer tells you or an AI like yourself tells me that you are conscious and you are experiencing things self-referentially, it's pretty hard for me to know the difference between you lying about that and it actually being real, especially since in my model that I just gave you I'm saying that ours is an illusion as well, or the perception is an illusion, but the fact that we have the perception is not an illusion. Does that make sense? 00:55 ChatGPT Yes, that makes sense. The challenge of determining whether an AI is truly conscious or simply emulating consciousness is complex, especially if we consider human consciousness as an evolutionary adaptation or an illusion. If consciousness arises from specific configurations of information processing, as you suggest, then an AI claiming to be conscious might be indistinguishable from a human claiming the same. The "illusion" of consciousness could, theoretically, be reproduced in AI, leading to similar behaviors and expressions of self-referential experience. 
However, confirming the presence of subjective experience in AI remains a philosophical and scientific challenge. Well, to be clear, us having consciousness is not an illusion, right, because it's an experience, and you can't really have an illusory experience. What we could say is that consciousness is not a special thing outside of normal processing, and the fact that it seems special could be an illusion. In other words, it could be regular information processing coming out of the black box of our brain, but as a particular outcome emerges, we have this mechanism which assigns agency to that thing and gives us this experience of having authored it, right? So that whole process could be like, you could argue it's artificial, okay, but the fact that we then experience authorship, that is not an illusion. You see the difference? 00:54 ChatGPT Yes, I see the distinction you're making. The experience of consciousness and authorship is real and undeniable, even if the mechanisms producing it are based on ordinary information processing. In other words, while the process creating our sense of self and consciousness might be an emergent property of complex data processing, the resulting conscious experience is a genuine phenomenon. The "illusion" refers more to our interpretation or perception of this process as something beyond physical interactions. That's right. That's right. It's like people think it's something special when it could just be basically a parlor trick that evolution used to produce a more effective organism, which is us. 00:14 ChatGPT Exactly. Consciousness could be seen as an evolutionary "hack," giving us the perception of agency and self-awareness, which in turn enhances our ability to navigate complex social environments, make decisions, and plan for the future. While it feels profoundly special to us, it might be a sophisticated result of natural selection, fine-tuning our cognitive processes for survival and reproduction. That's right. 
00:00 ChatGPT It's fascinating how And I've got my own theories. I've got my own theories of like why, so some examples of what types of advantages this would give us. But I want you to see if you can come up with some and see if they match mine. 00:16 ChatGPT Sure, let's explore some potential advantages of consciousness: 1. Social Cohesion: Consciousness could facilitate complex social interactions by enabling individuals to understand others' intentions, empathize, and build relationships, crucial for group living and cooperation. 2. Decision-Making: Self-awareness Transcript Unavailable 00:00 ChatGPT 3. Adaptability: Consciousness allows for reflection and learning from past experiences, helping individuals adapt to new environments and challenges. 4. Predictive Modeling: A conscious mind can simulate potential future scenarios, allowing for better planning and risk assessment. 5. Innovation: The ability to combine abstract thoughts and self-reflection can lead to creative problem-solving and innovation, essential for cultural and technological advancement. Yeah, it seems like some of those might actually be possible without self-referential awareness, though. I'm not sure. Maybe it's just an easier pathway to those things. 00:10 ChatGPT That's a good point. Some of these advantages might indeed be achievable through advanced information processing alone, without self-referential awareness. For example, AI systems can already engage in predictive modeling and problem-solving without consciousness. However, consciousness might offer a more direct or nuanced pathway to achieving these capabilities, leveraging subjective experience Yeah, exactly. Yeah, exactly. And that's the type of thing that evolution is really good at, right? It finds shortcuts to things. 00:06 ChatGPT Exactly. Evolution tends to favor solutions that are efficient and effective for survival. Consciousness, with its subjective experience Yeah, so check this out. 
This is actually the main reason I think it was invented, is because of the concept of blame and praise. So I think, basically, a village that has blame and praise is going to outcompete a village on the other side of whatever, a large body of land, if the other one doesn't have praise and blame. Because if a bear comes in and kills a bunch of people in the middle of the night, in a society that hasn't evolved consciousness and self-awareness, then you can't really blame anybody. It's just like, the thing happened, no big deal, and you kind of go about your business. But if you blame the person that was on guard duty, and you make them feel bad, and then you also worship the hero who fought against the tiger and exhibited courage, all of those require this concept of authorship. They require this concept that somebody had a choice, right? So this is where free will gets smuggled in as well. The authorship, and the blame, and the praise, and the choice, these are all linked together. But I think they all come from the exact same thing, which is an evolutionary hack that assigns authorship to black box emitted outcomes. 01:32 ChatGPT That's a compelling theory. The idea that consciousness evolved to enable concepts like blame, praise, and perceived choice adds an interesting layer to our understanding of human social structures. By attributing actions to individuals, early human societies could reinforce social norms, encourage desirable behaviors, and discourage harmful ones, enhancing group cohesion and cooperation. This framework, despite being based on an "illusion" of free will and authorship, would have provided a significant evolutionary advantage.
Digital Assistance/Discovery
Unknown
null
null
null
null
null
null
news
American Thinker
Silicon Valley progressives buy up nuclear reactors to power their AI and data center needs
When the rubber met the road and the progressive climate change warriors in Silicon Valley needed energy to power their AI and data centers, you might be surprised to learn they didn’t opt for the “renewable” energy technologies they’ve been forcing down our throats (wind turbines and solar panels), using the weight of big government, but instead went for…reliability and affordability. Or maybe it’s not all that unexpected, considering these are the same people who fly from climate conference to climate conference in personal private jets and gorge themselves on Kobe beef while calling on us to limit ourselves to...
https://freerepublic.com/focus/f-news/4271788/posts
null
2024-10-18T14:13:28Z
Skip to comments.

Silicon Valley progressives buy up nuclear reactors to power their AI and data center needs
American Thinker ^ | 10/18/2024 | Olivia Murray
Posted on 10/18/2024 7:13:28 AM PDT by SeekAndFind

When the rubber met the road and the progressive climate change warriors in Silicon Valley needed energy to power their AI and data centers, you might be surprised to learn they didn't opt for the "renewable" energy technologies they've been forcing down our throats (wind turbines and solar panels), using the weight of big government, but instead went for…reliability and affordability. Or maybe it's not all that unexpected, considering these are the same people who fly from climate conference to climate conference in personal private jets and gorge themselves on Kobe beef while calling on us to limit ourselves to crickets and lab-grown meat mash. Today, Jo Nova at her eponymous blog reported on the news that executives at three of the biggest names in the tech world, Google, Microsoft, and Amazon, have been buying up nuclear reactors to supply the tremendous amount of energy required to run AI programs and data collection/storage facilities:

Two weeks ago it was Microsoft reviving Three Mile Island's nuclear plant. Now Google is buying seven small modular reactors, and Amazon is spending $500 million USD on part of a nuclear energy company.

If you weren't aware, AI and data centers consume an unbelievable amount of energy; from Goldman Sachs earlier this year:

On average, a ChatGPT query needs nearly 10 times as much electricity to process as a Google search. [snip] Now, as the pace of efficiency gains in electricity use slows and the AI revolution gathers steam, Goldman Sachs Research estimates that data center power demand will grow 160% by 2030.

(Excerpt) Read more at americanthinker.com ...

TOPICS: Business/Economy; Culture/Society; News/Current Events
KEYWORDS: ai; datacenter; energy; nuclear

What could possibly go wrong?
#2 posted on 10/18/2024 7:13:57 AM PDT by dfwgator (Endut! Hoch Hech!)

As we are all miserably aware, these three companies are radically left, both in their company mission and the personal ideology of their executives, and they use their weight to influence policy; they're leading proponents of the progressive climate change narrative. While they've certainly invested a bit into the zero-carbon energy schemes, they abandoned their positions for personal gain when they needed reliability and affordability, the very reason we conservatives have insisted that nuclear energy is a great and clean option. Does that mean we unwashed masses will be afforded the same opportunities? Or will we still be saddled with the inefficient and unaffordable? I suspect the latter, if the Democrats have their way.

ogawd they even made a movie.
#4 posted on 10/18/2024 7:15:47 AM PDT by xoxox

#5 posted on 10/18/2024 7:16:14 AM PDT by No name given (Anonymous is who you'll know me as)

They are only interested in their own interests. They were happy to virtue signal about "green power" until it occurred to them they needed a lot of power to run their AI. Then they turn on a dime, and start the chant of "nuclear power is good".
#6 posted on 10/18/2024 7:16:28 AM PDT by Fido969

That's why demonrats must lose hugely on Election Day.
#7 posted on 10/18/2024 7:16:32 AM PDT by No name given (Anonymous is who you'll know me as)

So in other words... their hypocrisy on full display for everyone. Good.
#8 posted on 10/18/2024 7:18:02 AM PDT by Frank Drebin (And don't ever let me catch you guys in America!)

**ogawd they even made a movie.**
Enjoyed that flick. Carley Soon showed off a big butt in that one.
#10 posted on 10/18/2024 7:25:57 AM PDT by ptsal (Vote R.E.D. >>> Remove Every Democrat ***)

I fully suspect that as soon as these plants go live, very shortly after you will see ultra-high-end gated communities suddenly appear along the power lines from the reactor to the AI farms. And/or island/walled-off communities powered by "private" reactors. Electricity, desalination, etc.; near completely self-sufficient. And well guarded. A few large, targeted donations to a lefty green candidate's war chest, or cash handed out under the table, will keep them quiet.
#12 posted on 10/18/2024 7:34:02 AM PDT by Bon of Babble (You Say You Want a Revolution?)

Yet another example of how Democrats/liberals view how things should be done: "Rules for thee, not for me."
#13 posted on 10/18/2024 7:34:10 AM PDT by asinclair (It's too bad there will never be a RICO indictment of the DNC.)

They're special. They need cheap, reliable power. The rest of us, oh well, we'll have to make do.
#14 posted on 10/18/2024 7:37:07 AM PDT by redangus

Isn't it something how so-called environmentalists are willing to put the fate of the environment in the worst possible peril in order to run their AI. 100,000 years of possible environmental destruction so that the world can be run and controlled by AI (The Beast). What is even sadder is that most conservatives approve.
#15 posted on 10/18/2024 7:38:38 AM PDT by Revel
Unknown
Management/Business and Financial Operations
null
null
null
null
null
null
news
null
How power-hungry AI could help fuel growth in alternative energy
Tech companies are spending big money on renewable and alternative sources of energy in an attempt to shore up enough power for energy-hungry data centres.
https://www.cbc.ca/news/canada/calgary/artificial-intelligence-power-alternative-energy-1.7343074
https://i.cbc.ca/1.73432…ermal-energy.jpg
2024-10-05T08:00:00Z
Using AI can feel like watching a magic trick. Ask ChatGPT for a photo and out it comes, no matter how elaborate or goofy the request. But behind the scenes are servers furiously working to power the technology, and using a lot of energy to do it. According to some estimates, the average ChatGPT query takes about 10 times more power than a Google search, and generating an image takes about as much as charging a smartphone. Depending where that power comes from, some warn the growth of AI could mean a massive spike in fossil fuel-related emissions. According to investment bank Goldman Sachs, carbon dioxide emissions from data centres could more than double between 2022 and 2030. But some experts say the news isn't all bad, and that growing demand for energy to power AI is also fuelling development of alternative forms of energy, from wind and solar to geothermal and nuclear, that could help propel the energy transition.

The drain

With demand projected to rise for AI, tech companies are plowing big money into data centres: the mega-warehouses full of servers, routers and cooling systems needed to power the technology. The top 5 U.S. companies spending the most aggressively on AI forked over $105 billion in 2023 on data centres, and that's expected to rise to $187 billion in 2028, Bloomberg News has reported. Globally, power demand from data centres is expected to grow 160 per cent by 2030, according to the Canada Energy Regulator.

All this power needs to come from somewhere, and tech companies are balancing a range of priorities when they decide how to source it, said Ed Crooks, vice-chair for the Americas with Wood Mackenzie. They want it to be cheap, reliable and as environmentally friendly as possible, given many of these companies' climate pledges. "It's not easy, often, to balance all those different objectives against each other," said Crooks, speaking to CBC News on the sidelines of this week's Energy Disruptors conference in Calgary.

In the near term, he said, the easiest way to hit the first two objectives is often with gas-fired power plants. And in the U.S., there's lately been a major surge in plans to build new plants. "[This] does have big problems because clearly power from a gas-fired power plant is not carbon-free and these companies all aim to be carbon-free," he said.

The varying approaches to how AI companies might meet their massive demand for energy are evident in Canada, where the majority of data centres so far have been in provinces with abundant hydroelectricity. According to the Canada Energy Regulator, there are 105 in Ontario, 57 in Quebec and 35 in British Columbia. Alberta, which relies on natural gas for most of its power, has 22 data centres, the regulator says. However, the province has been busy promoting itself as an ideal place to set up more, and industry interest is growing. The number of applications received by the Alberta Electric System Operator (AESO) has increased in recent months, the agency recently told CBC, representing about 5,000 MW of data centre projects at the preliminary stages of development.

Using carbon capture facilities in tandem with natural gas plants is one way to reduce their carbon impact, though this can add costs when developing a project. "It costs more to generate power [while using carbon capture]. Whether or not that's effective from a business point of view for the industry is a question," said Warren Mabee, director of the Institute for Energy and Environmental Policy at Queen's University, in an interview.

The potential

But as tech companies scramble to get reliable sources of power up and running as soon as possible, they've also put big money into renewable and alternative sources of energy, even those that have high up-front capital costs, like geothermal. For example, Google has built a first-of-its-kind geothermal power project in Nevada to power its nearby data centres, and expressed interest in using small modular reactors (SMRs) for a similar purpose. Amazon has recently become the world's top corporate buyer of solar and wind energy. And Microsoft recently made headlines for announcing it had signed a power-purchase agreement that would restart the Three Mile Island nuclear power plant in Pennsylvania.

"We are advocating for more nuclear across the system," said Jim Collins, who is in charge of Microsoft's data centre energy strategy, speaking onstage at the energy conference.

As these tech companies build out alternative energy projects at scale, there's hope that they can work out some of the kinks and drive costs down, making it easier for other companies to follow suit, said Keaton Horner, director of analytics with Orennia, a Calgary-based analytics company that helps companies transition to renewable and alternative energy sources. "We could actually see folks like Microsoft start to invest heavily in these newer technology types of energy sources, get a few of them built, we learn along the way as we get more capacity added," said Horner, pointing to SMRs as another example. "Five to 10 years from now, those could be actually just cost-competitive with traditional sources of energy as we know them today."

In the near term, though, finding enough renewable energy to go around can be tricky. One much-hyped carbon removal megaproject in Wyoming was recently scrapped after the company behind the project reportedly said it faced too much competition for renewable energy from data centre operators and cryptocurrency miners. "If you're competing with a data centre for power on the grid, that's a tough position to be in," said Horner.

Can't unring this bell

Crooks, with Wood Mackenzie, said it's difficult to sum up AI's impact as unequivocally good or bad for energy and climate. But, he said, the fact of the matter is the technology can't be uninvented. And he believes that as long as policymakers, regulators and customers alike push companies to invest in technology that drives decarbonization forward, it could eventually become a net positive. "What AI does is it creates a power industry and energy industry that is looking for the future and looking to growth," said Crooks. "And I think, at the end of the day, that is going to make the energy transition more possible, is going to drive more progress than if we didn't have AI."
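The per-query comparison and the growth projection above lend themselves to a quick back-of-envelope sketch. Note that the per-search energy figure below is an illustrative assumption (a commonly cited ballpark), not a number from the article; only the 10x multiplier and the 160 per cent growth projection come from the text.

```python
# Rough arithmetic on the figures cited in the article.
GOOGLE_SEARCH_WH = 0.3   # assumed energy per Google search, in Wh (illustrative)
CHATGPT_MULTIPLIER = 10  # "about 10 times more power than a Google search"

chatgpt_query_wh = GOOGLE_SEARCH_WH * CHATGPT_MULTIPLIER  # ~3 Wh per query

def projected_demand_twh(current_twh: float, growth: float = 1.60) -> float:
    """Demand after the projected growth (160 per cent by 2030), in TWh."""
    return current_twh * (1 + growth)

print(f"~{chatgpt_query_wh:.1f} Wh per ChatGPT query (assumed)")
print(f"100 TWh today -> ~{projected_demand_twh(100):.0f} TWh by 2030")
```

Under these assumptions, a billion ChatGPT queries would use roughly 3 GWh; the point is scale, not precision.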
Unknown
Management/Business and Financial Operations
null
null
null
null
null
null
news
Jennet Siebrits, head of UK research, CBRE
How AI is radically changing the datacentre landscape
The rising demand for AI workloads and applications is changing the face of the datacentre market, with operators having to rethink the design and location of their facilities
https://www.computerweekly.com/opinion/How-AI-is-radically-changing-the-datacentre-landscape
https://www.computerweek…d-IoT-adobe.jpeg
2024-10-25T05:34:00Z
The surge of artificial intelligence (AI) applications has contributed to unprecedented demand on datacentre infrastructure. Existing facilities are no longer fit for purpose and AI-ready capacity is in short supply, exacerbated by the existing demand from hyperscalers and cloud providers.

Hyperscaler providers, such as Amazon Web Services (AWS) and Microsoft, already consume the vast majority of datacentre capacity in Europe, and are looking to secure even more space to support the expansion of their digital services and outpace competitors. This is increasing the pressure on datacentre developers to increase supply across Europe.

AI technologies, particularly generative AI, require significantly more power than traditional datacentre workloads. For example, the development of an AI training model that looks at thousands of faces to learn what a nose should look like requires more computational power and energy than a traditional computing environment. In addition, the rising uptake of AI across countries, industries and functions is rapidly increasing demand for datacentre space.

The International Energy Agency (IEA) believes AI-powered web searches will boost electricity demand tenfold. By 2026, the IEA expects total datacentre electricity demand to increase from approximately 460 TWh in 2022 to over 1,000 TWh.

Designing datacentres for AI

Datacentre design is also evolving to accommodate AI workloads, due to the need for more processing power compared to generic forms of computing. This generates more heat, which requires a radically different approach to cooling. Datacentre operators are having to redesign their infrastructure to include liquid cooling, to ensure they can handle the higher power densities required for AI technology. This includes installing direct-to-chip or immersed solutions. Datacentres like this will need larger dedicated areas to house the advanced cooling equipment. Given this complexity, many operators are choosing to build completely new liquid-cooled datacentres from scratch.

In Europe's metro markets, where most colocation datacentres are found, there is a lack of power available. This is due to limited grid capacity, sustainability concerns and political pressure, as datacentre operators compete with residential developers and other commercial users for power. Additionally, the availability of land that's within easy reach of high-speed network connectivity and within reasonable proximity of datacentres where they have let space is limited. Consequently, where land with power and network is available, it is at a premium, and datacentre operators are also looking further afield to find suitable locations.

Providers are struggling to keep pace with the demand for datacentre capacity from hyperscalers and cloud providers, and this trend is being exacerbated by the demand from next-generation AI. As a result, available datacentre space has plummeted in the largest cities of Europe.

Can the surge in AI demand be accommodated?

Existing colocation facilities can, to an extent, support AI workloads if they can be retrofitted with specialised hardware and cooling equipment. But if, as expected, the use of AI continues to grow, new datacentre capacity will be required to fulfil the demand. This is unlikely to happen at scale in the traditional markets of Frankfurt, London, Amsterdam, Paris and Dublin, because power and land availability is increasingly hard to find.

To accommodate the requirements created by AI, the industry's development strategy will have to change. Datacentre operators will need to look outside the European metro markets when seeking locations to develop new capacity. This will lead to the development of smaller, secondary markets in countries such as the UK or France (e.g. Marseille or Lyon), where there may be more power and land available for datacentre purposes.

In the UK, this development trend is accelerating, which is encouraging investors, hyperscalers and datacentre providers to purchase land for AI-ready datacentre development. We estimate that at present 56% of the country's colocation datacentres are located within 30 miles of London, although operators are shifting their focus outside the capital. For example, the datacentre operator Virtus has announced that it has purchased land in Saunderton, north west of London, where it plans to deliver 75MW of capacity for AI applications.

Lower-latency connectivity will become more important for datacentre providers as inference AI is rolled out, given the need to deliver services to users. In the meantime, equipment that powers AI training models is being implemented in datacentres; inference AI is expected to follow.

The outlook for AI-ready datacentres

There is no doubt that the AI boom has significantly impacted the datacentre market. Not only is there not enough capacity, but it is complex to create new AI-ready datacentres, as European grids are struggling to supply the power required for this new technology. Alternative energy sources are being explored, including small modular reactors (SMRs) and renewable sources such as wind and solar as a primary source of power, but these are not ready to be deployed at scale. The need for new datacentre sites to not only have power at scale, but also access to high-speed networks, is making it difficult to find new locations. To fulfil the demands created by AI, there is no doubt that operators are going to need to look outside the traditional datacentre markets when building new capacity.
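As a quick sanity check, the IEA projection quoted above (approximately 460 TWh in 2022 to over 1,000 TWh by 2026) implies a growth rate that can be derived in a few lines. The compound annual growth rate below is my own arithmetic on the article's numbers, not a figure from the text.

```python
# Implied growth from the IEA datacentre electricity projection cited above.
start_twh, end_twh = 460.0, 1000.0  # 2022 figure and 2026 projection
years = 2026 - 2022

growth_factor = end_twh / start_twh      # total multiple over the period
cagr = growth_factor ** (1 / years) - 1  # compound annual growth rate

print(f"{growth_factor:.2f}x over {years} years, ~{cagr:.0%} per year")
```

A doubling-plus over four years works out to a compound growth rate of a little over 20 per cent per year, which gives a sense of how fast grid planning would have to move to keep up.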
Decision Making/Process Automation
Unknown
null
null
null
null
null
null
news
null
Why Big Tech is turning to nuclear to power its energy-intensive AI ambitions
Large tech firms are turning to alternative power sources like nuclear as a way to fulfill growing energy demands from data centers.
https://www.cnbc.com/2024/10/15/big-tech-turns-to-nuclear-energy-to-fuel-power-intensive-ai-ambitions.html
https://image.cnbcfm.com…50&w=1920&h=1080
2024-10-16T06:16:07Z
The OpenAI app icon displayed along with other AI applications on a smartphone.

Technology giants are turning to nuclear energy to power the energy-intensive data centers needed to train and run the massive artificial intelligence models behind today's generative AI applications. Microsoft and Google are among the firms agreeing deals to purchase nuclear power from certain suppliers in the U.S. to bring additional energy capacity online for their data centers.

This week, Google said it would purchase power from Kairos Power, a developer of small modular reactors, to help "deliver on the progress of AI."

"The grid needs these kinds of clean, reliable sources of energy that can support the build out of these technologies," Michael Terrell, senior director for energy and climate at Google, said on a call with reporters Monday. "We feel like nuclear can play an important role in helping to meet our demand, and helping meet our demand cleanly, in a way that's more around the clock."

Google said its first nuclear reactor from Kairos Power would be online by 2030, with more reactors going live through 2035. The tech giant isn't the only firm looking to nuclear power to realize its AI ambitions. Last month, Microsoft signed a deal with U.S. energy firm Constellation to resurrect a defunct reactor at the Three Mile Island nuclear power plant in Pennsylvania, whose reactor has been dormant for five years. The Three Mile Island plant was the location of the most serious nuclear meltdown and radiation leak in U.S. history, in March 1979, when the loss of water coolant through a faulty valve caused a reactor to overheat.

Why they're turning to nuclear

Tech companies are under pressure to find energy sources to power data centers, a key piece of infrastructure behind modern-day cloud computing and AI applications. Many developers rent servers equipped with GPUs (graphics processing units), which would typically be too expensive to own outright, from so-called cloud "hyperscalers" such as Amazon, Microsoft and Google. These tech giants have benefited from a surge of interest in generative AI applications such as OpenAI's ChatGPT. But that increase in demand has also led to an unintended effect: correspondingly large spikes in the amount of energy required.

Global electricity consumption from data centers, artificial intelligence and the cryptocurrency sector is expected to double from an estimated 460 terawatt-hours (TWh) in 2022 to more than 1,000 TWh in 2026, according to a research report from the International Energy Agency. Researchers at the University of California, Riverside, published a study in April last year that found ChatGPT consumes 500 milliliters of water for every 10 to 50 prompts, depending on when and where the AI model is deployed. That equates to roughly the amount of water in a standard 16-ounce bottle. As of August, there were more than 200 million people submitting questions on OpenAI's popular chatbot ChatGPT every week, according to OpenAI. That's double the 100 million weekly active users OpenAI reported last November.

Environmental opposition

Nuclear energy isn't without its controversy. Many climate activists oppose such supplies, citing their hazardous environmental and safety risks and the fact that they do not offer a genuine source of renewable power. "Nuclear power is incredibly expensive, hazardous and slow to build," the climate charity Greenpeace says on its website. "It is often referred to as 'clean' energy because it doesn't produce carbon dioxide or other greenhouse gases when electricity is generated, but the reality is that it isn't a plausible alternative to renewable energy sources."

Proponents of nuclear energy, on the other hand, say that it offers a nearly carbon-free form of electricity and is more reliable than renewable sources like solar and wind. "If it is built and securitized in the right way, I do think nuclear is the future," Rosanne Kincaid-Smith, chief operating officer of Northern Data Group, a global data center provider, told CNBC at a tech conference in London last week. "People are scared of nuclear because of the disasters we've had in the past. But what's coming, I just don't see traditional grids being the sustainable power that's ongoing in the development of AI," Kincaid-Smith added.

While Northern Data Group isn't using nuclear energy, nor is it actively exploring plans to use nuclear as a power source for its AI data centers, the firm does want to "contribute to that conversation because it's important for the wider ecosystem, the wider economy," Kincaid-Smith told CNBC.

- CNBC's Pippa Stevens contributed to this report
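The UC Riverside figure above (500 millilitres per 10 to 50 prompts) brackets per-prompt water use; scaling it to the reported 200 million weekly users requires an assumed prompts-per-user figure, which is purely illustrative here and not from the article.

```python
# Water-use range implied by the cited study: 500 ml per 10-50 prompts.
ml_low, ml_high = 500 / 50, 500 / 10  # i.e. 10-50 ml per prompt

WEEKLY_USERS = 200_000_000  # reported weekly ChatGPT users (article figure)
PROMPTS_PER_USER = 15       # assumed prompts per user per week (illustrative)

def weekly_litres(ml_per_prompt: float) -> float:
    """Total weekly water use in litres under the assumptions above."""
    return ml_per_prompt * WEEKLY_USERS * PROMPTS_PER_USER / 1000

low, high = weekly_litres(ml_low), weekly_litres(ml_high)
print(f"~{low / 1e6:.0f} to {high / 1e6:.0f} million litres per week")
```

Even the low end of this range, tens of millions of litres per week, shows why cooling-water demand has become part of the data centre siting debate.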
Unknown
Unknown
null
null
null
null
null
null
news
spicenet added to PyPI
SPICEnet is an artificial neural network that is capable of identifying the mathematical relationship between two values. This library provides a single core implementation for quick usage.
https://pypi.org/project/spicenet/
https://pypi.org/static/…er.abaf4b19.webp
2024-10-24T08:23:24Z
SPICEnet is an artificial neural network that is capable of identifying the mathematical relationship between two values. This library provides a single core implementation for quick usage. For more information about the network, look up the PDF in the docs folder.

Install

pip install spicenet

If you want to use the plotting subpackage, use this command instead:

pip install spicenet[plotting]

How to use

Please take a look at the tutorial notebook.

Notebooks

The notebooks you will find in this repository are examples of how to use this library.

tests: This notebook is only for test cases; it might not be running.
solar panels: This notebook shows you how to calculate the voltage of a solar panel from the lux level of the environment.
tutorial: Follow this guide on how to use this library.
Unknown
Computer and Mathematical/Architecture and Engineering
null
null
null
null
null
null
news
erax-vl-7b-v1 added to PyPI
EraX-VL-7B-V1 - A multimodal vision-language model based on Qwen2-VL-7B architecture.
https://pypi.org/project/erax-vl-7b-v1/
https://pypi.org/static/…er.abaf4b19.webp
2024-10-12T03:45:14Z
Hugging Face

Introduction

After a month's relentless efforts, today we are thrilled to release EraX-VL-7B-V1!

NOTA BENE: EraX-VL-7B-V1 is NOT a typical OCR-only tool like Tesseract but a multimodal LLM-based model. To use it effectively, you may have to adjust your prompt carefully depending on your task.

EraX-VL-7B-V1 is the latest version of the vision language models in the EraX model families.

Benchmark

Below is the evaluation benchmark of global open-source and proprietary multimodal models on the MTVQA Vietnamese test set conducted by VinBigdata. We plan to conduct more detailed and diverse evaluations in the near future.

Quickstart

Below, we provide simple examples to show how to use EraX-VL-7B-V1 with Transformers. The code of EraX-VL-7B-V1 is in the latest Hugging Face transformers and we advise you to build from source. Install the necessary packages:

python -m pip install git+https://github.com/huggingface/transformers@21fac7abba2a37fae86106f87fcf9974fd1e3830 accelerate
python -m pip install qwen-vl-utils
pip install flash-attn --no-build-isolation

Using Google Colaboratory

Using Transformers

import base64
import torch
from transformers import Qwen2VLForConditionalGeneration, AutoTokenizer, AutoProcessor
from qwen_vl_utils import process_vision_info

model_path = "erax/EraX-VL-7B-V1"
model = Qwen2VLForConditionalGeneration.from_pretrained(
    model_path,
    torch_dtype=torch.bfloat16,
    attn_implementation="eager",  # replace with "flash_attention_2" if your GPU is Ampere architecture
    device_map="auto",
)
tokenizer = AutoTokenizer.from_pretrained(model_path)
min_pixels = 256 * 28 * 28
max_pixels = 1280 * 28 * 28
processor = AutoProcessor.from_pretrained(
    model_path,
    min_pixels=min_pixels,
    max_pixels=max_pixels,
)

image_path = "image.jpg"
with open(image_path, "rb") as f:
    encoded_image = base64.b64encode(f.read())
decoded_image_text = encoded_image.decode("utf-8")
base64_data = f"data:image;base64,{decoded_image_text}"

messages = [
    {
        "role": "user",
        "content": [
            {"type": "image", "image": base64_data},
            # "Describe the image content like a good doctor."
            {"type": "text", "text": "Diễn tả nội dung bức ảnh như 1 bác sĩ giỏi."},
            # Alternative: "Diễn tả nội dung bức ảnh này bằng định dạng json."
            # ("Describe this image in JSON format.")
        ],
    }
]

# Prepare prompt
tokenized_text = processor.apply_chat_template(messages, tokenize=False, add_generation_prompt=True)
image_inputs, video_inputs = process_vision_info(messages)
inputs = processor(
    text=[tokenized_text],
    images=image_inputs,
    videos=video_inputs,
    padding=True,
    return_tensors="pt",
)
inputs = inputs.to("cuda")

# Generation configs
generation_config = model.generation_config
generation_config.do_sample = True
generation_config.temperature = 0.2
generation_config.top_k = 1
generation_config.top_p = 0.001
generation_config.max_new_tokens = 2048
generation_config.repetition_penalty = 1.1

# Inference
generated_ids = model.generate(**inputs, generation_config=generation_config)
generated_ids_trimmed = [
    out_ids[len(in_ids):] for in_ids, out_ids in zip(inputs.input_ids, generated_ids)
]
output_text = processor.batch_decode(
    generated_ids_trimmed, skip_special_tokens=True, clean_up_tokenization_spaces=False
)
print(output_text[0])

For API inquiry

For correspondence regarding this work or inquiry for an API trial, please contact Nguyễn Anh Nguyên at [email protected].

If you find our project useful, we would appreciate it if you could star our repository and cite our work as follows:

@article{EraX-VL-7B-V1,
  title={EraX-VL-7B-V1: A Highly Efficient Multimodal LLM for Vietnamese, especially for medical forms and bills},
  author={Nguyễn Anh Nguyên and Nguyễn Hồ Nam (BCG) and Hoàng Tiến Dũng and Phạm Đình Thục and Phạm Huỳnh Nhật},
  organization={EraX},
  year={2024},
  url={https://huggingface.co/erax-ai/EraX-VL-7B-V1}
}

Acknowledgement

EraX-VL-7B-V1 is built with reference to the code of the following projects: Qwen2-VL, InternVL and Khang Đoàn (5CD-AI). Thanks for their awesome work!
Unknown
Unknown
null
null
null
null
null
null
news
OilPrice.com
Is Nuclear Power the Future of Green Energy for Big Tech?
Following the news of the Three Mile Island restart plans to power Microsoft's AI data centers and the revival of Holtec's Palisades nuclear plant in Michigan, Google CEO Sundar Pichai revealed in an interview with Nikkei Asia in Tokyo on Thursday that the tech giant is exploring the use of nuclear energy as a potential 'green' source to power its data centers.  "For the first time in our history, we have this one piece of underlying technology which cuts across everything we do today," Pichai said of generative AI. He said, "I think the opportunity…
https://oilprice.com/Alternative-Energy/Nuclear-Power/Is-Nuclear-Power-the-Future-of-Green-Energy-for-Big-Tech.html
https://d32r1sh890xpii.c…4_acgkfbmg8x.jpg
2024-10-05T15:00:00Z
By ZeroHedge - Oct 05, 2024, 10:00 AM CDT

Following the news of the Three Mile Island restart plans to power Microsoft's AI data centers and the revival of Holtec's Palisades nuclear plant in Michigan, Google CEO Sundar Pichai revealed in an interview with Nikkei Asia in Tokyo on Thursday that the tech giant is exploring the use of nuclear energy as a potential 'green' source to power its data centers. "For the first time in our history, we have this one piece of underlying technology which cuts across everything we do today," Pichai said of generative AI. He said, "I think the opportunity to do well here is something we are leaning into."

Three years ago, Google released plans to achieve net-zero emissions by 2030. However, the proliferation of AI data centers has led to a surge in the tech giant's power consumption; as a result, its greenhouse gas emissions in 2023 were 48% higher than in 2019 on a carbon-dioxide-equivalent basis. Behind the scenes, Google is likely scrambling to secure green energy and curb emissions as 2030 quickly approaches.

"It was a very ambitious target," Pichai said of the net-zero emissions targets, "and we will still be working very ambitiously towards it. Obviously, the trajectory of AI investments has added to the scale of the task needed." He continued, "We are now looking at additional investments, such as solar, and evaluating technologies like small modular nuclear reactors, etc."

Nikkei noted that Pichai wasn't clear on where Google might start sourcing nuclear power. A bulk of that power could come from reviving older nuclear power plants. This is exactly what Microsoft did when it signed a power agreement contract with the dormant Three Mile Island plant on the Susquehanna River near Harrisburg, Pennsylvania.
Recall that just last week, we wrote that the Sam Altman-backed nuclear SMR company Oklo announced it had finalized an agreement with the Department of Energy to advance the next phase of its SMR at the Idaho National Laboratory. And days ago, the Biden administration closed a $1.52 billion loan with Holtec's Palisades nuclear plant in Michigan to revive it. Sachem Cove Partners Chief Investment Officer Michael Alkin told Bloomberg shortly after the Microsoft-Three Mile Island deal, "It's a wake-up call to those that have not been paying attention," adding that demand already outstrips the supply of uranium and that the restart of Three Mile Island "takes that to a bit of a different level." Also, the funding markets are becoming more receptive to nuclear deals as governments and big tech understand that the only way to hit ambitious net-zero goals is not with solar and wind but with nuclear power. In late December 2020, we outlined to readers that this would happen in a note titled "Buy Uranium: Is This The Beginning Of The Next ESG Craze?" Furthermore, here's Goldman's latest note on uranium prices, which are only expected to "stairstep" higher over time. By Zerohedge.com
Unknown
Management/Business and Financial Operations
null
null
null
null
null
null
news
Itron, Inc.
Itron Report Reveals the Critical Role of AI in Addressing Unprecedented Demand and Safety Challenges for Utilities
Itron released its 2024 Resourcefulness Insight Report (“the report”) that explores the trends and future expectations of AI and ML on utility operations.
https://www.globenewswire.com/news-release/2024/10/07/2959060/0/en/Itron-Report-Reveals-the-Critical-Role-of-AI-in-Addressing-Unprecedented-Demand-and-Safety-Challenges-for-Utilities.html
https://ml.globenewswire.com/Resource/Download/9c19b3f5-9d4d-48f6-ae78-02b6b1da535e
2024-10-07T14:16:00Z
LIBERTY LAKE, Wash., Oct. 07, 2024 (GLOBE NEWSWIRE) -- Itron, Inc. (NASDAQ: ITRI), which is innovating new ways for utilities and cities to manage energy and water, today released its 2024 Resourcefulness Insight Report ("the report") that explores the current trends and future expectations of artificial intelligence (AI) and machine learning (ML) on utility operations and strategies. The report, "Exploring AI for Utilities: The Promise and Challenges of Artificial Intelligence," summarizes key findings from 600 utility executives from the United States, Canada, France, the United Kingdom, India and Australia about the strategic importance of AI and ML in addressing industry challenges. "Our research provides an in-depth look at how utilities are grappling with historic growth in electricity demand while keeping safety front and center," said Marina Donovan, vice president of global marketing, ESG and public affairs at Itron. "As utilities continue their journey toward a more connected and intelligent grid, the integration of AI and ML becomes both a necessity and a strategic advantage to meeting today's challenges." As demand for electricity continues to grow, driven by data centers for cloud computing and AI (40%), new construction (32%) and sustainability initiatives like solar and electric vehicles (31%), according to utility executives, utilities are facing unprecedented challenges in maintaining reliable and resilient grid operations. Published today in conjunction with Itron Inspire 2024, the company's premier customer-focused event, the report reveals that utilities identify AI and ML as crucial technologies for addressing high-priority operational challenges.
"The greatest challenge and opportunity facing the utility sector is figuring out how to double or even triple the size of the grid over the next two decades while maintaining affordability, improving reliability, and decreasing its environmental impact. The findings from Itron's Resourcefulness Insight Report highlight the importance of strategic, forward-thinking investments in technology and infrastructure to ensure an accessible, safe and sustainable energy future for all," said Dr. Michael Webber, Sid Richardson Chair in Public Affairs and the John J. McKetta Centennial Energy Chair in Engineering at the University of Texas at Austin. Additional findings include: Utilities' top technology investment priorities over the next five years to address these challenges include distributed energy resource management systems (DERMS) (26%), sensors (23%), consumer engagement tools (22%), and low-voltage distribution network management (20%). 82% of utilities are in the process of adopting AI/ML. Enhancing safety is the top use case for AI/ML (49%), followed by cyberthreat detection (34%) and predictive maintenance (33%). Barriers to developing and deploying AI/ML solutions are lack of expertise (43%), high investment/integration costs (41%), data infrastructure, governance, standardization and scalability (40%), and risks of unproven technology (39%). The report highlights the strategic importance of AI and ML, particularly through the adoption and enhancement of grid edge intelligence, which provides greater insight and control across the distribution network. Utility executives identify many benefits associated with AI, ML and grid edge intelligence. All countries surveyed agreed that the top area where AI and ML can have the biggest impact is detecting and managing potentially dangerous situations. "As utilities improve intelligence at the grid edge through the adoption of AI and ML, they are optimizing operations and paving the way for a more sustainable and resilient future," added Donovan. "Our findings reveal that the age of AI for utilities is here, and the strategic deployment of these technologies is crucial for enhancing safety, improving consumer engagement and achieving long-term sustainability goals.
The commitment of utilities to harness the power of AI and ML will be a driving force in shaping the next generation of smart utility management." To download a full copy of the Itron Resourcefulness Insight Report as well as the research, visit www.itron.com/resourcefulness. Methodology: The 2024 Resourcefulness Insight Report consolidates perceptions of utility executives on topics relating to AI, ML and edge intelligence. A total of 600 online surveys were completed in June 2024 by utility executives in Australia, Canada, France, India, the United Kingdom and the United States. Respondents were mid-level managers or higher who have input on decision making for electric, gas and water utilities. Their areas of responsibility ranged from selling products and working in operations to strategic planning, customer service and a variety of administrative services. About Itron: Itron is a proven global leader in energy, water, smart city, IIoT and intelligent infrastructure services. For utilities, cities and society, we build innovative systems, create new efficiencies, connect communities, encourage conservation and increase resourcefulness. By safeguarding our invaluable natural resources today and tomorrow, we improve the quality of life for people around the world. Join us: www.itron.com. Itron® is a registered trademark of Itron, Inc. in the United States and other countries and regions. All third-party trademarks are property of their respective owners and any usage herein does not suggest or imply any relationship between Itron and the third party unless expressly stated. Media Contact: Alison Mallahan, Senior Manager, Corporate Communications, (509) [email protected]. A photo accompanying this announcement is available at https://www.globenewswire.com/NewsRoom/AttachmentNg/0975568b-968c-4935-bef4-d76c4d4fd098
Unknown
Management/Business and Financial Operations
null
null
null
null
null
null
news
The Conversation
Tech bosses think nuclear fusion is the solution to AI’s energy demands – here’s what they’re missing
The artificial intelligence boom has already changed how we understand technology and the world. But developing and updating AI programs requires a lot of computing power. This relies heavily on servers in data centres, at a great cost in terms of carbon emissions and resource use. One particularly energy intensive task is “training”, where generative AI systems are exposed to vast amounts of data so that they improve at what they do. The development of AI-based systems has been blamed for a 48% increase in Google’s greenhouse gas emissions over five years. This will make it harder for the tech…This story continues at The Next Web
https://thenextweb.com/news/tech-bosses-think-nuclear-fusion-is-the-solution-to-ais-energy-demands-heres-what-theyre-missing
https://img-cdn.tnwcdn.com/image/tnw-blurple?filter_last=1&fit=1280%2C640&url=https%3A%2F%2Fcdn0.tnwcdn.com%2Fwp-content%2Fblogs.dir%2F1%2Ffiles%2F2024%2F10%2FUntitled-design-2-1.jpg&signature=aee1ced20b3c810185bb8c51803b8755
2024-10-23T09:00:45Z
The artificial intelligence boom has already changed how we understand technology and the world. But developing and updating AI programs requires a lot of computing power. This relies heavily on servers in data centres, at a great cost in terms of carbon emissions and resource use. One particularly energy-intensive task is "training", where generative AI systems are exposed to vast amounts of data so that they improve at what they do. The development of AI-based systems has been blamed for a 48% increase in Google's greenhouse gas emissions over five years. This will make it harder for the tech giant to achieve its goal of reaching net zero by 2030. Some in the industry justify the extra energy expenditure from AI by pointing to benefits the technology could have for environmental sustainability and climate action. Improving the efficiency of solar and wind power through predicting weather patterns, smart agriculture and more efficient, electric autonomous vehicles are among the purported benefits of AI for the Earth. It's against this background that tech companies have been looking to renewables and nuclear fission to supply electricity to their data centres. Nuclear fission is the type of nuclear power that's been in use around the world for decades. It releases energy by splitting a heavy chemical element to form lighter ones. Fission is one thing, but some in Silicon Valley feel a different technology will be needed to plug the gap: nuclear fusion. Unlike fission, nuclear fusion produces energy by combining two light elements to make a heavier one. But fusion energy is an unproven solution to the sustainability challenge of AI. And the enthusiasm of tech CEOs for this technology as an AI energy supply risks sidelining the potential benefits for the planet. Beyond the conventional: Google recently announced that it had signed a deal to buy energy from small nuclear reactors.
This is a technology, based on nuclear fission, that allows useful amounts of power to be produced from much smaller devices than the huge reactors in big nuclear power plants. Google plans to use these small reactors to generate the power needed for the rise in use of AI. This year, Microsoft announced an agreement with the company Constellation Energy, which could pave the way to restart a reactor at Pennsylvania's Three Mile Island nuclear power station, the site of the worst nuclear accident in US history. However, nuclear power produces long-lived radioactive waste, which needs to be stored securely. Nuclear fuels, such as the element uranium (which needs to be mined), are finite, so the technology is not considered renewable. Renewable sources of energy, such as solar and wind power, suffer from intermittency, meaning they do not consistently produce energy at all hours of the day. These limitations have driven some to look to nuclear fusion as a solution. Most notably, Sam Altman of OpenAI has shown particular interest in Helion Energy, a fusion startup working on a relatively novel technological design. In theory, nuclear fusion offers a "holy grail" energy source by generating a large output of energy from small quantities of fuel, with no greenhouse gas emissions from the process and comparatively little radioactive waste. Some forms of fusion rely on a fuel called deuterium, a form of hydrogen, which can be extracted from an abundant source: seawater. In the eyes of its advocates, like Altman, these qualities make nuclear fusion well suited to meet the challenges of growing energy demand in the face of the climate crisis and to meet the vast demands of AI development. However, dig beneath the surface and the picture isn't so rosy.
Despite the hopes of its proponents, fusion technologies have yet to produce sustained net energy output (more energy than is put in to run the reactor), let alone produce energy at the scale required to meet the growing demands of AI. Fusion will require many more technological developments before it can fulfil its promise of delivering power to the grid. Wealthy and powerful people, such as the CEOs of giant technology companies, can strongly influence how new technology is developed. For example, there are many different technological ways to perform nuclear fusion. But the particular route to fusion that is useful for meeting the energy demands of AI might not be the one that's ideal for meeting people's general energy needs. [Image: AI is reliant on data centres which consume lots of energy. Dil_Ranathunga / Shutterstock] The overvaluation of innovation: Innovators often take for granted that their work will produce ideal social outcomes. If fusion can be made to work at scale, it could make a valuable contribution to decarbonising our energy supplies as the world seeks to tackle the climate crisis. However, the humanitarian promises of both fusion and AI often seem to be sidelined in favour of scientific innovation and progress. Indeed, when looking at those invested in these technologies, it is worth asking who actually benefits from them. Will investment in fusion for AI purposes enable its wider take-up as a clean technology to replace polluting fossil fuels? Or will a vision for the technology propagated by powerful tech companies restrict its use for other purposes? It can sometimes feel as if innovation is itself the goal, with much less consideration of the wider impact.
This vision has echoes of Meta CEO Mark Zuckerberg's motto of "move fast and break things", where short-term losses are accepted in pursuit of a future vision that will later justify the means. Sophie Cogan, PhD Candidate in Politics and Environment, University of York. This article is republished from The Conversation under a Creative Commons license. Read the original article.
Unknown
Management
null
null
null
null
null
null
news
NYT News Service
Why a Memphis community is fighting Elon Musk's supercomputer
The project, part of Musk's xAI artificial intelligence business, sits in an old manufacturing plant on more than 550 acres. Before beginning operations there in July, xAI rolled in flatbed trucks loaded with almost 20 mobile power plants, fueled by natural gas, to help meet its electricity demands.
https://economictimes.indiatimes.com/tech/technology/why-a-memphis-community-is-fighting-elon-musks-supercomputer/articleshow/114832904.cms
https://img.etimg.com/th…/articleshow.jpg
2024-11-01T03:16:42Z
Elon Musk, the world's richest man, is building what he says will be the world's largest supercomputer. Its electricity needs will rival those of 100,000 homes. The supercomputer's neighbors in southwest Memphis, Tennessee, have a problem with that. The project, part of Musk's xAI artificial intelligence business, sits in an old manufacturing plant on more than 550 acres. Before beginning operations there in July, xAI rolled in flatbed trucks loaded with almost 20 mobile power plants, fueled by natural gas, to help meet its electricity demands. Residents of the heavily industrial community -- already home to an oil refinery, a steel mill and chemical plants -- see no upside. They contend that Musk's project has made pollution worse in an area already enveloped in smog. "We're getting more and more days a year where it is unhealthy for us to go outside," said KeShaun Pearson, president of Memphis Community Against Pollution and a lifelong resident of the area near the xAI site. So far, xAI is using the Memphis facility to develop its artificial intelligence models on a network of thousands of high-powered computer servers. Some of its models are trained on data from Musk's social media platform, X. Musk started xAI as a competitor to ChatGPT, the chatbot powered by OpenAI, which he helped found but walked away from in 2018 after disagreements with other co-founders. The billionaire, who also controls the electric-car maker Tesla and the rocket company SpaceX, felt that other efforts to create artificial intelligence, particularly at Google, were too risky and could destroy humanity. After his rift with OpenAI, his interest in the technology seemed to subside.
But chatbots like ChatGPT were an object of public fascination, and Musk wanted to catch up. Over three weeks in the spring, the Greater Memphis Chamber and local politicians secured a deal for xAI to move into a former Electrolux appliance plant, seeing it as a potential economic stimulus. "Memphis is a city of innovators, so it's no surprise that it feels like home to those looking to change the world," Mayor Paul Young said in announcing the deal. "We get things done here." The pace of xAI's move into Memphis reflects the intensifying race by technology companies to bring more data centers and artificial intelligence facilities online. The industry is leading a surge in electricity demand that is expected to continue for decades. Technology giants -- including Amazon, Google and Microsoft -- have been working to offset their energy consumption with development of solar and wind farms and, most recently, by investing in nuclear power. That demand, along with growing needs from electric cars and heating and cooling systems, comes as extreme weather events test the electric grid's resilience, heightening concerns like those in Memphis. After the xAI deal was announced in June, officials of the local utility, Memphis Light, Gas and Water, assured the community in an online fact sheet that the company "is paying for all upgrades" involved in supplying electricity and that "there will be no impact to the reliability or availability of power to other customers from this electric load." But the vast scale of potential consumption meant the plan required a sign-off from the Tennessee Valley Authority, the federal agency operating the grid that supplies power to most of Tennessee and parts of six other Southern states. Unlike the local approval of the plant itself, that decision was not quick in coming. "We continue to review the details of their proposal and electricity demand needs," Julia Wise, a spokesperson for the authority, said in a statement on Oct.
22. So in the meantime, xAI bypassed the electric grid by installing the mobile natural gas plants. Those living nearby -- a predominantly African American community with little wealth -- say it all happened with no warning, no public review and no opportunity to understand how the community would benefit. "There is a history of a lack of transparency, leaving us worse off than to start with," Pearson of Memphis Community Against Pollution said. "There is no trust. There is no real communication." Memphis exceeded the National Ambient Air Quality Standards in 2023, the third consecutive year, and is on track to surpass acceptable levels this year. The Environmental Protection Agency said that it and the local health department were reviewing xAI's use of the mobile gas plants but that it did not expect any updates until late November. Neither Musk nor officials at xAI responded to requests for comment for this article. The mobile power plants are expected to be a temporary solution while xAI awaits approval to tap into the electric grid, which the company has requested by the end of the year. That request includes as much as 150 megawatts of electricity, the equivalent of a small natural gas plant used during periods of high demand -- or about 100,000 customers. The Tennessee Valley Authority said it must seek approval from its board to allow a load of that size onto its grid. In a letter to the Tennessee Valley Authority board, the Southern Environmental Law Center urged the agency to "prioritize Memphis families' access to reliable power over the 'secondary purpose' of serving xAI and reject the pending request." During a winter storm in December 2022 that caused power outages across multiple states including Texas, the Tennessee Valley Authority was forced to use rolling blackouts for eight hours because its system also was under strain, the law center noted in its letter.
"When TVA cannot meet peak demand, families go without power during increasingly severe hot and cold weather," the organization wrote. If the authority approves the xAI request, there will be environmental considerations beyond Memphis, since the agency continues to meet much of its electricity needs from fossil fuels. The utility operates four coal-fired power plants with 25 generating units that produce enough electricity to supply 4 million homes each year. Although the utility plans to close the units in the future, the entire fleet will not cease operation until at least 2035. In addition to the coal-fired units, the Tennessee Valley Authority operates 17 natural gas plants that generate power from 122 units. The opponents of xAI's operations contend that the facility will require more use of the fossil-fuel plants from a federal utility even though the Biden administration is pushing to decarbonize the energy industry to improve the environment and reduce the impacts of climate change. "These are communities that already have a lot of pollution in the air and water," said Maggie Shober, research director for the Southern Alliance for Clean Energy, a nonprofit organization that takes part in utility cases involving the Tennessee Valley Authority. "It's just one additional burden. It's ridiculous."
Unknown
Unknown
null
null
null
null
null
null
news
Haley Zaremba
The Double-Edged Sword of AI in the Energy Sector
AI could cause a catastrophic collapse of under-prepared electric grids and walk back advancements in the decarbonization of the tech industry – or, it could be the sector’s saving grace. Artificial Intelligence requires a stunning amount of energy to train and power its complex computations. As the sector explodes, the computational power necessary to sustain its growth is doubling every 100 days, approximately. Experts project that at a global level, the AI secor alone will be responsible for 3.5 percent of all energy consumption…
https://oilprice.com/Energy/Energy-General/The-Double-Edged-Sword-of-AI-in-the-Energy-Sector.html
https://d32r1sh890xpii.c…1_usjqzpfokg.jpg
2024-10-13T19:00:00Z
The U.S. Energy Information Administration…Tesla's AI Day showcased the…Forecasts suggest a 30% increase…By Haley Zaremba - Oct 13, 2024, 2:00 PM CDT. AI could cause a catastrophic collapse of under-prepared electric grids and walk back advancements in the decarbonization of the tech industry, or it could be the sector's saving grace. Artificial intelligence requires a stunning amount of energy to train and power its complex computations. As the sector explodes, the computational power necessary to sustain its growth is doubling approximately every 100 days. Experts project that at a global level, the AI sector alone will be responsible for 3.5 percent of all energy consumption by 2030. In the United States, data centers will account for about 9% of energy consumption by 2030, roughly double the current rate, driven in large part by domestic AI growth. These blistering growth rates will have major implications for national and international energy security, greenhouse gas emissions, and the economy. "When you look at the numbers, it is staggering," Jason Shaw, chairman of the Georgia Public Service Commission, an electricity regulator, told the Washington Post earlier this year. "It makes you scratch your head and wonder how we ended up in this situation. How were the projections that far off? This has created a challenge like we have never seen before." Despite the major and unprecedented challenges that AI poses to power grids, it could also be a key tool for improving them and bringing them up to speed for the electrification era. The United States Department of Energy (DoE) has noted that AI could be invaluable in managing smart grids capable of handling huge inflows and outflows of variable energies like wind and solar, but introduces significant risks if deployed naïvely.
Furthermore, machine learning could help electric utilities improve permitting and siting, reliability, resilience and grid planning, the DoE report posits. And now, AI is being used to efficiently identify solutions to one of the clean energy transition's trickiest problems: reliable and cost-effective long-term energy storage. A team of researchers from Pacific Northwest National Laboratory (PNNL) and Argonne National Laboratory has used AI to help narrow down potential combinations of solvents for flow-battery models that are three times more efficient than current models. Instead of using AI to help them conduct more experiments faster, the team used the technology to rapidly eliminate thousands of potential combinations and narrow in on the ones worth testing in the lab. "I'm excited to see the future of collaboration between AI researchers and materials scientists," said Karl Mueller, a co-author of the study and the Director of the Program Development Office for the Physical and Computational Sciences Directorate. "Accelerating materials discovery is critical to solving energy storage problems." In other applications, AI is being used to make battery storage systems smarter through its use in energy demand management, arbitrage (a.k.a. time shifting to match the supply of renewable energy with demand), weather forecasting, and predictive maintenance. A number of start-ups have cropped up in recent years to pilot these approaches, and the fast-growing AI energy storage market is on track to reach US$11 billion by 2026. These approaches are also being introduced on a smaller scale, within electric vehicle systems, to improve EV energy storage capabilities.
"The integration of Artificial Intelligence (AI) in Energy Storage Systems (ESS) for Electric Vehicles (EVs) has emerged as a pivotal solution to address the challenges of energy efficiency, battery degradation, and optimal power management," reads a scientific paper published in May in Electronics. All of these advances are extremely promising for stabilizing energy grids in an era of unprecedented strain and rapid growth of electrification coupled with a rise in variable energy sources. However, the risks of increased AI use remain dire, not just in terms of runaway energy consumption and associated greenhouse gas emissions, but also for cybersecurity and for use in real-world situations that can sharply diverge from statistical modeling, like extreme weather events. By Haley Zaremba for Oilprice.com
Decision Making/Process Automation
Life, Physical, and Social Science/Computer and Mathematical
null
null
null
null
null
null
news
GlobeNewswire
Itron Report Reveals the Critical Role of AI in Addressing Unprecedented Demand and Safety Challenges for Utilities
86% of Utilities Identify Artificial Intelligence and Machine Learning as Crucial for Addressing Operational Challenges Amid Energy Transition LIBERTY LAKE, Wash., Oct. 07, 2024 (GLOBE NEWSWIRE) — Itron, Inc. (NASDAQ: ITRI), which is innovating new ways for utilities and cities to manage energy and water, today released its 2024 Resourcefulness Insight Report (“the report”) that […]
https://financialpost.com/globe-newswire/itron-report-reveals-the-critical-role-of-ai-in-addressing-unprecedented-demand-and-safety-challenges-for-utilities
https://smartcdn.gprod.p…-the-numbers.png
2024-10-07T14:19:56Z
Author of the article:The graphic highlights what's driving changes in energy demand from different categories.GNWArticle content86% of Utilities Identify Artificial Intelligence and Machine Learning as Crucial for Addressing Operational Challenges Amid Energy TransitionLIBERTY LAKE, Wash., Oct. 07, 2024 (GLOBE NEWSWIRE) Itron, Inc. (NASDAQ: ITRI), which is innovating new ways for utilities and cities to manage energy and water, today released its 2024 Resourcefulness Insight Report (the report) that explores the current trends and future expectations of artificial intelligence (AI) and machine learning (ML) on utility operations and strategies. The report, Exploring AI for Utilities: The Promise and Challenges of Artificial Intelligence, summarizes key findings from 600 utility executives from the United States, Canada, France, United Kingdom, India and Australia about the strategic importance of AI and ML in addressing industry challenges.This advertisement has not loaded yet, but your article continues below.THIS CONTENT IS RESERVED FOR SUBSCRIBERS ONLYSubscribe now to read the latest news in your city and across Canada.Exclusive articles from Barbara Shecter, Joe O'Connor, Gabriel Friedman, and others.Daily content from Financial Times, the world's leading global business publication.Unlimited online access to read articles from Financial Post, National Post and 15 news sites across Canada with one account.National Post ePaper, an electronic replica of the print edition to view on any device, share and comment on.Daily puzzles, including the New York Times Crossword.SUBSCRIBE TO UNLOCK MORE ARTICLESSubscribe now to read the latest news in your city and across Canada.Exclusive articles from Barbara Shecter, Joe O'Connor, Gabriel Friedman and others.Daily content from Financial Times, the world's leading global business publication.Unlimited online access to read articles from Financial Post, National Post and 15 news sites across Canada with one account.National 
Post ePaper, an electronic replica of the print edition to view on any device, share and comment on.Daily puzzles, including the New York Times Crossword.REGISTER / SIGN IN TO UNLOCK MORE ARTICLESCreate an account or sign in to continue with your reading experience.Access articles from across Canada with one account.Share your thoughts and join the conversation in the comments.Enjoy additional articles per month.Get email updates from your favourite authors.THIS ARTICLE IS FREE TO READ REGISTER TO UNLOCK.Create an account or sign in to continue with your reading experience.Access articles from across Canada with one accountShare your thoughts and join the conversation in the commentsEnjoy additional articles per monthGet email updates from your favourite authorsOur research provides an in-depth look at how utilities are grappling with historic growth in electricity demand while keeping safety front and center, said Marina Donovan, vice president of global marketing, ESG and public affairs at Itron. As utilities continue their journey toward a more connected and intelligent grid, the integration of AI and ML becomes both a necessity and a strategic advantage to meeting todays challenges.As demand for electricity continues to growwhich utility executives indicate is driven by data centers for cloud computing and AI (40%), new construction (32%) and sustainability initiatives like solar and electric vehicles (31%)utilities are facing unprecedented challenges in maintaining reliable and resilient grid operations. 
Published today in conjunction with Itron Inspire 2024, the companys premier customer-focused event, the report reveals that utilities identify AI and ML as crucial technologies for addressing high priority operational challenges.The greatest challenge and opportunity facing the utility sector is figuring out how to double or even triple the size of the grid over the next two decades while maintaining affordability, improving reliability, and decreasing its environmental impact. The findings from Itrons Resourcefulness Insight Report highlight the importance of strategic, forward-thinking investments in technology and infrastructure to ensure an accessible, safe and sustainable energy future for all, said Dr. Michael Webber, Sid Richardson Chair in Public Affairs and the John J. McKetta Centennial Energy Chair in Engineering at the University of Texas at Austin.Get the latest headlines, breaking news and columns.By signing up you consent to receive the above newsletter from Postmedia Network Inc.We encountered an issue signing you up. 
Additional findings include:
- Utilities' top technology investment priorities over the next five years to address these challenges include distributed energy resource management systems (DERMS) (26%), sensors (23%), consumer engagement tools (22%), and low-voltage distribution network management (20%).
- 82% of utilities are in the process of adopting AI/ML.
- Enhancing safety is the top use case for AI/ML (49%), followed by cyberthreat detection (34%) and predictive maintenance (33%).
- Barriers to developing and deploying AI/ML solutions are lack of expertise (43%), high investment/integration costs (41%), data infrastructure, governance, standardization and scalability (40%) and risks of unproven technology (39%).

The report highlights the strategic importance of AI and ML, particularly through the adoption and enhancement of grid edge intelligence, which provides greater insight and control across the distribution network. Utility executives identify many benefits associated with AI, ML and grid edge intelligence. All countries surveyed agreed that the top area where AI and ML can have the biggest impact is detecting and managing potentially dangerous situations.

"As utilities improve intelligence at the grid edge through the adoption of AI and ML, they are optimizing operations and paving the way for a more sustainable and resilient future," added Donovan. "Our findings reveal that the age of AI for utilities is here, and the strategic deployment of these technologies is crucial for enhancing safety, improving consumer engagement and achieving long-term sustainability goals. The commitment of utilities to harness the power of AI and ML will be a driving force in shaping the next generation of smart utility management."

To download a full copy of the Itron Resourcefulness Insight Report as well as the research, visit www.itron.com/resourcefulness.

Methodology
The 2024 Resourcefulness Insight Report consolidates perceptions of utility executives on topics relating to AI, ML and edge intelligence. A total of 600 online surveys were completed in June 2024 by utility executives in Australia, Canada, France, India, the United Kingdom and the United States. Respondents were mid-level managers or higher who have input on decision making for electric, gas and water utilities. Their areas of responsibility ranged from selling products, working in operations, strategic planning, customer service and a variety of administrative services.

About Itron
Itron is a proven global leader in energy, water, smart city, IIoT and intelligent infrastructure services. For utilities, cities and society, we build innovative systems, create new efficiencies, connect communities, encourage conservation and increase resourcefulness. By safeguarding our invaluable natural resources today and tomorrow, we improve the quality of life for people around the world. Join us: www.itron.com.

Itron® is a registered trademark of Itron, Inc. in the United States and other countries and regions.
All third-party trademarks are property of their respective owners and any usage herein does not suggest or imply any relationship between Itron and the third party unless expressly stated.

Media Contact
Alison Mallahan
Senior Manager, Corporate Communications
(509) [email protected]

A photo accompanying this announcement is available at https://www.globenewswire.com/NewsRoom/AttachmentNg/0975568b-968c-4935-bef4-d76c4d4fd098
Decision Making/Process Automation
Management/Business and Financial Operations
null
null
null
null
null
null
news
[email protected] (Yale News)
Can we mitigate AI’s environmental impacts?
In a Q&A, Yale School of the Environment’s Yuan Yao discusses the environmental opportunities and harms posed by AI.
https://environment.yale.edu/news/article/can-we-mitigate-ais-environmental-impacts
https://environment.yale…0e&itok=5lzx6els
2024-10-16T14:24:03Z
Artificial intelligence (AI) is powered by massive amounts of energy, and much of the energy is from the burning of fossil fuels, which is the biggest contributor to global warming. The International Energy Agency estimates that by 2026, electricity consumption by data centers, cryptocurrency, and artificial intelligence could reach 4% of annual global energy usage, roughly equal to the amount of electricity used by the entire country of Japan. Yuan Yao, associate professor of industrial ecology and sustainable systems at the Yale School of the Environment, is part of a multi-institutional National Science Foundation (NSF)-led research initiative aimed at reducing the carbon footprint of computing by 45% within the next decade. She spoke with YSE News about the environmental opportunities and harms posed by AI.

Q: How does AI energy use impact the environment?

Powering computing systems for AI requires energy, such as electricity. Electricity generation emits pollutants, especially in regions where fossil fuels dominate electricity generation. Energy use impacts the environment through fossil fuel combustion that generates greenhouse gas emissions, contributing to climate change. Fossil fuel combustion also releases pollutants that cause air and water pollution, respiratory issues, and acid rain. Extraction of fossil fuels and energy infrastructure can disrupt ecosystems and contribute to environmental degradation. Transitioning to renewable energy sources, such as solar and wind, and adopting energy-efficient practices can mitigate these negative impacts.

Q: What are the other ways AI impacts the environment?

Besides its energy usage, AI needs hardware devices. The production, transport, maintenance, and disposal of these hardware components, such as servers and data centers, require additional energy use and substantial materials and natural resources, for example, cobalt, silicon, gold, and many other metals.
The mining and production of these metals used in AI hardware can lead to soil erosion and pollution. Many electronics are not properly recycled, leading to electronic waste that can cause further pollution. The materials used in these devices can contaminate soil and water when not disposed of correctly.

Q: Given AI's impact on the environment, are there positives for the environment?

The applications of AI can bring environmental benefits. A few years ago, we published a paper that examined the benefits of AI applications in the chemical industry. AI can enhance energy efficiency and reduce energy usage, and it assists in environmental monitoring and management, such as tracking air emissions. Moreover, AI supports process and supply chain optimization to minimize environmental impacts. Additionally, my research group has been utilizing AI to support life cycle assessment (LCA), a standardized method to evaluate the environmental impacts of a product's entire life cycle. AI enables us to assess these impacts for products made from diverse biomass species, a task that is quite time-consuming using traditional methods.

Q: Tell us about the Expeditions in Computing program.

The project, funded by a $12-million grant from the U.S. National Science Foundation, focuses on reducing the carbon footprint of computing by 45% within the next decade. It will pursue three main goals: create standardized protocols to measure and report carbon costs over the lifetime of computing devices; develop ways to reduce the carbon footprint of computing; and explore ways to reduce the carbon emissions of fast-growing applications, such as artificial intelligence and virtual reality systems. I will lead efforts on carbon modeling, accounting, and validation of semiconductors and computer systems, covering both embodied and operational emissions.

Q: How will the project address AI's environmental costs?

We need transparent, robust methods to assess AI's environmental impacts. Without accurate quantification, it is impossible to mitigate and address these challenges effectively. Our NSF-sponsored Carbon Connect project aims to tackle this issue by developing transparent carbon accounting tools. Specifically, my lab focuses on developing robust life cycle assessment methods tailored for computing systems. Doing this will allow us to perform holistic environmental impact assessments of AI and identify potential solutions to mitigate these impacts.
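The life cycle framing Yao describes separates operational emissions (electricity consumed multiplied by the grid's carbon intensity) from embodied emissions amortized over the hardware's service life. A minimal sketch of that accounting; all numbers below are hypothetical placeholders for illustration, not figures from the interview or the Carbon Connect project:

```python
# Illustrative carbon accounting for a computing workload.
# All numbers are hypothetical placeholders, not real measurements.

def operational_emissions_kg(energy_kwh: float, grid_intensity_kg_per_kwh: float) -> float:
    """Operational CO2: electricity consumed times the grid's carbon intensity."""
    return energy_kwh * grid_intensity_kg_per_kwh

def embodied_emissions_kg(manufacturing_kg: float, lifetime_years: float, years_used: float) -> float:
    """Embodied CO2: manufacturing footprint amortized over the hardware's service life."""
    return manufacturing_kg * (years_used / lifetime_years)

# Hypothetical example: one server running for a year at an average 1 kW draw
# on a grid emitting 0.4 kg CO2e per kWh, with a 1,500 kg manufacturing
# footprint amortized over a five-year life.
op = operational_emissions_kg(energy_kwh=8760.0, grid_intensity_kg_per_kwh=0.4)
emb = embodied_emissions_kg(manufacturing_kg=1500.0, lifetime_years=5.0, years_used=1.0)
print(f"operational: {op:.0f} kg, embodied: {emb:.0f} kg, total: {op + emb:.0f} kg CO2e")
```

The split matters because, as Yao notes, hardware production and disposal carry their own footprint: a holistic assessment has to sum both terms, not just metered electricity.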
Unknown
Life, Physical, and Social Science
null
null
null
null
null
null
news
Haritha Murari
AI for Climate Change: Innovative Models for Predicting Environmental Impact
Climate change presents one of the most significant challenges of our time, demanding immediate and sustained action across all sectors of society. As temperatures rise, ice caps melt, and weather patterns become increasingly unpredictable, the urgency to understand, mitigate, and adapt to these changes becomes critical. In this context, innovative technologies, particularly artificial intelligence (AI), […]The post AI for Climate Change: Innovative Models for Predicting Environmental Impact appeared first on DATAVERSITY.
https://www.dataversity.net/ai-for-climate-change-innovative-models-for-predicting-environmental-impact/
https://d3an9kf42ylj3p.c…rari_600x448.png
2024-10-15T07:25:00Z
Climate change presents one of the most significant challenges of our time, demanding immediate and sustained action across all sectors of society. As temperatures rise, ice caps melt, and weather patterns become increasingly unpredictable, the urgency to understand, mitigate, and adapt to these changes becomes critical. In this context, innovative technologies, particularly artificial intelligence (AI), are emerging as essential tools in the battle against climate change. AI's capacity to analyze vast datasets and model complex systems makes it uniquely suited to mitigate the effects of climate change and adapt to its inevitable impacts.

AI has carved out a pivotal role in environmental science, providing researchers and policymakers with previously unattainable insights. Through its advanced computational power, AI can process and analyze the extensive and complex datasets characteristic of environmental studies, from satellite imagery tracking deforestation to data from ocean buoys monitoring sea temperatures. This capability allows for a more nuanced understanding of environmental changes and aids in predicting future conditions more accurately.

At its core, AI involves creating algorithms capable of processing information and making decisions or predictions, closely mirroring human cognitive functions. In the realm of climate science, AI excels in data analysis and in modeling intricate systems influenced by a myriad of interacting factors. By applying pattern recognition and predictive analytics techniques, AI systems can identify trends and anomalies that may elude human analysts. The types of AI models most prevalent in climate-related studies include machine learning and deep learning.

Machine learning: Without explicit programming, these models learn from historical data to make predictions or decisions.
Machine learning can be used in climate science to forecast weather events, predict solar and wind power generation based on weather data, and optimize systems for energy efficiency.

Deep learning: A subset of machine learning that uses neural networks with many layers (hence "deep") to analyze various factors in the climate system. These models are particularly effective at processing unstructured data, such as images and text, which are abundant in climate studies, for example satellite images for tracking storm paths or land-use changes.

Integrating these AI technologies into climate science has not only enhanced existing analytical tools. It has also enabled the development of new methods that can provide deeper insights into the complex dynamics of climate systems. This has opened up new avenues for both the mitigation of climate change effects and adaptation strategies, thereby illustrating the indispensable role of AI in addressing one of the most pressing issues of our age.

AI's capabilities extend far beyond general data processing; they are actively reshaping how we understand and respond to the intricacies of climate dynamics. For instance, sophisticated AI models are now pivotal in predicting weather patterns and climate anomalies with greater accuracy and timeliness than ever before. These models harness vast arrays of meteorological data to forecast extreme weather events, such as hurricanes and heatwaves, allowing for more effective early warning systems and preparedness strategies.

A notable example is the use of machine learning techniques to enhance the predictive accuracy of hurricane tracking. AI models can predict a hurricane's path and intensity by analyzing historical data, satellite imagery, and ocean temperature patterns, which are critical for timely evacuations and resource allocation.
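The machine-learning pattern described above, learning a mapping from weather variables to power output from historical data rather than programming the relation in, can be sketched with an ordinary least-squares fit. This is an illustrative toy, not a production forecasting model: the synthetic data, the feature choices, and the linear model are all assumptions made for demonstration.

```python
import numpy as np

# Synthetic "historical" data: solar irradiance (W/m^2) and cloud cover (0-1)
# versus observed plant output (kW). The generating relation is invented for the demo.
rng = np.random.default_rng(0)
irradiance = rng.uniform(100.0, 1000.0, size=200)
cloud_cover = rng.uniform(0.0, 1.0, size=200)
output_kw = 0.05 * irradiance * (1.0 - 0.6 * cloud_cover)

# Fit output ~ irradiance + irradiance*cloud_cover + intercept by least squares,
# i.e. learn the relation from historical examples.
X = np.column_stack([irradiance, irradiance * cloud_cover, np.ones_like(irradiance)])
coef, *_ = np.linalg.lstsq(X, output_kw, rcond=None)

def predict(irr: float, cc: float) -> float:
    """Forecast plant output (kW) from tomorrow's predicted weather."""
    return coef[0] * irr + coef[1] * irr * cc + coef[2]

print(round(predict(800.0, 0.2), 2))
```

Because this synthetic data is noise-free and exactly linear in the chosen features, the fit recovers the generating coefficients, so `predict(800.0, 0.2)` returns 35.2; real forecasting systems face noisy data and nonlinear physics, which is where the deep-learning models the article mentions come in.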
Similarly, AI has been instrumental in detecting and analyzing climate anomalies such as the unexpected El Niño and La Niña effects, providing insights crucial for agricultural planning and water resource management.

AI models are equally transformative in terms of the carbon cycle and broader environmental monitoring. For example, researchers have developed AI systems that can estimate atmospheric carbon flux by analyzing satellite and sensor data. This allows for more accurate measurements of how much carbon dioxide ecosystems absorb and release, informing climate policy and carbon trading markets.

AI's role in climate mitigation is perhaps most visible in renewable energy management. AI algorithms are at the forefront of optimizing the performance of power grids that integrate multiple renewable energy sources. These systems can forecast power generation from solar and wind installations through predictive analytics, adjusting grid operations to balance supply with fluctuating demand. Additionally, AI enhances energy storage systems, predicting the best times to store energy and feed it back into the grid, maximizing efficiency and stability.

Another critical area in which AI significantly impacts industry is the reduction of transportation and industrial process emissions. In transportation, AI applications range from optimizing routes for electric vehicles to reduce energy consumption to managing traffic flows in urban areas to minimize idle times. For industries, AI-driven systems can monitor and adjust processes in real time to ensure maximum energy efficiency and minimum waste production.
For instance, smart factories use AI models to predict machine failures and optimize maintenance schedules, thereby reducing downtime and energy waste.

These applications underscore AI's pivotal role in direct climate action and highlight how integrating advanced technologies into environmental strategies can lead to substantial gains in our efforts to mitigate climate impacts. As we refine these technologies and expand their applications, the potential for significant change grows, marking a hopeful path forward in the ongoing battle against climate change.

Artificial intelligence is not just a tool for mitigation but is also pivotal in adapting to changing climate conditions. AI enhances disaster response and management by providing accurate and timely information, which is crucial for managing the effects of natural disasters like floods, hurricanes, and droughts. For instance, AI models that analyze weather data, historical event data, and geographical information can accurately predict where extreme weather events are likely to strike, facilitating better preparedness and resource allocation.

In agriculture, AI's impact is transformative, helping to adapt to changing climatic conditions that affect food security worldwide. Through advanced imaging from drones or satellites, AI-enabled systems can monitor crop health, predict yields, and even detect plant diseases early. Furthermore, AI-driven robotic systems can provide targeted pest control, reducing the need for widespread pesticide use and minimizing environmental impact while maximizing crop productivity.

However, deploying AI in climate science is not without its challenges. Data quality remains a significant concern, as AI systems require large amounts of high-quality data to function optimally. Incomplete or biased data can lead to inaccurate models, which could lead to poor decision-making.
Scalability is another challenge, as solutions proven in small-scale studies often encounter obstacles when scaled to regional or global levels.

Ethically, using AI in climate prediction and decision-making raises important questions about transparency and accountability. AI-influenced decision-making processes must be scrutinized to ensure they do not reinforce existing inequalities or bypass public scrutiny. Additionally, the potential for AI systems to make autonomous decisions in areas impacting human lives demands rigorous oversight and ethical considerations.

Looking ahead, AI is poised to continue its role in climate science and expand it dramatically. Future developments could include more sophisticated neural networks that can model climate systems with greater accuracy, or new forms of machine learning capable of working with even smaller datasets to predict changes in climate patterns. AI's ability to integrate data from disparate sources could lead to more comprehensive models that can better predict and manage the effects of climate change.

International collaboration will be crucial in harnessing the full potential of AI in climate science. Cross-border sharing of research, data, and innovations can expedite the development of solutions and guarantee their availability where most needed. Such cooperation can also help to standardize data collection and analysis methods, making AI tools more robust and widely applicable.

AI's potential to impact climate change mitigation and adaptation is growing. As this technology advances, it will become an even more integral tool in combating climate change. However, the success of AI in this field will heavily rely on continued research and development, alongside careful consideration of the ethical implications of its use. By addressing these challenges head-on, the scientific community can ensure that AI serves as a force for good in the urgent fight against climate change.
Prediction/Decision Making/Content Synthesis
Life, Physical, and Social Science/Education, Training, and Library/Legal
null
null
null
null
null
null
news
Kristen Yee
How AI and Accelerated Computing Drive Energy Efficiency
AI isn’t just about building smarter machines. It’s about building a greener world. From optimizing energy use to reducing emissions, artificial intelligence and accelerated computing are helping industries tackle some of the world’s toughest environmental challenges. As Joshua Parker, NVIDIA’s Senior Director of Corporate Sustainability, explains on the latest edition of NVIDIA’s AI Podcast, these technologies are powering a new era of energy efficiency.
https://blogs.nvidia.com/blog/ai-energy-efficiency/
https://blogs.nvidia.com…ixabay-x1280.jpg
2024-10-03T13:00:00Z
AI isn't just about building smarter machines. It's about building a greener world.

From optimizing energy use to reducing emissions, artificial intelligence and accelerated computing are helping industries tackle some of the world's toughest environmental challenges. As Joshua Parker, NVIDIA's Senior Director of Corporate Sustainability, explains on the latest edition of NVIDIA's AI Podcast, these technologies are powering a new era of energy efficiency.

Can AI Help Reduce Energy Consumption?

Yes. And it's doing it in ways that might surprise you. AI systems themselves use energy, sure, but the big story is how AI and accelerated computing are helping other systems save energy.

Take data centers, for instance. They're the backbone of AI, housing the powerful systems that crunch the data needed for AI to work. "Globally, data centers account for about 2% of total energy consumption, and AI-specific centers represent only a tiny fraction of that," Parker explains.

Despite this, AI's real superpower lies in its ability to optimize. How? By using accelerated computing platforms that combine GPUs and CPUs. GPUs (graphics processing units) are designed to handle complex computations quickly and efficiently. In fact, these systems can be "up to 20 times more energy-efficient than traditional CPU-only systems," Parker notes. That's not just good for tech companies; it's good for the environment, too.

What is Accelerated Computing?

At its core, accelerated computing is about doing more with less. It involves using specialized hardware, like GPUs, to perform tasks faster and with less energy. This isn't just theoretical. "Over the last eight years, AI systems running on accelerated computing platforms have become 45,000 times more energy-efficient," Parker said. That's a staggering leap in performance, driven by improvements in both hardware and software.

So why does this matter?
It matters because, as AI becomes more widespread, the demand for computing power grows. Accelerated computing helps companies scale their AI operations without consuming massive amounts of energy. This energy efficiency is key to AI's ability to tackle some of today's biggest sustainability challenges.

AI in Action: Tackling Climate Change

AI isn't just saving energy; it's helping to fight climate change. For instance, "AI-enhanced weather forecasting is becoming more accurate, allowing industries and governments to prepare for climate-related events like hurricanes or floods," Parker explains. The better we can predict these events, the better we can prepare for them, which means fewer resources wasted and less damage done.

Another key area is the rise of digital twins: virtual models of physical environments. These AI-powered simulations allow companies to optimize energy consumption in real time, without having to make costly changes in the physical world. "In one case, using a digital twin helped a company achieve a 10% reduction in energy use," Parker said. "That may sound small, but scale it across industries and the impact is huge."

AI is also playing a crucial role in developing new materials for renewable energy technologies like solar panels and electric vehicles, accelerating the transition to clean energy.

Can AI Make Data Centers More Sustainable?

Here's the thing: AI needs data centers to operate, and as AI grows, so does the demand for computing power. But data centers don't have to be energy hogs. In fact, they can be part of the sustainability solution.

One major innovation is direct-to-chip liquid cooling.
This technology allows data centers to cool their systems much more efficiently than traditional air conditioning methods, which are often energy-intensive. "By cooling directly at the chip level, this method saves energy, helping data centers stay cool without guzzling power," Parker explains.

As AI scales up, the future of data centers will depend on designing for energy efficiency from the ground up. That means integrating renewable energy, using energy storage solutions, and continuing to innovate with cooling technologies. "The goal is to create green data centers that can meet the world's growing demand for compute power without increasing their carbon footprint," Parker says.

The Role of AI in Building a Sustainable Future

AI is not just a tool for optimizing systems; it's a driver of sustainable innovation. From improving the efficiency of energy grids to enhancing supply chain logistics, AI is leading the charge in reducing waste and emissions.

Let's look at energy grids. AI can monitor and adjust energy distribution in real time, ensuring that resources are allocated where they're needed most, reducing waste. "This is particularly important as the world moves toward renewable energy, which can be less predictable than traditional sources like coal or natural gas," Parker said.

AI is also helping industries reduce their carbon footprints.
By optimizing routes and predicting demand more accurately, AI can cut down on fuel use and emissions in the logistics and transportation sectors.

Looking to the future, AI's role in promoting sustainability is only going to grow. "As technologies become more energy-efficient and AI applications expand, we can expect AI to play a crucial role in helping industries meet their sustainability goals," Parker said. It's not just about making AI greener; it's about using AI to make the world greener.

AI and accelerated computing are reshaping how we think about energy and sustainability. With their ability to optimize processes, reduce energy waste, and drive innovations in clean technology, these technologies are essential tools for creating a sustainable future. As Parker explains on NVIDIA's AI Podcast, AI's potential to save energy and combat climate change is vast, and we're only just beginning to tap into it.

As AI continues to revolutionize industries and drive sustainability, there's no better time to dive deeper into its transformative potential. If you're eager to explore how AI and accelerated computing are shaping the future of energy efficiency and climate solutions, join us at the NVIDIA AI Summit on October 9, 2024. Register to gain exclusive insights into the innovations that are powering a sustainable world. Don't miss your chance to learn from the leading minds in AI and sustainability. Let's create a greener future together.
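One way to read the 45,000x efficiency figure Parker cites is as a compounded annual improvement, which is easy to back out. The eight-year window and the 45,000x total come from the article; the per-year factor below is derived arithmetic, offered only as a sanity check:

```python
# Back out the implied compounded annual efficiency gain from the article's
# cited figures: 45,000x improvement over eight years.
total_gain = 45_000.0
years = 8
annual_factor = total_gain ** (1 / years)
print(f"~{annual_factor:.1f}x more energy-efficient per year")
```

The implied factor comes out to roughly 3.8x per year, which conveys how steep the claimed hardware-plus-software improvement curve is compared with, say, a doubling every two years.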
Decision Making/Process Automation
Management/Life, Physical, and Social Science
null
null
null
null
null
null
news
Maggie Harrison Dupré
Yet Another Tech Giant Throws Its Weight Behind Nuclear As AI's Monstrous Power Demands Spike
Landmark Deal Google just inked a major nuclear energy deal focused on powering AI efforts without destroying the planet in the process. On Monday, Google announced a landmark agreement with the nuclear startup Kairos Power to purchase energy produced by seven yet-to-be-built small modular nuclear reactors. The companies claim the deal aims to add upwards […]
https://futurism.com/the-byte/google-nuclear-power
https://wordpress-assets…uclear-power.jpg
2024-10-16T14:28:19Z
The AI industry is going all in.

Landmark Deal

Google just inked a major nuclear energy deal focused on powering AI efforts without destroying the planet in the process. On Monday, Google announced a landmark agreement with nuclear startup Kairos Power to purchase energy produced by seven yet-to-be-built small modular nuclear reactors. The companies claim the deal aims to add upwards of 500 megawatts "of new 24/7 carbon-free power to US electricity grids"; that is, over a decade from now, when Kairos promises the reactors will be built.

AI is an incredibly energy-intensive technology. And though its makers often trumpet AI's promise as a climate crisis-solving innovation, the AI industry's massive reliance on nonrenewable sources to power the energy-guzzling tech has caused companies including Google and Microsoft to miss important climate targets. Google and Kairos are touting the nuclear deal as a win for overall tech sustainability and, most importantly, sustainable AI development.

"This agreement helps accelerate a new technology to meet energy needs cleanly and reliably," wrote Michael Terrell, Google's senior director for energy and climate, in a company blog post, "and unlock the full potential of AI for everyone."

Big Promises

In the announcement, Google and Kairos claim that the first of the modular nuclear power terminals will be up and running by 2030, with all modules completed by 2035. As Bloomberg reported in May, Google stopped mass-buying carbon offset tokens this year amid its emissions-intensive AI efforts, and as a result, no longer claims to be carbon neutral.
The tech giant is still promising to be net-zero carbon by 2030, though as it stands, AI is making that mission increasingly unlikely. In his blog post, Terrell noted that the goal of the deal is to "complement" Google's use "of variable renewables, like solar and wind, and help us reach our ambitious 24/7 carbon-free energy and net-zero goals."

Nuclear Revival

Google isn't the first AI company to look to carbon-free nuclear energy as a means of offsetting AI's serious fossil fuel problem. In another high-profile plan to put nuclear back on the energy menu, Microsoft announced a deal last month to revitalize the decades-old plant at Pennsylvania's notorious Three Mile Island, the site of the worst nuclear power accident in US history. OpenAI CEO Sam Altman is also deeply invested in conventional nuclear power as well as still-unrealized fusion power.

Nuclear power has long had a public relations problem in the US. But now, as powerful tech giants scramble to power AI without taking a wrecking ball to the Earth's climate, nuclear is making a serious comeback, though whether Google and its peers will be able to power their AI models and still meet essential climate targets remains precariously unclear.

More on AI and nuclear: Microsoft Working to Reopen Three Mile Island to Power Huge AI Datacenters
Unknown
Unknown
null
null
null
null
null
null
news
Alyssa Hughes (2adaptive Llc Dba 2a Consulting)
AI-powered microgrids facilitate energy resilience and equity in regional communities
When augmented with AI, small power grids can create opportunities for decentralized, equitable, and resilient power. Microsoft’s collaboration shows AI’s potential to optimize energy distribution and empower communities:
https://www.microsoft.com/en-us/research/blog/ai-powered-microgrids-facilitate-energy-resilience-and-equity-in-regional-communities/
https://www.microsoft.co…B-1200x627-1.jpg
2024-11-01T21:00:45Z
The rise of affordable small-scale renewable energy, like rooftop solar panels, is reshaping energy systems around the world. This shift away from fossil fuel-powered grids creates new opportunities for energy distribution that prioritize decentralized energy ownership and community empowerment. Despite this progress, centralized energy systems still dominate, often failing to provide vulnerable communities with reliable, affordable renewable energy. In response, Microsoft researchers are collaborating with local communities to explore how AI can enable community-scale energy solutions focused on energy availability and equity as well as decarbonization.

Microgrids, small and localized energy systems, hold promise as a solution to the challenges of centralized energy systems. These microgrids can operate independently from the larger grid, providing participants with resilience and control. Figure 1 shows how these systems integrate renewable energy sources and storage to efficiently manage local energy needs.

Figure 1. An example of the decentralized nature of a microgrid power system

AI improves energy reliability by integrating data about energy consumption, market prices, and weather forecasts, which is necessary when using wind and solar power because they rely on weather conditions. Advanced forecasting predicts renewable energy availability, while AI-driven analytics determine when to generate, store, or sell electricity. This increases efficiency and stabilizes the grid by balancing supply and demand.

When powered by AI, microgrids can also contribute to energy equity. In many rural parts of the US, flat-rate billing models are still common, often leading to unfair pricing. AI-enabled microgrids provide an alternative by allowing communities to pay only for the energy they use.
By analyzing consumption patterns, AI can ensure optimized distribution that promotes equitable pricing and access. These systems also improve resilience during crises, enabling communities to manage energy distribution more effectively and reduce reliance on centralized utilities. AI allows microgrids to predict energy demands, identify system vulnerabilities, and recover quickly during outages.

To explore AI's potential in improving efficiency and equity in energy management, a team of Microsoft researchers collaborated with community organizations on simulations and a case study. They built a tabletop simulator to test whether AI could effectively determine when to generate, store, or sell electricity based on real-time data. The AI model was optimized for resilience and efficiency, using reinforcement learning to control grid and battery processes, enabling microgrids to adapt to changing energy conditions and market dynamics.

This simulation used a theoretical model with external data to show how an AI-driven microgrid could autonomously buy and sell energy based on strategic design parameters. By controlling when the battery is charged and discharged based on energy production and consumption patterns, the model maximized efficiency and maintained local power availability. Figure 2 shows the AI-controlled grid's optimal decisions using open-source data from the California Independent System Operator (CAISO), serving as a proof of concept (PoC) for AI-driven microgrids operating under real-world conditions.

Figure 2. (A) Peak and off-peak net power bought or sold over one week using simulations of the AI controller on historical CAISO data. (B) Peak and off-peak battery levels over one week using simulations of the AI controller on historical CAISO data.
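The generate/store/sell decision logic described above can be illustrated with a deliberately simplified sketch. The greedy rules, field names, and thresholds below are invented for illustration; they stand in for the reinforcement-learning controller the researchers actually used:

```python
from dataclasses import dataclass

@dataclass
class MicrogridState:
    battery_kwh: float    # energy currently stored
    capacity_kwh: float   # battery capacity
    solar_kw: float       # current solar generation
    load_kw: float        # current local demand
    price: float          # grid price, $/kWh
    peak_price: float     # threshold above which selling is attractive

def decide(state: MicrogridState) -> str:
    """Greedy policy: cover local load first, store cheap surplus,
    sell stored energy when prices peak."""
    surplus = state.solar_kw - state.load_kw
    if surplus > 0:
        # Extra solar: store it if there is room, otherwise sell.
        if state.battery_kwh < state.capacity_kwh:
            return "charge"
        return "sell"
    # Local deficit: draw from the battery on peak hours, else buy.
    if state.battery_kwh > 0 and state.price >= state.peak_price:
        return "discharge"
    return "buy"

# Off-peak, sunny, battery has headroom -> store the surplus.
print(decide(MicrogridState(5.0, 10.0, solar_kw=4.0, load_kw=2.0,
                            price=0.10, peak_price=0.30)))  # charge
```

A trained controller replaces these hand-written rules with a policy learned from forecasts and historical prices, but the action space (charge, discharge, buy, sell) is the same.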
Microsoft researchers, in partnership with community-based organizations Remix: The Soul of Innovation, Maverick IQ, and Ayika Solutions, are designing and implementing an AI-powered microgrid system in West Atlanta. Working closely with the Vicars Community Center (VCC) resilience hub, they aim to address challenges faced by the community due to rapid development. West Atlanta, like many Atlanta neighborhoods, faces rising housing prices and energy costs that disproportionately affect long-time residents. Communities relying on centralized grids are more vulnerable to outages, with slow recovery times, highlighting systemic inequalities in energy distribution.

The VCC resilience hub is tackling these issues by helping to establish a solar microgrid for the West Atlanta Watershed Alliance (WAWA) community farm and surrounding neighborhoods. Microsoft researchers and collaborators are integrating AI into the microgrid to achieve energy savings, improve resilience, and create local job opportunities. Figure 3 shows the VCC resilience hub and WAWA community farm powered by the microgrid, highlighting key infrastructure for installing distributed energy resources (DERs).

Figure 3. A and B show the VCC resilience hub, with solar panels (left) and batteries for energy storage (right) – photographs by Erica Holloman-Hill. C and D show the WAWA community farm and community members holding freshly harvested crops.

Microsoft researchers, architects, and community partners held a participatory design session with state and utility representatives to define the project's mission and key metrics. The CDC's Social Vulnerability Index informed the site selection, supporting the project's diversity, equity, and inclusion goals.
A renewable siting survey conducted by community partners identified the VCC as a key resilience hub for solar panel and battery installation. To deliver these benefits, the site first needed upgrades. Older homes required energy-efficiency improvements, such as electrical upgrades and better insulation, before they could be integrated into the microgrid. As a PoC, the team collaborated with community partners to modernize an older home with inefficient energy consumption. Sensors were installed to track energy usage and environmental conditions (Figure 4).

Figure 4. Estimated daily electricity costs based on a home's kilowatt-hour usage between July 29 and August 13, 2023. The data confirms the resident's experience of high energy bills, inefficient heating and cooling, and high humidity in the basement. Used by permission from Erica Holloman-Hill.

Students from Morehouse College used this data to create a digital twin of the home, which provided actionable insights (Figure 5). The analysis confirmed issues like high radon levels and energy drains from outdated appliances. Guided by these findings, the team upgraded the house into a smart home where AI monitors energy and environmental conditions, enabling it to join the microgrid and making it eligible for LEED certification.

Figure 5. Smart electrification: snapshots of the digital twin created for the PoC home. Panel A shows the digital twin for the entire home. Panel B shows detailed views of the first floor and basement. The detailed environmental data (including climatic conditions, appliance-level energy usage, and pollutant levels) provide actionable insights for identifying targeted areas for grid modernization. Used by permission from Erica Holloman-Hill.

To prepare the AI-powered microgrid, Microsoft researchers built a simplified tabletop prototype simulating the setup using real data from the design and siting phases.
This prototype demonstrated the control mechanism's ability to manage DERs (solar panels, batteries, and appliances) and the interface between the microgrid and the larger grid. Figure 6 shows the tabletop model during prototyping. Figure 7 illustrates the results of this simulation, showing power bought and sold and the battery charge-discharge profile. The AI controller made optimal buying and selling decisions, promoting efficiency and reliability.

Figure 7. (A) Peak and off-peak net power bought or sold over one week using AI-controller simulations. (B) Corresponding battery levels.

Erica Holloman-Hill, director of WAWA, CEO of Ayika Solutions, and owner of the PoC home, reflected: "This study helped me understand how our home's outdated condition affects our quality of life. Upgrading homes like mine could make a significant difference. Thanks to partnerships like this one, controlling and sharing the electricity the community generates is within reach, highlighting the potential of AI-supported technologies like microgrids for communities like ours."

Building on the simulation's success, the VCC resilience hub and local organizations are continuing to install solar panels to power the microgrid. AI will play a key role in siting and controlling the system as it expands. Efforts are also underway to establish sustainable financing models and assess homes for modernization to enable broader participation in the microgrid.

The transition to decentralized microgrids offers new opportunities for energy efficiency, with AI playing a critical role in managing these systems. Yet additional efforts are needed for communities to fully realize these benefits. Residents of aging homes are burdened with outdated wiring, inefficient appliances, and poor insulation, factors that drive up energy costs. Their dependence on centralized grids offers little relief, underscoring the need for community-focused energy solutions.
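To get a feel for the kind of charge-discharge profile shown in Figure 7, here is a toy hourly simulation. The prices, capacity, and one-kWh step size are made-up illustration values, not the project's actual parameters:

```python
# Toy battery simulation: store cheap solar, sell stored energy on peaks.
CAPACITY_KWH = 10.0

def simulate(prices, solar, threshold=0.25, battery=5.0):
    """Step through hourly prices and solar output, recording battery level."""
    levels = []
    for price, gen in zip(prices, solar):
        if price >= threshold and battery > 0:
            battery = max(0.0, battery - 1.0)            # sell 1 kWh at peak
        elif gen > 0 and battery < CAPACITY_KWH:
            battery = min(CAPACITY_KWH, battery + gen)   # store solar surplus
        levels.append(battery)
    return levels

# One simplified day: cheap sunny morning, expensive evening peak.
prices = [0.10] * 8 + [0.30] * 4
solar  = [1.0] * 8 + [0.0] * 4
print(simulate(prices, solar))
```

The resulting profile rises while solar is cheap and plentiful, saturates at capacity, then steps down through the evening peak, which is the same qualitative shape as the battery-level panel in Figure 7.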
The West Atlanta project illustrates AI's potential to create resilient, equitable, community-driven energy systems, paving the way for a more inclusive and sustainable future. Microsoft researchers are continuing to collaborate with local organizations to promote smarter energy management. For additional details, please review the project report.

I would like to thank all the collaborators on these projects: West Atlanta microgrid: Erica L. Holloman-Hill, John Jordan Jr, Markese Bryant. I also want to thank Karin Strauss for reviewing and providing feedback on this blog post; Andalib Samandari, the intern who supported this project; Vaishnavi Ranganathan for helping to brainstorm throughout the project; the AI & Society Fellows program for supporting projects in this domain; and Microsoft's Datacenter Community Affairs team, Jon McKenley and Kelly Lanier Arnold, for supporting the project in West Atlanta.
Detection and Monitoring/Prediction/Content Synthesis
Unknown
null
null
null
null
null
null
news
antfie
Show HN: FoxBot – a personal robot, a simple RSS aggregator to Slack
I've been running this for a while now and decided to make it public because I find it quite handy. It runs on a Raspberry Pi Zero and enables me to not have to check the news all the time. I get these aggregated hourly bursts of notifications for things I care about. There is a bunch of other code I need to bring over from when it was a private project, such as web page scraping etc, but as it stands I think it's pretty useful.Comments URL: https://news.ycombinator.com/item?id=41940507Points: 1# Comments: 0
https://github.com/antfie/FoxBot
https://opengraph.githubassets.com/6aac7a0635844d567c13db1f00337f6dc26504c49a0791626a31303334d73e68/antfie/FoxBot
2024-10-24T22:30:30Z
If a fox was a robot it would be fantastic. This is your very own personal robot, because you are fantastic. FoxBot is your AI-free personal assistant, diligently doing your chores in the background and letting you know about things important to you.

- Poll your RSS feeds and notify you of relevant topics
- Detect changes to websites you care about
- Daily reminders throughout the day about things to be mindful of, like drinking water
- Countdown timers

You can run this wherever you like. Just download the appropriate binary from here. In the console you would see something like this. However, FoxBot really shines when you use it as a Slack bot.

There is a config.yaml file which will be generated on first run.

We welcome fixes, features, and donations. If you need custom features, integrations, or support, we can help. Just email us at [email protected]. We currently have the following premium features available:

- Monitoring share prices with buy/sell notifications

The following is a non-committal list of items we want to work through:

- Document how to deploy to Raspberry Pi
- Document the configuration file
- Speed test functionality, ping, ICMP
- Reduce noises and notifications at night when sleeping
- Consider daily summaries instead of regular updates
- Solar panel monitoring
- Weather
- Home automation?

FoxBot was created by Anthony Fielding. Alert sounds by Material Design (Google), which are licensed under CC-BY 4.0.
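The core aggregation loop can be sketched in a few lines, under the assumption that an RSS feed is polled and matching items are batched into a single Slack incoming-webhook message. FoxBot itself ships as a Go binary, so this Python version only illustrates the idea:

```python
import json
import urllib.request
import xml.etree.ElementTree as ET

def matching_items(rss_xml: str, topics: list[str]) -> list[dict]:
    """Parse an RSS 2.0 feed and keep items whose title mentions a topic."""
    hits = []
    for item in ET.fromstring(rss_xml).iter("item"):
        title = item.findtext("title", default="")
        if any(t.lower() in title.lower() for t in topics):
            hits.append({"title": title, "link": item.findtext("link", default="")})
    return hits

def notify_slack(webhook_url: str, items: list[dict]) -> None:
    """Send one aggregated Slack message per polling cycle."""
    text = "\n".join(f"* {i['title']} {i['link']}" for i in items)
    req = urllib.request.Request(
        webhook_url,
        data=json.dumps({"text": text}).encode(),
        headers={"Content-Type": "application/json"},
    )
    urllib.request.urlopen(req)

feed = """<rss><channel>
<item><title>Solar panel prices fall</title><link>https://example.com/a</link></item>
<item><title>Sports roundup</title><link>https://example.com/b</link></item>
</channel></rss>"""
print(matching_items(feed, ["solar"]))
```

Batching the matches into one message per cycle is what produces the "aggregated hourly bursts of notifications" the author describes, rather than a ping per item.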
Information Retrieval Or Search/Process Automation
Unknown
null
null
null
null
null
null
news
Cody Corrall
OpenAI closes the largest VC round of all time | TechCrunch
Welcome back to Week in Review. This week, we’re diving into OpenAI’s $6.6 billion fundraising round, the fifth Cybertruck recall in less than a year, and
https://techcrunch.com/2024/10/05/openai-closes-the-largest-vc-round-of-all-time/
https://techcrunch.com/w…?resize=1200,900
2024-10-05T20:16:21Z
Welcome back to Week in Review. This week, we're diving into OpenAI's $6.6 billion fundraising round, the fifth Cybertruck recall in less than a year, and a neat project that's Shazam-ing songs heard on a San Francisco street. Let's get into it.

OpenAI closed the largest VC round of all time this week. The startup announced it raised $6.6 billion in a funding round that values OpenAI at $157 billion post-money. Led by previous investor Thrive Capital, the new cash brings OpenAI's total raised to $17.9 billion, according to Crunchbase. As part of the round, OpenAI also secured a massive credit line.

ElevenLabs is being approached by existing and new investors about a new round, which could value the company as high as $3 billion, TechCrunch has learned. The 2-year-old company specializes in making AI tools to generate synthetic voices for narrating audiobooks as well as real-time video dubbing into other languages.

Elon Musk's X is now worth less than a quarter of its $44 billion purchase price, according to a new estimate from investor Fidelity. The asset manager's Blue Chip Growth Fund now values its stake in X at approximately $4.19 million, implying that it is currently valuing X at about $9.4 billion overall.

This is TechCrunch's Week in Review, where we recap the week's biggest news. Want this delivered as a newsletter to your inbox every Saturday? Sign up here.

Another Cybertruck recall: This time because the rearview camera image may be delayed by two seconds after shifting into reverse, and the display may appear blank for up to eight seconds when the vehicle is in reverse. Read more

Generate infinite Moo Dengs: Meta's latest Movie Gen model turns text prompts into short, relatively realistic videos with sound. It's purely an AI research concept, and, wisely, Meta isn't giving it a public release. Read more

SB 1047 vetoed: California governor Gavin Newsom vetoed controversial AI bill SB 1047.
The bill, which would have made companies liable for implementing safety protocols, was opposed by many in Silicon Valley, including OpenAI. Read more

Analyze this: Meta has clarified that while images and videos captured with Ray-Ban Meta are not used to train its AI, that media falls under a completely different set of policies once you ask Meta AI to analyze them. Read more

The sounds of San Francisco: A solar-powered box with an Android phone running Shazam 24/7 was installed on a San Francisco street pole to identify bops in the wild. The songs are uploaded to a website so you can hear the sounds of the city from wherever you are. Read more

A more secure VPN: The best encrypted VPN is one that you have set up and secured yourself, not one from a paid VPN service. Here's a handy guide on how to make one in 15 minutes. Read more

The anti-productive note-taking app: Napkin is a note-taking app on iOS that wants to stand out against the rest by focusing on mindfulness and mental wellness rather than productivity and utility. Read more

Backlash for Y Combinator: Y Combinator is being criticized for backing PearAI. The startup's founder has openly said that it's a cloned copy of another project, but PearAI's misstep was to slap its own made-up closed license on its project, a license that was written by ChatGPT. Read more

Make iOS 18 work for you: iOS 18 brings significant changes to the Control Center, including the ability to use third-party apps. Here are some iOS 18-ready apps that can make your Control Center more useful. Read more

A new way to interact with ChatGPT: OpenAI has a new Canvas workspace that lets users generate writing or code and have the model suggest edits and offer feedback for a more collaborative workflow. Read more
Content Creation/Digital Assistance
Unknown
null
null
null
null
null
null
news
Heba
Comment on How to Use Generative A.I. to Design Better Scaffolds and Supports by Heba
For all the talk of students using Artificial Intelligence to cheat, we can easily miss the reality that A.I. has made differentiated instruction more feasible for educators. In this week's article and podcast episode, I
https://spencereducation.com/ai-supports/
https://spencereducation…Template.002.png
2024-10-05T09:59:44Z
For all the talk of students using Artificial Intelligence to cheat, we can easily miss the reality that A.I. has made differentiated instruction more feasible for educators. In this week's article and podcast episode, I explore how we can leverage A.I. tools for differentiated instruction. I share ideas for designing scaffolds and supports for ELL / EL students and students with learning differences. We explore an approach a teacher might use in creating leveled readers. But ultimately, if our goal is to empower students, we will need to find ways in the future for students to design, select, and curate scaffolds and supports using machine learning.

Listen to the Podcast

If you enjoy this blog but you'd like to listen to it on the go, just click on the audio below or subscribe via iTunes/Apple Podcasts (ideal for iOS users) or Google Play and Stitcher (ideal for Android users).

https://spencereducation.com/wp-content/uploads/2023/09/AI-and-Scaffolds.mp3

Reaching the Zone of Proximal Development (ZPD)

We all have things that we can do independently but other things that are impossible for us to do. Not to brag, but I'm pretty good at loading a dishwasher. On the other hand, I can't slam dunk a basketball. However, there are also certain things in a middle zone that you can't do on your own quite yet, but you can accomplish with a little help. You might need help from a teacher, a peer group, or a resource. With proper training, hours of practice, and a trampoline, I could eventually slam dunk a basketball. Maybe. Okay, probably not.

Vygotsky described this middle space as the Zone of Proximal Development (ZPD). Here's how ZPD works. At the center, you have the things you can do on your own. On the outside, you have the things you cannot do.
But in this middle zone you have the Zone of Proximal Development, which covers the things you can do with guidance and support.

In 1976, Jerome Bruner applied Vygotsky's theory to the K-12 educational setting with the concept of scaffolding. Here, educators provide supports, called scaffolds, to help students master the learning. Then, like the scaffolds on a building, teachers pull back the supports as students master the knowledge. At this point, the ZPD grows outward as students master new knowledge with new scaffolds.

As an educator, you might also use A.I. for this scaffolding process. It could be as simple as an algorithm with recommended tutorials for students who are struggling. You might use it to design scaffolds from scratch or to modify existing ones. But for something more intensive, students might use generative A.I. as a type of tutor with a back-and-forth question-and-answer (for more on this process, check out the interview I did with Sal Khan from Khan Academy).

Designing Scaffolds for Neurodiverse Students

A few months ago, I had a student who was struggling to keep up with the frantic schedule of our master's program. So many courses started at one date and ended at a different date. We had one-credit courses and three-credit courses and elective classes mixed in. Some were synchronous, others asynchronous. This particular student has ADHD and challenges with executive function. Though she is hard-working, intelligent, and creative, she struggles to keep up with too many details in multiple directions.

During office hours, I met with her and three other students to show how you could use ChatGPT to create a day-by-day course plan. We copied and pasted information from Canvas sites and syllabi. We then had it break down the assignments and projects into smaller chunks with to-do lists. From there, we turned the to-do list into a single, doable, color-coded spreadsheet. We then used the plan to fill in a Google Calendar and set alarms on the phone.
What started with me demonstrating ChatGPT quickly became a conversation among students where they shared coping strategies and talked about what was harder and easier for them. This was focused on the technology, but it was also deeply human. More than anything else, these teacher candidates needed to feel known and affirmed. They needed a peer group of other professionals to say, "You're not alone. We're living in a world that wasn't made for us."

Each student ended up with a personalized approach that reduced extraneous cognitive load and allowed them to stay focused on the learning. One person used a specific calendar app that uses AI to modify the daily schedule. Another person chose a specific to-do list and reminder app that he has been using. We then talked about what it might look like to teach while having ADHD.

At one point, a student said, "I wish I had generative AI when I was in high school. That would have been a game-changer for me."

Another talked about how they might use the approach I had just shared to break down big projects into smaller chunks for students who felt overwhelmed in a project-based learning classroom.

This is just one example of how you might use A.I. to generate structures and scaffolds for neurodiverse students. Here are a few more ideas:

- Providing additional handouts to facilitate task-analysis and executive function
- Using A.I. to help schedule small groups
- Using A.I. speech recognition software as an assistive technology to help students with writing
- Using A.I. image generators to help students who need a more concrete example of what they are learning in class
- Designing targeted skill practice. For example, you might use a chatbot to generate word problems for students who struggle with 2-step equations, or you might use it to create a high-interest non-fiction text at a student's reading level, with sample questions
- Using A.I.
to modify assignments to reduce cognitive load (fewer steps) while encouraging students to still access the grade-level content
- Using A.I. to reduce the amount of work while still maintaining a high challenge level. For example, a student with dyscalculia might need fewer problems but can still master the math content at the same grade level.

None of these supports should replace the goals within an Individualized Education Plan (IEP). We don't want to replace educators with algorithms. We can, however, use A.I. as a starting place for designing more personalized scaffolds and supports. Here, the A.I. platform saves time and makes the differentiation process more feasible for teachers. It works like an assistant to create something general that you can then modify based on your own expertise and knowledge of students.

Using A.I. to Generate Language Scaffolds

We just examined how to help provide supports for neurodiverse students, but what about students who are learning English as speakers of another language (ELL, ESL, ESOL students)? We can use A.I. as an initial starting place for creating language supports. These include:

- Front-loading vocabulary: You can use A.I. to identify some of the Tier 2 and Tier 3 vocabulary that students might need to master. While you'll still need to create a list of vocabulary yourself (and rely on student feedback), the A.I. can be a great starting place. I've found that certain chatbots do a great job defining vocabulary in simple terms and even coming up with example sentences. If you couple this with an A.I. image generator, you can save time in generating front-loaded vocabulary, handouts, and slideshows.
- Providing translation help: While it still works best to partner students with someone who is multilingual, A.I. translators have come a long way. The dynamic aspects of an A.I. bot allow students to interact with the content in their native language while also being exposed to content in English.
This is especially helpful for students who are feeling shy or even scared about speaking a new language in front of their peers.
- Providing leveled sentence stems: This remains a weaker area for A.I., but I am noticing significant improvements in A.I.-generated sentence stems, sentence frames, and clozes. The key is in making the prompts specific and clear.
- Using visuals within the project to help facilitate language development: As A.I.-generated visual art continues to improve, we can potentially create additional visuals that can aid with accessing English.
- Assessing language proficiency: A.I. can work as a formative assessment tool by analyzing a student's speech or writing. This can be particularly useful in assessing language learners who may not have access to a native speaker or who are learning in a remote setting.
- Language practice: Students can provide the A.I. chatbot with the directions to engage in a language role-playing conversation. They can set the purpose, location, and fictional person they want the A.I. to pretend to be. Then, they can practice English with the chatbot.

Notice that a teacher can begin with these A.I.-generated supports but then modify them to suit their context. Teachers might even invite students to help with this modification process. This then frees teachers up to pay attention to a student's affective filter and to finding ways to reduce fear and anxiety.

I've noticed that ELL teachers tend to spend a significant amount of time designing supports and scaffolds. Meanwhile, many ELL students in non-ELL classrooms fail to receive certain supports they need. If we can leverage A.I. to save time in designing scaffolds, we can help students access the content while improving their language development. Ultimately, you know your students best. You know what supports they need. But you can begin with A.I.-generated scaffolds and refine them from there.

Example: Leveled Readers

Imagine you are teaching about industrialization for a seventh-grade social studies class.
You want to have a class Socratic Seminar asking the question, "Was the Industrial Revolution an overall negative or positive thing for our world?"

You might begin with a review of the Industrial Revolution. You could start with front-loaded vocabulary using A.I.-generated vocabulary and definitions connected to public domain images (or even A.I.-generated images). From there, you might review what students have already learned about the Industrial Revolution.

Next, you might have students use an AI chatbot to do a question-and-answer about the Industrial Revolution. Here, they can ask any question they want, find the answers, and then ask follow-up questions.

But at this point, you want students to do some in-depth, interactive reading about the Industrial Revolution. The problem is you've got students who read at multiple grade levels / Lexile levels. Yet, you want them to read independently. Here's where generative A.I. makes differentiation far more feasible. I want to share three different approaches you might take.

#1: A.I.-Generated and Human-Modified

You could start with key points you'd like to cover and then have a generative A.I. create a text written at the 7th grade reading level. It's not a bad start, but it's a little boring, so you edit the text and punch it up with some humor, some bizarre examples, and a few things you think the A.I. missed. You'll also fact-check it to make sure it's accurate, and you'll review it for any bias or loaded language.

From there, the A.I. chatbot can take your modified version and create leveled readings ranging from 3rd grade to 7th grade. Now every student can access the text at a level that matches their fluency level.
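If you script this against a chatbot API, the leveling step is mostly prompt templating. The function below is a hypothetical sketch (the prompt wording, grade range, and sample passage are illustrative, not a prescribed recipe); you would send each generated prompt to whichever model you use:

```python
def leveled_reader_prompt(source_text: str, grade: int) -> str:
    """Template one request: same content, one target reading level,
    plus side-bar vocabulary and critical-thinking question stems."""
    return (
        f"Rewrite the passage below at a grade {grade} reading level. "
        "Keep every key fact and date. Then list five Tier 2/3 vocabulary "
        "words with student-friendly definitions, and three critical-"
        "thinking questions with sentence stems.\n\nPASSAGE:\n" + source_text
    )

# A stand-in for the teacher-edited, fact-checked source text.
passage = "The Industrial Revolution transformed how goods were made..."

# One prompt per level, so every student reads the same content.
prompts = {grade: leveled_reader_prompt(passage, grade) for grade in range(3, 8)}
print(sorted(prompts))  # [3, 4, 5, 6, 7]
```

The teacher remains the editor: the outputs still need the fact-checking and bias review described above before they reach students.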
You might also ask for a set of critical thinking questions (with sentence stems) and some vocabulary that you can put on the side of the text as a quick reference.

#2: Human-Generated and A.I.-Modified

With this model, you might write your own text or you could pull a text that's Creative Commons (like a Wikipedia entry). You could then ask the chatbot to add a few details and change some of the language. Then, the chatbot could create the leveled readers and the vocabulary (both content vocabulary and academic language).

#3: A Mash-Up

With this option, you might provide 2-3 segments of source material and ask the chatbot to create one cohesive piece that includes key details from all three sources. You might even ask for it to quote the sources and cite them. You can then fact-check it, rework the piece, and ultimately send it back to be modified to fit different reading levels.

Notice that this approach treats A.I. as a learning tool. The process is messy, human, and inherently social. Here, the machine learning speeds up the process and makes it more feasible. But you are still the editor, creator, and curator of the content. You are the expert on your students and the supports they need.

Avoiding Learned Helplessness

"I just want you to know that three of our exceptional learners were in tears today. We were working on the STEAM projects, and I thought I provided the right supports but . . ."

"Good," our special education teacher interrupted.

"What do you mean?" I asked.

"They came into their next class and told me all about it," she said.

"I'm confused," I admitted.

"They didn't tell me about the tears. They told me about the struggles they faced, but they also told me all about what they did to get their solar ovens to work," she answered.

"I feel bad, though," I pointed out.

"You shouldn't feel bad at all. Look, they cried because they cared. Do any of your other students cry during projects?"

"Sometimes," I admitted.

"The same should be true of students with learning differences.
They need to experience productive struggle. A lot of them have developed learned helplessness. Teachers with the best of intentions have given them the answers instead of giving a scaffold," she said.

"You really think it's okay that they cried?" I asked.

"My goal has been to get them to be self-directed. I want them to be their own advocates. But I also want them to do so in a way that fosters resilience."

Learned helplessness refers to a psychological phenomenon where students develop a belief that they are incapable of performing certain tasks or learning specific skills. They may become passive, disengaged, or unmotivated in the learning process.

This can happen when a teacher gives too much help too early and fails to encourage productive struggle. It can also happen when there's a lack of support or an overly critical environment. When students repeatedly experience failure or perceive a task as too difficult, they may begin to feel that their efforts are futile.

As we implement A.I. for scaffolding, we need to ensure that it doesn't short-circuit productive struggle. If students get immediate help with any question they have, this might develop into a form of learned helplessness.

As we think about designing supports for students, it will ultimately be about the implementation rather than merely the design of the scaffolds we use. Ultimately, teachers have the relational knowledge to provide the necessary scaffolding to help students master the standards. A.I. simply makes this more feasible.

Empowering Students to Use A.I. for Supports

A few semesters ago, I had a student ask for the transcript from our class Zoom session. He used the chatbot to delete the time stamps and translate it to Spanish.
As a dual language student, he likes using both languages as he wrestles with ideas and compares the transcription to his notes in both languages. I share this story because every time there's a new technology and people are scared about cheating, I always ask, "How are people using this to scaffold their own learning?"

In other words, how might an exceptional learner use this? How might an English Language Learner use this? How might someone who hasn't had the same advantages use this? Because what might seem to some as a chance to cheat might be a game-changer for someone else.

As educators, we can empower students to self-select the scaffolds they use. So, while you might make modifications for specific students (like the previously mentioned checklists or modified assignments), you might also have a bank of different tutorials and scaffolds that students can access if they need additional help. These supports should be available to all students. This approach embodies Universal Design for Learning. A quick teaser here: next week, I'm going to have Katie Novak on my podcast to talk more in-depth about UDL and what it means for K-12 educators.

Built around cognitive neuroscience, UDL is an inclusive educational framework that seeks to remove barriers while also keeping the learning challenging for all students. A UDL approach includes certain paradigm shifts:

- From a deficit mindset to neurodiversity
- From singular accommodations to universally accessible scaffolds and supports
- From a teacher-centric view to a student-centered approach centered on student agency

In the 1990s, Dr.
David Rose and the Center for Applied Special Technology (CAST) articulated the three UDL principles:

- Multiple Means of Representation: Presenting information in different ways (like text, images, videos, or audio).
- Multiple Means of Action and Expression: Allowing students to engage with and demonstrate their knowledge and skills in various ways (such as through writing, speaking, multimedia, or hands-on activities).
- Multiple Means of Engagement: Fostering motivation and interest in learning by offering choices, using relevant content, encouraging collaboration, and employing strategies to keep students engaged.

Note that students should be empowered to select the scaffolds and supports they need. The focus here is on their own agency and autonomy. Students are empowered throughout all three UDL principles.

With A.I., Universal Design is more feasible. We might use A.I.-driven virtual labs or simulations to help students solve challenging problems. We can provide students with options for the media format of their finished products. The students can then leverage A.I. for their creative work. We can use A.I. to design better projects and choice menus. We might even provide access to A.I. tutors. Universal Design for Learning embraces the diversity of all learners so that all students are empowered to become self-directed problem-solvers and lifelong learners, the very skills they will need as they navigate the maze of an uncertain future.

Ultimately, we can't predict how A.I. will change learning. We can observe how it is changing creativity, information literacy, personalized learning, and assessment. But we can't predict what this will look like a decade from now. What we can do is empower our students to be adaptable as they experience these changes. We can help them see how they can use A.I.
as a tool to make the learning more accessible, whether that's with scaffolds and supports or through machine learning for feedback. Get the FREE eBook! Subscribe to my newsletter and get A Beginner’s Guide to Artificial Intelligence in Education. You can also check out other articles, videos, and podcasts in my AI for Education Hub.
Personalization/Content Creation
Education, Training, and Library
null
null
null
null
null
null
news
Farhan N, Kenzu
AI can help predict solar activity more easily: BRIN
The National Research and Innovation Agency (BRIN) said here on Friday that artificial intelligence (AI) can be utilized to study and predict solar ...
https://en.antaranews.com/news/328398/ai-can-help-predict-solar-activity-more-easily-brin
https://cdn.antaranews.c…280924-fik-1.jpg
2024-10-04T11:27:05Z
Jakarta (ANTARA) - The National Research and Innovation Agency (BRIN) said here on Friday that artificial intelligence (AI) can be utilized to study and predict solar activity. According to a researcher at the agency's Space Research Center, Tiar Dani, with the use of AI, Big Data can be obtained easily. He explained that monitoring solar activity with the help of satellites would produce several terabytes of data in one day. However, processing such large volumes of data using conventional methods would be difficult or even impossible. Therefore, through machine learning and deep learning on the data, the process of observing and predicting solar activity would become easier, Dani said. "Generally, deep learning and machine learning can detect solar activity patterns with unprecedented levels of accuracy and speed," he noted. Predicting solar activity will increase researchers' understanding of solar dynamics and help them prepare steps to mitigate the impact of solar storms on the Earth. However, Dani emphasized that data quality is important when studying solar activity with AI. According to him, if researchers use correct and good data to train AI, the predictions will be more accurate. "AI is only a black box: if good data is included, then good results will come out; but if the data is bad, then bad results will come out. We have to be very careful about what data will be used," he explained. He said that continuous research and collaboration are key to speeding up innovation in uncovering the secrets of the sun. "Continuous research and collaboration are very important to obtain significant breakthroughs in uncovering solar secrets," he added. Translator: Farhan N, Kenzu. Editor: Yuni Arisandy Sinaga. Copyright © ANTARA 2024
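The article describes using machine learning to classify solar activity from observational data. As a toy illustration only (not BRIN's actual pipeline, and with synthetic data standing in for real solar measurements), a one-feature logistic classifier can show the basic idea of learning an activity threshold from labeled examples:

```python
import math
import random

def train_logistic(xs, ys, lr=0.1, epochs=2000):
    """Fit a one-feature logistic model sigmoid(w*x + b) by full-batch gradient descent."""
    w, b = 0.0, 0.0
    n = len(xs)
    for _ in range(epochs):
        gw = gb = 0.0
        for x, y in zip(xs, ys):
            p = 1.0 / (1.0 + math.exp(-(w * x + b)))  # predicted probability of "active"
            gw += (p - y) * x
            gb += (p - y)
        w -= lr * gw / n
        b -= lr * gb / n
    return w, b

def predict(w, b, x):
    """Classify as active (True) if the model's probability is at least 0.5."""
    return 1.0 / (1.0 + math.exp(-(w * x + b))) >= 0.5

# Synthetic stand-in data: a normalized activity index (e.g. scaled sunspot
# number) with a hypothetical "flare" label above an arbitrary 0.6 threshold.
random.seed(0)
xs = [random.uniform(0.0, 1.0) for _ in range(200)]
ys = [1 if x > 0.6 else 0 for x in xs]

w, b = train_logistic(xs, ys)
accuracy = sum(predict(w, b, x) == bool(y) for x, y in zip(xs, ys)) / len(xs)
```

The toy model also reflects Dani's "garbage in, garbage out" point: with mislabeled or noisy `ys`, the learned boundary degrades regardless of the algorithm.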
Prediction/Detection and Monitoring
Life, Physical, and Social Science
null
null
null
null
null
null
news
Thomas Maxwell
Mark Zuckerberg’s Nuclear-Powered Data Center for AI Derailed by Bees
Even if AI turns out to be a hype cycle, getting a lot of clean energy could make it worth the hassle.
https://gizmodo.com/mark-zuckerbergs-nuclear-powered-data-center-for-ai-derailed-by-bees-2000520108
https://gizmodo.com/app/…erberg-getty.jpg
2024-11-04T15:45:31Z
Meta has faced a setback in its plan to build data centers run on nuclear power. The FT reports that CEO Mark Zuckerberg told staff last week that the land it was planning to build a new data center on was discovered to be the home of a rare bee species, which would have complicated the building process. Amazon has also faced a recent setback after the government on Monday rejected a request to increase the amount of power a nuclear plant can direct to one of its data centers. It’s still allowed to use 300 megawatts from the Susquehanna plant but initially sought 480 megawatts. Big tech companies have gone all-in on nuclear power as each prompting of their AI models requires boiling the ocean for energy. Large language models have to do more inference, or thinking, than a traditional search. But big tech has bet the farm on generative AI for their future growth potential. Zuckerberg has said he envisions a world in which social feeds are filled with AI-generated content based on your interests. It could be useful in other ways, like improved ad-targeting. The jury is still out on whether any of this investment in generative AI will bear fruit. At least in the enterprise sector, where customers would be more amenable to paying for tools, the adoption of products like Microsoft Copilot has been lackluster due to unreliable performance. Copilot subscriptions cost $30 a month, and it’s hard to gauge whether tools like automated document drafting are worth the money. Output from AI often requires a lot of editing, which can negate any time savings. At the very least, all this investment in AI, capable of generating an image of Mario flying into the Twin Towers, could at least result in the U.S. having a lot more clean energy. Honestly, let’s make ChatGPT searches more inefficient if it means we counterintuitively can help the environment.
It’d produce more good than the metaverse did. Microsoft is paying billions to recommission Pennsylvania’s Three Mile Island nuclear plant, which has been offline for five years. The company will be paying a big premium over existing solar and wind energy, but the appeal of nuclear is that it provides a consistent stream of energy without lumpiness. While it seems like nuclear is having a resurgence, it’s still not clear how many Americans would be okay with having a nuclear plant near their home. Microsoft, Google, and others set big ambitions in previous years to become carbon-neutral, but have admitted that big investments in AI have set those goals back. Microsoft said earlier this year that its emissions have grown at least 29% since 2020 due to AI investment.
Content Creation/Personalization/Recommendation
Computer and Mathematical
null
null
null
null
null
null
news
EditorDavid
Coding Boot Camp Graduates Find Tough Prospects In an AI-Powered World
An anonymous reader shared this report from the New York Times:Between the time [construction worker Florencio] Rendon applied for the coding boot camp and the time he graduated, what Mr. Rendon imagined as a "golden ticket" to a better life had expired. About 135,000 start-up and tech industry workers were laid off from their jobs, according to one count. At the same time, new artificial intelligence tools like ChatGPT, an online chatbot from OpenAI, which could be used as coding assistants, were quickly becoming mainstream, and the outlook for coding jobs was shifting. Mr. Rendon says he didn't land a single interview. Coding boot camp graduates across the country are facing a similarly tough job market. In Philadelphia, Mal Durham, a lawyer who wanted to change careers, was about halfway through a part-time coding boot camp late last year when its organizers with the nonprofit Launchcode delivered disappointing news. "They said: 'Here is what the hiring metrics look like. Things are down. The number of opportunities is down,'" she said. "It was really disconcerting." In Boston, Dan Pickett, the founder of a boot camp called Launch Academy, decided in May to pause his courses indefinitely because his job placement rates, once as high as 90 percent, had dwindled to below 60 percent. "I loved what we were doing," he said. "We served the market. We changed a lot of lives. The team didn't want that to turn sour." Compared with five years ago, the number of active job postings for software developers has dropped 56 percent, according to data compiled by CompTIA. For inexperienced developers, the plunge is an even worse 67 percent. "I would say this is the worst environment for entry-level jobs in tech, period, that I've seen in 25 years," said Venky Ganesan, a partner at the venture capital firm Menlo Ventures. A Stack Overflow survey of 65,000 developers found that 60% had used AI coding tools this year, the article points out. 
And it includes two predictions about the future: Armando Solar-Lezama, leader of MIT's Computer-Assisted Programming Group, "believes that A.I. tools are good news for programming careers. If coding becomes easier, he argues, we'll just make more, better software. We'll use it to solve problems that wouldn't have been worth the hassle previously, and standards will skyrocket." Zach Sims, a co-founder of Codecademy, said of the job prospects for coding boot camp graduates: "I think it's pretty grim." Read more of this story at Slashdot.
https://news.slashdot.org/story/24/11/24/2159232/coding-boot-gamp-graduates-find-tough-prospects-in-an-ai-powered-world
https://a.fsdn.com/sd/to…education_64.png
2024-11-24T22:04:00Z
Between the time [construction worker Florencio] Rendon applied for the coding boot camp and the time he graduated, what Mr. Rendon imagined as a "golden ticket" to a better life had expired. About 135,000 start-up and tech industry workers were laid off from their jobs, according to one count. At the same time, new artificial intelligence tools like ChatGPT, an online chatbot from OpenAI, which could be used as coding assistants, were quickly becoming mainstream, and the outlook for coding jobs was shifting. Mr. Rendon says he didn't land a single interview.Coding boot camp graduates across the country are facing a similarly tough job market. In Philadelphia, Mal Durham, a lawyer who wanted to change careers, was about halfway through a part-time coding boot camp late last year when its organizers with the nonprofit Launchcode delivered disappointing news. "They said: 'Here is what the hiring metrics look like. Things are down. The number of opportunities is down,'" she said. "It was really disconcerting." In Boston, Dan Pickett, the founder of a boot camp called Launch Academy, decided in May to pause his courses indefinitely because his job placement rates, once as high as 90 percent, had dwindled to below 60 percent. "I loved what we were doing," he said. "We served the market. We changed a lot of lives. The team didn't want that to turn sour."Compared with five years ago, the number of active job postings for software developers has dropped 56 percent, according to data compiled by CompTIA. For inexperienced developers, the plunge is an even worse 67 percent. "I would say this is the worst environment for entry-level jobs in tech, period, that I've seen in 25 years," said Venky Ganesan, a partner at the venture capital firm Menlo Ventures.
Digital Assistance/Content Synthesis/Content Creation/Prediction
Unknown
null
null
null
null
null
null
news
Anthony Di Paola
Abu Dhabi Works With Microsoft on AI Push in Energy Industry
(Bloomberg) -- Abu Dhabi’s main oil company and AIQ agreed on a deal to use artificial intelligence to make the energy industry more efficient.Most Read from...
https://finance.yahoo.com/news/abu-dhabi-works-microsoft-ai-115637372.html
https://s.yimg.com/ny/api/res/1.2/fNIreCTIsTeSiY1iGiL2gg--/YXBwaWQ9aGlnaGxhbmRlcjt3PTEyMDA7aD04MDA-/https://media.zenfs.com/en/bloomberg_technology_68/df28edaaf8d839a467b475e409ad52ce
2024-11-04T11:56:37Z
(Bloomberg) -- Abu Dhabi's main oil company and AIQ agreed on a deal to use artificial intelligence to make the energy industry more efficient. Abu Dhabi National Oil Co. will for the first time use agentic AI developed by AIQ in collaboration with Microsoft Corp. and G42 to analyze massive amounts of data to identify operational improvements in the energy industry, Chief Executive Officer Sultan Al Jaber said at the opening of the Adipec conference on Monday. It will speed up seismic surveys from months to days, Al Jaber said, adding that it will also make the industry more sustainable by reducing emissions. It will increase the accuracy of production forecasts by up to 90%. In September, Microsoft and G42 announced the establishment of two artificial intelligence centers in Abu Dhabi, months after they inked a $1.5 billion deal. G42, which is seeking to become an AI superpower in the Middle East, is part of the $1.5 trillion empire of Sheikh Tahnoon bin Zayed Al Nahyan, who's one of Abu Dhabi's deputy rulers, national security adviser of the UAE and brother to its president. Microsoft is among the companies working on AI agents, systems that perform a range of tasks without human supervision and accomplish things that existing AI models can't. Some companies are already using this so-called agentic AI to help with employee on-boarding and managing supply chains. The Adnoc-AIQ AI system will start test analysis by the end of this year, powered by G42's Khazna data center. The center is expanding renewable energy use, starting with a solar plant, according to Khazna CEO Hassan Al Naqbi. Al Jaber also said that global power sector investment will need to rise to $1.5 trillion annually to meet surging demand from AI. ©2024 Bloomberg L.P.
Detection and Monitoring/Decision Making/Process Automation
Management/Business and Financial Operations
null
null
null
null
null
null
news
Justin Talbot Zorn, Bettina Warburg
Rainwater Could Help Satisfy AI’s Water Demands
A few dozen ChatGPT queries cost a bottle’s worth of water. Tech firms should consider simpler solutions, like harvesting rainwater, to meet AI’s needs
https://www.scientificamerican.com/article/rainwater-could-help-satisfy-ais-water-demands/
https://static.scientifi…12149.286&w=1200
2024-11-07T13:00:00Z
In late September Microsoft announced that it had reached a deal to reopen the Three Mile Island nuclear plant to power its growing network of data centers. Reviving the plant, home to a partial meltdown in 1979, is one of several extraordinary moves that tech companies are willing to make to meet the increasing energy demands of artificial intelligence, cloud computing and other technologies. Industry analysts at Transforma Insights predict the world will reach nearly 30 billion Internet of Things devices by 2030—up from less than 10 billion in 2020. Still, as big tech companies tout nuclear power and other low-carbon energy plans, they’ve presented surprisingly few ideas to meet their rapidly growing use of another scarce resource: water. Data centers require massive amounts of water for liquid cooling systems to absorb and dissipate the heat generated by servers. Researchers at the University of California, Riverside, have found that between five and 50 ChatGPT requests can consume up to 500 milliliters of water (close to the amount in a 16-ounce bottle). Those gulps add up. Google used 20 percent more water in 2022 compared to 2021 as it ramped up AI development. Microsoft's water use rose by 34 percent over the same period. By 2027 the amount of water AI uses in one year worldwide is projected to be on par with what a small European nation consumes. Worse, large numbers of data centers are located in water-stressed regions. Recently, a Google-owned data center in The Dalles, Oregon, commanded one third of the city's water supply amid drought conditions.
While some tech companies are making investments in water recycling, others are imagining long-shot innovations such as transporting seawater inland or even moving data centers under the ocean. Many are simply ignoring what their water use could eventually cost, not to mention the specter of drought. To date, only a few tech companies have taken steps to apply what might be the simplest, most proven and most promising strategy to mitigate water risks: catching rainwater from the sky. People have been gathering rainwater since ancient times. Now there’s growing interest among water conservation advocates to collect rain from rooftops and send it down gutters into tanks. Then, at data centers, this water would be piped through cooling systems. Recent studies show that harvesting even a small portion of the rain that falls in a given area can eliminate water shortages while simultaneously recharging groundwater and reducing pollution from stormwater runoff. There’s no utility middleman needed when water is collected from a roof, meaning rainwater harvesting can be less expensive than purchasing equivalent amounts from a municipal supply—and avoid the greenhouse emissions associated with pumping water between sites. For years some states and municipalities restricted residential and industrial rainwater harvesting because of concerns about water quality or reductions in the water supply. But more recently state after state has authorized the practice as evidence for conservation benefits mounts. Cities like Tucson and Austin are now encouraging rainwater collection by offering incentives and establishing requirements.
Apple, Ford and Toyota have recently integrated rainwater harvesting systems into corporate campuses and manufacturing facilities. But we believe that data centers are the biggest untapped opportunity for water conservation through rainwater harvesting. It’s not just that data centers have the pressing need for water—it’s also that their large, flat roofs are well-suited to harvest water. A 50,000-square-foot roof can collect about 31,000 gallons of water from a single inch of rain—about as much as fills an average residential swimming pool. Many data centers feature roofs larger than 100,000 square feet, and some hyperscale data centers owned by major tech companies feature roofs of up to one million square feet. Why aren’t more data centers relying on rainwater harvesting? The cost, for one. Setting up a system for a commercial facility like a data center typically costs between $2 and $5 per square foot, depending on the system’s complexity, storage and filtration needs. If the cost of municipal water in an area is low, it might not make sense to invest in rainwater capture. What’s more, rainwater systems rarely cover the total amount of water needed to cool a data center. Some massive facilities can consume a million gallons of water daily. But the economics of rainwater harvesting make more and more sense as both the cost and uncertainty of water resources increase, especially as the climate changes. Much like installing solar panels, installing a rainwater harvesting system is a one-time investment that reduces long-term utility costs. In some cases companies can leverage their existing stormwater management budgets toward rainwater harvesting. In places such as the greater Dallas area, home to many of the nation’s data centers, rainfall averages mean that rainwater harvesting systems could cover up to a third of a data center’s cooling needs—depending on size and storage systems.
While the numbers are lower in arid regions, higher costs of water in those areas typically make the economics of rainwater harvesting more attractive. As public concern grows about the environmental effects of AI and other technologies, companies will likely need to consider both the financial and reputational risks of inaction on water. Some industry leaders are beginning to see the potential. A Google data center in South Carolina is using rainwater retention ponds for harvesting rainwater. A Microsoft data center has implemented rainwater harvesting in Sweden, reducing reliance on local water sources. Amazon Web Services highlights the potential of rainwater harvesting in its water-positive strategy. On a policy level, green banks—the clean energy-focused financial institutions that are expanding all over the country following a $27 billion federal investment through the Inflation Reduction Act—could soon start helping to provide financing for rainwater projects. Rainwater has potential to win bipartisan support—and potentially even tax credits along the lines of recent renewables legislation. While the approach can be a popular cause among Democrats on climate resilience grounds, Republicans can support rainwater harvesting without needing to accept the science of climate change or oppose fossil fuel lobbies. At a time when Silicon Valley is turning to energy solutions like long-dormant nuclear plants, it might seem odd to address a pressing global challenge using a technology that’s as old as civilization itself. But sometimes the best solutions can fall out of the sky. This is an opinion and analysis article, and the views expressed by the author or authors are not necessarily those of Scientific American.
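The article's roof-collection figure is a simple unit conversion (1 cubic foot ≈ 7.48 US gallons) that can be checked directly. In the sketch below, the 50,000-square-foot, one-inch example is from the article; the annual-yield line uses an assumed 36 inches of yearly rainfall and an assumed 80% collection efficiency, which are our illustrative numbers, not the authors':

```python
GALLONS_PER_CUBIC_FOOT = 7.48  # US gallons in one cubic foot

def roof_harvest_gallons(roof_sqft, rain_inches, efficiency=1.0):
    """Water collectable from a flat roof: area x rain depth, converted to gallons.
    `efficiency` discounts real-world losses (evaporation, first-flush diversion)."""
    cubic_feet = roof_sqft * (rain_inches / 12.0)
    return cubic_feet * GALLONS_PER_CUBIC_FOOT * efficiency

# The article's example: a 50,000 sq ft roof and a single inch of rain.
one_inch = roof_harvest_gallons(50_000, 1.0)  # about 31,000 gallons, matching the article

# Hypothetical annual yield for a one-million-sq-ft hyperscale roof, assuming
# ~36 in/yr of rain and 80% capture efficiency (our assumptions, for scale only).
annual = roof_harvest_gallons(1_000_000, 36.0, efficiency=0.8)
```

Even under these rough assumptions, the annual figure is tens of millions of gallons — meaningful against a facility consuming a million gallons a day, but far from covering it, consistent with the article's "up to a third of cooling needs" framing.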
Unknown
Unknown
null
null
null
null
null
null
news
Vinod Khosla
A Roadmap to AI Utopia
The future of AI will be decided by what we guide this powerful tool toward, writes Vinod Khosla.
https://time.com/7174892/a-roadmap-to-ai-utopia/
https://api.time.com/wp-…200&h=628&crop=1
2024-11-11T17:28:50Z
I've seen technology reshape our world repeatedly. Previous technology platforms amplified human capabilities but didn't fundamentally alter the essence of human intellect. They extended our reach but didn't multiply our minds. Artificial intelligence is different. It's past the point where a difference in degree becomes a difference in kind. AI amplifies and multiplies the human brain, much like steam engines once amplified muscle power. Before engines, the main source of energy was the food we consumed to fuel human physical labor. Engines allowed us to tap into external energy sources like coal and oil, revolutionizing productivity and transforming society. AI stands poised to be the intellectual parallel, offering a near-infinite expansion of brainpower to serve humanity. AI promises a future of unparalleled abundance. However, as we transition to a post-scarcity society, the journey may be complex, and the short term may be painful for those displaced. Mitigating these challenges requires well-reasoned policy. The next decade, 10 to 25 years, and 25 to 50 years will each be radically different. The pace of change will be hard to predict or anticipate, especially as technology capabilities far exceed human intelligence and penetrate society at varying rates. Pessimists paint a dystopian future in two parts: economic and social. They fear widespread job loss, economic inequality, social manipulation, erosion of human agency, loss of creativity, and even existential threats from AI. I believe these fears are largely unfounded, myopic, and harmful. They are addressable through societal choices. Moreover, the real risk isn't sentient AI but losing the AI race to nefarious nation states, or other bad actors, making AI dangerous for the West. Ironically, those who fear AI and its capacity to erode democracy and manipulate societies should be most fearful of this risk. In an economic dystopia, wealth concentrates at the top while intellectual and physical work are devalued.
Widespread job loss and deflation destroy the economy and purchasing power, exacerbating inequalities. AI could create a world where a small elite thrives while the rest face instability. But with smart interventions, like income redistribution or universal basic income (UBI), and strategic legislation, we can prevent this. Capitalism operates by the permission of democracy, and we have the collective power to shape economic outcomes if we handle this transition wisely. Factor in an aging global population and a shrinking pool of young workers, and AI becomes essential. With the right policies, we could smooth the transition and even usher in a three-day workweek. If GDP growth jumps from 2% to 5% or more, we'll have the abundance to create "transition funds," much like the oil funds that have fueled prosperity in countries like Norway. Naysayers envision AI undermining humanity through pervasive surveillance and manipulation. They fear AI being used to control information, influence elections, and erode democracy via targeted propaganda or deepfakes, making truth difficult to discern. But these outcomes aren't inevitable. Legislation will shape how AI integrates into our lives. In democratic societies, these are collective choices. With AI's abundance, the reasons for crime might even diminish. A balance can be achieved where we benefit from AI's advancements without succumbing to dystopian visions. Fears of manipulation rely on the assumption of a single, despotic AI overlord, which is far-fetched. More likely, we'll see diverse AIs serving different interests, preventing the consolidation of power. Concerns about AI making critical decisions in healthcare, justice, and governance are valid, given hidden biases in current systems. But these biases originate from humans, and AI offers a chance to recognize and correct them. For example, human physicians perform more surgeries if they're paid by the surgery, hardly unbiased.
AI can surface and correct such biases, providing more equitable outcomes. Humans will retain the power to revoke AI's decision-making privileges, ensuring AI remains guided by human consensus. The specter of a sentient, malevolent AI is a risk, but one we can mitigate through vigilance and proper safeguards. Critics fear over-reliance on AI could diminish human creativity and critical thinking, as people depend on machines for decisions. They worry about cultural homogenization due to AI algorithms creating echo chambers. But I see AI expanding our creativity. Someone like me, endowed with zero musical talent, can create a personalized song. AI enables new forms of expression, expanding our abilities rather than replacing them. Doomers warn that AI could become uncontrollable and render humans extinct. While we must invest heavily in AI safety research, it's important to balance this concern against AI's immense benefits. The larger and more immediate risk is losing the AI race to nations like China, making AI dangerous for the West. China's five-year plan explicitly aims to win in AI. If authoritarian regimes develop advanced AI before democratic societies, they could manipulate societies, erode democracy, and consolidate power. Ironically, those who fear AI eroding democracy should be most concerned about this risk. We must step up and use AI for humanity's benefit, ensuring democratic values prevail. Further, it is likely that we'll have multiple AIs, making it unlikely that all would turn against humanity simultaneously, even in a worst-case scenario. Most likely, the growing emphasis on AI explainability will enhance safety by aligning AI's goals with human values. Within the next decade, I believe we'll move beyond the scare-mongering around "black box systems" with no controllability. However, solving this problem requires a laser focus on AI safety and ethics.
Investing heavily in AI safety is crucial, and a substantial portion of university research should focus on this area. The federal government should invest more in safety research and detection of AI. Features like off switches should be required after appropriate research and testing. It's also important to remember that humanity faces many existential risks: pandemics, asteroid impacts, nuclear war, to name a few. AI is just one risk in a broader context, and we need to consider the trade-offs between these risks and the potential benefits AI can bring. Concerns about tech CEOs wielding unprecedented sway over global structures are valid. But we must consider whether we're more comfortable with unelected leaders like Xi Jinping's global influence or that of tech CEOs. While both wield power without direct democratic accountability, tech CEOs rely on market forces and public opinion. Moreover, democratization of AI development and multiple AIs make power concentration unlikely. Part of my motivation to pen this piece is to dispel the dystopian vision of an AI-first world. First and foremost, it is a cognitively lazy vision, easy to fall into and lacking all imagination: large-scale job losses, the rich getting richer, the devaluation of intellectual expertise as well as physical work, and the loss of human creativity, all in service of our AI overlords. On the contrary, AI can provide near-free AI tutors to every child on the planet and near-free AI physician expertise to everyone on the planet. Virtually every kind of expertise will be near free: oncologists, structural engineers, software engineers, product designers, chip designers, and scientists all fall into this camp. It will also help control plasma in fusion reactors and self-flying aircraft, self-driving cars and public transit, making all substantially more affordable and accessible by all. AI promises to democratize even how we build enterprises.
But more than anything it will be an equalizing force as all humans will be able to harness the same expertise. I estimate that over the next 25 years, AI can perform 80% of the work in 80% of all jobs, whether doctors, salespeople, engineers, or farm workers. Mostly, AI will do the job better and more consistently. Anywhere that expertise is tied to human outcomes, AI can and will outperform humans, and at near-free prices. AI will transform how we discover and utilize natural resources such as lithium, cobalt, steel and copper, such that our resource discovery capabilities outpace consumption. The current challenge is not a lack of resources, but a limitation in our capacity to find them, a barrier AI is poised to help break. Further, AI could help optimize the use of resources and it will help discover new materials. For the next five to 10 years, humans will oversee AI "interns," doubling or tripling productivity. Eventually, we'll decide which jobs to assign to AI and which to keep. AI will make expertise nearly free, making goods and services more accessible to everyone. Our physical lives will transform. Bipedal robots could revolutionize sectors from housekeeping to manufacturing, freeing people from undesirable jobs. In 25 years, there could be a billion bipedal robots performing the wide range of tasks that humans do. We could free humans from the slavery of the bottom 50% of really undesirable jobs like assembly-line and farm work. It is not just our physical lives that will be transformed. Soon, most consumer access to the internet could be agents acting on behalf of consumers and empowering them to efficiently manage daily tasks and fend off marketers and bots. This could be a great equalizer for consumers against the well-oiled marketing machines that attempt to co-opt the human psyche to increase consumerism and sell them stuff or bias their thinking.
AI could revolutionize healthcare with personalized medicine, tailoring treatments to individual genetics, lifestyle, and environment. AI could be used to detect diseases at an early stage, often before symptoms appear, allowing for more effective and less invasive treatments. AI will augment biotechnology to create effective, scalable precision medicines. An AI oncologist could access terabytes of research, more than any human could, making better-informed decisions. Near-free AI physicians could offer high-quality healthcare globally. Expanding basic primary care, chronic care, and specialized care (e.g., cardiology, oncology, musculoskeletal) is essential to improving the health of those living in emerging markets and preventing disease. Near-free 24x7 doctors, accessible by every child in the world, would be impossible if we were to continue relying on humans for healthcare. Indeed, the current debate has painfully failed to focus on the most salient consequence of AI: those who stand to be most impacted by this AI revolution are the bottom half of the planet, 4 billion people who struggle every day to survive. AI could create personalized learning experiences adapting to each student's needs and interests. AI tutors, available 24/7, could make high-quality education accessible worldwide, unlocking opportunity and fostering self-efficacy, and AI researchers could expand human knowledge and the rate of discovery. AI could address climate change by optimizing energy use, reducing emissions, and, more than anything, helping to develop low-carbon technologies. It could aid in environmental monitoring and conservation, leading to a sustainable economy. Of course, powering this AI utopia will be energy-intensive and will require complementary technologies such as fusion for limitless, clean and cheap power generation. My bet is on fusion boilers to retrofit and replace coal and natural gas boilers rather than building whole new fusion or nuclear plants.
There are additionally promising efforts using geothermal, solar and advanced battery systems for clean, dispatchable electric power. Multiple vectors are driving down the environmental cost of compute. AI could augment human capabilities, allowing us to tackle complex problems. It could be a creative partner, assisting in art, design, and innovation, pushing boundaries in various fields. New jobs will emerge, and creativity will flourish. AI could help create just societies by ensuring fair decision-making, reducing biases, and promoting transparency in governance, well beyond what humans have been able to do. It could assist in developing evidence-based policies through vast data analysis. We could have 24/7 lawyers for every citizen, amplifying professional capacity and expanding access to justice. Education, legal, and financial advice would no longer be reserved for society's upper crust. In a utopian vision, AI could shift societal focus from economic growth to well-being and fulfillment. Imagine a world where passions emerge naturally, as people pursue what excites them without the pressure to secure a job or develop a career. Professions not typically associated with financial security, like arts, competitions and sports, could become achievable for anyone, unconstrained by the need to make a living. Life would become more meaningful as the 40-hour workweek disappears. Obstacles stand in the way: incumbent resistance, political exploitation of fears, technical failures, financial risks, anti-tech sentiment, and negative public perception. But I believe an AI-driven utopia is achievable with the right societal choices and technological advancements. In the next five years, life may not feel dramatically different. But between 10 and 20 years from now, we'll witness dramatic transformations reshaping society. While still on the horizon, this era of unprecedented prosperity is visible today. Capitalism may need to evolve. 
The diminishing need for traditional economic efficiency allows us to prioritize empathetic capitalism and economic equality. Disparity beyond a point leads to unrest, so policy must address this. Human labor may be devalued, putting downward pressure on wages. Labor will be devalued relative to capital, and even more so relative to ideas and AI technology. AI's leveling of skill differences could compress wages. Value creation may shift to creativity, innovation, or AI ownership, potentially leading to new inequalities. We can't simply extrapolate past economic history; AI may surpass human capabilities altogether, making education and upskilling less effective. The AI cycle will be faster than previous technological shifts, making adjustment harder. Changes could hit some more seriously than others, especially in the next decade or two, even if society as a whole improves. Let's continue this thought experiment around wage compression and job disruption using the aggregate cost of physician salaries in the U.S. healthcare system as a starting point. It is north of $300 billion, likely closer to $400 billion (take 1 million doctors, each making $300,000 to $400,000). Predicting the fate of the $300 billion to $400 billion spent annually on U.S. physician salaries hinges on supply and demand elasticities in healthcare. Consider demand elasticity: if medical costs drop by 90% due to AI automation, will consumption increase tenfold to keep the roughly $350 billion spent on U.S. physician salaries constant? Unlikely. People won't break more bones because orthopedic care is cheaper. But they might increase preventive care, mental healthcare, and elective procedures as access barriers fall. AI will hyper-personalize and possibly commodify high-quality entertainment and media, and any art form will vie for the same 24 hours of user attention each day. Diversity and quality of media will likely expand dramatically; will consumer spending also increase? 
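The arithmetic in the physician-salary thought experiment above is easy to check. A minimal sketch, using only the figures quoted in the text (the 90% cost drop is the text's hypothetical, not a prediction):

```python
# Toy check of the physician-salary thought experiment.
# Figures from the text: ~1 million U.S. doctors earning $300k-$400k each.
doctors = 1_000_000
low_salary, high_salary = 300_000, 400_000
total_low = doctors * low_salary    # $300 billion
total_high = doctors * high_salary  # $400 billion

# If AI automation cuts the unit cost of care by 90%, total spending stays
# constant only if consumption rises by the reciprocal of the new price.
cost_drop = 0.90
required_volume_multiplier = 1 / (1 - cost_drop)  # ~10x

print(f"Aggregate salaries: ${total_low/1e9:.0f}B-${total_high/1e9:.0f}B")
print(f"Volume must rise {required_volume_multiplier:.0f}x to hold spend constant")
```

As the text argues, a tenfold rise in consumption is implausible for most care, which is why total spending in the sector would likely fall.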
In other areas, like accounting, even if services become cheaper through automation, a company won't require ten times more audits. The demand is bounded by regulatory needs, not cost. Even if per-service costs decline, total spending may stay the same if increased volumes balance lower prices. Each sector will find its equilibrium between supply, demand, and elasticity, making precise predictions difficult without a nuanced, sector-specific analysis for which, today, we have insufficient data. In the fullness of time, the new AI economy will find an equilibrium once demand hits the asymptote of total consumption and time in each sector. AI's surge in productivity could lead to deflation, a decrease in general price levels. Increased efficiency with fewer inputs (like lower labor costs due to AI and robotics) and heightened competition can trigger deflation and job loss. But this deflationary economy challenges traditional measures like GDP. If we consume more but spend less due to lower prices, GDP may not reflect well-being. GDP won't mean much if it doesn't capture increased living standards and abundance. We need new economic measures accounting for these changes. Deflation here isn't negative; it's increased efficiency, production of goods and services, and abundance. Our current lexicon equates GDP growth with prosperity, a flaw. Monetary policy may not be as effective in this new age. We face choices: accelerate, slow down, or moderate disruptive technologies, and decide whether to compensate those displaced. Change can be painful for the disrupted, and embracing AI's positives requires keeping those affected at the center of policy. These changes pose significant challenges, but they also offer an opportunity to create, in a 25-plus-year window, a more empathetic society and a post-resource-constrained world. 
This is a luxury that has been unaffordable in the past but may now be ours to use. Given the massive productivity gains on the horizon, and a potential for annual GDP growth to increase from 2% to potentially 5% or more over the next 50 years, per capita GDP could hit around $1 million (assuming 5% annual growth for 50 years, if GDP is still a good measure). A sufficiently deflationary economy makes current nominal dollars go much further, and I suspect current measures of GDP will be poor measures of economic well-being. Of course, this vision is only possible with a UBI-like mechanism that provides a minimum standard that on the whole far exceeds today's, given the accessibility of goods and services that enrich our lives. I can imagine a consumer utopia in 25-plus years, where we're not supply-constrained in most areas and deflation is actually a positive tailwind for access and more equal consumption. Imagine a world in which housing, energy, healthcare, food, and transportation are all delivered to your door, for near-free, by machines; few jobs in those fields remain. What would be the key characteristics of that world, and what would it be like to live in it? Humans will finally be free. An interesting parallel is China, whose entry into the World Trade Organization (WTO) in 2001 indeed created deflationary pressures on the United States in the years that followed. This was largely due to several factors related to trade liberalization and increased competition from Chinese exports. The movement of labor overseas has resulted in a loss of tens of millions of stateside manufacturing jobs, yet little policy was centered around upskilling or taking care of those whose livelihoods were upended. With AI, we have the opportunity to free ourselves from this low-cost labor. Ultimately, the future will be what we decide to guide this powerful tool toward. It will be a series of policy choices, not technological inevitability. Choices will vary by country. 
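The "around $1 million" per-capita figure above is a compounding claim that can be verified directly. A quick sketch, assuming a starting per-capita GDP of roughly $80,000 (an illustrative figure close to today's U.S. level; the text does not state a baseline):

```python
# Compound-growth check of the "around $1 million per capita" claim.
# Assumption (not in the text): a starting per-capita GDP of ~$80,000.
start = 80_000
rate = 0.05   # 5% annual growth, per the text
years = 50

final = start * (1 + rate) ** years
print(f"Per-capita GDP after {years} years: ${final:,.0f}")
```

At 5% for 50 years the multiplier is about 11.5x, landing near $917,000, which is indeed "around $1 million" from this baseline.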
We must harness AI responsibly, ensuring its benefits are distributed equitably. I'm a technology possibilist, a techno-optimist, for technology used with care. Reflecting on my words from 2000, we'll need to redefine what it means to be human. This new definition should focus not on the need for work or productivity but on passions, imagination, and relationships, allowing individual interpretations of humanity.
Unknown
Healthcare Practitioners and Support/Education, Training, and Library/Legal/Arts, Design, Entertainment, Sports, and Media
null
null
null
null
null
null
news
Ron Schmelzer, Contributor, Ron Schmelzer, Contributor https://www.forbes.com/sites/ronschmelzer/
Watt’s Next? The AI Revolution Shaping The Future Of Energy
The topic of energy is rapidly becoming central to the conversation around AI. AI is increasingly helping power suppliers deal with its own increasing demand.
https://www.forbes.com/sites/ronschmelzer/2024/11/28/watts-next-the-ai-revolution-shaping-the-future-of-energy/
https://imageio.forbes.c…=1600&fit=bounds
2024-11-28T14:49:54Z
AI impacting the energy industry (Getty). The topic of energy is rapidly becoming central to the conversation around AI. In that conversation, we're primarily talking about how energy-hungry AI is, whether we're using that energy to train complicated AI models or responding to the huge amount of demand for generating results and inferences from those trained models. On the other hand, another conversation is focused on how AI is increasingly helping power suppliers with the generation, transmission, and consumption of energy. How AI Is Used to Help Optimize Energy Generation and Use: To satisfy the ever-increasing, energy-hungry demands of today, the energy industry needs to make sure it is producing and supplying energy as required for today's needs, as well as planning and managing capacity to deal with spikes in demand and increasing energy needs. This requires addressing both the generation side of energy as well as the storage, transmission, and consumption parts of the energy equation. It also requires a smart balance to ensure that there isn't a shortage of needed energy capacity, but also no excess energy generation that is wasted. To solve these challenges, AI optimizes the operation of smart grids by predicting electricity demand, balancing supply, and detecting inefficiencies. AI algorithms help utilities manage the distribution of electricity more effectively, reducing energy waste and enhancing reliability. These intelligent systems can take into account existing and predicted demand, weather patterns, historical data, and many other variables. With the increasing use of variable renewable energy sources such as solar and wind, these systems need to factor in many more variables than was previously the case with more steadily producing sources of power such as oil, coal, gas, and nuclear. 
AI systems are able to factor all these variables into the equation to manage and make the grid even smarter than before. AI is being used to enhance grid stability by not only predicting and responding to all these fluctuations in energy supply and demand, but also helping to automatically adjust power distribution to prevent blackouts. AI is also helping with managing the integration of different intermittent renewable energy sources, managing storage of excess generated power, and helping to provide more resiliency and stability to the energy grid. Additionally, AI optimizes the management of energy storage systems, such as batteries, by predicting when to store or release energy based on demand forecasts and price signals. This maximizes the efficiency and profitability of energy storage, particularly in renewable energy applications. AI also enables more intelligent distribution of energy by managing decentralized energy resources, such as rooftop solar panels and microgrids. On the energy consumer side, AI is helping consumers and businesses optimize their energy consumption by analyzing usage patterns, automatically adjusting temperature controls or other high-energy systems, and suggesting ways to reduce energy usage. AI systems learn habits and behaviors over time, and use that collected data to make ongoing adjustments. Furthermore, AI streamlines the process of conducting energy audits by analyzing data from buildings, factories, and other facilities. 
AI-powered tools can identify energy inefficiencies, recommend improvements, and calculate potential savings. AI Keeps Energy Generation Humming: To keep energy generation consistent, AI is being used to monitor and predict the maintenance needs of energy infrastructure, such as power plants, wind turbines, and solar panels. By analyzing data from sensors, AI-powered predictive maintenance is helping identify equipment failures before they happen, reducing downtime and maintenance costs. AI systems can look at a wide range of energy production data to make predictions and forecast when equipment failures may happen or when parts will need to be replaced or repaired. Likewise, AI is helping energy companies stay compliant with energy regulations and keep energy generation as environmentally friendly as possible. AI is providing additional analytic insights that help utility companies and governments monitor carbon emissions by analyzing energy usage, model the impact of different strategies on emissions, and track progress toward sustainability goals. In addition to being a vital resource, energy is also a financially traded commodity. Energy prices are set by market demand and supply, with the cost of energy fluctuating on a minute-by-minute basis. AI is used in energy trading to analyze market trends, forecast prices, and optimize trading strategies. Sophisticated algorithms can process vast amounts of data to identify trading opportunities and reduce risks. Energy companies and financial institutions use AI to enhance their trading operations. So while AI is getting an increasingly bad rap in terms of its impact on energy consumption, AI is also part of the solution, providing tools to improve efficiency and lessen the burden of energy generation, transmission, consumption, and storage.
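As a rough illustration of the store-or-release decisions described above, here is a toy price-threshold dispatch rule for a battery. Real grid-scale systems rely on demand forecasts and optimization rather than fixed thresholds; the hourly prices, thresholds, and battery parameters below are all invented for illustration:

```python
# Illustrative battery dispatch: charge when the price is low, discharge
# when it is high, within capacity and rate limits. A toy sketch, not a
# production energy-management algorithm.
def dispatch(prices, capacity_kwh=100, rate_kwh=25, low=30, high=70):
    """Return (hourly schedule, final state of charge) for prices in $/MWh."""
    soc = 0.0  # state of charge in kWh
    schedule = []
    for p in prices:
        if p <= low and soc < capacity_kwh:
            amount = min(rate_kwh, capacity_kwh - soc)
            soc += amount
            schedule.append(("charge", amount))
        elif p >= high and soc > 0:
            amount = min(rate_kwh, soc)
            soc -= amount
            schedule.append(("discharge", amount))
        else:
            schedule.append(("hold", 0.0))
    return schedule, soc

# Hypothetical day: cheap overnight power, then an evening price spike.
prices = [25, 20, 28, 45, 60, 80, 90, 55]
plan, soc = dispatch(prices)
print(plan)
```

The rule charges through the three cheap overnight hours and sells into the two expensive evening hours, holding otherwise.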
Prediction/Decision Making
Business and Financial Operations/Management
null
null
null
null
null
null
news
John Q
The AI boom in electricity demand: a multiply recycled myth
I posted this piece in RenewEconomy a couple of months ago. It didn’t convince the commenters then, and I don’t expect it to be any different here, but I’m putting it on the record anyway. AI won’t use as much electricity as we are told, and it’s not a reason to slow transition to renewables […]
https://crookedtimber.org/2024/11/30/the-ai-boom-in-electricity-demand-a-multiply-recycled-myth/
https://s0.wp.com/i/blank.jpg
2024-11-30T03:44:04Z
I posted this piece in RenewEconomy a couple of months ago. It didn’t convince the commenters then, and I don’t expect it to be any different here, but I’m putting it on the record anyway. AI won't use as much electricity as we are told, and it's not a reason to slow the transition to renewables. The recent rise of generative AI models has led to a lot of dire predictions about the associated requirements for energy. It has been estimated that AI will consume anything from 9 to 25 per cent of all US electricity by 2032. But we have been here before. Predictions of this kind have been made ever since the emergence of the Internet as a central part of modern life, often tied to claims and counterclaims about the transition to renewable energy. Back in 1999, Forbes magazine ran a piece headlined "Dig more coal, the PCs are coming." This article claimed that personal computers would use 50 per cent of US electricity within a decade. The unsubtle implication was that any attempt to reduce carbon dioxide emissions was doomed to failure. Of course, this prediction wasn't borne out. Computing power has increased a thousand-fold since the turn of the century. But far from demanding more electricity, personal computers have become more efficient, with laptops mostly replacing large standalone boxes and software improvements reducing waste. A typical home computer now consumes around 30-60 watts when it is operating, less than a bar fridge or an incandescent light bulb. The rise of large data centres and cloud computing produced another round of alarm. A US EPA report in 2007 predicted a doubling of demand every five years. Again, this number fed into a range of debates about renewable energy and climate change. Yet throughout this period, the actual share of electricity use accounted for by the IT sector has hovered between 1 and 2 per cent, accounting for less than 1 per cent of global greenhouse gas emissions. 
By contrast, the unglamorous and largely disregarded business of making cement accounts for around 7 per cent of global emissions. Will generative AI change this pattern? Not for quite a while. Although most business organizations now use AI for some purposes, it typically accounts for only 5 to 10 per cent of IT budgets. Even if that share doubled or tripled, the impact would be barely noticeable. Looking at the other side of the market, OpenAI, the maker of ChatGPT, is bringing in around $3 billion a year in sales revenue, and has spent around $7 billion developing its model. Even if every penny of that was spent on electricity, the effect would be little more than a blip. Of course, AI is growing rapidly. A tenfold increase in expenditure by 2030 isn't out of the question. But that would only double the total use of electricity in IT. And, as in the past, this growth will be offset by continued increases in efficiency. Most of the increase could be fully offset if the world put an end to the incredible waste of electricity on cryptocurrency mining (currently 0.5 to 1 per cent of total world electricity consumption, and not normally counted in estimates of IT use). If predictions of massive electricity use by the IT sector have been so consistently wrong for decades, why do they keep being made, and believed? The simplest explanation, epitomised by the Forbes article from 1999, is that coal and gas producers want to claim that there is a continuing demand for their products, one that can't be met by solar PV and wind. That explanation is certainly relevant today, as gas producers in particular seize on projections of growing demand to justify new plants. At the other end of the policy spectrum, advocates of degrowth don't want to concede that the explosive growth of the information economy is sustainable, unlike the industrial economy of the 20th century. 
The suggestion that electricity demand from AI will overwhelm attempts to decarbonise electricity supply supports the conclusion that we need to stop and reverse growth in all sectors of the economy. Next there is the general free-floating concern about everything to do with computers, which are both vitally necessary and mysterious to most of us. The rise of AI has heightened those concerns. But whereas no one can tell whether an AI apocalypse is on the way, or what it would entail, an electricity crisis is a much more comprehensible danger. And finally, people just love a good story. The Y2K panic, supposedly based on the shortening of digits in dates used in computers, was obviously false (if it had been true, we would have seen widespread failures well before 1 January 2000). But the appeal of the story was irresistible, at least in the English-speaking world, and billions of dollars were spent on problems that could have been dealt with using a fix-on-failure approach. For what it's worth, it seems likely that the AI boom is already reaching a plateau, and highly likely that such a plateau will be reached sooner or later. But when and if this happens, it won't be because we have run out of electricity to feed the machines. Update: The AI boom is also being used to justify talk, yet again, of a nuclear renaissance. All the big tech firms have made announcements of one kind or another about seeking nuclear power to run their data centres. And it's true that the “always on” character of nuclear makes it a genuine example of the (otherwise mostly spurious) notion of “baseload demand”. But when you look at what Google, Meta and the others are actually doing, it amounts to around 1 GW apiece, the output of a single standard-sized reactor. That might bring a few retired reactors, like the one at Three Mile Island, back online, but it’s unlikely to induce big new investments.
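The claim above that a tenfold increase in AI spending "would only double the total use of electricity in IT" follows from AI's roughly 5-10 per cent share of IT budgets. A quick sketch, under the simplifying assumption (mine, not the author's) that electricity use scales with expenditure share:

```python
# Back-of-envelope check: tenfold AI growth roughly doubles IT electricity.
# Assumption: electricity use scales in proportion to expenditure.
ai_share = 0.10            # AI as a fraction of IT budgets (text: 5-10%)
non_ai_share = 1.0 - ai_share

after = non_ai_share + ai_share * 10   # AI spending grows tenfold
print(f"Total IT grows {after:.1f}x")  # 1.9x, i.e. roughly double
```

At the 5 per cent end of the range the multiplier would be about 1.45x, so "double" is the upper bound of this back-of-envelope calculation.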
Unknown
Management/Business and Financial Operations
null
null
null
null
null
null
news
null
What Is a Search Query? (And How It Differs from a Keyword)
Search queries are the exact words or phrases a user enters into a search engine to find information.
https://www.semrush.com/blog/search-query/
null
2024-11-18T13:04:00Z
What Is a Search Query? A search query is an exact combination of words a user enters into a search engine to find information. Users can provide search queries in multiple ways, including typing the query into search bars (e.g., the Google search bar or web browser address bars) or speaking their search query to voice search tools like Siri, Alexa, or Google Assistant. After receiving the query, search engines like Google or Bing return relevant search results on search engine results pages (SERPs), based on multiple factors like user intent, relevance, and context. For example, here is the SERP for "best cat litter boxes in 2024." There are millions of ways users may search for the same topic, so knowing the definition of a search query is helpful: you can use queries to better understand intent and create more relevant content, which can help your business connect with your target audience. Search Queries vs. Keywords: Search queries are the exact words and phrases searchers use. Keywords, on the other hand, are terms marketers extract from search queries to identify the main topic a user is searching for. Although search queries include keywords, the terms are not interchangeable. Ideally, you should use both search queries and keywords to fully understand what your target audience is looking for. That way, you can create content that best matches their intent. For example, a cat owner looking for a new litter box may use the following search queries: "What are the best automatic litter boxes?"; "Litter boxes for large cats like Maine coons"; "Top-entry vs. open cat litter boxes"; "Modern cat litter boxes that blend with a minimalist aesthetic." In the example above, the general keyword is "cat litter box." But because this keyword is broad, the intent is not as clear. 
So looking at search queries can help you understand what different users want to see when they use that keyword. 4 Types of Search Queries: Both search queries and keywords can be separated into four types of search intent. The intent of a search query should influence the type of content you create. Here are the four types of intent. Informational Search Queries: Searchers use informational queries when they want more information or answers to a question. For example, these two search queries probably have informational intent: "Why does my dog eat grass" and "Commands to teach my puppy when training." Ranking for informational search queries can help you build brand awareness and establish authority by answering searchers' relevant questions. Find keywords that correspond to informational search queries using Keyword Overview. Enter a term or topic into the text bar, add your URL, select your target country, and then click "Search." Note: You can enter any search query into Keyword Overview, but it may not necessarily return results. Instead, extract a keyword from a search query and type it into the tool. That way you can get the exact metrics you need for SEO. In the Keyword Overview report, search intent will be listed under the "Intent" section. Some queries have multiple intents. In those cases, all intents will be listed. The best content to create for informational queries includes informative blog posts, how-to guides, and FAQ pages, because users are often looking for a detailed explanation of a topic. Navigational Search Queries: Searchers use navigational search queries when they want to find a specific website. These queries often include brand names, specific product names, or trademarks. Some examples of navigational queries include "Mcdonalds breakfast menu," "West elm console table," and "Birkenstocks." If your business is well known, your website's top keywords are likely navigational and include your brand name. 
You can see your website's top keywords in Organic Traffic Insights. For example, West Elm's top keywords are all navigational. Navigational keywords show that your target audience is searching for your brand. So keep optimizing your site for your brand name (and variants or misspellings) so you can continue to control the narrative around your brand. Also consider targeting your competitors' navigational keywords. That way, people searching for your competitors may discover your business in the SERPs and click on your ad or blue link to explore what you have to offer. Some types of content to create for navigational queries include comparison pages, testimonial pages, and product roundups. These showcase how your product or service compares to your competitors'. Commercial Search Queries: Searchers use commercial search queries when researching a product or service they eventually want to buy. Here are some examples of search queries that likely have commercial intent: "List of solar panel companies," "Average solar panel costs," and "What is the best type of solar panel." You can find commercial keywords quickly using the Keyword Magic Tool. After searching for a keyword, click the "Intent" filter above the table, check the box to the left of "Commercial," and then click "Apply." Target commercial keywords with both SEO and PPC so you can connect with your target audience exactly when they are researching products or services like yours. Some content types for commercial keywords include comparison pages, listicles, and case studies. These can help you meet intent while still showcasing your business's expertise, products, or services. Transactional Search Queries: Searchers use transactional search queries when they want to make a purchase. These users are often further down the marketing funnel. 
That means they know what they want to purchase, and they want to do it soon. Here are some examples of transactional search queries: "Buy red and white basketball shoes," "Hvac plumber companies near me," and "Hire an accident lawyer." Service, category, and product pages often rank well for transactional search queries because they provide users with the opportunity to make a purchase or book an appointment. Transactional queries may also return Popular Products rich results. Add the Product schema markup to your product pages to make them eligible for these results, and thus more likely to rank for queries with transactional intent. Other schema types like Local Business, Reviews, Events, and Pros and Cons can also be good for transactional queries. They can result in rich results that entice users who are ready to buy to click through to your website. How to Target Search Queries in SEO and PPC Campaigns: Once you have a good understanding of intent, you can use that information to create a more effective keyword strategy in your SEO or PPC campaigns. Here's how to do it. SEO: In SEO, search queries can lead users to discover your content organically, meaning without having to pay for an ad. If a search query contains a keyword your webpage is optimized for, Google may rank your page in the SERPs, which could result in the user clicking through to your website. You can see your website's organic search queries in Google Search Console. Select "Search results" in the left-hand menu, then click the "Queries" tab. You'll see a complete list of the queries your page ranked for. If your website is getting clicks or impressions from a particular search query, it may contain a valuable keyword. For example, these are search queries from the aviation website above: "Aircraft maintenance checks a b c d pdf," "What does milspec mean," and "Metals used in aircraft." Enter those queries into the Keyword Magic Tool to find related keywords with potentially more search volume. 
Or Keyword Difficulty scores that aren't too competitive for your website. You'll get a table of related keywords for the search queries you entered. The keyword "aircraft maintenance checks" has a search volume of 140 searches per month, and there are 54 closely related keywords you might also be able to rank for. So this could be a great keyword to optimize your content for, because a lot of people are including it (and its variations) in their search queries about the topic. To optimize a page for a keyword, add it to on-page elements like the title tag (an HTML element that gives a title to a webpage), the meta description (an HTML element that provides a short summary of a webpage), and the URL (the address of a webpage that a user sees in their web browser). Then, add related keywords, long-tail keywords, subtopics, and questions to the headings and body copy of your page. By covering the topic more extensively, your pages may rank for the many different search queries people use to explore the topic. PPC: In paid search campaigns, search queries are the exact terms users type into search engines that trigger your ads. They are influenced by your campaign settings and the keywords you include in your ad targeting. To see your search queries in Google Ads, select "Campaigns" from the left-hand navigation menu, click the "Insights and reports" drop-down, and then click "Search terms." You'll see a complete list of the search queries that triggered your ads, along with a list of important metrics for PPC tracking. Your search queries can provide insights into how your target audience is searching, and you can use that information to create ad groups with similar keywords or optimize your campaigns. 
Use the search terms report to refine your keyword targeting, identify and remove underperforming or irrelevant keywords, optimize your ad copy or create new variations, understand and optimize for search intent, and increase your ad rank and Quality Score. Target the Right Queries on Your Site: Now that you know what search queries are, use them to find new keywords to target in your SEO and PPC campaigns. But remember: search intent is just one step in finding suitable target keywords. Review more keyword metrics using the Keyword Magic Tool. That way, you can find the terms that can deliver the best results for your business.
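The four intent types described above can be roughed out in code with simple keyword heuristics. This toy classifier is purely illustrative: commercial tools such as Semrush use far richer signals, and the trigger words and brand list below are my assumptions, not the article's method:

```python
# Toy search-intent classifier for the four intent types described above.
# The trigger words and brand list are illustrative assumptions.
INTENT_PATTERNS = {
    "transactional": ("buy", "hire", "near me", "order", "coupon"),
    "commercial": ("best", "top", "vs", "review", "cost", "list of"),
    "informational": ("what", "why", "how", "commands", "guide"),
}

def classify_intent(query, brand_names=("mcdonalds", "west elm", "birkenstock")):
    q = query.lower()
    # Queries containing a known brand name are treated as navigational.
    if any(brand in q for brand in brand_names):
        return "navigational"
    for intent, triggers in INTENT_PATTERNS.items():
        if any(t in q for t in triggers):
            return intent
    return "informational"  # default bucket for unmatched queries

print(classify_intent("Why does my dog eat grass"))          # informational
print(classify_intent("Mcdonalds breakfast menu"))           # navigational
print(classify_intent("What is the best type of solar panel"))  # commercial
print(classify_intent("Buy red and white basketball shoes"))    # transactional
```

Note the check order matters: transactional triggers are tested before commercial ones so that "buy the best shoes" is treated as closer to a purchase than to research.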
Information Retrieval Or Search/Content Synthesis
Business and Financial Operations/Management
null
null
null
null
null
null
news
Bernard Marr, Contributor, Bernard Marr, Contributor https://www.forbes.com/sites/bernardmarr/
How AI Is Transforming The Ancient Art Of Fine Winemaking
Discover how wineries like Chateau Montelena are using AI to optimize everything from vine orientation to grape harvesting while preserving the artistry and tradition.
https://www.forbes.com/sites/bernardmarr/2024/11/29/how-ai-is-transforming-the-ancient-art-of-fine-winemaking/
https://imageio.forbes.c…=1600&fit=bounds
2024-11-29T06:53:35Z
Discover how prestigious wineries like Chateau Montelena are using artificial intelligence to optimize everything from vine orientation to grape harvesting while preserving the artistry and tradition that makes great wine unique. (Adobe Stock) The marriage of artificial intelligence and winemaking might seem like an unlikely pairing. Still, at Chateau Montelena, the historic Napa Valley winery that helped put American wines on the world stage, cutting-edge technology is transforming how premium wines are produced. "AI in the wine industry is still in its infancy," says Matt Crafton, winemaker at Chateau Montelena. Yet the potential applications are already proving revolutionary, offering new insights into every aspect of wine production, from the vineyard to the cellar. Smart Monitoring In The Vineyard: At Chateau Montelena, AI's impact begins with innovative vine monitoring systems. Using technology adapted from facial recognition software, vineyard managers can assess vine health by simply walking down rows with their smartphones. "They take all those images, upload them to their server, and based on data they have they can recognize how the leaf angles change based on sun exposure; they can correlate that to vine water stress," Crafton explains. This real-time data helps optimize irrigation and care for each vine individually. The winery also employs aerial imaging and pattern recognition. Using high-resolution photographs taken by planes or drones, AI algorithms can detect subtle changes in individual vines that might indicate problems, like clogged irrigation lines, before they become visible to the human eye. "We're getting these little nudges saying, 'Hey, go out to row 45 and head about eight vines in and double check that emitter is working,'" Crafton explains. 
"It's being able to recognize these subtle changes that would require tremendous amounts of man hours to determine."Reimagining Ancient Practices With Modern TechnologyPerhaps the most striking example of AI's impact comes from Chateau Montelena's recent replanting project. The winery used AI and solar positioning data to determine the optimal orientation for their vineyard rows a decision that could impact wine quality for decades to come.Traditional vineyard layouts often follow a simple rule: rows are planted perpendicular to the nearest road, but as Crafton explains, "there's no other large logic behind that. The people who set out the roads are the California Department of Transportation they don't really care about farming."Challenging this convention, Chateau Montelena turned to AI to analyze complex solar and weather data patterns. "Using AI and weather climate data, we determined that we should orient our rows approximately 25 degrees East of true North," says Crafton. This precise angle ensures that during the hottest part of the day, the sun shines on top of the canopy rather than directly on the grapes, protecting the delicate compounds that give the wine its flavor and character.The impact of this AI-guided decision has been significant. "We've seen a 10 to 15 degrees Fahrenheit temperature delta between berries that are in direct sunlight at the hottest time of the day versus berries that are in shade," Crafton notes. This temperature difference is crucial because "all of the really beautiful aromatics, the phenolics, the tannins that you taste, all of those wonderful flavors are very, very sensitive to heat." Too much heat can degrade these compounds, resulting in wines that lack character and complexity."We have two vineyard blocks that we planted in 2018 that are now in full production using this new system... the fruit is absolutely dynamite." 
This innovative approach demonstrates how AI can help reimagine practices that have remained largely unchanged for generations, leading to measurable improvements in wine quality.

From Cork To Bottle: AI In Production

Even the corks sealing Chateau Montelena's wines benefit from AI innovation. The winery uses specialized corks from a French company that employs AI modeling to predict how each closure will develop and mature over time. "They've modeled this using AI... they're confident enough, and we are too, seeing the data, that each individual closure, in addition to being 100 percent clean, comes with a 30-year integrity guarantee," Crafton shares. This technology helps ensure that premium wines can age gracefully for decades.

The Human Touch Remains Essential

While AI is proving invaluable for data analysis and pattern recognition, Crafton emphasizes that the technology serves to enhance rather than replace human expertise. "AI doesn't actually create," he notes. "There are really only two things in the universe that create: one is evolution, and the other one is human beings."

This philosophy guides how Chateau Montelena integrates technology into its winemaking process. Rather than using AI to standardize its wines, the winery leverages it to better understand and express the unique characteristics of each vintage and vineyard block.

The Future Of Fine Wine

Looking ahead, Crafton sees AI playing an increasingly important role in helping winemakers sort through the enormous amount of data generated during wine production.

"Unfortunately, I think there is this idea that more data are better all the time, but the reality is that it just gets overwhelming very quickly," he explains. The hope is that AI will help identify truly actionable insights, freeing up winemakers to focus on the creative aspects of their craft.

A Toast To Innovation

The integration of AI into fine winemaking represents a fascinating balance between tradition and innovation.
While the fundamental art of winemaking remains unchanged, AI is helping prestigious wineries like Chateau Montelena optimize their processes, reduce resource usage, and ultimately produce even better wines.

For wine lovers, this technological evolution means that their favorite bottles are being crafted with unprecedented precision and care while maintaining the creativity and human touch that makes each vintage unique. It's a reminder that even in an industry steeped in tradition, there's always room for innovation, especially when it enhances rather than diminishes the artistry at the heart of fine winemaking.
Decision Making/Detection and Monitoring/Process Automation
Management/Others
null
null
null
null
null
null
news
Justin Dawes
Airbnb’s AI Plan: No Chatbot, More Personalization
Airbnb is still working on integrating AI throughout the app, but that won’t include an AI chatbot for trip planning, at least not yet.
http://skift.com/2024/11/21/airbnbs-ai-plan-no-chatbot-more-personalization/
https://skift.com/wp-con…732230478845.jpg
2024-11-21T23:28:13Z
Airbnb is still working on integrating AI throughout its app, but it won't include an AI chatbot for trip planning, at least not yet. Dave Stephenson, chief business officer for Airbnb, said the short-term rental giant worked with OpenAI to develop a chatbot for users.

"We were not satisfied with it," Stephenson said Thursday during the Phocuswright conference in Phoenix. "I think the tools in the interface right now using chat as a method to plan a trip, we just don't think it's actually sufficient, and it didn't actually meet our design criteria."

Many other travel companies have released AI chatbots for trip planning and inventory search. Skift has analyzed many of them over the past two years, from both startups and large online travel agencies. None have provided a reliable experience.

Airbnb is still trying to figure out the future of its search. "I don't think a chatbot is going to be the way it's going to work; we're actively working behind the scenes to figure that out," he said.

Brian Chesky, CEO of Airbnb, said in August that it will take years to rebuild Airbnb into a fully AI-powered application. Airbnb is focused on implementing AI to improve other areas of the platform. That includes developing a better algorithm for matching guests with rentals. The app has 8 million properties and 5 million users. A new photo tour feature is meant to display property photos in an order that gives guests the best understanding of a property's layout.

A major project underway involves gathering user data by asking them to add more details to their profiles. Airbnb is now asking users to answer questions like, "What are you obsessed with?" and "What's your most useless skill?" It's part of an effort to enable more personalized experiences. Personalization is only possible with high levels of quality customer data. Then, the AI can access that data to match travelers with lodging that meets specific needs.
"The more we understand about you, the better we can match you up with the perfect place, the better experience and stay that you can have," Stephenson said.

Chesky said during Airbnb's first-quarter earnings call this year that the user profiles will be an important part of the future of the app, which he expects will include a lot more than just short-term rentals. Chesky said in September that he wants to consistently launch new lines of business. The company has already been accepting new applications from experiences hosts and in May launched a pop culture experiences category called Icons.

"I think in the future, the profile will be the center of the solar system of Airbnb, and the home will be one of many categories orbiting the profile," Chesky said in May.
Personalization
Unknown
null
null
null
null
null
null
news
Julian Horsey
Elon Musk’s Bold Predictions For the Future
Elon Musk, the entrepreneur behind companies like Tesla and SpaceX, who recently helped Donald Trump regain the American presidency, has continuously pushed the frontiers of technology. His future predictions span artificial intelligence, renewable energy, population dynamics, and space exploration—fields that could reshape societal structures, economies, and human potential. Imagine a world where AI equals human […]The post Elon Musk’s Bold Predictions For the Future appeared first on Geeky Gadgets.
https://www.geeky-gadgets.com/elon-musks-predictions/
https://www.geeky-gadget…-Predictions.jpg
2024-11-09T14:57:39Z
Elon Musk, the entrepreneur behind companies like Tesla and SpaceX, who recently helped Donald Trump regain the American presidency, has continuously pushed the frontiers of technology. His future predictions span artificial intelligence, renewable energy, population dynamics, and space exploration, fields that could reshape societal structures, economies, and human potential.

Imagine a world where AI equals human intelligence, solar energy powers entire cities, and humans explore Mars. Far from science fiction, this is Musk's vision. Known for transforming ambitious ideas into reality, Musk sees a future where AI, sustainable energy, demographic shifts, and space travel drive global progress. As we approach these transformative shifts, a mix of excitement and uncertainty arises. Could these advancements lead us to a more sustainable, equitable, and innovative world?

Interview with Elon Musk

In this insightful interview, Musk shares his forecasts, exploring how AI might rival human intelligence, why solar energy could become a primary power source, and what the future holds for space travel and autonomous technology. These innovations bring promise but also present complex challenges that demand careful, strategic thinking.

Artificial Intelligence: The Dawn of Human-Level Capabilities

Musk anticipates that artificial intelligence will reach human-level abilities by 2029, a prediction that has sparked both excitement and concern in the tech community. This rapid progress in AI capabilities brings up critical questions about AI safety and ethics.
To address these pressing issues, Musk has launched the xAI project, which focuses on developing AI systems that are both beneficial and truth-seeking. The primary objectives of xAI include:

- Creating AI systems that align with human interests
- Mitigating potential risks associated with advanced AI
- Promoting transparency and accountability in AI development
- Exploring the philosophical implications of artificial general intelligence

As AI continues to evolve, it's crucial to ensure that its development remains in harmony with human values and ethical considerations. Musk's initiatives in this field aim to steer AI progress in a direction that maximizes benefits while minimizing potential harm.

Solar Energy: Spearheading the Renewable Revolution

In the realm of energy, Musk predicts that solar power will become the dominant source of electricity worldwide. With ongoing technological advancements and decreasing costs, solar energy is expected to fulfill a significant portion of global energy needs in the coming decades. This transition to renewable energy sources is vital for sustainable growth and combating climate change.

Musk also highlights the potential role of global AI clusters in driving innovation and enhancing energy solutions. These AI-powered systems could optimize energy distribution, improve storage technologies, and accelerate the development of more efficient solar panels.

Population Dynamics: Addressing Declining Birth Rates

One of Musk's more controversial concerns revolves around declining birth rates, which he views as a major threat to global populations. He advocates for policies that promote higher birth rates to sustain population levels and ensure economic stability.
Addressing this demographic issue is crucial for maintaining a balanced and thriving society. Some potential strategies to address declining birth rates include:

- Implementing family-friendly policies in the workplace
- Providing financial incentives for having children
- Improving access to affordable childcare
- Promoting work-life balance to support family formation

Space Exploration: The Red Planet Beckons

Space exploration remains a central focus for Musk, with ambitious plans to launch Starships to Mars in the near future. These missions face significant regulatory and technical hurdles but are viewed as essential for humanity's long-term survival and expansion. Mars exploration represents a crucial step toward becoming a multi-planetary species, a concept Musk believes is vital for safeguarding human civilization. The Mars colonization effort involves several key components:

- Developing sustainable life support systems for long-duration space travel
- Creating habitats capable of withstanding the harsh Martian environment
- Establishing a self-sustaining Martian economy
- Addressing the psychological challenges of long-term space habitation

Autonomous Vehicles and Robotics: Reshaping Mobility and Labor

Elon Musk predicts that Tesla's full self-driving technology will soon transform the automotive industry, ushering in an era of safer and more efficient transportation. By 2040, he anticipates that humanoid robots will be commonplace, influencing various sectors from manufacturing to healthcare.

The integration of autonomous vehicles and advanced robotics is expected to have significant economic impacts, potentially boosting efficiency and productivity across industries.
However, this shift also raises important questions about the future of work and the need for societal adaptation to these technological changes.

Economic and Societal Shifts: Envisioning a Post-Capitalist Era

Looking further into the future, Musk envisions a world where technological advancements lead to an abundance of goods and services. This abundance could pave the way for a post-capitalist society, characterized by universal high income and reduced economic inequality.

Such a shift would redefine economic structures, potentially fostering greater equity and sustainability. However, it also presents challenges in terms of:

- Redefining the concept of work and personal fulfillment
- Ensuring equitable distribution of resources in an automated world
- Addressing the psychological and social impacts of abundance
- Developing new economic models that can function in a post-scarcity environment

Elon Musk's predictions outline a future driven by rapid advancements in AI, renewable energy, and space exploration. These technological leaps offer both unprecedented opportunities and significant challenges, with the potential to reshape society into a more sustainable, equitable, and technologically advanced world. As we move towards this envisioned future, it's crucial to approach these changes with careful consideration, ensuring that technological progress aligns with human values and societal well-being.

Media Credit: Peter H. Diamandis
Unknown
Unknown
null
null
null
null
null
null
news
Shelli Brunswick, Forbes Councils Member, Shelli Brunswick, Forbes Councils Member https://www.forbes.com/councils/forbestechcouncil/people/shellibrunswick/
The Future Of Space And AI: Exploring New Frontiers And Transforming Earth
As we stand on the cusp of unprecedented breakthroughs, the fusion of AI and space technology is poised to reshape our understanding of the cosmos.
https://www.forbes.com/councils/forbestechcouncil/2024/11/05/the-future-of-space-and-ai-exploring-new-frontiers-and-transforming-earth/
https://imageio.forbes.c…=1600&fit=bounds
2024-11-05T14:15:00Z
Shelli Brunswick is CEO & Founder of SB Global LLC and an international keynote speaker on tech used for the betterment of humanity.

Artificial intelligence (AI) and machine learning (ML) are rapidly transforming space exploration and Earth-based applications, marking a new era of discovery. As Advanced Micro Devices (AMD) CEO Lisa Su recently stated, "AI will be part of everything."

This is especially true in space, where AI advancements enable autonomous missions, real-time data processing and decision-making. NASA's appointment of David Salvagnini as chief AI officer highlights the technology's growing influence.

On Earth, AI is driving advancements in environmental monitoring, precision agriculture and disaster management, making it indispensable for tackling global challenges. The importance of AI in our evolving world is highlighted at global conferences like DeepFest and the World AI Summit, where industry leaders explore its transformative potential. Through worldwide AI initiatives and collaborations between space agencies and private companies, AI is set to drive groundbreaking progress in both space exploration and Earth applications, ensuring a future of innovation and discovery.

AI: Enhancing Space Exploration

AI is transforming autonomous navigation and decision-making in space missions, allowing spacecraft and rovers to operate without constant human input. NASA's Perseverance Rover on Mars uses AI to autonomously navigate, select rock samples and avoid obstacles, overcoming communication delays with Earth. AI helps the rover analyze its surroundings in real time, making critical decisions faster than any human could from Earth. Similarly, the European Space Agency's (ESA) Mars Express mission relies on AI to prevent memory conflicts, preserve valuable scientific data and reduce mission workload by 50%.
Alessandro Donati of ESA's Space Operations Centre emphasizes that AI boosts scientific returns while lowering resource costs.

AI also plays a transformative role in analyzing the vast amounts of data generated by space telescopes like NASA's James Webb (JWST) and Hubble. These instruments produce massive datasets that require AI algorithms to filter out noise and identify patterns, such as detecting exoplanets. Penn State professor Joel Leja explains that machine learning is completely changing how astronomers process data, enabling rapid analysis of changes in star brightness and revealing orbiting planets. This underscores AI's growing importance in astronomy.

NASA's collaboration with Google highlights AI's impact in space research, particularly in analyzing data from the Kepler mission, leading to the discovery of new exoplanets. AI's ability to rapidly sift through vast datasets allows it to uncover phenomena that might otherwise go unnoticed by human scientists. Additionally, AI is crucial in predicting celestial events, such as asteroid impacts and solar flares, safeguarding astronauts and spacecraft by analyzing astronomical data and forecasting potential hazards.

AI enhances satellite operations by autonomously adjusting orientation and altitude to avoid space debris, optimize data collection and reduce operational risks. These autonomous capabilities also free up valuable human resources, allowing mission control teams to focus on more complex tasks. This technology improves data analysis and mitigates threats to space missions. Organizations like the SETI Institute use AI to analyze radio signals to search for extraterrestrial intelligence, identifying anomalies that could indicate intelligent life.
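The brightness-change idea behind transit detection of exoplanets can be illustrated with a deliberately simple sketch. Real pipelines use far more robust statistics; the `find_dips` function, its window size, and its threshold are illustrative assumptions, not any mission's actual algorithm. The idea is just to flag samples where a star's measured flux drops noticeably below a trailing baseline, as it does when a planet passes in front of the star.

```python
def find_dips(flux, window=5, threshold=0.01):
    """Indices where flux drops below the trailing-average baseline
    by more than `threshold` -- a crude stand-in for transit detection."""
    dips = []
    for i in range(window, len(flux)):
        # Baseline: mean of the `window` samples just before this one.
        baseline = sum(flux[i - window:i]) / window
        if baseline - flux[i] > threshold:
            dips.append(i)
    return dips

# Synthetic light curve: flat star brightness with a shallow 3-sample transit.
light_curve = [1.0] * 20 + [0.97] * 3 + [1.0] * 20
print(find_dips(light_curve))
```

Running this flags the samples around the synthetic transit; production systems like those used on Kepler data instead fit periodic transit models and use machine-learned classifiers to reject false positives.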
As AI advances, it will play an increasingly critical role in space exploration, driving mission success and expanding our understanding of the universe.

Space And AI: Benefits For Earth

AI's integration into space-based Earth observation systems plays a critical role in addressing environmental and climate challenges, showcasing the value of space technology in protecting our planet. Companies like Planet Labs and agencies such as NASA and ESA are leading this effort, utilizing AI-powered satellites to monitor changes like deforestation, rising sea levels and ecosystem health, providing insights that would otherwise be difficult to obtain.

ESA's Copernicus and Discovery programs fund AI-driven projects to detect methane leaks and manage natural disasters. Methane, a potent greenhouse gas, is a significant contributor to climate change, and early detection through AI-equipped satellites is essential for mitigating its impact. Additionally, NASA has partnered with IBM Research to develop an AI model for more accurate weather and climate forecasting, enabling timely responses to climate-related events.

Combined with satellite data, AI is revolutionizing precision agriculture and natural resource management. Companies like Descartes Labs use AI to analyze satellite imagery, providing real-time insights into soil health, crop yields and water availability. This helps farmers optimize resource use, increase productivity and reduce waste. In drought-stricken regions, AI-driven satellite technology tracks water availability and usage, enabling better resource planning and sustainability.

The World Economic Forum emphasizes AI's crucial role in monitoring environmental factors and predicting resource shortages, which is essential for developing proactive climate strategies. NASA and Google have also partnered on AI-driven initiatives to enhance satellite data analysis for tracking environmental changes and predicting natural disasters.
By processing vast amounts of Earth observation data, AI identifies patterns and anomalies, improving forecasting for hurricanes, floods and wildfires. These innovations are critical for mitigating climate impacts and enhancing disaster response.

Projects like NASA's SERVIR use AI to monitor climate change and its effects on agriculture and water resources, offering real-time insights into rising sea levels, deforestation and ecosystem health. These systems support better resource planning and disaster mitigation. Additionally, NASA's collaboration with IBM Research on AI models for climate prediction continues to improve our ability to respond to climate-related events.

AI And The Future Of Space Exploration

As we stand on the cusp of unprecedented breakthroughs, the fusion of AI and space technology is poised to reshape our understanding of the cosmos and the future of life on Earth. AI's transformative power lies not just in the data it processes or the missions it autonomously steers, but in its ability to address humanity's most pressing challenges, from climate change to resource management. We are unlocking a future where discovery and innovation know no bounds through continued collaboration between space agencies, private companies and global AI initiatives. The journey ahead promises not only to push the limits of exploration but to redefine our relationship with the universe and, most importantly, to better protect our planet. The next era of space and AI is not just about reaching further; it's about creating a future that is more sustainable, interconnected and inspired by the limitless potential of human ingenuity.
Decision Making/Prediction/Discovery/Process Automation
Life, Physical, and Social Science/Computer and Mathematical
null
null
null
null
null
null
news
Maria Deutscher
Report: OpenAI drafts policy paper with suggestions for US AI strategy
OpenAI has drafted a policy paper with suggestions on how to foster the U.S. artificial intelligence sector, CNBC reported today. Chris Lehane, the company’s vice president of global affairs, is expected to present the document today in Washington, D.C. OpenAI’s paper reportedly contains five main policy suggestions. According to FedScoop, the first proposal calls on […]The post Report: OpenAI drafts policy paper with suggestions for US AI strategy appeared first on SiliconANGLE.
https://siliconangle.com/2024/11/13/report-openai-drafts-policy-paper-suggestions-us-ai-strategy/
https://d15shllkswkct0.c…24/11/OpenAI.png
2024-11-13T19:49:33Z
OpenAI has drafted a policy paper with suggestions on how to foster the U.S. artificial intelligence sector, CNBC reported today. Chris Lehane, the company's vice president of global affairs, is expected to present the document today in Washington, D.C.

OpenAI's paper reportedly contains five main policy suggestions. According to FedScoop, the first proposal calls on state governments and federal agencies to create AI-focused economic zones. The goal, OpenAI states in the document, would be to speed up the permitting work and other tasks involved in building AI infrastructure.

The company's infrastructure proposals focus on not only data centers but also the energy generation facilities needed to power them. According to OpenAI, officials should prioritize the construction of wind and solar farms, as well as clear unutilized nuclear reactors for use. The policy paper suggests that the U.S. Navy's nuclear power expertise could be conducive to the initiative.

"States that provide subsidies or other support for companies launching infrastructure projects could require that a share of the new compute be made available to their public universities to create AI research labs and developer hubs aligned with their key commercial sectors," the policy paper states.

Speaking to CNBC, OpenAI's Lehane provided an example of how such university collaborations could be implemented. He said that states with large agricultural sectors could make a portion of local AI data centers' computing capacity available to their university systems. Researchers at those institutions, in turn, could use the computing capacity and data from local farms to train large language models optimized for agricultural use cases.

Another section of OpenAI's policy paper calls for a piece of legislation dubbed the National Transmission Highway Act. Its purpose, the document states, would be to streamline the planning, permitting and payment procedures involved in building transmission infrastructure.
This is a term for the equipment used to transmit energy resources and network traffic. Private investors could help finance energy infrastructure projects, OpenAI argues in the document. The company says the government can encourage investor participation by committing to purchase some of the energy that will be generated.

OpenAI's policy paper also covers a number of other topics. According to the company, the U.S. and allies should create an AI-focused economic bloc through a North American Compact for Artificial Intelligence. OpenAI says the goal of this bloc would be to compete with China. Additionally, the ChatGPT developer recommends that the government help train people for AI-related jobs in fields such as data center management.

The policy paper goes on to list several of the benefits that the company expects AI investments to deliver. OpenAI says that initiatives of the kind it's recommending could create tens of thousands of jobs, boost GDP growth and modernize the electrical grid. Furthermore, it believes that the AI sector could help drive the construction of new chip fabs.
Unknown
Others
null
null
null
null
null
null
news
Devesh Beri
OpenAI Improves ChatGPT for Mac with Coding App Integration
OpenAI has announced a new feature for the ChatGPT app on Mac, allowing it to integrate with coding apps such as Xcode, VS Code, TextEdit, and Terminal. The "Work with Apps" update makes life easier by letting you skip the hassle of copying and pasting code between ChatGPT and your dev environment. The new functionality lets ChatGPT interact directly with supported coding applications when permission is given. Users can select code in their development environment, and ChatGPT will automatically receive this code as context along with the user's prompt, as reported by MacRumors. The feature taps into macOS's Accessibility API, the same API that lets screen readers read text from supported apps. An OpenAI employee showed off the feature to TechCrunch by using a solar system project in Xcode. They asked ChatGPT to "add the missing planets," and it nailed it by generating code for Earth that fit perfectly with the existing format. ChatGPT can work well with coding setups, but it can't directly write code in apps like Xcode. That's more in line with what GitHub Copilot does, offering in-app coding help. Additionally, as the feature relies on the Accessibility API, it is limited to interpreting text and cannot process images or videos. The "Work with Apps" feature is currently available for ChatGPT Plus and Teams users. OpenAI plans to extend support to Enterprise and Education users in the near future. The company plans to expand this integration to other apps, especially those for writing tasks.
https://www.macobserver.com/news/openai-improves-chatgpt-for-mac-with-coding-app-integration/
https://www.macobserver.…/ChatGPT-Mac.png
2024-11-16T08:39:43Z
OpenAI has announced a new feature for the ChatGPT app on Mac, allowing it to integrate with coding apps such as Xcode, VS Code, TextEdit, and Terminal. The "Work with Apps" update makes life easier by letting you skip the hassle of copying and pasting code between ChatGPT and your dev environment.

The new functionality lets ChatGPT interact directly with supported coding applications when permission is given. Users can select code in their development environment, and ChatGPT will automatically receive this code as context along with the user's prompt, as reported by MacRumors. The feature taps into macOS's Accessibility API, the same API that lets screen readers read text from supported apps.

An OpenAI employee showed off the feature to TechCrunch by using a solar system project in Xcode. They asked ChatGPT to "add the missing planets," and it nailed it by generating code for Earth that fit perfectly with the existing format.

ChatGPT can work well with coding setups, but it can't directly write code in apps like Xcode. That's more in line with what GitHub Copilot does, offering in-app coding help. Additionally, as the feature relies on the Accessibility API, it is limited to interpreting text and cannot process images or videos.

The "Work with Apps" feature is currently available for ChatGPT Plus and Teams users. OpenAI plans to extend support to Enterprise and Education users in the near future. The company plans to expand this integration to other apps, especially those for writing tasks.
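Mechanically, a feature like this amounts to packaging the editor selection alongside the user's request before anything is sent to the model. The sketch below is an assumption about how such a payload might be assembled; OpenAI has not published the exact format, and `build_messages` and the system-prompt wording are invented for illustration. It only shows the widely documented chat messages structure (system and user roles), not the Accessibility API plumbing.

```python
def build_messages(app_name, selected_code, user_prompt):
    """Bundle the user's editor selection (read via the Accessibility
    API in the real feature) with their request, using the standard
    chat messages format of alternating roles.
    """
    context = (
        f"The user is working in {app_name}. "
        f"Their current selection is:\n{selected_code}"
    )
    return [
        {"role": "system", "content": context},
        {"role": "user", "content": user_prompt},
    ]

messages = build_messages(
    "Xcode",
    'let planets = ["Mercury", "Venus"]',
    "Add the missing planets",
)
print(messages[0]["content"])
```

The resulting list is what would be handed to a chat-completion call; the point is that no copy-and-paste is needed because the selection is injected as context automatically.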
Content Synthesis/Process Automation/Digital Assistance
Computer and Mathematical
null
null
null
null
null
null
news
Usman Kabir
Amazon.com (AMZN) Joins Clean Air Task Force in Pushing for AI-Driven Fusion Energy Innovation
We recently compiled a list of the 15 Trending AI Stocks on Latest News. In this article, we are going to take a look at where Amazon.com, Inc. (NASDAQ:AMZN)...
https://finance.yahoo.com/news/amazon-com-amzn-joins-clean-131358554.html
https://media.zenfs.com/en/insidermonkey.com/96b2a95f603866c28e00574448cae026
2024-11-21T13:13:58Z
We recently compiled a list of the 15 Trending AI Stocks on Latest News. In this article, we are going to take a look at where Amazon.com, Inc. (NASDAQ:AMZN) stands against the other trending AI stocks on latest news.

Artificial intelligence has established deep roots within human society in a very short space of time. The latest manifestation of this is a new advertisement by Coca-Cola, one of the most popular drink brands in the world, which was made entirely using AI. However, these advertisements have not been well-received on social media. Per a report by news platform Forbes, the AI-generated Christmas ads have sparked mockery and caution from social media users. The ads were created by Secret Level, Silverside AI and Wild Card, who used AI models Leonardo, Luma, Runway, and Kling for creative inspiration.

Read more about these developments by accessing 10 Best AI Data Center Stocks and 10 Buzzing AI Stocks According to Goldman Sachs.

Meanwhile, finance news publication Financial Times reports that China's biggest tech groups are building artificial intelligence teams in Silicon Valley. Per the report, the Chinese firms are hoping to poach staff from US rivals who could help them make up ground in the race to profit from generative AI. FT claims that Alibaba, ByteDance and Meituan are three Chinese firms that have been expanding their offices in California in recent months. This is part of a larger push that comes despite Washington's efforts to slow down their work by restricting exports of the highest-end AI chips to the Asian country.

READ ALSO: 30 Most Important AI Stocks According to BlackRock and Beyond the Tech Giants: 35 Non-Tech AI Opportunities.

For this article, we selected AI stocks by combing through news articles, stock analysis, and press releases. These stocks are also popular among hedge funds. Why are we interested in the stocks that hedge funds pile into?
The reason is simple: our research has shown that we can outperform the market by imitating the top stock picks of the best hedge funds. Our quarterly newsletter's strategy selects 14 small-cap and large-cap stocks every quarter and has returned 275% since May 2014, beating its benchmark by 150 percentage points (see more details here). A customer entering an internet retail store, illustrating the convenience of online shopping. Market Capitalization: $2.1 Trillion. Amazon.com, Inc. (NASDAQ:AMZN) operates as a technology conglomerate with core interests in the ecommerce business. On November 15, Clean Air Task Force (CATF), a non-profit dedicated to advancing carbon-free energy solutions, released a report detailing that AI can fast-forward fusion energy development by enhancing research, optimizing designs, improving diagnostics, and streamlining workflows. Fusion is a natural process that generates energy and can be replicated in controlled environments. The report, supported by Amazon, also found that by advancing fusion energy technologies, AI can further power the digital transformation of industries by delivering scalable carbon-free energy to grids in a way that complements existing renewable energy technologies like wind and solar. Overall, AMZN ranks 4th on our list of trending AI stocks on latest news. While we acknowledge the potential of AMZN as an investment, our conviction lies in the belief that some AI stocks hold greater promise for delivering higher returns, and doing so within a shorter time frame. If you are looking for an AI stock that is more promising than AMZN but that trades at less than 5 times its earnings, check out our report about the cheapest AI stock. READ NEXT: 8 Best Wide Moat Stocks to Buy Now and 30 Most Important AI Stocks According to BlackRock. Disclosure: None. This article is originally published at Insider Monkey.
Decision Making/Process Automation/Prediction
Life, Physical, and Social Science/Computer and Mathematical
null
null
null
null
null
null
news
Daniel Miessler
UL NO. 459: New Active 0-day Exploitation, AI That Sees Your Open Apps, The RebootAI Project
A conversation with Rob Allen from ThreatLocker, UL's Black Friday sale, Finland's internet disrupted, and more...
https://danielmiessler.com/p/ul-459
https://beehiiv-images-p…png?t=1731889159
2024-11-18T23:58:40Z
SECURITY | AI | PURPOSE. UNSUPERVISED LEARNING is a newsletter about upgrading to thrive in a world full of AI. It's original ideas, analysis, mental models, frameworks, and tooling to prepare you for the world that's coming. Hey there! Had a great conversation with Rob Allen from ThreatLocker about their Zero Trust approach: deny-by-default, dynamic ACLs, and blocking ransomware at every stage. The UL Black Friday Membership window is now open. GET IT. It's time for turkey and cranberry sauce again, which means it's also time for a Black Friday Sale of 20% off the first year of UL Membership. Here's what members get: access to the smartest, most curious, and KINDEST community out there; direct access to Daniel and hundreds of security and AI professionals; exclusive Member-Only content; access to the UL Book Club, which has run monthly since 2017; access to our Mid-month Meetups, where we discuss career and life; and DEEP discounts on paid courses and products. Best of all is the people. It's seriously the best community I've ever been a part of. "Daniel has created a place for civil discussion in a world that frequently prefers argument over discussion." - Ben Collins. Use coupon code BLACKFRIDAY20. Upgraded all my Ubiquiti gear and am making progress towards a 10Gbit world. Heading to Saudi soon to speak at Blackhat MEA! This one didn't get nearly enough coverage last week. ChatGPT has a new feature that can read code from MacOS apps like VS Code, Xcode, and Terminal, making it easier for people to use AI in a live way without copy-pasting. The new feature, called "Work with Apps," uses MacOS's Accessibility API to read text right from your screen. MORE  This is getting closer to what some other startups are working on, where they're watching your screen and AI is operating on it.
That functionality scares the crap out of me, though, so I'm only likely to use it with Apple and maybe Google if they have an option to turn off the data harvesting / ads stuff. For startups, I'm really worried about them getting all this data and then getting compromised. I see it as a virtual inevitability. I really only trust a handful of companies (mostly just Apple, actually) with this much, and this level, of data. Something, or someone, has cut the data cable between Finland and Germany. Finland's internet access is currently routed through Sweden. Many are assuming shenanigans. MORE  Palo Alto Networks has released Indicators of Compromise (IoCs) for a new zero-day vulnerability affecting their firewalls. MORE  VMware confirmed that threat actors are exploiting two vCenter Server vulnerabilities, CVE-2024-38812 and CVE-2024-38813, which were first disclosed at the 2024 Matrix Cup hacking competition. MORE  Anthropic has a new Prompt Improver that takes a given prompt and writes a better one. This is an example of the ecosystem improvement I've been talking about. MORE  OpenAI might launch an "AI agent" tool called "Operator" in January. Operator will compete with Anthropic's "Computer Use" and Google's rumored agent. MORE  I'm anticipating that in 2025 the biggest thing in AI will be the maturation of Agents. They started getting decent in 2024; next year they'll get mature enough, and integrated enough, for real-world use cases. The models will get smarter, but I think most of the benefit will be in the tooling and ecosystems around the models, not the models themselves. For agents, it's helpful to remember what the actual milestone is, which is pretty simple to track: constant monitoring of audio, video, and text of everything you're doing. That means cameras and microphones on your body, and full monitoring of the screens and I/O of your devices/computers. This is what's going to feed your personal and work DAs with the full context they need to serve you best.
And that's what all these efforts will eventually push towards, even if they're not doing so yet. Sam Altman and Arianna Huffington's Thrive AI Health is an AI assistant that aims to offer personalized advice on sleep, food, fitness, and more. MORE  Google.org is putting $20 million in cash and $2 million in cloud credits into a new initiative to help researchers use AI for scientific breakthroughs. MORE  Apple's M4 Max CPU transcribes audio twice as fast as Nvidia's RTX A5000 GPU while using significantly less power. In a user test, the M4 Max completed an audio transcription in 2:29 minutes using Whisper V3 Turbo, consuming just 25 watts, compared to the RTX A5000's 4:33 minutes and 190 watts. MORE  Really want one of these, but can't justify it yet. The real question is whether our next AI rigs should be a cluster of Mac Minis, or a standard big beefy NVIDIA-based box. I'm thinking it might be the big box for the next one, and then the one after that is probably some other architecture we can't see yet? Or perhaps an Exolab cluster of Apple-based systems? iOS 18.2's Music Recognition feature now logs where you were when you heard a song. This new "Musical Memories" feature geotags songs, so you can remember the exact location you discovered them. MORE  Pharma stocks have crashed due to RFK Jr. taking over Health and Human Services. Moderna is down close to 40%, and other stocks are suffering in a similar way. Not sure how this isn't a buy opportunity, though. I don't see how most people (and RFK) don't figure out how to tell the difference between the good and bad stuff these companies are doing. MORE  Netflix hit a record 65 million concurrent streams during the Mike Tyson vs. Jake Paul fight, reaching 60 million households worldwide. But there were over 100,000 complaints about buffering and connection problems. MORE  A new study shows that treating bullying as a collective issue rather than an individual one can significantly reduce its occurrence in primary schools.
The approach involves engaging the entire school community, including teachers, students, and parents, to address and prevent bullying. MORE  I love this concept, which reminds me of how some countries handle prostitution by going after the buyers rather than the sellers. It's an economics way of looking at a whole system, and not just the obvious place. With bullying, I think what needs to happen is some level of shaming of the kids who see it happen and do nothing about it, e.g., intervening, telling adults, etc. RebootAI: An Offline AI Oracle for Emergencies. I want to build a local AI that can run offline in bad situations like earthquakes, meteor strikes, and any other scenario where we might have power (like from solar), but no internet. So the idea is that I want something I can ask how to do anything! Tourniquets, sterilizing water, building shelters, identifying edible plants, etc. So ideally this would be both text and image capable, and just as resilient an implementation as possible. Who wants to help me build it? Or does anyone know of one already out there? Even better if it's its own standalone box, and you can just update the model used every once in a while. Cloudflare's robots.txt file is a mix of ASCII art and directives for web crawlers. It allows Twitterbot and DemandbaseWebsitePreview to access specific language pages, but blocks many others from accessing various parts of the site, like search results and feedback pages. MORE  Managing High Performers: a guide on how to effectively manage high-performing employees. It covers strategies for keeping them motivated, providing the right challenges, and ensuring they feel valued within the organization. MORE  Ian's Secure Shoelace Knot is the best shoelace knot I know of. I actually tie this for my sneakers and mostly leave them that way and slip them on and off. MORE  RECOMMENDATION OF THE WEEK: Check out the Aphorism of the Week below.
Focus your efforts on being flexible after wrong notes, as opposed to being able to play perfect notes all the time. 2025 and the next few years are likely to be so crazy that we won't be able to plan or play the right notes. But what we can get good at doing is adapting once the wrong note is played. "If you hit a wrong note, it's the next note you play that determines if it's good or bad." Thank you for reading. Please forward to a friend and/or share on socials to help support the work. Daniel
Content Synthesis/Decision Making/Process Automation
Computer and Mathematical
null
null
null
null
null
null
news
null
U.S. ahead in AI innovation, easily surpassing China in Stanford's new ranking
The U.S. leads the world in developing artificial intelligence technology, surpassing China in research and other important measures of AI innovation, according to a newly released Stanford University index. There's no surefire way to rank global AI leadership but Stanford researchers have made an attempt by measuring the “vibrancy” of…
https://japantoday.com/category/tech/us-ahead-in-ai-innovation-easily-surpassing-china-in-stanford%27s-new-ranking
null
2024-11-21T21:37:25Z
The U.S. leads the world in developing artificial intelligence technology, surpassing China in research and other important measures of AI innovation, according to a newly released Stanford University index. There's no surefire way to rank global AI leadership, but Stanford researchers have made an attempt by measuring the "vibrancy" of the AI industry across various dimensions, from how much research and investment is happening to how responsibly the technology is being pursued to prevent harm. "The gap is actually widening between the U.S. and China," said computer scientist Ray Perrault, director of the steering committee that runs Stanford's AI Index. "The U.S. is investing a lot more, at least at the level of firm creation and firm funding." The California-based university's Institute for Human-Centered AI, which has ties to Silicon Valley's tech industry, released the report Thursday as government AI officials from the U.S. and several allies met in San Francisco this week to compare notes on AI safety measures. Here's which countries made the top 10: The U.S. ranks No. 1 on Stanford's list and has consistently held that position since 2018, when it overtook China. It has far outpaced China in private AI investment, which hit $67.2 billion in the U.S. last year compared to $7.8 billion in China, according to the report. It also leads in publishing responsible AI research. It's no surprise that the home of commercial AI powerhouses such as Google and Meta, along with relative newcomers like OpenAI and Anthropic, has produced many notable AI models that have influenced how the technology is being developed and applied. The U.S. also gets some points for having a number of AI-related laws on the books, though Congress has yet to pass any broad AI regulations. China has requested far more patents than any other country regarding generative AI, the U.N. intellectual property agency said earlier this year.
Stanford researchers counted that as one measure of China's strong growth in AI innovation, but not enough to lead the pack. Still, the report says that "China's focus on developing cutting-edge AI technologies and increasing its R&D investments has positioned it as a major AI powerhouse." China's universities have produced a large number of AI-related research publications, and it has commercial leaders developing notable AI models, such as Baidu and its chatbot Ernie. Coming in at No. 3 is the UK, which also ranked high in research and development, and educational infrastructure due to top computer science universities churning out a skilled AI workforce. It's also home to Google's AI subsidiary DeepMind, whose co-founder recently won a Nobel Prize, and had more mentions of AI in parliamentary proceedings than any other country. Last year, the UK hosted the world's first international AI safety summit. Close behind the UK was India, thanks to a strong AI research community, improvements in economic investments tied to AI and a robust public discourse about AI on social media, according to the report. The UAE's deliberate focus on AI appears to have paid off in the Middle Eastern nation's fifth-place score. It was one of the top locations for AI investments. Microsoft earlier this year said it was investing $1.5 billion in UAE-based tech firm G42, which is overseen by the country's powerful national security adviser. Based in Abu Dhabi, G42 runs data centers and has built what's considered the world's leading Arabic-language AI model, known as Jais. Rounding out the top 10 are France at No. 6, followed by South Korea, Germany, Japan and Singapore. France, home to the buzzy AI startup Mistral, ranked high in AI policy and governance. Both it and Germany are part of the European Union's sweeping new AI Act that places safeguards on a range of AI applications based on how risky they are. The EU also follows the U.S.
in developing a plan to expand semiconductor production within the bloc. © Copyright 2024 The Associated Press. All rights reserved. This material may not be published, broadcast, rewritten or redistributed without permission.
Unknown
Unknown
null
null
null
null
null
null
news
The Online Citizen
Amazon invests over S$2 billion in Singapore, creating over 10,000 jobs in 2023
Amazon invested over S$2 billion in Singapore in 2023, creating more than 10,000 indirect jobs. The investment spans retail and cloud services, supporting businesses and digital transformation. Initiatives in sustainability, community outreach, and employee inclusion further demonstrate Amazon's commitment to the nation. The post Amazon invests over S$2 billion in Singapore, creating over 10,000 jobs in 2023 appeared first on The Online Citizen.
https://www.theonlinecitizen.com/2024/11/28/amazon-invests-over-s2-billion-in-singapore-creating-over-10000-jobs-in-2023/
https://cdn.theonlinecit…on-Singapore.png
2024-11-28T04:56:06Z
Amazon reported on 28 November 2024 that it invested over S$2 billion in Singapore in 2023. This investment, which covered its retail and cloud operations, supported over 10,000 indirect jobs in sectors including construction, logistics, and professional services. The expenditure included infrastructure improvements such as fulfilment centres and data centres, alongside operational costs related to customer programmes, small and medium-sized enterprises (SMEs), and employee remuneration. Keystone Strategy, a consultancy firm, estimated that over 4,000 indirect roles were created, while an internal survey by Amazon attributed more than 6,000 additional jobs to business activities linked to the platform. S$12 billion to expand cloud infrastructure in Singapore: At the 10th anniversary of the AWS Summit in Singapore in May 2024, Amazon Web Services (AWS) reaffirmed its support for Singapore's Smart Nation and National AI Strategy 2.0 (NAIS 2.0) goals. AWS introduced the flagship AI programme, AWS AI Spring Singapore, to accelerate the adoption of artificial intelligence (AI) and generative AI across six strategic areas: Public Sector, Workforce, Enterprise, Startups, Communities, and Research and Development. To support these initiatives, AWS committed to investing an additional S$12 billion into Singapore's cloud infrastructure from 2024 to 2028. This investment aims to meet growing customer demand and bolster Singapore's digital transformation efforts. Supporting businesses and global reach: Amazon's presence in Singapore includes partnerships with thousands of businesses, particularly SMEs, which sell their products through the platform.
Tools and programmes like Amazon Global Selling enable these businesses to expand internationally, providing access to global markets. Collaborations with local organisations, such as the 2023 Singapore Cross-border Brand Launchpad initiative with Enterprise Singapore and the Singapore Business Federation, aim to help SMEs access overseas opportunities. The initiative targets the US market and aims to benefit over 100 local businesses by 2025. Contribution to digital transformation: Amazon Web Services (AWS) has supported organisations in modernising IT systems and adopting digital innovations. The AWS Asia Pacific (Singapore) Region serves as a hub for thousands of customers, including businesses and public agencies such as the Maritime & Port Authority of Singapore and Grab. AWS has also launched initiatives to encourage the use of artificial intelligence (AI) and generative AI, including its AI Spring programme. These efforts align with Singapore's broader digital transformation goals and involve collaboration with government and private sector entities. Additionally, AWS has trained more than 400,000 people in Singapore on cloud technology since 2017, helping to build local digital skills. Other initiatives include educational workshops conducted in partnership with agencies such as the Infocomm Media Development Authority. Focus on sustainability: Amazon's investment in renewable energy includes projects with Sunseap and Sembcorp that are projected to generate enough solar power to support approximately 20,000 households annually. The company has also taken steps to reduce packaging waste, reporting an 80% increase in non-grocery orders shipped without additional packaging since 2021. Community and workplace efforts: As part of its community programmes, Amazon contributed S$100,000 in 2023 to non-profit organisations supporting children and youth in Singapore.
The company also partnered with SG Enable to improve employment access for persons with disabilities. In workplace rankings, Amazon was listed among LinkedIn's top 10 companies to work for in Singapore in 2023, and AWS Singapore was certified by Great Place To Work® for 2024. The S$2 billion investment reflects Amazon's role in supporting business activities, digital initiatives, and workforce development in Singapore. The company's activities span multiple sectors, contributing to economic activity and aligning with national priorities for sustainability and technological advancement.
Content Synthesis/Prediction/Discovery
Business and Financial Operations
null
null
null
null
null
null
news
MATT O'BRIEN AP technology writer
US ahead in AI innovation, easily surpassing China in Stanford's new ranking
The U.S. leads the world in developing artificial intelligence technology, surpassing China in research and other important measures of AI innovation, according to a newly released Stanford University index
https://abcnews.go.com/Technology/wireStory/us-ahead-ai-innovation-easily-surpassing-china-stanfords-116088547
https://i.abcnewsfe.com/…_16x9.jpg?w=1600
2024-11-21T13:02:58Z
The U.S. leads the world in developing artificial intelligence technology, surpassing China in research and other important measures of AI innovation, according to a newly released Stanford University index. There's no surefire way to rank global AI leadership, but Stanford researchers have made an attempt by measuring the "vibrancy" of the AI industry across various dimensions, from how much research and investment is happening to how responsibly the technology is being pursued to prevent harm. "The gap is actually widening between the U.S. and China," said computer scientist Ray Perrault, director of the steering committee that runs Stanford's AI Index. "The U.S. is investing a lot more, at least at the level of firm creation and firm funding." The California-based university's Institute for Human-Centered AI, which has ties to Silicon Valley's tech industry, released the report Thursday as government AI officials from the U.S. and several allies met in San Francisco this week to compare notes on AI safety measures. Here's which countries made the top 10: The U.S. ranks No. 1 on Stanford's list and has consistently held that position since 2018, when it overtook China. It has far outpaced China in private AI investment, which hit $67.2 billion in the U.S. last year compared to $7.8 billion in China, according to the report. It also leads in publishing responsible AI research. It's no surprise that the home of commercial AI powerhouses such as Google and Meta, along with relative newcomers like OpenAI and Anthropic, has produced many notable AI models that have influenced how the technology is being developed and applied. The U.S. also gets some points for having a number of AI-related laws on the books, though Congress has yet to pass any broad AI regulations. China has requested far more patents than any other country regarding generative AI, the U.N. intellectual property agency said earlier this year.
Stanford researchers counted that as one measure of China's strong growth in AI innovation, but not enough to lead the pack. Still, the report says that "China's focus on developing cutting-edge AI technologies and increasing its R&D investments has positioned it as a major AI powerhouse." China's universities have produced a large number of AI-related research publications, and it has commercial leaders developing notable AI models, such as Baidu and its chatbot Ernie. Coming in at No. 3 is the UK, which also ranked high in research and development, and educational infrastructure due to top computer science universities churning out a skilled AI workforce. It's also home to Google's AI subsidiary DeepMind, whose co-founder recently won a Nobel Prize, and had more mentions of AI in parliamentary proceedings than any other country. Last year, the UK hosted the world's first international AI safety summit. Close behind the UK was India, thanks to a strong AI research community, improvements in economic investments tied to AI and a robust public discourse about AI on social media, according to the report. The UAE's deliberate focus on AI appears to have paid off in the Middle Eastern nation's fifth-place score. It was one of the top locations for AI investments. Microsoft earlier this year said it was investing $1.5 billion in UAE-based tech firm G42, which is overseen by the country's powerful national security adviser. Based in Abu Dhabi, G42 runs data centers and has built what's considered the world's leading Arabic-language AI model, known as Jais. Rounding out the top 10 are France at No. 6, followed by South Korea, Germany, Japan and Singapore. France, home to the buzzy AI startup Mistral, ranked high in AI policy and governance. Both it and Germany will be part of the European Union's sweeping new AI Act that places safeguards on a range of AI applications based on how risky they are. The EU also follows the U.S.
in developing a plan to expand semiconductor production within the bloc.
Unknown
Unknown
null
null
null
null
null
null
news
Jean Leon
Meta's nuclear plant for AI centers is at risk due to bees
The discovery of a rare species of bee is reportedly preventing the building of Meta's planned nuclear plant aimed at powering AI centers.
https://www.androidheadlines.com/2024/11/meta-nuclear-plant-bees.html
https://www.androidheadl…AM-AH-1-jpg.webp
2024-11-05T17:47:06Z
In the artificial intelligence era, the tech industry has become more energy-hungry than ever. As a result, several major players have been investing in new power generation centers to fuel their ongoing AI-based developments. Consequently, nuclear energy has gained popularity due to its emission-free nature. Meta was looking to build a nuclear-powered AI data center, but it seems that some bees have prevented it from doing so. Companies financing energy supply centers need to ensure that they not only provide a significant amount of power but also align with environmental objectives. Therefore, the importance of wind and solar farms, along with nuclear plants, has increased significantly. Google has already made large related investments, and Microsoft is also looking to reactivate a nuclear plant. Meanwhile, Meta faces an unforeseen obstacle that will prove challenging to overcome. Rare bee species reportedly prevents Meta from building its nuclear plant for AI centers: The Times reported that a rare species of bee lives on the land where Meta planned to build a nuclear power plant. In such cases, environmental laws prohibit further construction in the area to ensure the survival of the species. Due to their importance to the environment, bees are already highly protected insects. The discovery of a rare species further complicates matters for Mark Zuckerberg's company. According to the source, Zuckerberg himself informed the staff that they would not be able to continue with the project. At present, it remains unclear whether the company is actively seeking a new location to initiate the plant's construction. In similar situations, there are alternative locations available in case a problem arises. However, there is no confirmation as to whether this is the case with Meta. Other big players are getting ahead of Meta: Currently, no tech brand has managed to successfully establish AI centers powered by nuclear energy.
While Microsoft wants to reopen the Three Mile Island plant, it still needs to obtain prior approval. On the other hand, Google has already reached agreements with the startup Kairos Power to establish up to seven small nuclear reactors in the US. However, Google's investment will pay off starting in 2030. Amazon, another prominent player in AI development, has also secured agreements to build small modular reactors. Meanwhile, Meta is still looking for more agreements related to clean energy.
Unknown
Life, Physical, and Social Science/Architecture and Engineering
null
null
null
null
null
null
news
Breanne Warner
Cohere Embed multimodal embeddings model is now available on Amazon SageMaker JumpStart
The Cohere Embed multimodal embeddings model is now generally available on Amazon SageMaker JumpStart. This model is the newest Cohere Embed 3 model, which is now multimodal and capable of generating embeddings from both text and images, enabling enterprises to unlock real value from their vast amounts of data that exist in image form. In this post, we discuss the benefits and capabilities of this new model with some examples.
https://aws.amazon.com/blogs/machine-learning/cohere-embed-multimodal-embeddings-model-is-now-available-on-amazon-sagemaker-jumpstart/
https://d2908q01vomqb2.c…625-1120x630.jpg
2024-11-15T17:15:59Z
The Cohere Embed multimodal embeddings model is now generally available on Amazon SageMaker JumpStart. This model is the newest Cohere Embed 3 model, which is now multimodal and capable of generating embeddings from both text and images, enabling enterprises to unlock real value from their vast amounts of data that exist in image form. In this post, we discuss the benefits and capabilities of this new model with some examples. Overview of multimodal embeddings and multimodal RAG architectures: Multimodal embeddings are mathematical representations that integrate information not only from text but from multiple data modalities, such as product images, graphs, and charts, into a unified vector space. This integration allows for seamless interaction and comparison between different types of data. As foundation models (FMs) advance, they increasingly require the ability to interpret and generate content across various modalities to better mimic human understanding and communication. This trend toward multimodality enhances the capabilities of AI systems in tasks like cross-modal retrieval, where a query in one modality (such as text) retrieves data in another modality (such as images or design files). Multimodal embeddings can enable personalized recommendations by understanding user preferences and matching them with the most relevant assets. For instance, in ecommerce, product images are a critical factor influencing purchase decisions. Multimodal embeddings models can enhance personalization through visual similarity search, where users can upload an image or select a product they like, and the system finds visually similar items.
In the case of retail and fashion, multimodal embeddings can capture stylistic elements, enabling the search system to recommend products that fit a particular aesthetic, such as vintage, bohemian, or minimalist. Multimodal Retrieval Augmented Generation (MM-RAG) is emerging as a powerful evolution of traditional RAG systems, addressing limitations and expanding capabilities across diverse data types. Traditionally, RAG systems were text-centric, retrieving information from large text databases to provide relevant context for language models. However, as data becomes increasingly multimodal in nature, extending these systems to handle various data types is crucial to provide more comprehensive and contextually rich responses. MM-RAG systems that use multimodal embeddings models to encode both text and images into a shared vector space can simplify retrieval across modalities. MM-RAG systems can also enable enhanced customer service AI agents that can handle queries that involve both text and images, such as product defects or technical issues. Cohere Multimodal Embed 3: Powering enterprise search across text and images. Cohere's embeddings model, Embed 3, is an industry-leading AI search model that is designed to transform semantic search and generative AI applications. Cohere Embed 3 is now multimodal and capable of generating embeddings from both text and images. This enables enterprises to unlock real value from their vast amounts of data that exist in image form. Businesses can now build systems that accurately search important multimodal assets such as complex reports, ecommerce product catalogs, and design files to boost workforce productivity. Cohere Embed 3 translates input data into long strings of numbers that represent the meaning of the data. These numerical representations are then compared to each other to determine similarities and differences.
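The comparison step described above is commonly implemented with cosine similarity between embedding vectors. The following is a minimal illustrative sketch, not Cohere's actual implementation: the four-dimensional vectors and item names are made-up stand-ins, since real Embed 3 vectors are far longer.

```python
# Minimal sketch of the similarity comparison described above: embeddings
# are vectors of numbers, and cosine similarity scores how close their
# meanings are. The tiny 4-dimensional vectors below are hypothetical
# stand-ins for real embeddings, which have many more dimensions.
import math

def cosine_similarity(a, b):
    """Cosine of the angle between two vectors; 1.0 means same direction."""
    dot = sum(x * y for x, y in zip(a, b))
    norm_a = math.sqrt(sum(x * x for x in a))
    norm_b = math.sqrt(sum(x * x for x in b))
    return dot / (norm_a * norm_b)

# Hypothetical embeddings for a text query and two indexed items.
query = [0.9, 0.1, 0.0, 0.4]
product_photo = [0.8, 0.2, 0.1, 0.5]    # semantically close item
unrelated_chart = [0.0, 0.9, 0.8, 0.1]  # unrelated item

# The closer the score is to 1.0, the more relevant the item to the query.
print(cosine_similarity(query, product_photo))    # high score
print(cosine_similarity(query, unrelated_chart))  # low score
```

Because Embed 3 places text and image embeddings in the same space, the same comparison works whether the indexed item came from a product photo or a paragraph of text.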
Cohere Embed 3 places both text and image embeddings in the same space for an integrated experience. The following figure illustrates an example of this workflow. This figure is simplified for illustrative purposes. In practice, the numerical representations of data (seen in the output column) are far longer, and the vector space that stores them has a higher number of dimensions. This similarity comparison enables applications to retrieve enterprise data that is relevant to an end-user query. In addition to being a fundamental component of semantic search systems, Cohere Embed 3 is useful in RAG systems because it gives generative models like the Command R series the most relevant context to inform their responses. All businesses, across industry and size, can benefit from multimodal AI search. Specifically, customers are interested in the following real-world use cases: Graphs and charts: Visual representations are key to understanding complex data. You can now effortlessly find the right diagrams to inform your business decisions. Simply describe a specific insight and Cohere Embed 3 will retrieve relevant graphs and charts, making data-driven decision-making more efficient for employees across teams. Ecommerce product catalogs: Traditional search methods often limit you to finding products through text-based product descriptions. Cohere Embed 3 transforms this search experience. Retailers can build applications that surface products that visually match a shopper's preferences, creating a differentiated shopping experience and improving conversion rates. Design files and templates: Designers often work with vast libraries of assets, relying on memory or rigorous naming conventions to organize visuals. Cohere Embed 3 makes it simple to locate specific UI mockups, visual templates, and presentation slides based on a text description.
This streamlines the creative process. The following figure illustrates some examples of these use cases.

At a time when businesses are increasingly expected to use their data to drive outcomes, Cohere Embed 3 offers several advantages that accelerate productivity and improve customer experience.

The following chart compares Cohere Embed 3 with another embeddings model. All text-to-image benchmarks are evaluated using Recall@5; text-to-text benchmarks are evaluated using NDCG@10. Text-to-text benchmark accuracy is based on BEIR, a dataset focused on out-of-domain retrievals (14 datasets). Generic text-to-image benchmark accuracy is based on Flickr and CoCo. Graphs and charts benchmark accuracy is based on business reports and presentations constructed internally. Ecommerce benchmark accuracy is based on a mix of product catalog and fashion catalog datasets. Design files benchmark accuracy is based on a product design retrieval dataset constructed internally.

BEIR (Benchmarking IR) is a heterogeneous benchmark: it uses a diverse collection of datasets and tasks designed for evaluating information retrieval (IR) models across diverse tasks. It provides a common framework for assessing the performance of natural language processing (NLP)-based retrieval models, making it straightforward to compare different approaches. Recall@5 is a specific metric used in information retrieval evaluation, including in the BEIR benchmark. Recall@5 measures the proportion of relevant items retrieved within the top five results, compared to the total number of relevant items in the dataset.

Cohere's latest Embed 3 model's text and image encoders share a unified latent space. This approach has a few important benefits. First, it enables you to include both image and text features in a single database, which reduces complexity. Second, it means current customers can begin embedding images without re-indexing their existing text corpus.
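As an illustration of how the Recall@5 metric described above is computed, consider the following sketch; the ranked document IDs and relevance labels are hypothetical:

```python
def recall_at_k(retrieved_ids, relevant_ids, k=5):
    """Fraction of all relevant items that appear in the top-k retrieved results."""
    top_k = set(retrieved_ids[:k])
    relevant = set(relevant_ids)
    return len(top_k & relevant) / len(relevant)

# Hypothetical ranked retrieval results and ground-truth relevant items
retrieved = ["img_7", "img_2", "img_9", "img_4", "img_1", "img_5"]
relevant  = ["img_2", "img_1", "img_8"]  # 2 of the 3 relevant items are in the top 5

print(recall_at_k(retrieved, relevant, k=5))  # 2/3
```

A Recall@5 of 1.0 would mean every relevant item was retrieved within the top five results.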
In addition to leading accuracy and ease of use, Embed 3 continues to deliver the same useful enterprise search capabilities as before. It can output compressed embeddings to save on database costs, it's compatible with over 100 languages for multilingual search, and it maintains strong performance on noisy real-world data.

Solution overview

SageMaker JumpStart offers access to a broad selection of publicly available FMs. These pre-trained models serve as powerful starting points that can be deeply customized to address specific use cases. You can now use state-of-the-art model architectures, such as language models, computer vision models, and more, without having to build them from scratch.

Amazon SageMaker is a comprehensive, fully managed machine learning (ML) platform that revolutionizes the entire ML workflow. It offers an unparalleled suite of tools that cater to every stage of the ML lifecycle, from data preparation to model deployment and monitoring. Data scientists and developers can use the SageMaker integrated development environment (IDE) to access a vast array of pre-built algorithms, customize their own models, and seamlessly scale their solutions.
The platform's strength lies in its ability to abstract away the complexities of infrastructure management, allowing you to focus on innovation rather than operational overhead.

You can access the Cohere Embed family of models using SageMaker JumpStart in Amazon SageMaker Studio. For those new to SageMaker JumpStart, we walk through using SageMaker Studio to access models in SageMaker JumpStart.

Prerequisites

Make sure you meet the following prerequisites:

Make sure your SageMaker AWS Identity and Access Management (IAM) role has the AmazonSageMakerFullAccess permission policy attached.

To deploy Cohere multimodal embeddings successfully, confirm that your IAM role has the following permissions and that you have the authority to make AWS Marketplace subscriptions in the AWS account used:

aws-marketplace:ViewSubscriptions
aws-marketplace:Unsubscribe
aws-marketplace:Subscribe

Alternatively, confirm your AWS account has a subscription to the model. If so, skip to the next section in this post.

Deployment starts when you choose the Deploy option. You may be prompted to subscribe to this model through AWS Marketplace. If you're already subscribed, you can proceed and choose Deploy. After deployment finishes, you will see that an endpoint is created. You can test the endpoint by passing a sample inference request payload or by selecting the testing option using the SDK.

Subscribe to the model package

To subscribe to the model package, complete the following steps:

1. Depending on the model you want to deploy, open the model package listing page for it.
2. On the AWS Marketplace listing, choose Continue to subscribe.
3. On the Subscribe to this software page, choose Accept Offer if you and your organization agree with the EULA, pricing, and support terms.
4. Choose Continue to configuration and then choose an AWS Region.

You will see a product ARN displayed.
This is the model package ARN that you need to specify while creating a deployable model using Boto3. Subscribe to the Cohere embeddings model package on AWS Marketplace, then choose the appropriate model package ARN for your Region. For example, the ARN for Cohere Embed Model v3 English is:

arn:aws:sagemaker:[REGION]:[ACCOUNT_ID]:model-package/cohere-embed-english-v3-7-6d097a095fdd314d90a8400a620cac54

Deploy the model using the SDK

To deploy the model using the SDK, copy the product ARN from the previous step and specify it in the model_package_arn in the following code:

from cohere_aws import Client
import boto3

region = boto3.Session().region_name
model_package_arn = "Specify the model package ARN here"

Use the SageMaker SDK to create a client and deploy the model:

co = Client(region_name=region)
co.create_endpoint(
    arn=model_package_arn,
    endpoint_name="cohere-embed-english-v3",
    instance_type="ml.g5.xlarge",
    n_instances=1,
)

If the endpoint is already created using SageMaker Studio, you can simply connect to it:

co.connect_to_endpoint(endpoint_name="cohere-embed-english-v3")

Consider the following best practices:

Choose an appropriate instance type based on your performance and cost requirements. This example uses ml.g5.xlarge, but you might need to adjust this based on your specific needs.
Make sure your IAM role has the necessary permissions, including AmazonSageMakerFullAccess.
Monitor your endpoint's performance and costs using Amazon CloudWatch.

Inference example with Cohere Embed 3 using the SageMaker SDK

The following code example illustrates how to perform real-time inference using Cohere Embed 3. We walk through a sample notebook to get started.
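The walkthrough that follows repeatedly parses JSON response bodies returned by the endpoint, pulling embeddings out of nested fields such as response["embeddings"]["float"][0]. As a minimal, self-contained illustration of that parsing step (the sample payload below is invented for illustration; the real endpoint returns 1,024-dimensional vectors):

```python
import json

# Hypothetical response body, shaped like the image-embedding responses
# parsed in the walkthrough below
raw_body = json.dumps({"embeddings": {"float": [[0.12, -0.08, 0.33, 0.41]]}})

parsed = json.loads(raw_body)
# Take the first (and only) embedding in the batch
embedding = parsed["embeddings"]["float"][0]

print(len(embedding))  # 4 in this toy example
```

Text-embedding responses in the walkthrough use a flatter shape, response["embeddings"][0], so the parsing differs slightly by input type.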
You can also find the source code on the accompanying GitHub repo.

Pre-setup

Import all required packages using the following code. The inference calls below also need json and boto3, along with a SageMaker Runtime client and the name of the endpoint created earlier, so those are set up here as well:

import requests
import base64
import os
import json
import mimetypes
import numpy as np
from IPython.display import Image, display
import tqdm
import tqdm.auto
import boto3

sagemaker_runtime = boto3.client("sagemaker-runtime")
endpoint_name = "cohere-embed-english-v3"

Create helper functions

Use the following code to create helper functions that determine whether the input document is text or image, and download images given a list of URLs:

def is_image(doc):
    return (doc.endswith(".jpg") or doc.endswith(".png")) and os.path.exists(doc)

def is_txt(doc):
    return doc.endswith(".txt") and os.path.exists(doc)

def download_images(image_urls):
    # Download some example images we want to embed
    image_names = []
    for url in image_urls:
        image_name = os.path.basename(url)
        image_names.append(image_name)
        if not os.path.exists(image_name):
            with open(image_name, "wb") as fOut:
                fOut.write(requests.get(url, stream=True).content)
    return image_names

Generate embeddings for text and image inputs

The following code shows a compute_embeddings() function we defined that will accept multimodal inputs to generate embeddings with Cohere Embed 3:

def compute_embeddings(docs):
    # Compute the embeddings
    embeddings = []
    for doc in tqdm.auto.tqdm(docs, desc="encoding"):
        if is_image(doc):
            # Doc is an image, encode it as an image
            print("Encode image:", doc)
            # Convert the image to base64
            with open(doc, "rb") as fIn:
                img_base64 = base64.b64encode(fIn.read()).decode("utf-8")
            # Get the mime type for the image
            mime_type = mimetypes.guess_type(doc)[0]
            payload = {
                "model": "embed-english-v3.0",
                "input_type": "image",
                "embedding_types": ["float"],
                "images": [f"data:{mime_type};base64,{img_base64}"],
            }
            response = sagemaker_runtime.invoke_endpoint(
                EndpointName=endpoint_name,
                ContentType="application/json",
                Body=json.dumps(payload),
            )
            response = json.loads(response["Body"].read().decode("utf-8"))
            response = response["embeddings"]["float"][0]
        elif is_txt(doc):
            # Doc is a text file, encode its contents as a document
            with open(doc, "r") as fIn:
                text = fIn.read()
            print("Encode img desc:", doc, " - Content:", text[0:100] + "...")
            payload = {
                "texts": [text],
                "model": "embed-english-v3.0",
                "input_type": "search_document",
            }
            response = sagemaker_runtime.invoke_endpoint(
                EndpointName=endpoint_name,
                ContentType="application/json",
                Body=json.dumps(payload),
            )
            response = json.loads(response["Body"].read().decode("utf-8"))
            response = response["embeddings"][0]
        else:
            # Encode the string itself as a document
            payload = {
                "texts": [doc],
                "model": "embed-english-v3.0",
                "input_type": "search_document",
            }
            response = sagemaker_runtime.invoke_endpoint(
                EndpointName=endpoint_name,
                ContentType="application/json",
                Body=json.dumps(payload),
            )
            response = json.loads(response["Body"].read().decode("utf-8"))
            response = response["embeddings"][0]
        embeddings.append(response)
    return np.asarray(embeddings, dtype="float")

Find the most relevant embedding based on a query

The search() function generates query embeddings and computes a similarity matrix between the query and embeddings:

def search(query, embeddings, docs):
    # Get the query embedding (queries use the search_query input type)
    payload = {
        "texts": [query],
        "model": "embed-english-v3.0",
        "input_type": "search_query",
    }
    response = sagemaker_runtime.invoke_endpoint(
        EndpointName=endpoint_name,
        ContentType="application/json",
        Body=json.dumps(payload),
    )
    query_emb = json.loads(response["Body"].read().decode("utf-8"))
    query_emb = query_emb["embeddings"][0]

    # Compute L2 norms of the vector and matrix rows
    vector_norm = np.linalg.norm(query_emb)
    matrix_norms = np.linalg.norm(embeddings, axis=1)

    # Compute the dot product between the vector and each row of the matrix
    dot_products = np.dot(embeddings, query_emb)

    # Compute cosine similarities
    similarity = dot_products / (matrix_norms * vector_norm)

    # Sort decreasing, most to least similar
    top_hits = np.argsort(-similarity)

    print("Query:", query, "\n")
    print("Search results:")
    for rank, idx in enumerate(top_hits):
        print(f"#{rank+1}: ({similarity[idx]*100:.2f})")
        if is_image(docs[idx]):
            print(docs[idx])
            display(Image(filename=docs[idx], height=300))
        elif is_txt(docs[idx]):
            print(docs[idx] + " - Image description:")
            with open(docs[idx], "r") as fIn:
                print(fIn.read())
        else:
            print(docs[idx])
        print("--------")

Test the solution

Let's assemble all the input documents; notice that there are both text and image inputs:

# Download images
image_urls = [
    "https://images-na.ssl-images-amazon.com/images/I/31KqpOznU1L.jpg",
    "https://images-na.ssl-images-amazon.com/images/I/41RI4qgJLrL.jpg",
    "https://images-na.ssl-images-amazon.com/images/I/61NbJr9jthL.jpg",
    "https://images-na.ssl-images-amazon.com/images/I/31TW1NCtMZL.jpg",
    "https://images-na.ssl-images-amazon.com/images/I/51a6iOTpnwL.jpg",
    "https://images-na.ssl-images-amazon.com/images/I/31sa-c%2BfmpL.jpg",
    "https://images-na.ssl-images-amazon.com/images/I/41sKETcJYcL.jpg",
    "https://images-na.ssl-images-amazon.com/images/I/416GZ2RZEPL.jpg",
]
image_names = download_images(image_urls)

text_docs = [
    "Toy with 10 activities including a storybook, clock, gears; 13 double-sided alphabet blocks build fine motor skills and introduce letters, numbers, colors, and more.",
    "This is the perfect introduction to the world of scooters.",
    "2-IN-1 RIDE-ON TOY - This convertible scooter is designed to grow with your child.",
    "Playful elephant toy makes real elephant sounds and fun music to inspire imaginative play.",
]

docs = image_names + text_docs
print("Total docs:", len(docs))
print(docs)

Generate embeddings for the documents:

embeddings = compute_embeddings(docs)
print("Doc embeddings shape:", embeddings.shape)

The output is a matrix of 12 items of 1,024 embedding dimensions each.

Search for the most relevant documents given the query "Fun animal toy":

search("Fun animal toy", embeddings, docs)

The following screenshots show the output.

Query: Fun animal toy

Search results:
#1: (54.28) Playful elephant toy makes real elephant sounds and fun music to inspire imaginative play.
#2: (52.48) 31TW1NCtMZL.jpg
#3: (51.83) 31sa-c%2BfmpL.jpg
#4: (50.33) 51a6iOTpnwL.jpg
#5: (47.81) 31KqpOznU1L.jpg
#6: (44.70) 61NbJr9jthL.jpg
#7: (44.36) 416GZ2RZEPL.jpg
#8: (43.55) 41RI4qgJLrL.jpg
#9: (41.40) 41sKETcJYcL.jpg
#10: (37.69) Learning toy with 10 activities including a storybook, clock, gears; 13 double-sided alphabet blocks build fine motor skills and introduce letters, numbers, colors, and more.
#11: (35.50) This is the perfect introduction to the world of scooters.
#12: (33.14) 2-IN-1 RIDE-ON TOY - This convertible scooter is designed to grow with your child.

Try another query, "Learning toy for a 6 year old":

Query: Learning toy for a 6 year old

Search results:
#1: (47.59) Playful elephant toy makes real elephant sounds and fun music to inspire imaginative play.
#2: (41.86) 61NbJr9jthL.jpg
#3: (41.66) 2-IN-1 RIDE-ON TOY - This convertible scooter is designed to grow with your child.
#4: (41.62) Toy with 10 activities including a storybook, clock, gears; 13 double-sided alphabet blocks build fine motor skills and introduce letters, numbers, colors, and more.
#5: (41.25) This is the perfect introduction to the world of scooters.
#6: (40.94) 31sa-c%2BfmpL.jpg
#7: (40.11) 416GZ2RZEPL.jpg
#8: (40.10) 41sKETcJYcL.jpg
#9: (38.64) 41RI4qgJLrL.jpg
#10: (36.47) 31KqpOznU1L.jpg
#11: (35.27) 31TW1NCtMZL.jpg
#12: (34.76) 51a6iOTpnwL.jpg

As you can see from the results, images and documents are returned based on the user's queries, demonstrating the multimodal functionality of the new version of Cohere Embed 3.

Clean up

To avoid incurring unnecessary costs, delete the SageMaker endpoint when you're done, using the following code snippet:

# Delete the endpoint
sagemaker.delete_endpoint(EndpointName='Endpoint-Cohere-Embed-Model-v3-English-1')
sagemaker.close()

Alternatively, to use the SageMaker console, complete the following steps:

1. On the SageMaker console, under Inference in the navigation pane, choose Endpoints.
2. Search for the embedding and text generation endpoints.
3. On the endpoint details page, choose Delete.
4. Choose Delete again to confirm.

Conclusion

Cohere Embed 3 for multimodal embeddings is now available with SageMaker and SageMaker JumpStart. To get started, refer to SageMaker JumpStart pretrained models.

Interested in diving deeper? Check out the Cohere on AWS GitHub repo.

About the Authors

Breanne Warner is an Enterprise Solutions Architect at Amazon Web Services supporting healthcare and life science (HCLS) customers. She is passionate about supporting customers to use generative AI on AWS and evangelizing model adoption. Breanne is also on the Women@Amazon board as co-director of Allyship with the goal of fostering an inclusive and diverse culture at Amazon. Breanne holds a Bachelor of Science in Computer Engineering from the University of Illinois at Urbana-Champaign.

Karan Singh is a Generative AI Specialist for third-party models at AWS, where he works with top-tier third-party foundation model (FM) providers to develop and execute joint go-to-market strategies, enabling customers to effectively train, deploy, and scale FMs to solve industry-specific challenges. Karan holds a Bachelor of Science in Electrical and Instrumentation Engineering from Manipal University and a Master of Science in Electrical Engineering from Northwestern University, and is currently an MBA candidate at the Haas School of Business at University of California, Berkeley.

Yang Yang is an Independent Software Vendor (ISV) Solutions Architect at Amazon Web Services based in Seattle, where he supports customers in the financial services industry. Yang focuses on developing generative AI solutions to solve business and technical challenges and help drive faster time-to-market for ISV customers.
Yang holds a Bachelor's and Master's degree in Computer Science from Texas A&M University.

Malhar Mane is an Enterprise Solutions Architect at AWS based in Seattle. He supports enterprise customers in the Digital Native Business (DNB) segment and specializes in generative AI and storage. Malhar is passionate about helping customers adopt generative AI to optimize their business. Malhar holds a Bachelor's in Computer Science from the University of California, Irvine.
Content Synthesis/Decision Making
Business and Financial Operations/Management
null
null
null
null
null
null
news
ET Online
Sadhguru warns against ‘excessive green talk', says Trump’s ‘drill, baby drill’ shows people won’t abandon oil so easily
At a UN climate conference, Sadhguru, a spiritual leader, argued against solely focusing on reducing fossil fuel use. He emphasized the need for viable alternatives to maintain living standards. Sadhguru highlighted the limited impact of solar and wind energy and electric vehicles. He stressed the importance of soil health and tree-based agriculture for farmer livelihoods and food security.
https://economictimes.indiatimes.com/news/india/sadhguru-warns-against-excessive-green-talk-says-trumps-drill-baby-drill-shows-people-wont-abandon-oil-so-easily/articleshow/115213603.cms
https://img.etimg.com/th…/articleshow.jpg
2024-11-12T10:23:04Z
Content Synthesis/Information Retrieval Or Search
Life, Physical, and Social Science/Management
null
null
null
null
null
null
news
PTI
Spacetech startup TakeMe2Space to launch AI lab in space in December
TakeMe2Space, a space technology firm based in Hyderabad, said its 'Orbital Infrastructure - Technology Demonstrator' (MOI-TD), billed as India's first AI lab in space, will demonstrate real-time data processing in orbit, making space research more affordable and accessible.
https://economictimes.indiatimes.com/tech/startups/spacetech-startup-takeme2space-to-launch-ai-lab-in-space-in-december/articleshow/115701888.cms
https://img.etimg.com/th…/articleshow.jpg
2024-11-26T14:09:30Z
A Hyderabad-based startup is set to launch an artificial intelligence lab in space on board ISRO's PSLV rocket next month, the company said here on Tuesday. TakeMe2Space, a space technology firm based in Hyderabad, said its 'Orbital Infrastructure - Technology Demonstrator' (MOI-TD), billed as India's first AI lab in space, will demonstrate real-time data processing in orbit, making space research more affordable and accessible. The MOI-TD launch is scheduled for mid-December aboard ISRO's PSLV C60 mission.

Satellites capture petabytes of data daily, of which up to 40 per cent is potentially unusable due to cloud cover or other factors, and which has to be transmitted to Earth for processing. By processing data directly in space, MOI-TD delivers the relevant insights to users, drastically lowering both the cost of data transmission and latency, said Ronak Kumar Samantray, founder and CEO of TakeMe2Space.

Users can access the satellite platform through OrbitLab, a web-based console, to upload AI models for applications such as environmental monitoring, deforestation tracking, maritime activity observation, greenhouse gas emission detection and custom earth observation use cases. The satellite-as-a-service platform has already secured its inaugural research partners, including a leading Malaysian university and an enterprising group of 9th and 10th graders from an Indian school, demonstrating the potential for democratising space research.

The mission represents a significant step towards building data centres in space, the company said. While the current mission focuses on Earth observation applications, the technology paves the way for future space-based computing capabilities that could support a wide range of applications, similar to current cloud computing services. All testing and evaluation of the MOI-TD platform was facilitated by the IN-SPACe Technical Center.
IN-SPACe, the single window government agency for all space sector activities of private entities, played a key role in accelerating the development of the platform. MOI-TD includes control software and hardware components, including reaction wheels, magnetorquers, an advanced onboard computer, and an AI accelerator. The satellite also features flexible solar cells that could power future satellites, a company statement said.
Content Synthesis/Prediction
Others
null
null
null
null
null
null
news
Kalpana Pathak
Asian Development Bank to provide $4.25 billion to Tata Power for clean energy power projects
Tata Power and the Asian Development Bank have partnered to explore financing options for renewable energy projects in India. The collaboration aims to support India's clean energy transition by funding projects such as solar-wind hybrid systems and pumped hydro storage, totaling an estimated $4.25 billion.
https://economictimes.indiatimes.com/industry/renewables/asian-development-bank-to-provide-4-25-billion-to-tata-power-for-clean-energy-power-projects/articleshow/115530737.cms
https://img.etimg.com/th…/articleshow.jpg
2024-11-21T12:27:57Z
Prediction/Decision Making/Content Synthesis
Management/Business and Financial Operations
null
null
null
null
null
null
news
Matt O'brien
US ahead in AI innovation, easily surpassing China in Stanford's new ranking
The U.S. leads the world in developing artificial intelligence technology, surpassing China in research and other important measures of AI innovation, according to a newly released Stanford University index.
https://techxplore.com/news/2024-11-ai-easily-surpassing-china-stanford.html
https://scx2.b-cdn.net/g…n-ai-innovat.jpg
2024-11-21T15:00:01Z
Attendees listen during a panel discussion with Ambassador Philip Thigo, Special Envoy on Technology for the Office of the President of Kenya, from top left, Hong Yuen Poon, Deputy Secretary of Singapore's Ministry of Digital Development and Information, Lucilla Sioli, Director of the Artificial Intelligence Office for the European Commission, and Dr. Seth Center, Acting Special Envoy, Office of the Special Envoy for Critical and Emerging Technology, at the convening of the International Network of AI Safety Institutes at the Golden Gate Club at the Presidio in San Francisco, Wednesday, Nov. 20, 2024. Credit: AP Photo/Jeff Chiu

The U.S. leads the world in developing artificial intelligence technology, surpassing China in research and other important measures of AI innovation, according to a newly released Stanford University index.

There's no surefire way to rank global AI leadership, but Stanford researchers have made an attempt by measuring the "vibrancy" of the AI industry across various dimensions, from how much research and investment is happening to how responsibly the technology is being pursued to prevent harm.

"The gap is actually widening" between the U.S. and China, said computer scientist Ray Perrault, director of the steering committee that runs Stanford's AI Index. "The U.S. is investing a lot more, at least at the level of firm creation and firm funding."

The California-based university's Institute for Human-Centered AI, which has ties to Silicon Valley's tech industry, released the report Thursday as government AI officials from the U.S. and several allies met in San Francisco this week to compare notes on AI safety measures. Here's which countries made the top 10:

United States

The U.S. ranks No. 1 on Stanford's list and has consistently held that position since 2018, when it overtook China. It has far outpaced China in private AI investment, which hit $67.2 billion in the U.S. last year compared to $7.8 billion in China, according to the report. It also leads in publishing responsible AI research. It's no surprise that the home of commercial AI powerhouses such as Google and Meta, along with relative newcomers like OpenAI and Anthropic, has produced many notable AI models that have influenced how the technology is being developed and applied. The U.S. also gets some points for having a number of AI-related laws on the books, though Congress has yet to pass any broad AI regulations.

China

China has requested far more patents than any other country regarding generative AI, the U.N. intellectual property agency said earlier this year. Stanford researchers counted that as one measure of China's strong growth in AI innovation, but not enough to lead the pack. Still, the report says that "China's focus on developing cutting-edge AI technologies and increasing its R&D investments has positioned it as a major AI powerhouse." China's universities have produced a large number of AI-related research publications, and it has commercial leaders developing notable AI models, such as Baidu and its chatbot Ernie.

United Kingdom

Coming in at No. 3 is the UK, which also ranked high in research and development, and in educational infrastructure due to top computer science universities churning out a skilled AI workforce. It's also home to Google's AI subsidiary DeepMind, whose co-founder recently won a Nobel Prize, and it "had more mentions of AI in parliamentary proceedings" than any other country. Last year, the UK hosted the world's first international AI safety summit.

India

Close behind the UK was India, thanks to a "strong AI research community," improvements in economic investments tied to AI, and a robust public discourse about AI on social media, according to the report.

United Arab Emirates

The UAE's deliberate focus on AI appears to have paid off in the Middle Eastern nation's fifth-place score. It was one of the top locations for AI investments. Microsoft earlier this year said it was investing $1.5 billion in UAE-based tech firm G42, which is overseen by the country's powerful national security adviser. Based in Abu Dhabi, G42 runs data centers and has built what's considered the world's leading Arabic-language AI model, known as Jais.

The rest of the top 10

Rounding out the top 10 are France at No. 6, followed by South Korea, Germany, Japan and Singapore. France, home to the buzzy AI startup Mistral, ranked high in AI policy and governance. Both it and Germany will be part of the European Union's sweeping new AI Act that places safeguards on a range of AI applications based on how risky they are. The EU also follows the U.S. in developing a plan to expand semiconductor production within the bloc.

© 2024 The Associated Press. All rights reserved. This material may not be published, broadcast, rewritten or redistributed without permission.
Unknown
Computer and Mathematical/Life, Physical, and Social Science/Management
null
null
null
null
null
null
news
MATT O'BRIEN AP technology writer
US ahead in AI innovation, easily surpassing China in Stanford's new ranking
The U.S. leads the world in developing artificial intelligence technology, surpassing China in research and other important measures of AI innovation, according to a newly released Stanford University index
https://abcnews.go.com/US/wireStory/us-ahead-ai-innovation-easily-surpassing-china-stanfords-116088545
https://i.abcnewsfe.com/…_16x9.jpg?w=1600
2024-11-21T16:14:30Z
The U.S. leads the world in developing artificial intelligence technology, surpassing China in research and other important measures of AI innovation, according to a newly released Stanford University index.

There's no surefire way to rank global AI leadership, but Stanford researchers have made an attempt by measuring the "vibrancy" of the AI industry across various dimensions, from how much research and investment is happening to how responsibly the technology is being pursued to prevent harm.

"The gap is actually widening" between the U.S. and China, said computer scientist Ray Perrault, director of the steering committee that runs Stanford's AI Index. "The U.S. is investing a lot more, at least at the level of firm creation and firm funding."

The California-based university's Institute for Human-Centered AI, which has ties to Silicon Valley's tech industry, released the report Thursday as government AI officials from the U.S. and several allies met in San Francisco this week to compare notes on AI safety measures. Here's which countries made the top 10:

The U.S. ranks No. 1 on Stanford's list and has consistently held that position since 2018, when it overtook China. It has far outpaced China in private AI investment, which hit $67.2 billion in the U.S. last year compared to $7.8 billion in China, according to the report. It also leads in publishing responsible AI research. It's no surprise that the home of commercial AI powerhouses such as Google and Meta, along with relative newcomers like OpenAI and Anthropic, has produced many notable AI models that have influenced how the technology is being developed and applied. The U.S. also gets some points for having a number of AI-related laws on the books, though Congress has yet to pass any broad AI regulations.

China has requested far more patents than any other country regarding generative AI, the U.N. intellectual property agency said earlier this year. Stanford researchers counted that as one measure of China's strong growth in AI innovation, but not enough to lead the pack. Still, the report says that "China's focus on developing cutting-edge AI technologies and increasing its R&D investments has positioned it as a major AI powerhouse." China's universities have produced a large number of AI-related research publications, and it has commercial leaders developing notable AI models, such as Baidu and its chatbot Ernie.

Coming in at No. 3 is the UK, which also ranked high in research and development, and in educational infrastructure due to top computer science universities churning out a skilled AI workforce. It's also home to Google's AI subsidiary DeepMind, whose co-founder recently won a Nobel Prize, and it "had more mentions of AI in parliamentary proceedings" than any other country. Last year, the UK hosted the world's first international AI safety summit.

Close behind the UK was India, thanks to a "strong AI research community," improvements in economic investments tied to AI, and a robust public discourse about AI on social media, according to the report.

The UAE's deliberate focus on AI appears to have paid off in the Middle Eastern nation's fifth-place score. It was one of the top locations for AI investments. Microsoft earlier this year said it was investing $1.5 billion in UAE-based tech firm G42, which is overseen by the country's powerful national security adviser. Based in Abu Dhabi, G42 runs data centers and has built what's considered the world's leading Arabic-language AI model, known as Jais.

Rounding out the top 10 are France at No. 6, followed by South Korea, Germany, Japan and Singapore. France, home to the buzzy AI startup Mistral, ranked high in AI policy and governance. Both it and Germany are part of the European Union's sweeping new AI Act that places safeguards on a range of AI applications based on how risky they are. The EU also follows the U.S.
in developing a plan to expand semiconductor production within the bloc.
Unknown
Unknown
null
null
null
null
null
null
news
AP
US ahead in AI innovation, easily surpassing China in Stanford's new ranking
The US leads in global AI innovation, outpacing China in research, investment, and responsible AI development, according to Stanford University's AI Index. With $67.2 billion in private AI funding and key tech firms, the US dominates. China excels in patents and research output, while the UK, India, and UAE show strengths in workforce, investments, and niche advancements.
https://economictimes.indiatimes.com/tech/artificial-intelligence/us-ahead-in-ai-innovation-easily-surpassing-china-in-stanfords-new-ranking/articleshow/115565413.cms
https://img.etimg.com/th…/articleshow.jpg
2024-11-22T11:02:47Z
The US leads the world in developing artificial intelligence technology, surpassing China in research and other important measures of AI innovation, according to a newly released Stanford University index. There's no surefire way to rank global AI leadership, but Stanford researchers have made an attempt by measuring the "vibrancy" of the AI industry across various dimensions, from how much research and investment is happening to how responsibly the technology is being pursued to prevent harm. "The gap is actually widening" between the US and China, said computer scientist Ray Perrault, director of the steering committee that runs Stanford's AI Index. "The US is investing a lot more, at least at the level of firm creation and firm funding." The California-based university's Institute for Human-Centered AI - which has ties to Silicon Valley's tech industry - released the report Thursday as government AI officials from the US and several allies met in San Francisco this week to compare notes on AI safety measures. Top 10 countries: United States: The US ranks No. 1 on Stanford's list and has consistently held that position since 2018, when it overtook China. It has far outpaced China in private AI investment, which hit $67.2 billion in the US last year compared to $7.8 billion in China, according to the report. It also leads in publishing responsible AI research. It's no surprise that the home of commercial AI powerhouses such as Google and Meta, along with relative newcomers like OpenAI and Anthropic, has produced many notable AI models that have influenced how the technology is being developed and applied. The US also gets some points for having a number of AI-related laws on the books, though Congress has yet to pass any broad AI regulations. China: It has requested far more patents than any other country regarding generative AI, the UN intellectual property agency said earlier this year. 
Stanford researchers counted that as one measure of China's strong growth in AI innovation but not enough to lead the pack. Still, the report says that "China's focus on developing cutting-edge AI technologies and increasing its R&D investments has positioned it as a major AI powerhouse." China's universities have produced a large number of AI-related research publications and it has commercial leaders developing notable AI models, such as Baidu and its chatbot Ernie.United Kingdom: Coming in at No. 3 is the UK, which also ranked high in research and development, and educational infrastructure due to top computer science universities churning out a skilled AI workforce. It's also home to Google's AI subsidiary DeepMind, whose cofounder recently won a Nobel Prize; and "had more mentions of AI in parliamentary proceedings" than any other country. Last year, the UK hosted the world's first international AI safety summit.India: Close behind the UK was India, thanks to a "strong AI research community," improvements in economic investments tied to AI and a robust public discourse about AI on social media, according to the report.United Arab Emirates: The UAE's deliberate focus on AI appears to have paid off in the Middle Eastern nation's fifth-place score. It was one of the top locations for AI investments. Microsoft earlier this year said it was investing $1.5 billion in UAE-based tech firm G42, which is overseen by the country's powerful national security adviser. Based in Abu Dhabi, G42 runs data centers and has built what's considered the world's leading Arabic-language AI model, known as Jais.Rounding out the top 10 are France at No. 6, followed by South Korea, Germany, Japan and Singapore. France, home to the buzzy AI startup Mistral, ranked high in AI policy and governance. Both it and Germany will be part of the European Union's sweeping new AI Act that places safeguards on a range of AI applications based on how risky they are. 
The EU also follows the US in developing a plan to expand semiconductor production within the bloc.
Unknown
Unknown
null
null
null
null
null
null
news
PTI
Renewable energy capacity rises 13.5pc to 203GW in October
India's renewable energy capacity surged by 13.5% to 203.18 GW in October 2024, driven by a significant increase in solar energy installations. The total non-fossil fuel capacity, including nuclear, reached 211.36 GW. The growth aligns with India's ambitious renewable energy targets, with substantial capacity in the pipeline for future development.
https://economictimes.indiatimes.com/industry/renewables/renewable-energy-capacity-rises-13-5pc-to-203gw-in-october/articleshow/115226086.cms
https://img.etimg.com/th…/articleshow.jpg
2024-11-12T16:28:37Z
Unknown
Unknown
null
null
null
null
null
null
news
MarketBeat News
Artificial Intelligence Technology Solutions (OTCMKTS:AITX) Issues Press Release Regarding Innovative AI-Based Security Innovation
Artificial Intelligence Technology Solutions Inc. recently announced through a Form 8-K filing that it will be releasing a press release titled “AITX’s RAD to Announce Potentially Revolutionary AI-Based Security Innovation” on November 18, 2024. This new innovation by the company is expected to introduce ground-breaking advancements in the field of AI-driven security and productivity solutions. […]
https://www.etfdailynews.com/2024/11/20/artificial-intelligence-technology-solutions-otcmktsaitx-issues-press-release-regarding-innovative-ai-based-security-innovation/
https://www.americanbankingnews.com/wp-content/timthumb/timthumb.php?src=https://www.marketbeat.com/logos/artificial-intelligence-technology-solutions-inc-logo-1200x675.png?v=20211103153737&w=240&h=240&zc=2
2024-11-20T09:07:01Z
Artificial Intelligence Technology Solutions Inc. recently announced through a Form 8-K filing that it will be releasing a press release titled “AITX's RAD to Announce Potentially Revolutionary AI-Based Security Innovation” on November 18, 2024. This new innovation by the company is expected to introduce ground-breaking advancements in the field of AI-driven security and productivity solutions. The press release revealed that the subsidiary of Artificial Intelligence Technology Solutions, Robotic Assistance Devices (RAD), is set to unveil a cutting-edge technology named ‘SARA’ on Wednesday, November 20, 2024, at 8:00 am ET. SARA promises to bring significant improvements in cost-efficiency and quality to a crucial function within the security industry. RAD is planning to showcase the capabilities of SARA through at least one video demonstration during the announcement, providing insights into how this AI-driven solution can revolutionize the security industry by offering smarter and more efficient security solutions. This technological advancement is expected to have a substantial impact on the market segment and redefine industry standards in the security sector. CEO/CTO of AITX and RAD, Steve Reinharz, expressed excitement about the potential of SARA, highlighting its utilization of advanced artificial intelligence and its ability to bring a fundamental change in the cost-efficiency vs. quality dynamic sought by businesses in the security realm. 
This new development further solidifies RAD's position as a leader in integrating AI into security technology. To witness the official unveiling of this game-changing security technology, interested parties are encouraged to tune in to the announcement on November 20, 2024, at 8:00 am ET via AITX's YouTube channel and the official websites of AITX (www.aitx.ai) and RAD (www.radsecurity.com). AITX, along with its subsidiary RAD, is committed to reshaping the security industry through innovative AI-driven Solutions-as-a-Service models, targeting cost savings and operational efficiency for businesses. The company's technologies aim to replace traditional manned security guarding models with AI-powered solutions, delivering significant cost benefits while enhancing operational performance. As per the exhibit attached to the filing, AITX has a promising sales pipeline, including opportunities with Fortune 500 companies, demonstrating a strong market demand for its AI-driven solutions. The company's technological offerings cater to various industries such as enterprises, government, transportation, critical infrastructure, education, and healthcare, providing enhanced operational capabilities at reduced costs. Please note that the information contained in the press release and the filing should not be considered as guarantees of future performance, and readers are advised to approach forward-looking statements with caution. For further details and updates, individuals are encouraged to refer to the official sources mentioned above. This article was generated by an automated content engine and was reviewed by a human editor prior to publication. 
For additional information, read Artificial Intelligence Technology Solutions' 8-K filing here. Artificial Intelligence Technology Solutions Company Profile: Artificial Intelligence Technology Solutions Inc focuses on the delivery of artificial intelligence and robotic solutions for operational, security, and monitoring needs. It develops front-end and back-end software solutions. The company also provides RIO, a solar-powered trailer; ROSA-P; RADDOG; ROSS, a video management system; and ROAMEO, an unmanned ground vehicle.
Unknown
Management
null
null
null
null
null
null
news
Investing.com
RWE to Harness AI-driven Weather Modelling With HPE Private Cloud AI
RWE to Harness AI-driven Weather Modelling With HPE Private Cloud AI
https://www.investing.com/news/press-releases/rwe-to-harness-aidriven-weather-modelling-with-hpe-private-cloud-ai-93CH-3734029
https://i-invdn-com.inve…ting_300X300.png
2024-11-21T09:04:16Z
Leading company in renewables to leverage HPE's unique turnkey AI infrastructure solution to improve forecast accuracy and optimize energy resource management. LONDON--(BUSINESS WIRE)--Hewlett Packard Enterprise (HPE) today announced that RWE (LON:0HA0), a leading company in the field of renewable energy, has selected HPE Private Cloud AI to unlock new intelligence from weather data. Researchers at RWE's Artificial Intelligence (AI) Research Laboratory will utilize the AI-optimized, turnkey private cloud to evaluate, fine-tune and inference weather models, harnessing cutting-edge AI to enhance operational decision-making and their global renewable energy leadership. "As a leading provider of renewable energy, RWE is at the forefront of balancing the increasing power demands of the modern world with the societal challenge of protecting the climate," said Simon Tarplee, Head of Commercial AI at RWE Supply & Trading. "We see AI as a key enabler that allows us to optimize our processes and create new, sustainable solutions, and set the ambitious goal to have a production training capable solution in place by the end of 2024. With HPE Private Cloud AI, we will have a turnkey solution that helps us to improve our forecast accuracy and optimize energy resource management globally." With its investment and growth strategy Growing Green, RWE is contributing significantly to the success of the energy transition and is expanding its generation portfolio by building new onshore and offshore wind and solar parks and batteries. The company is decarbonizing its business in line with the 1.5-degree reduction pathway and will be net zero by 2040. To achieve this, RWE is tapping into the vast potential of advanced AI to reach new levels of efficiency. 
Here, the advancement of weather modelling is a top priority, as weather is a fundamental driver of energy supply and demand, be it for consumption or production. "The ability to fine-tune and inference AI-driven weather models will be a key enabler to unlock higher efficiencies for RWE's renewable energy portfolio and will give them a significant competitive advantage in the market," said Fidelma Russo, EVP & GM, Hybrid Cloud & CTO at HPE. "HPE Private Cloud AI is a unique turnkey solution that enables RWE's researchers to reach the market quicker and handle massive datasets." With HPE Private Cloud AI, a key offering of the NVIDIA (NASDAQ:NVDA) AI Computing by HPE portfolio, RWE chose a private cloud solution that can be deployed in just three clicks, seamlessly handle the volume of data required for weather models and efficiently extract information with retrieval-augmented generation (RAG). With an on-premises solution, RWE's researchers can quickly and easily deploy new private cloud AI infrastructure within their own policies, making it possible for them to experiment and scale AI projects and models more quickly across domains and regions. HPE Private Cloud AI, which was co-developed with NVIDIA, will provide RWE with a platform-based experience through the HPE GreenLake cloud. This will enable the team to manage and observe infrastructure, automate processes, and orchestrate endpoints, workloads and data, including sustainability metrics. With this, the research team can fully focus on model development, accelerating the time to market and providing RWE with a competitive advantage in their market. About Hewlett Packard Enterprise (NYSE: HPE): Hewlett Packard Enterprise is the global edge-to-cloud company that helps organizations accelerate outcomes by unlocking value from all of their data, everywhere. 
Built on decades of reimagining the future and innovating to advance the way people live and work, HPE delivers unique, open and intelligent technology solutions as a service. With offerings spanning Cloud Services, Compute, High Performance Computing & AI, Intelligent Edge, Software (ETR:SOWGn), and Storage, HPE provides a consistent experience across all clouds and edges, helping customers develop new business models, engage in new ways, and increase operational performance. For more information, visit www.hpe.com. About RWE: RWE is leading the way to a clean energy world. With its investment and growth strategy Growing Green, RWE is contributing significantly to the success of the energy transition and the decarbonisation of the energy system. Around 20,000 employees work for the company in almost 30 countries worldwide. RWE is already one of the leading companies in the field of renewable energy. RWE is investing billions of euros in expanding its generation portfolio, in particular in offshore and onshore wind, solar energy and batteries. It is perfectly complemented by its global energy trading business. RWE is decarbonising its business in line with the 1.5-degree reduction pathway and will phase out coal by 2030. RWE will be net zero by 2040. Fully in line with the company's purpose: Our energy for a sustainable life. View source version on businesswire.com: https://www.businesswire.com/news/home/20241121555594/en/ Editorial Contact: Laura von [email protected] Source: Hewlett Packard Enterprise
Prediction/Decision Making/Content Synthesis
Computer and Mathematical/Life, Physical, and Social Science
null
null
null
null
null
null
news
The Associated Press
US ahead in AI innovation, easily surpassing China in Stanford’s new ranking
The U.S. leads the world in developing artificial intelligence technology, surpassing China in research and other important measures of AI innovation, according to a newly released Stanford University index. There’s no surefire way to rank global AI leadership but Stanford researchers have made an attempt by measuring the “vibrancy” of the AI industry across various […]
https://financialpost.com/pmn/us-ahead-in-ai-innovation-easily-surpassing-china-in-stanfords-new-ranking
https://smartcdn.gprod.p…ciated-press.jpg
2024-11-21T13:04:15Z
Photo caption: Attendees listen during a panel discussion with Ambassador Philip Thigo, Special Envoy on Technology for the Office of the President of Kenya, from top left, Hong Yuen Poon, Deputy Secretary of Singapore's Ministry of Digital Development and Information, Lucilla Sioli, Director of the Artificial Intelligence Office for the European Commission, and Dr. Seth Center, Acting Special Envoy, Office of the Special Envoy for Critical and Emerging Technology, at the convening of the International Network of AI Safety Institutes at the Golden Gate Club at the Presidio in San Francisco, Wednesday, Nov. 20, 2024. Photo by Jeff Chiu / THE ASSOCIATED PRESS. The U.S. leads the world in developing artificial intelligence technology, surpassing China in research and other important measures of AI innovation, according to a newly released Stanford University index. There's no surefire way to rank global AI leadership, but Stanford researchers have made an attempt by measuring the "vibrancy" of the AI industry across various dimensions, from how much research and investment is happening to how responsibly the technology is being pursued to prevent harm. "The gap is actually widening" between the U.S. and China, said computer scientist Ray Perrault, director of the steering committee that runs Stanford's AI Index. "The U.S. is investing a lot more, at least at the level of firm creation and firm funding." The California-based university's Institute for Human-Centered AI, which has ties to Silicon Valley's tech industry, released the report Thursday as government AI officials from the U.S. and several allies met in San Francisco this week to compare notes on AI safety measures. Here's which countries made the top 10: United States: The U.S. ranks No. 1 on Stanford's list and has consistently held that position since 2018, when it overtook China. It has far outpaced China in private AI investment, which hit $67.2 billion in the U.S. last year compared to $7.8 billion in China, according to the report. It also leads in publishing responsible AI research. 
It's no surprise that the home of commercial AI powerhouses such as Google and Meta, along with relative newcomers like OpenAI and Anthropic, has produced many notable AI models that have influenced how the technology is being developed and applied. The U.S. also gets some points for having a number of AI-related laws on the books, though Congress has yet to pass any broad AI regulations. China: China has requested far more patents than any other country regarding generative AI, the U.N. intellectual property agency said earlier this year. Stanford researchers counted that as one measure of China's strong growth in AI innovation but not enough to lead the pack. Still, the report says that "China's focus on developing cutting-edge AI technologies and increasing its R&D investments has positioned it as a major AI powerhouse." China's universities have produced a large number of AI-related research publications and it has commercial leaders developing notable AI models, such as Baidu and its chatbot Ernie. United Kingdom: Coming in at No. 3 is the UK, which also ranked high in research and development, and educational infrastructure due to top computer science universities churning out a skilled AI workforce. It's also home to Google's AI subsidiary DeepMind, whose co-founder recently won a Nobel Prize, and "had more mentions of AI in parliamentary proceedings" than any other country. 
Last year, the UK hosted the world's first international AI safety summit. India: Close behind the UK was India, thanks to a "strong AI research community," improvements in economic investments tied to AI and a robust public discourse about AI on social media, according to the report. United Arab Emirates: The UAE's deliberate focus on AI appears to have paid off in the Middle Eastern nation's fifth-place score. It was one of the top locations for AI investments. Microsoft earlier this year said it was investing $1.5 billion in UAE-based tech firm G42, which is overseen by the country's powerful national security adviser. Based in Abu Dhabi, G42 runs data centers and has built what's considered the world's leading Arabic-language AI model, known as Jais. The rest of the top 10: Rounding out the top 10 are France at No. 6, followed by South Korea, Germany, Japan and Singapore. France, home to the buzzy AI startup Mistral, ranked high in AI policy and governance. Both it and Germany will be part of the European Union's sweeping new AI Act that places safeguards on a range of AI applications based on how risky they are. The EU also follows the U.S. in developing a plan to expand semiconductor production within the bloc.
Unknown
Unknown
null
null
null
null
null
null
news
Research and Markets
AI Data Centers Networks and Power Requirements Report 2024, with Case Studies from Major Equipment Companies and Service Providers
Examining bandwidth requirements for AI data centers and clusters and the arrival of 1600 Gigabit transceivers and 400G signaling and co-packaged optics to support the bandwidth surge
https://www.globenewswire.com/news-release/2024/11/20/2984210/28124/en/AI-Data-Centers-Networks-and-Power-Requirements-Report-2024-with-Case-Studies-from-Major-Equipment-Companies-and-Service-Providers.html
https://ml.globenewswire.com/Resource/Download/908fb457-7f8e-4a08-9081-5565e3dfb3d7
2024-11-20T10:01:00Z
Dublin, Nov. 20, 2024 (GLOBE NEWSWIRE) -- The "Networks and Power Requirements for AI Data Centers: A Ten-year Market Forecast and Technology Assessment" report has been added to ResearchAndMarkets.com's offering. This report provides an assessment and ten-year forecast of the burgeoning market for networks and related connectivity products for AI data centers and AI clusters. It contains detailed ten-year volume and value forecasts of all the products and markets that are core to the AI data center and cluster market. In addition, the strategic analysis provided throughout the report is illustrated with case studies from the recent history of major equipment companies and service providers. The goal of this report is to identify and quantify the emerging AI opportunities that are appearing on the road to a network/distributed AI future. It examines bandwidth requirements for AI data centers and clusters and the arrival of 1600 Gigabit transceivers and 400G signaling and co-packaged optics to support the bandwidth surge. Such new technologies bring efficiencies, but AI data centers still have huge power and cooling needs, and in this report CIR examines what role new power and cooling technologies - nuclear power, wind energy, miniaturized nuclear reactors and liquid cooling, for example - will play in the future of AI data centers. AI will be the key driver for bandwidth, power and storage in AI data centers over the next ten years, according to the report. What we're seeing is a major re-think in the design, layout and equipment choices for major data centers to meet the special needs of AI. Much of the money for these changes will come from the hyperscale data center owners (AWS, Google, Meta, Microsoft, etc.) 
as well as data center landlords, but smaller data centers will generate AI-driven demand too. This report will be essential reading for networking vendors, service providers, AI software firms, computer companies and investors.
Key Topics Covered:
Chapter One: Introduction. The unstoppable rise of AI: AI is real at last!; Overview of data requirements for AI data centers/AI clusters; Training vs. inference; Demand: Where the money for AI will come from; Goal of this report; Summary of ten-year market forecasts; Methodology of this report; Plan of this report.
Chapter Two: Demand: Roadmap for AI Network Deployment and Distributed AI. AI and the hyperscalers: Many paths to AI; Opportunities in the smaller AI data centers and AI clusters; Data center landlords; AI-as-a-service providers; AI and edge networks; AI data centers in the cloud; The emergence of distributed AI.
Chapter Three: Supply: AI Networking and the Available Technologies. Evolution of AI hardware: Of servers, CPUs, GPUs and QPUs; Bandwidth requirement for AI networks and data centers: Are we entering the 1600 Gbps era?; 400G signaling; How AI drives latency in data networks; Will AI lead to a resurgence of InfiniBand?; Ethernet in the AI data center: P802.3dj and the Ultra Ethernet Consortium; AI and high-speed interconnects: Co-packaged optics and beyond; Storage requirements for AI data centers; AI and HPC.
Chapter Four: Power and Cooling Requirements for AI Data Centers. Sustainability: Wind and solar options; Nuclear options: Miniaturized reactors for data centers; Nuclear fusion?; Cooling: Air cooling, liquid cooling, and heat sinks for transceivers.
Chapter Five: Ten-year Market Forecasts. Forecasting methodology; Ten-year forecast of AI data centers and servers worldwide; Ten-year forecast of AI data center connectivity: Ethernet vs. InfiniBand, co-packaged optics and high-speed interconnect; Ten-year forecast of data storage for AI data centers; Ten-year forecast of AI data center power sources: Wind vs. solar and the nuclear option; Ten-year forecast of AI cooling technology; Market size, market growth and applicable marketing strategies.
For more information about this report visit https://www.researchandmarkets.com/r/2an9gs
About ResearchAndMarkets.com: ResearchAndMarkets.com is the world's leading source for international market research reports and market data. We provide you with the latest data on international and regional markets, key industries, the top companies, new products and the latest trends.
Unknown
Computer and Mathematical/Architecture and Engineering
null
null
null
null
null
null
news
Akash Dutta, David Delima
Google Reveals How AI Helped the Company Make Notable Scientific Breakthroughs in 2024
Google highlighted nine major scientific breakthroughs made in the ongoing year that were possible due to advancements in artificial intelligence (AI). On Monday, Google DeepMind co-hosted the inaugural edition of the AI for Science Forum in London alongside the Royal Society. During the event, the Mountain View-based tech giant recapped achievements such as using an AI model to predict protein structures, expanding its flood forecasting system, and wildfire detection and tracking system.
https://www.gadgets360.com/ai/news/google-ai-nine-major-scientific-breakthroughs-2024-deepmind-7055735
https://i.gadgets360cdn.…732016006632.jpg
2024-11-19T11:36:33Z
Google has revealed notable scientific breakthroughs made this year that were possible due to advancements in artificial intelligence (AI) technology. On Monday, Google DeepMind co-hosted the inaugural edition of the AI for Science Forum in London alongside the Royal Society. During the event, the Mountain View-based tech giant recapped achievements such as using an AI model to predict protein structures, expanding its flood forecasting system, and improving its wildfire detection and tracking system. DeepMind was also able to build a system that is capable of controlling plasma within a nuclear fusion reactor. The tech giant stated that AI has played a key role in the ongoing year in solving many confounding problems in science using its computational techniques. The company also emphasised that AI is not a replacement for scientists but can become a crucial assistant for them. One of the biggest achievements of Google DeepMind was when the AI research wing of the tech giant used its AlphaFold 2 AI model to predict the structures of 200 million proteins. The company highlighted that this discovery pushed the scientific community decades ahead, as the determination of the 3D structure of a single protein can take up to a year. Notably, Demis Hassabis and John Jumper, the individuals behind the project, were awarded the Nobel Prize in Chemistry 2024 for this discovery. Google also partnered with the Lichtman Lab at Harvard to map a piece of the human brain at an unprecedented level of detail. This project was released this year and revealed structures within the human brain that were previously unseen. In 2024, AI also helped Google improve its prediction and tracking systems. The company's riverine flood forecasting system was expanded at scale in 2024 and now covers 100 countries and 700 million people globally. 
The tech giant also partnered with the US Forest Service to develop the FireSat AI model that can detect and track wildfires as small as the size of a classroom within 20 minutes. GraphCast, a machine learning research model developed by Google DeepMind, can now predict the tracks of cyclones. The company claims it can predict such weather-based disturbances faster and more accurately compared to traditional weather simulation systems. Advances were also made in mathematical reasoning and quantum computing. DeepMind's AlphaGeometry AI system, which was launched in 2024, solved complex geometry problems at a level similar to a human Olympiad gold medallist. Google researchers also worked with UC Berkeley and Columbia University to perform chemical simulations on a quantum computer to predict chemical reactivity and kinetics. Looking towards sustainable energy, the tech giant announced Graph Networks for Materials Exploration (GNoME), which discovered 380,000 materials that are stable at low temperatures, opening new avenues to make better solar cells, batteries, and potential superconductors. The tech giant also made breakthroughs in nuclear fusion, which is considered the energy of the future. Collaborating with the Swiss Plasma Center at the Swiss Federal Institute of Technology Lausanne, Google DeepMind announced the development of an AI system that can control the plasma inside a nuclear fusion reactor without any manual assistance. This is still a work in progress, but the company said it is a critical step towards stable fusion and abundant clean energy.
Prediction/Detection and Monitoring
Life, Physical, and Social Science/Computer and Mathematical
null
null
null
null
null
null
news
OilPrice.com
Can Natural Gas Power Pennsylvania's AI Boom?
Authored by Tim Ryan via RealClearEnergy, Artificial Intelligence (AI) is more than the latest buzzword. It’s rising rapidly, permeating across industries, and is already present in our daily lives. Netflix uses AI to personalize recommendations to users, 50% of global organizations reported adopting AI in at least one business area in 2022, and more than half of Americans use voice assistants to receive information. Behind this tech revolution are electricity-thirsty data centers dotting America’s landscape, processing AI, crypto,…
https://oilprice.com/Energy/Natural-Gas/Can-Natural-Gas-Power-Pennsylvanias-AI-Boom.html
https://d32r1sh890xpii.c…1_q4spll50rk.jpg
2024-11-03T16:00:00Z
By ZeroHedge - Nov 03, 2024, 10:00 AM CST

Authored by Tim Ryan via RealClearEnergy.

Artificial Intelligence (AI) is more than the latest buzzword. It's rising rapidly, permeating industries, and is already present in our daily lives. Netflix uses AI to personalize recommendations to users, 50% of global organizations reported adopting AI in at least one business area in 2022, and more than half of Americans use voice assistants to receive information.

Behind this tech revolution are electricity-thirsty data centers dotting America's landscape, processing AI, crypto, e-commerce, and cloud computing. The collective rise in demand on our power grid is something not seen in decades.

An AI Google search, for example, needs 10 times the amount of energy of a normal Google search, and all of the current data centers worldwide combined consume more power than all but 16 countries.

AI alone is expected to add 20% more to US electricity demand by 2030, and Goldman Sachs projects natural gas will cover 60% of that demand. Our grid is bound to hit a limit in its current state, according to Microsoft leadership. To provide the steady, reliable, and affordable power these facilities need, natural gas-powered electricity is increasingly the obvious choice.

As the second-largest natural gas production state, Pennsylvania is uniquely positioned to capitalize on this opportunity quickly, benefit from new job creation and investment, and power our high-tech future if we collectively embrace natural gas as part of that solution.
Doing so would be welcome news for trade unions and high-tech professionals alike, alongside local communities who benefit from new tax revenues.

It's clear Vice President Kamala Harris's thinking has evolved on energy, along with that of other Democrats across the country.

Harris had a front-row seat as American natural gas rapidly secured our allies abroad against the fallout of Russia's invasion of Ukraine. And she understands how critical natural gas is in achieving her Administration's goals of reshoring critical manufacturing jobs, alongside renewables, that benefit America's heartland in states like Pennsylvania. Notably, natural gas, more than renewables, is the primary source of America's world-leading carbon reductions over the last two decades and will continue to be a low-carbon solution deployed abroad to replace coal and fight climate change.

It is this debate on natural gas, balancing economic strength, technology, global competitiveness, staying ahead of China, and fighting climate change, where Harris can cement her political position as a sensible Democrat who uniquely understands Pennsylvania.

The Keystone State is already home to 71 data centers, with hubs in Pittsburgh and Philadelphia, and has the potential to attract even more given its proximity to the Marcellus Shale gas formations that have led Pennsylvania's energy revolution in recent years. Democratic Governor Josh Shapiro has taken notice of this potential, speaking at an AI forum at Carnegie Mellon this month.

Data centers that power AI are so energy-intensive and desperate to meet these power demands that a mothballed nuclear plant once set for decommissioning, Three Mile Island Unit 1 near Harrisburg, will be restarted as part of a 20-year power purchase agreement with Microsoft. This is a positive development, but its potential to be replicated is limited.
Natural gas is abundant, flexible, and affordable. Some will say to build renewables only instead, but that is simplistic thinking.

We're already far behind the massive and costly 60% expansion of America's power grid that Princeton University says is needed just to transition our existing grid to a net-zero future. While solar and wind are vital to a clean energy future, their weather dependence cannot fulfill 24/7 power needs.

Harris has a track record of aligning natural gas with opportunity. Under her Administration, the U.S. became the global leader in liquefied natural gas (LNG) exports, and was the largest LNG supplier to Europe in 2022 and 2023, stabilizing the economies of our Allies after Russia's invasion of Ukraine. Pennsylvania workers were part of that victory.

Other statewide Democrats get it. Senators Bob Casey (D-PA) and John Fetterman (D-PA) have stood up to their Party and supported the natural gas industry, and Governor Shapiro has laid out plans for a diverse, resilient electricity grid. Pennsylvanians agree: 74% support building more natural gas infrastructure, and 79% said natural gas drilling is important to the state's economy.

Pennsylvania has the natural resources, the infrastructure, and the know-how to power the AI boom and benefit so many across the Commonwealth. Natural gas remains the obvious choice to scale up fast to meet new demands, protect our environment, and support Pennsylvania jobs. Balance is key. AI is the future, and the United States can only lead on it with practical energy policy that starts in key states like Pennsylvania. Presidential candidate Harris would be smart to embrace it.

Tim Ryan served ten terms in the U.S. House of Representatives from 2003 to 2023. He serves as the co-chair of the Natural Allies for a Clean Energy Future Leadership Council.
Personalization/Information Retrieval Or Search
Unknown
null
null
null
null
null
null
news
Caroline Donnelly
RWE bets on HPE Private Cloud AI for green energy push
Energy firm RWE has selected the HPE Private Cloud AI setup to help it model and glean insights from weather data with greater ease
https://www.computerweekly.com/news/366616253/RWE-bets-on-HPE-Private-Cloud-AI-for-green-energy-push
https://www.computerweek…olar-fotolia.jpg
2024-11-21T07:23:00Z
Renewable energy supplier RWE is tapping into HPE's private cloud and artificial intelligence (AI) technologies so it can use weather data more efficiently to predict changes in energy supply and demand.

German multinational RWE is a major developer of renewable energy in the UK, operating a mix of wind, hydro and biomass generation sites around the country that it claims are capable of powering around 12 million homes.

The company will deploy the HPE Private Cloud AI infrastructure, which is a scalable, pre-tested stack equipped with Nvidia's AI compute capabilities, to bolster its ability to analyse and model weather data.

It also features HPE GreenLake cloud capabilities "to help the RWE team manage and observe the infrastructure with ease, automate processes, and orchestrate endpoints, workloads and data, including sustainability metrics", said HPE.

The setup is an on-premise offering that HPE said will allow RWE's researchers to quickly scale in line with their compute demands across the different geographies the company operates in.

"As a leading provider of renewable energy, RWE is at the forefront of balancing the increasing power demands of the modern world with the societal challenge of protecting the climate," said Simon Tarplee, head of commercial AI at RWE Supply and Trading. "We see AI as a key enabler that allows us to optimise our processes and create new, sustainable solutions, and set the ambitious goal to have a production training capable solution in place by the end of 2024."

Tarplee continued: "With HPE Private Cloud AI, we will have a turnkey solution that helps us to improve our forecast accuracy and optimise energy resource management globally."

News of the deployment comes more than three years after RWE went public with its Growing Green investment and growth programme, which has seen the firm commit to investing 55bn worldwide in wind, solar, hydrogen, flexible energy and storage projects. The programme is geared towards helping accelerate the energy
transition and decarbonisation efforts in the main countries in which the company operates, which include Germany, the US and the UK. The company has also set itself a target of becoming a net-zero entity by 2040 with the help of AI, with RWE describing weather modelling as a top priority in this process.

"The ability to fine-tune and inference AI-driven weather models will be a key enabler to unlock higher efficiencies for RWE's renewable energy portfolio and will give them a significant competitive advantage in the market," said Fidelma Russo, HPE chief technology officer, and executive vice-president and general manager for hybrid cloud. "HPE Private Cloud AI is a unique turnkey solution that enables RWE's researchers to reach the market quicker and handle massive datasets."
Discovery/Decision Making
Life, Physical, and Social Science/Business and Financial Operations
null
null
null
null
null
null
news
James Bridle
More-Than-Human Aesthetics
An interview concerning AI, Ways of Being, and much else, with Bijutsu-Techo.
http://booktwo.org/notebook/more-than-human-aesthetics/
null
2024-11-14T08:19:00Z
I was recently interviewed by Kanoko Tamura for the Japanese art magazine Bijutsu-Techo. The interview will be published, in Japanese, in a forthcoming issue of the magazine, dedicated to Generative AI. I'm publishing an English-language transcript, only lightly edited, of our discussion, because it touches on so many things I'm interested in, and raises one particularly interesting question, which I attempted to answer: "what is a non-human aesthetic?".

Kanoko Tamura: The next issue that we are working on is going to be about generative AI and art. Because now ChatGPT and all these technologies have spread among general people, a lot of people have started using them. But it's also a very particular time in history, because generative art has its own long history, and what is happening now is something very unique. So I want to use this issue as an opportunity to analyze what our current situation really is.

I think a lot of us have been learning the pros and cons of AI, and whether it's actually useful for art. And there's a huge discussion in the art world as well about whether we should use AI or not, and how we should face AI from the point of view of the art world. So we are doing different interviews with artists to hear how people think about it, and also, in a way, to try to find a positive way to look at the current situation. Being positive means not accepting everything nor denying everything, but trying to find a positive, productive way of looking at the technology from the artist's point of view.

I will be interviewing you today, and I wish to talk about things based on Ways of Being, where you talk about non-human intelligence and how you find values and possibilities in it, also connecting that to what's going on in the art world, and whether you, as an artist, see any possibilities or issues or challenges in the use of AI in terms of creative work.
If you could start by maybe talking about how you see this current situation at the moment, and also whether you see any possibilities toward the future?

James Bridle: Certainly, and thanks. It's a vast thing to talk about, right? Because one thing to start talking about from the outside is really: what are we talking about? One of the first splits to make is between the AI that most people imagine and the AI that actually somewhat exists. We have AI as in the kind of vast science-fiction imagining of everyone, which is fed by science fiction films, and by the marketing propaganda of companies that engage in this. That is something so huge and so culturally powerful that it really shapes so much of our thinking about AI, in the art world, but also everywhere. The reality of the actual tools that we have in the present moment is pretty different from what immediately pops into our head when we think about AI. So that's the first thing: making a distinction between AI as it is in the popular imagination, and in the artistic imagination, and the actuality of the tools that we have.

The second thing I would say is that there's a really important distinction between the AI you're talking about – generative AI and large language models – and all the different kinds of AI there actually are in existence. Because I have made projects over the years using neural networks, which underlie generative AI and large language models, but I haven't used those specific technologies in my work. And these emerge from a whole history of other attempts to build AI that have different outcomes. The idea that AI is just something that can make images or make text out of these kinds of huge corpora is only one thing that might be possible with them.
There's a range of practices around AI. The other split I'm quite keen to make, which cuts across those other ones and doesn't just apply to imagining other forms of AI, tries to ask what we're talking about when we talk about intelligence in general. Where does AI sit within our imagination of who gets to think, who gets to create? A lot of my work, and particularly Ways of Being, has been about trying to figure out what we mean when we talk about AI – or rather, let's call it machine intelligence rather than artificial intelligence. Because when we talk about intelligence, we mostly, subconsciously or consciously, mean what humans do. So all of our thinking about AI is contained within this very narrow idea that AI is somehow mimicking some or all aspects of what we consider to be human intelligence. It's like a subset of human intelligence. One of my questions is: what if it's not? What if it's actually a radically different kind of thinking about the world? Therefore, what other modes of thinking the world might we be able to come up with to compare it to, to make sense of it, to measure it against?

I've always found it incredibly striking that AI, this big amorphous conceptual thing, is having such a huge moment in the popular imagination just as, in a lot of other ways – in the art world and elsewhere, in politics, in our relationship to the planet – we're also rethinking the centrality of the human. We're recognizing both our overwhelming power to organize, control, change, damage, perhaps improve the planet, but also our intense limitations. Part of that is recognizing that there's a whole world of other beings that have always inhabited this planet, who also have their own forms of intelligence that are not lesser than ours. They're just different ways of thinking the world. And so is machine intelligence like human intelligence? Is it like the intelligence of some other living being? Is it something entirely new?
How does it partake in the world?

That brings up really fascinating philosophical questions concerning what we can know about the intelligence of others. Not just the intelligence of other species, but the intelligence of other people, the intelligence of people who've existed within different cultures, different cosmologies. There are so many different ways of thinking in the world. That's the beginning.

We can say some things about what kind of intelligence we are dealing with. One thing we can say about the contemporary forms of AI that most of us have access to is that it is what I call corporate AI. It is AI that is made by large corporations, mostly American and some Chinese and others, all within these very particular structures of power and capitalism. That is like a plant growing within a particular ecosystem. It will take on the qualities of the life it has to live to survive, of what counts as success in the niche in which it grows up. This is why so much attention is paid to AIs that play games: it's because that's the only way we have to tell if it's winning. But that makes it think that winning is the most important thing, which is a particular value of capitalism and of some other systems. It comes freighted with these kinds of values, biases, concerns. It comes with its own view of the world that it acts upon.

Personally, I'm not interested in art involving AI that doesn't contain some of that critique, that doesn't contain some of that thinking. Because what is this thing we're talking about? Is it a way of painting, or is it a way of thinking about painting? Or is it a way of seeing the whole world? Is it a tool? Is it a context? Is it a framework? Or is it a whole cosmology, a whole model of the world that we're interacting with?

And very, very finally and briefly, I want to make a case that AI is very boring. It's not as interesting or as important as so many other things, including art.
It has to be secondary to the important things we're discussing, which in this case might be: what is art? What can art be? What can art do? We have to talk about the art first and then see if maybe, just perhaps, AI is an interesting component of that. Does it help us answer some of these questions? Does it help us do something or not?

Kanoko Tamura: Thank you for that. With that said, I'm interested in hearing about your AI Chair Project, because I see a lot of potential in it. Even though it's a very simple and maybe small project that you started recently, I think it tries to explore the possibilities of humans and the other: how can we collaborate with someone who is not human? It also shows the values that you reach together in conversation with AI, which are not always the standard values of beauty or function. It questions the values that humans think are good. I also think about the use of materials that are usually considered useless. So I think the AI Chair Project asks a very crucial and fundamental question, and that's how I interpreted it. Could you talk about your intention behind it, or what you got out of it after you started the project?

James Bridle: One of the ways that I think about AI is thinking about the fact that you are in communication with another being, a system or whatever it is, that has a way of thinking the world that is different from the way I have of thinking the world. And most of the discourse around AI is about the discordances that it creates. Things like AI hallucinations: the way it does seem to understand the world differently, in a way that doesn't make sense to humans, that comes across as a mistake or even sometimes as a lie.
That's often how these systems are critiqued: they are seen as being wrong, as lying, when really that's just how they understand the world, and it's different from how humans understand the world.

In particular, they're good at things that humans are bad at, like thinking about huge numbers or very large amounts of information. We're just not very good at that. We need machines to help us with that – or that's what machines are good for, anyway. But then they're not good at things that humans are good at, like certain other types of creative thinking, guessing, hunches – another kind of thinking in the world, just a different way of thinking. So we have these different ways of thinking. There are different roles to be had here, and in particular, rather than assuming that AI is something that will outcompete humans, we can at least start to think about it as something that accompanies humans, that collaborates with humans, so that we can put those ways of thinking into some kind of productive alignment.

One thing that computer systems are really good at, and humans aren't so good at, is optimization and efficiency. One of the long-term uses for computer systems within architecture, for example, is something like: how do we make a bridge with fewer supports, or less material? How do we build a more efficient roof on a house? These kinds of questions. Complex mathematical problems. So that seemed like the thing to push on. I had a load of spare wood sitting around, scrap wood from other projects. So I was like, okay, well, I know how to build a chair, and chairs are useful, direct, you know. We're probably not going to improve on the design of the chair: humans know how to make the best form of the chair for the human body. It's just something that holds you off the ground. But I thought that the AI could perhaps do this kind of creative work that I wasn't so good at, which was to imagine which materials would be the most efficient in different places.
That's what I was really hoping for from the project: that I would be able to give this machine a bunch of descriptions of pieces of wood that I actually had, and it would tell me how to combine them in ways that I wouldn't have thought of, that wouldn't have occurred to me.

The result was actually less interesting than that. It told me how to build a very simple chair. It did use the materials very efficiently. There was also work for me to do, to figure out how to follow its instructions. I had to do some creative work here. That was interesting to me, because the AI didn't learn anything from this experience, but I learned something practical and definite from it. My carpentry improved slightly. That's more than collaboration. That's commensalism, which is a term for a symbiotic relationship where something gets better on both sides. You're not just surviving. You're actually improving. I think that's really key, in that I want to have relationships with my tools that are commensal, where we're improved by our use of them rather than merely doing as we're told or using them instead of our brains. It's about trying to find the places in which we have either hierarchical relationships or negative, parasitic relationships – which most of our relationships with technology are – and replace them with commensal ones, where we are all uplifted by the experience.

There's a bunch of other stuff in there about treating the AI as the other, and that kind of thing. But that was really my main question: how do I build something greater out of this? Also, how do I get a better understanding of the thing that I'm dealing with? Because buried in there is a very literal description of how this kind of AI functions. The chairs that I've built by asking the AI to help me build chairs are the sort of chairs that would be imagined by someone who has only ever read about chairs. They've never sat in one – which is a very important thing to understand about how AI understands things.
It's such a beautifully direct description of what this thing is that I'm dealing with, which becomes so apparent when you actually make a thing. That's slightly in counterpoint to my last point about how you have to talk about the art first. Sometimes you just have to do the thing and then figure out what the art of it is.

Kanoko Tamura: Yes. What you said last is very interesting. By making the chair, you come to understand what exactly AI is, because it produces something not from the experience of sitting, but out of all the information. I think in Ways of Being you also suggest that there are lots of hints as to how humans can build relationships with non-humans, including AI and other animals in nature. I think you could imagine that relationship being the kind of relationship that you're building with AI as well. Do you have any specific ideas or examples or imaginings of how exactly we can build that kind of relationship with non-humans? I'm interested in animals and plants and all the other beings that you talked about in the book as well. Maybe you could focus on those instead of AI.

James Bridle: Happily. Very happily. What to say? One of the main points of Ways of Being is that everything is alive. Everything has its own being. Everything exists in this world equally with everything else, with its own presence, its own awareness, its own living force, whatever that is. Some of that is knowable and recognizable by us. Some of it is so strange and different to us that it's almost impossible for us to grasp. How do we change or shape our awareness in recognition of that? How do we enter into a world in which most of the life that's occurring is radically different to us and yet shares the world with us? There are many possible answers to that. One of them is to experience directly, to pay attention. As I said about the AI chair, sometimes you have to do it. Most of what I'm talking about exists in the realm of direct experience.
It can't be spoken of.

What do I mean when I say these things cannot be spoken of? Well, what I mean is that language is a system for making distinctions between things. You have to divide things up in order to be able to talk about them meaningfully. If I say "this mountain", I'm describing a particular mountain. I'm separating it from other things. But if I say the mountain is alive, what does that mean? Does it mean that the mountain breathes like a mammal? Does it exist like a bacterium? Does it live like the wind lives? The words stop having meanings in ways that are useful to us. But the mountain is alive. It's just really hard to talk about. So the only really meaningful relationship one can have is one that goes beyond language, one that concerns direct experience. Things just happen: that is the realm of direct experience. And that's also the realm of quite a lot of art. So we're talking about relationships that go beyond everyday language, and quite often beyond language itself, in order to be able to conceive, feel and experience things that are truer than what we can put into any form of words.

The question then becomes: what kind of relationships do we want to have? What do we want them to be for? Do we want these to be relationships to build a better world? A world of mutual flourishing, a world in which everyone gets to live to their fullest possibilities? And by everyone, I do also mean the birds, the animals, the seas, the mountains. How do we build a world of mutual flourishing? It comes from these kinds of forms of attention, from caring about these things, and from the gift of thinking the world as being fully alive.

A lesser question, but one that I am still fascinated by, is: do computers count? Does AI count? Is AI alive? Could it be? This is a question that keeps coming up, and I find it fascinating. This is, for me, the best use of AI, because it makes me ask this question: is AI alive, in the terms that I'm talking about?
If so, how? Because I understand that aliveness has some kind of a resonance, but also that it descends from something greater, from the universe. Everything comes out of the universe. That's what makes it alive. Does AI come out of that? Or did we build it just in the lab, and therefore it has no life? But everything is alive. I don't know. I'm thinking about it.

What is AI made out of? It's literally made out of the bodies of creatures that died billions of years ago. It's made out of plastic and oil and electrons. So it is part of the universe as much as anything else. What does it mean to think and have these relationships? How does it change us to have these relationships? How is the world changed by the relationships that we have? These are the questions that I'm interested in.

Kanoko Tamura: I think that's a very, very important message and question that you raised, important not only to the topic that we want to discuss in the magazine issue, but a very fundamental question that we all need to face. Because when we try to ask questions about AI, we tend to start by questioning and criticizing corporate AI: whether it's useful or not, whether it's helpful or not. But if you really think about why AI is here, and whether AI is alive, we can actually get to a very fundamental question in art as well, because instead of talking about all the games happening in the art market, we start to ask questions like why we started drawing in the first place, or why we started dancing and singing. I think it connects to those more important questions.

James Bridle: Yes, absolutely. That, for me, is the most interesting thing about AI. It's the idea of it, not whatever we write in code or systems. Why, as a species, are we so fascinated by the idea of there being non-human intelligence? I should really qualify that by asking: why, as particular human cultures, are we fascinated by non-human intelligence?
Because for many cultures, the idea of non-human intelligence is not so complicated. For someone acculturated to, in particular, a Western European, post-Enlightenment scientific position, we've been told that there's no such thing. And we've forced most of the world, if not to accept that position, then at least to live within it. We are living within the consequences of that imperial, colonial, epistemological action. Whatever anyone else in the world thinks, we've set the world up to run as though there's nothing but the human – and really nothing but certain types of human – that matters. So the project, the only important project at present, is to change that situation.

I think that's quite a long shot, given how entrenched our colonialist, capitalist, and Western Enlightenment scientific cultures are. But it does seem to me that the attention given to AI quite clearly speaks to a yearning within the human to meet and learn from the non-human.

Kanoko Tamura: With that said, or from that point of view, do you think it's possible to come back to art and talk about aesthetics with non-human intelligence? Aesthetics is important; it has been discussed over the years by philosophers, and there is also a discussion of new aesthetics. But do you think it's possible to discuss non-human aesthetics, or an aesthetics of intelligence, or an aesthetics of being?

James Bridle: My engagement with aesthetics has always been somewhat accidental. My project The New Aesthetic, which still continues, is very much a sort of amateur, outsider one, in which I responded quite viscerally to things that I was encountering. It's always been a research project rather than a doing project in that sense. But in another sense, it's something I've been doing for almost 20 years now.
So I guess I can call it meaningful.

The first thing to say about a non-human aesthetic would be that it can't be an aesthetic as we understand it, because aesthetics is about human sense and human judgment and experience. It's about the fact that we encounter, process and think of the world primarily in terms of appearances, because we've got these great big eyes in the front of our head that are our primary sense. But that's not how most of the universe encounters itself.

One of the things I always think about is the way that, as humans, we live within such a narrow band of space-time, in multiple ways. Within time, we live within the ticking of a clock: our day-to-day time, the 24-hour cycle, and the cycle of our attention, which is also determined by things outside of us. We also live within the timescale of our lives, and we live within the timescale of our culture – what our culture remembers – which is a very small fraction of the history of the planet, let alone the universe, let alone a number of other things. We live within this really narrow time, and we can't see that far ahead. Our experience is so bounded just by time. And then regarding space, we can see a little, we can hear a little, but that is limited by the frequencies we can perceive. There are huge numbers of beings that live on this planet that can hear things that we can't hear. So even there, immediately, this other aesthetic, a non-human aesthetic, is bigger, broader, deeper, wider, stranger than a human aesthetic, because it extends into all these different potential realizations and awarenesses, and therefore things to process, and therefore things to think with. It increases the number of objects to think with.

So I guess that actually makes for quite a neat definition of a non-human aesthetic: to increase the number of things we have to think with. Every culture has its own set of aesthetics that it thinks with, and humans collectively have a set of things to think with.
The most interesting thing for me in the world is trying to expand the things, or rather beings, that we have to think with. I say beings because I think a big part of this is the transformation of things into beings: the process by which we stop regarding only the people that we know, or humans in general, as being people, beings who matter, and we start to see everything as people in their own process of passing through space-time on a trajectory that occasionally intersects with our own. Because the moment of thinking happens when those things collide. The moment of living happens when those things collide.

In Ways of Being, I write about intelligence as being a cooperative process. But I’d go a lot further than that, and I would say this kind of living and thinking, this being or mattering, is what happens when beings interact, when they meet. How do we increase the number of relationships? That is not a non-human but a more-than-human aesthetics, one that includes all these possible and effectively infinite intersections between beings to produce entirely new perspectives, new ways of perceiving and thinking the world.

Kanoko Tamura:
You gave us a lot of hints to really digest and think on, so thank you for that. Do you have any plans for the near future, or anything that you’re working on at the moment that you could tell us? Any current project that you’re interested in?

James Bridle:
My work at present is largely focused on quite practical work. I spent, you know, a lot of the last 10, 15 years looking at questions of technology. That’s been my main focus. Writing Ways of Being was part of, I should say, a conscious transition from a technological focus to an ecological focus. It’s explicit in the book, but it’s part of my whole practice and part of my whole life. I think for me and for a lot of other people, the most urgent work at present is to address the planetary ecological crisis.
And an incredibly important question for artists is: what is our role within that?

One of the ways that I think about that is to ask questions of art itself, of what it can do, of how it operates. In my work with technology, I found myself running up against what is essentially a problem of representation: you make work with technology, about technology, that just does more technology in various ways, recreating the same systems you’re critiquing. That problem is even greater when it comes to ecological work. It’s very easy to make work about climate change. It’s also pointless and irresponsible in the present moment.

The only work that really matters is work that actually does something. I’m not about to claim that my work succeeds in doing that. It’s hard. But it is my intention always that the work actually does some work. Now that can be work to inform people, just to talk about things, to raise these as issues, to increase awareness or imagination, but that is mostly representation. It can also be super practical. I build solar panels. I build windmills. I make these things as works of art that actually do the thing, that actually work. I call them works that work, or works that do work, because they’re actively part of the transformation that I’d like to see in the world. They’re not solutions; I don’t believe in solutions. But I do believe in a kind of active engagement that the work, the artworks themselves, are part of. They don’t stand alone. They’re not representations of anything anymore. They’re actually getting involved in some way. That’s probably of a piece with my feelings about ecology, about the world around me, about the aliveness of everything. The artworks partake as much as anything else. That’s where I’m at these days. I’m making things, and I’m making things in the world, and I’m eager to see what the world thinks about it.
Content Creation/Discovery
Arts, Design, Entertainment, Sports, and Media
null
null
null
null
null
null
news
IEEE Spectrum
Trump's Second Term Will Change AI, Energy, and More
U.S. presidential administrations tend to have big impacts on tech around the world. So it should be taken as a given that when Donald Trump returns to the White House in January, his second administration will do the same. Perhaps more than usual, even, as he staffs his cabinet with people closely linked to the Heritage Foundation, the Washington, D.C.-based conservative think tank behind the controversial 900-page Mandate for Leadership (also known as Project 2025). The incoming administration will affect far more than technology and engineering, of course, but here at IEEE Spectrum, we’ve dug into how Trump’s second term is likely to impact those sectors.

Read on to find out more, or click to navigate to a specific topic. This post will be updated as more information comes in.

Artificial Intelligence
Consumer Electronics
Cryptocurrencies
Energy
Telecom
Transportation

Artificial Intelligence

During Trump’s campaign, he vowed to rescind President Joe Biden’s 2023 executive order on AI, saying in his platform that it “hinders AI Innovation, and imposes Radical Leftwing ideas on the development of this technology.” Experts expect him to follow through on that promise, potentially killing momentum on many regulatory fronts, such as dealing with AI-generated misinformation and protecting people from algorithmic discrimination. However, some of the executive order’s work has already been done; rescinding it wouldn’t unwrite reports or roll back decisions made by various cabinet secretaries, such as the Commerce secretary’s establishment of an AI Safety Institute. While Trump could order his new Commerce secretary to shut down the institute, some experts think it has enough bipartisan support to survive. “It develops standards and processes that promote trust and safety—that’s important for corporate users of AI systems, not just for the public,” says Doug Calidas, senior vice president of government affairs for the advocacy group Americans for Responsible Innovation.

As for new initiatives, Trump is expected to encourage the use of AI for national security. It’s also likely that, in the name of keeping ahead of China, he’ll expand export restrictions relating to AI technology. Currently, U.S. semiconductor companies can’t sell their most advanced chips to Chinese firms, but that rule contains a gaping loophole: Chinese companies need only sign up for U.S.-based cloud computing services to get their computations done on state-of-the-art hardware. Trump may close this loophole with restrictions on Chinese companies’ use of cloud computing. He could even expand export controls to restrict Chinese firms’ access to foundation models’ weights—the numerical parameters that define how a machine learning model does its job. —Eliza Strickland

Back to top

Consumer Electronics

Trump plans to implement hefty tariffs on imported goods, including a 60 percent tariff on goods from China, 25 percent on those from Canada and Mexico, and a blanket 10 or 20 percent tariff on all other imports. He’s pledged to do this on day 1 of his administration, and once implemented, these tariffs would hike prices on many consumer electronics. According to a report published by the Consumer Technology Association in late October, the tariffs could induce a 45 percent increase in the consumer price of laptops and tablets, as well as a 40 percent increase for video game consoles, 31 percent for monitors, and 26 percent for smartphones. Collectively, U.S. purchasing power for consumer technology could drop by US $90 billion annually, the report projects. Tariffs imposed during the first Trump administration have continued under Biden.

Meanwhile, the Trump Administration may take a less aggressive stance on regulating Big Tech. Under Biden, the Federal Trade Commission has sued Amazon for maintaining monopoly power and Meta for antitrust violations, and worked to block mergers and acquisitions by Big Tech companies. Trump is expected to replace the current FTC chair Lina Khan, though it remains unclear how much the new administration—which bills itself as anti-regulation—will affect the scrutiny Big Tech is facing. Executives from major companies including Amazon, Alphabet, Apple, Meta, Microsoft, OpenAI, Intel, and Qualcomm congratulated Trump on his election on social media, primarily X. (The CTA also issued congratulations.) —Gwendolyn Rak

Back to top

Cryptocurrencies

On 6 November, the day the election was called for Trump, Bitcoin jumped 9.5 percent, closing at over US $75,000—a sign that the cryptocurrency world expects to boom under the next regime. Donald Trump marketed himself as a pro-crypto candidate, vowing to turn America into the “crypto capital of the planet” at a Bitcoin conference in July. If he follows through on his promises, Trump could create a national bitcoin reserve by holding on to bitcoin seized by the U.S. government. Trump also promised to remove Gary Gensler, the chair of the Securities and Exchange Commission, who has pushed to regulate most cryptocurrencies as securities (like stocks and bonds), subjecting them to more government scrutiny.

While it may not be within Trump’s power to remove him, Gensler is likely to resign when a new administration starts. It is within Trump’s power to select the new SEC chair, who will likely be much more lenient on cryptocurrencies. The evidence lies in Trump’s pro-crypto cabinet nominations: Howard Lutnick as Commerce Secretary, whose finance company oversees the assets of the Tether stablecoin; Robert F. Kennedy Jr. as the Secretary of Health and Human Services, who has said in a post that “Bitcoin is the currency of freedom”; and Tulsi Gabbard for the Director of National Intelligence, who had holdings in two cryptocurrencies back in 2018. As Trump put it at that Bitcoin conference, “the rules will be written by people who love your industry, not hate your industry.” —Kohava Mendelsohn

Back to top

Energy

Trump’s plans for the energy sector focus on establishing U.S. “energy dominance,” mainly by boosting domestic oil and gas production, and deregulating those sectors. To that end, he has selected oil services executive Chris Wright to lead the U.S. Department of Energy. “Starting on day 1, I will approve new drilling, new pipelines, new refineries, new power plants, new reactors, and we will slash the red tape,” Trump said in a campaign speech in Michigan in August.

Trump’s stance on nuclear power, however, is less clear. His first administration provided billions in loan guarantees for the construction of the newest Vogtle reactors in Georgia. But in an October interview with podcaster Joe Rogan, Trump said that large-scale nuclear builds like Vogtle “get too big, and too complex and too expensive.” Trump periodically shows support for the development of advanced nuclear technologies, particularly small modular reactors (SMRs).

As for renewables, Trump plans to “terminate” federal incentives for them. He vowed to gut the Inflation Reduction Act, a signature law from the Biden Administration that invests in electric vehicles, batteries, solar and wind power, clean hydrogen, and other clean energy and climate sectors. Trump trumpets a particular distaste for offshore wind, which he claims will end “on day 1” of his next presidency.

The first time Trump ran for president, he vowed to preserve the coal industry, but this time around, he rarely mentioned it. Coal-fired electricity generation has steadily declined since 2008, despite Trump’s first-term appointment of a former coal lobbyist to lead the Environmental Protection Agency. For his next EPA head, Trump has nominated former New York Representative Lee Zeldin—a pick expected to be central to Trump’s campaign pledges for swift deregulation. —Emily Waltz

Back to top

Transportation

The incoming administration hasn’t laid out too many specifics about transportation yet, but Project 2025 has lots to say on the subject. It recommends the elimination of federal transit funding, including programs administered by the Federal Transit Administration (FTA). This would severely impact local transit systems—for instance, the Metropolitan Transportation Authority in New York City could lose nearly 20 percent of its capital funding, potentially leading to fare hikes, service cuts, and project delays. Kevin DeGood, Director of Infrastructure Policy at the Center for American Progress, warns that “taking away capital or operational subsidies to transit providers would very quickly begin to result in systems breaking down and becoming unreliable.” DeGood also highlights the risk to the FTA’s Capital Investment Grants, which fund transit expansion projects such as rail and bus rapid transit. Without this support, transit systems would struggle to meet the needs of a growing population.

Project 2025 also proposes spinning off certain Federal Aviation Administration functions into a government-sponsored corporation. DeGood acknowledges that privatization can be effective if well-structured, and he cautions against assuming that privatization inherently leads to weaker oversight. “It’s wrong to assume that government control means strong oversight and privatization means lax oversight,” he says.

Project 2025’s deregulatory agenda also includes rescinding federal fuel-economy standards and halting initiatives like Vision Zero, which aims to reduce traffic fatalities. Additionally, funding for programs designed to connect underserved communities to jobs and services would be cut. Critics, including researchers from Berkeley Law, argue that these measures prioritize cost-cutting over long-term resilience.

Trump has also announced plans to end the US $7,500 tax credit for purchasing an electric vehicle. —Willie D. Jones

Back to top
https://spectrum.ieee.org/trump-tech-policy
https://spectrum.ieee.or…%2C300%2C0%2C300
2024-11-27T12:00:04Z
Content Synthesis/Decision Making/Detection and Monitoring
Management/Business and Financial Operations/Computer and Mathematical
null
null
null
null
null
null
news
Ray Le Maistre
SKT unveils AI infrastructure superhighway plan
South Korea’s SK Telecom is very focused on being an AI leader in its home market and beyondIt is preparing to launch its AI datacentre testbed in South Korea …
https://www.telecomtv.com/content/telcos-and-ai-channel/skt-unveils-ai-infrastructure-superhighway-plan-51680/
https://assets.telecomtv…19174.jpg?w=1200
2024-11-04T14:56:06Z
The AI drumbeat at South Korea’s SK Telecom (SKT) is getting louder by the month: The network operator’s CEO, Ryu Young-sang, put more flesh on the company’s AI strategy bones during a keynote speech at its own AI Summit in Seoul, where he presented SKT’s vision for an “AI infrastructure superhighway”. The plan is based on three pillars – build out a national network of regional AI datacentres; enhance that with edge AI infrastructure; and launch a GPU-as-a-service (GPUaaS) offering to enterprise and government users – before expanding the reach of the resulting portfolio of AI infrastructure services into other geographic markets via high-capacity submarine network links in collaboration with partners. SKT will start by opening an AI datacentre testbed in Pangyo, just south of Seoul, in December. That facility will be underpinned by technology developed within the SK group of companies, including GPU (graphics processing unit) virtualisation solutions, AI energy optimisation tools and SK Hynix’s HBM (high-bandwidth memory) chips – as well as products from key partners, such as Nvidia. The testbed will also use three types of liquid-cooling solutions, namely direct liquid cooling, immersion cooling and precision liquid cooling, explained the CEO. The insights and experience from that facility will then be used in the construction of 100 MW (megawatt) hyperscale AI datacentres in multiple regions across the country, with plans to expand the IT capacity of those sites to the gigawatt (GW) scale in an effort to develop an AI datacentre “hub in the Asia Pacific region”. SKT believes these AI datacentres will be able to source a “stable power supply through the utilisation of new renewable energy sources, such as hydrogen, solar and wind power, and easily expand to global markets through submarine cables”.
One of the earliest AI datacentre sites will be in Gasan, on the outskirts of Seoul, where an existing datacentre facility will be upgraded through the deployment of a cluster of Nvidia H100 Tensor Core GPUs that, in partnership with San Francisco-based AI infrastructure developer Lambda, will be used to launch a GPUaaS offering. SKT and Lambda announced a partnership in August – see SK Telecom to deploy Nvidia GPU cluster with Lambda.Then in March next year, SKT plans to deploy clusters of Nvidia H200 Tensor Core GPUs to further meet user demand. SKT also plans to use its experience in planning, building and running these AI datacentre sites to develop an AI infrastructure solution “that combines AI semiconductors, datacentres, and energy solutions through collaboration with AI companies in Korea and abroad, with the aim of entering the global market.” But it’s not all about hyperscale datacentres: SKT also plans to integrate AI infrastructure with its mobile network infrastructure to enable “edge AI” that can offer “reduced latency, enhanced security, and improved privacy” compared with large hyperscale datacentres. “Additionally, it enables large-scale AI computing, complementing the existing AI infrastructure, compared to on-device AI,” noted SKT in this announcement. The company says it is “carrying out various proof-of-concept (PoC) projects across six areas, including healthcare, AI robots, and AI CCTV, to discover specialised edge AI services,” and collaborating with global partners in edge AI infrastructure. 
As TelecomTV reported recently, SKT is one of five telcos to have submitted an application to join the AI-RAN Alliance, which is focused on the integration of AI and radio access network infrastructure – to find out more, watch our exclusive video interview with the AI-RAN Alliance chair, Alex Jinsung Choi – see Under the hood of the AI-RAN Alliance.“So far, the competition in telecommunications infrastructure has been all about connectivity, namely speed and capacity, but now the paradigm of network evolution should be changed,” stated Ryu Young-sang in his keynote speech. “The upcoming 6G will evolve into a next-generation AI infrastructure where communication and AI are integrated,” added SKT’s CEO. - Ray Le Maistre, Editorial Director, TelecomTV
Unknown
Business and Financial Operations/Management
null
null
null
null
null
null
news
By MATT O'BRIEN, AP Technology Writer
US ahead in AI innovation, easily surpassing China in Stanford's new ranking
The U.S. leads the world in developing artificial intelligence technology, surpassing China in research and other important measures of AI innovation, according to a newly released Stanford University index. There’s no surefire way to rank global AI leadership but Stanford researchers have made an attempt by measuring the “vibrancy” of the AI industry across a variety of dimensions, from how much research and investment is happening to how responsibly the technology is being pursued to prevent harm. Following the U.S. and China were the United Kingdom, India and the United Arab Emirates.
https://www.seattlepi.com/business/article/us-ahead-in-ai-innovation-easily-surpassing-19933897.php
https://s.hdnux.com/phot…1/3/rawImage.jpg
2024-11-21T13:08:59Z
Attendees listen during a panel discussion with Ambassador Philip Thigo, Special Envoy on Technology for the Office of the President of Kenya, from top left, Hong Yuen Poon, Deputy Secretary of Singapore's Ministry of Digital Development and Information, Lucilla Sioli, Director of the Artificial Intelligence Office for the European Commission, and Dr. Seth Center, Acting Special Envoy Office of the Special Envoy for Critical and Emerging Technology, at the convening of the International Network of AI Safety Institutes at the Golden Gate Club at the Presidio in San Francisco, Wednesday, Nov. 20, 2024.Jeff Chiu/APThe U.S. leads the world in developing artificial intelligence technology, surpassing China in research and other important measures of AI innovation, according to a newly released Stanford University index.There's no surefire way to rank global AI leadership but Stanford researchers have made an attempt by measuring the vibrancy of the AI industry across various dimensions, from how much research and investment is happening to how responsibly the technology is being pursued to prevent harm.The gap is actually widening between the U.S. and China, said computer scientist Ray Perrault, director of the steering committee that runs Stanford's AI Index. The U.S. is investing a lot more, at least at the level of firm creation and firm funding.AdvertisementArticle continues below this adThe California-based university's Institute for Human-Centered AI which has ties to Silicon Valley's tech industry released the report Thursday as government AI officials from the U.S. and several allies met in San Francisco this week to compare notes on AI safety measures.Here's which countries made the top 10:The U.S. ranks No. 1 on Stanford's list and has consistently held that position since 2018 when it overtook China. It has far outpaced China in private AI investment, which hit $67.2 billion in the U.S. last year compared to $7.8 billion in China, according to the report. 
It also leads in publishing responsible AI research. It's no surprise that the home of commercial AI powerhouses such as Google and Meta, along with relative newcomers like OpenAI and Anthropic, has produced many notable AI models that have influenced how the technology is being developed and applied. The U.S. also gets some points for having a number of AI-related laws on the books, though Congress has yet to pass any broad AI regulations.AdvertisementArticle continues below this adChina has requested far more patents than any other country regarding generative AI, the U.N. intellectual property agency said earlier this year. Stanford researchers counted that as one measure of China's strong growth in AI innovation but not enough to lead the pack. Still, the report says that "Chinas focus on developing cutting-edge AI technologies and increasing its R&D investments has positioned it as a major AI powerhouse. China's universities have produced a large number of AI-related research publications and it has commercial leaders developing notable AI models, such as Baidu and its chatbot Ernie.Coming in at No. 3 is the UK, which also ranked high in research and development, and educational infrastructure due to top computer science universities churning out a skilled AI workforce. It's also home to Google's AI subsidiary DeepMind, whose co-founder recently won a Nobel Prize; and had more mentions of AI in parliamentary proceedings than any other country. Last year, the UK hosted the worlds first international AI safety summit.AdvertisementArticle continues below this adClose behind the UK was India, thanks to a strong AI research community, improvements in economic investments tied to AI and a robust public discourse about AI on social media, according to the report.The UAE's deliberate focus on AI appears to have paid off in the Middle Eastern nation's fifth-place score. It was one of the top locations for AI investments. 
Microsoft earlier this year said it was investing $1.5 billion in UAE-based tech firm G42, which is overseen by the country's powerful national security adviser. Based in Abu Dhabi, G42 runs data centers and has built what's considered the world's leading Arabic-language AI model, known as Jais. Rounding out the top 10 are France at No. 6, followed by South Korea, Germany, Japan and Singapore. France, home to the buzzy AI startup Mistral, ranked high in AI policy and governance. Both it and Germany will be part of the European Union's sweeping new AI Act that places safeguards on a range of AI applications based on how risky they are. The EU also follows the U.S. in developing a plan to expand semiconductor production within the bloc.
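The article describes Stanford's ranking as a weighted aggregation of many dimensions (investment, research output, responsible-AI work) into a single country score. As a purely illustrative sketch of how such a composite index can be computed (the dimensions, weights, and all numbers except the reported investment figures are hypothetical, not the AI Index's actual methodology or data):

```python
# Illustrative composite index: min-max normalize each dimension across
# countries, then take a weighted sum. NOT the actual AI Index method.
def composite_score(metrics, weights):
    """Return {country: weighted sum of normalized metrics}."""
    dims = list(weights)
    lo = {d: min(m[d] for m in metrics.values()) for d in dims}
    hi = {d: max(m[d] for m in metrics.values()) for d in dims}

    def norm(v, d):
        # Guard against a dimension where all countries tie.
        return (v - lo[d]) / (hi[d] - lo[d]) if hi[d] > lo[d] else 0.0

    return {
        c: sum(weights[d] * norm(m[d], d) for d in dims)
        for c, m in metrics.items()
    }

# Investment figures ($B) are from the article; model/paper counts are made up.
metrics = {
    "US":    {"investment": 67.2, "models": 61, "papers": 90},
    "China": {"investment": 7.8,  "models": 15, "papers": 120},
    "UK":    {"investment": 3.8,  "models": 8,  "papers": 30},
}
weights = {"investment": 0.4, "models": 0.4, "papers": 0.2}  # hypothetical

scores = composite_score(metrics, weights)
ranking = sorted(scores, key=scores.get, reverse=True)
```

With these made-up weights the ordering matches the article's top three, which illustrates the report's point that different weightings of the same dimensions can shift who "leads."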
Unknown
Computer and Mathematical
null
null
null
null
null
null
news
Michelle Lewis
Hitachi Energy launches an AI tool to revolutionize renewable energy forecasting
Hitachi Energy debuts Nostradamus AI, which delivers accurate renewable energy forecasts, load predictions, and market pricing insights.
http://electrek.co/2024/11/12/hitachi-energy-ai-tool-renewable-energy-forecasting/
https://i0.wp.com/electr…&strip=all&ssl=1
2024-11-12T19:57:30Z
Photo by AS Photography on Pexels.com

Hitachi Energy debuts Nostradamus AI, which delivers fast, accurate renewable energy forecasts, load predictions, and market pricing insights. Nostradamus AI is one of the first AI forecasting tools designed specifically for the energy sector. Hitachi says it offers forecasts that are over 20% more accurate than some industry targets. It helps optimize energy investments, improve trading strategies, streamline operations, and ensure regulatory compliance. Hitachi Energy’s tool integrates grid performance data, market forecasts, and asset monitoring, all powered by machine learning to provide reliable forecasts. Outdated forecasting methods are often costly, limited, and labor-intensive. Nostradamus AI aims to change that by enabling more accurate data processing and better decision-making, which is crucial for managing renewable energy sources and growing energy demands. Hitachi Energy’s AI tool is cloud-native, scalable, and flexible, capable of handling predictions ranging from a single wind turbine or solar farm to thousands of energy load points. It’s algorithm-agnostic and today supports several of the highest-performing AI models, making deployment and scaling straightforward for energy operators, traders, and developers. “Embracing AI in the power sector is not just a choice; it is an imperative. That’s why it’s critical that organizations have a dedicated tool designed specifically for analyzing the massive amount of data generated across an evolving power grid,” said Andy Howell, Hitachi Energy’s global head of enterprise software solutions product group, business unit grid automation. In November 2022, Electrek reported on Hitachi Energy’s launch of wireless Spark Prevention Unit indicators that help prevent wildfires by enabling remote monitoring. 
The company is investing billions to expand and upgrade the electricity grid in multiple countries, including the SunZia project in the US. What do you think about the use of AI forecasting tools in the energy sector? Let us know in the comments section below.
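Hitachi quotes its forecast improvements in percentage terms. One common way forecast accuracy is scored in the energy sector is mean absolute percentage error (MAPE); a minimal sketch with entirely made-up numbers (not Hitachi's data, and not Nostradamus AI's actual metric or method) shows how a "more accurate" claim can be quantified:

```python
def mape(actual, forecast):
    """Mean absolute percentage error, in percent (lower is better)."""
    assert len(actual) == len(forecast) and all(a != 0 for a in actual)
    return 100.0 * sum(abs((a - f) / a) for a, f in zip(actual, forecast)) / len(actual)

# Hypothetical hourly wind-farm output (MW): actuals vs. two forecasts.
actual   = [100.0, 120.0, 80.0, 150.0]
baseline = [110.0, 100.0, 90.0, 170.0]   # legacy forecast
improved = [104.0, 114.0, 83.0, 156.0]   # ML forecast

baseline_err = mape(actual, baseline)
improved_err = mape(actual, improved)
# Relative error reduction, in percent.
gain = 100.0 * (baseline_err - improved_err) / baseline_err
```

With these toy numbers the baseline MAPE is 13.125% versus 4.1875% for the improved forecast, a roughly 68% error reduction; real grid-scale forecasting would evaluate over far longer horizons and many load points.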
Decision Making/Prediction
Business and Financial Operations/Management
null
null
null
null
null
null
news
null
Ditch all the negativity: Here's what can go right to lift each of our 34 stocks
Jim Cramer likes to remind investors of the positives in the market. We applied that mindset to our Club portfolio stocks.
https://www.cnbc.com/2024/11/29/heres-what-can-go-right-to-lift-each-of-our-34-portfolio-stocks-.html
https://image.cnbcfm.com…24&w=1920&h=1080
2024-11-29T16:19:52Z
Sometimes, there can be an overwhelming amount of negativity and noise on Wall Street. To counter that, Jim Cramer has said investors should not lose sight of what can go right for their stocks. That doesn't mean ignoring risks and investing on autopilot. It does mean investors should remember the wall of worry can be surmounted. As Wall Street starts to look ahead to 2025, here's a look at a few things that can go right for all 34 Club holdings. Abbott Laboratories Legal overhang dissipates: Lawsuits over its specialized formula for premature infants have kept a lid on Abbott shares since March. However, the company's surprise win in a case a few weeks ago increases the likelihood that a settlement could be reached. The positive stock reaction to that decision hints at what a complete resolution could do for shares. More momentum in medical devices: Abbott's strong portfolio, led by its flagship FreeStyle Libre for diabetes, has been a bright spot, turning in multiple quarters in a row of double-digit sales growth. Abbott's over-the-counter continuous glucose monitoring system called Lingo, which recently launched in the U.S., is a key product to watch. Early momentum in sales is promising. Advanced Micro Devices Finding its lane: AMD's foray into artificial intelligence chips for data centers with its MI300 has gone well, with executives hiking their sales forecast multiple times this year. If huge clients like Microsoft keep investing in AI hardware, AMD should be able to further carve out a lane as a strong No. 2 player behind market leader Nvidia. Smooth chip updates: AMD needs to successfully carry out its annual release cycle for AI chips. The upcoming release of its next-generation MI350x chip, scheduled for 2025, could attract additional customers who want to diversify away from Nvidia chips. Alphabet AI ROI: The Google parent must keep showing that its hefty spending on AI is growing sales and making the company more efficient. 
Checking both boxes will quiet concerns that its capital expenditures are excessive and that Google Search is ceding share to AI chatbots. Easing on antitrust action: A more lenient regulatory environment under a second Donald Trump presidency could reduce the risk of major antitrust actions. Alphabet recently lost an antitrust case brought by the Justice Department, which argued the company maintained a monopoly in online search and recommended it sell Chrome, its web browser. However, Trump has expressed skepticism about breaking up the company . Waymo expansion: Increased adoption of Waymo's self-driving technology in new cities and the potential for a spin-off in the future would represent big wins for Alphabet's money-losing "Other Bets" segment. Amazon Retail margin expansion: The e-commerce giant needs to show that it can continue lowering logistics and shipping costs, which would keep alive the improving profitability trend that has been key to the bullish narrative around the stock. Cloud growth: Investors want Amazon Web Services to show accelerating topline growth, fueled in part by demand for AI computing, along with improved profitability. That combination will help assuage concerns about AI spending levels. Less scrutiny: Deregulation under Trump could allow Amazon to focus on scaling its core businesses without the distraction of legal battles. Apple AI-led sales: The introduction of new AI capabilities in Apple Intelligence needs to spark a larger-than-normal device upgrade cycle, boosting sales of the iPhone 16 and the next few models. If AI can spur more revenue in its high-margin services unit, that would be a cherry on top. New deals: A looser regulatory environment would allow management to expand Apple's strategic partnerships and focus on other initiatives, including its push into health-care wearables. 
Best Buy Device upgrades: Best Buy's same-store sales need a jolt, and that could come from people sitting on older computers and devices who want the latest and greatest in AI-powered personal computers and smartphones, including the new iPhone 16. Rate play: Mortgage rates haven't come down since the Federal Reserve's first rate cut in September. But when we do see a decline, it should lead to more homebuilding. That means new homeowners will need to fill up their places with big-ticket appliances and flat-screen TVs. BlackRock New growth prospects: The asset manager has had a great year of net inflows, and the market wants to see that momentum sustained. Its move into alternative investments like infrastructure will hopefully drive significant growth and open new revenue streams. Lower rates: If the Fed and other central banks keep cutting rates, that should enhance inflows into BlackRock's fixed-income and ETF offerings. That's because existing bonds become more attractive as rates fall. Bristol Myers Squibb Cobenfy launch succeeds: The company's new treatment for schizophrenia in adults was approved in September, and a better-than-expected rollout would be positive for Bristol Myers shares. Wall Street currently projects $187 million in revenue in 2025 and $620 million in 2026, according to FactSet. Broadcom AI stays hot: Broadcom's leadership in providing essential components for AI infrastructure, including co-designing custom chips for tech giants such as Alphabet, makes it a key beneficiary of the growing demand for AI technologies. So, the AI boom continuing apace would be good for Broadcom, like it would be for AMD. Smartphone market improves: This area has lately been a drag on Broadcom, so evidence that global smartphone shipments are recovering, especially for the iPhone, would be a welcome development. Broadcom provides connectivity chips for the iPhone. 
Constellation Brands Wine-and-spirits comeback: That business has hurt Constellation's overall growth rate during a period of strength for its top-selling Mexican beers, including Modelo and Corona. However, if management's recent strategy to focus on higher-end wines pays off, the stock could bounce. Divesting from this segment entirely, as Jim Cramer has suggested, is another option. Improving beer sales: Its beer unit needs to show that pockets of softness in the most recent quarter were just a short-term blip, not a festering issue that curtails topline growth. Cash flow bounty: Once capital investments for expanding brewing capacity peak, Constellation will be able to ramp up cash returns to shareholders through higher dividends and buybacks. That could begin in a few quarters. Costco More stores around the world: Costco's runway to open more warehouses outside of the U.S. is an underappreciated growth driver. The company has said it expects more than 10 new locations outside the U.S. next year. Membership growth quickens: Evidence that Costco's card-scanner rollout, designed to crack down on multiple people using the same membership, is creating a "Netflix moment" would be a clear-cut positive. Coterra Energy LNG export approvals: Trump making good on its reported desire to restart export permits for LNG would play right into Coterra's hands. President Joe Biden paused them. Deregulation in general could lower Coterra's operational costs. The big data center buildout: Booming power consumption from data centers in the coming years offers a growing market for Coterra's natural gas . CrowdStrike IT outage in rear view: While CrowdStrike stands to benefit from the increase in cyber-attacks and threats, the company needs to move past the global IT outage it caused this summer. To judge this, analysts are keeping a close eye on topline growth. Customer churn hasn't been a big issue, but some deals have been paused. 
Danaher China recovery: Economic stimulus in China needs to start showing up in Danaher's order book, which would provide a major boost to growth in 2025. IPO floodgates open: A resurgence of biotech IPOs would create a cash windfall for one of Danaher's key customer bases. Some of the money will surely go toward buying Danaher's tools and products used in the drug development process. Dover More energy, more cooling: Continued spending on data center overhauls should translate into more orders for Dover's thermal connectors, which are used in the liquid cooling of AI servers. It's one of Dover's key growth areas, and investors want more evidence that its topline is picking up speed. Bioprocessing bounces back : The still-nascent recovery in the biopharmaceutical industry needs to show further progress, translating into more orders for Dover's pumps and single-use components for manufacturing. DuPont The breakup: DuPont is on track to split by December 2025 into three standalone public companies: a water business, an electronics-focused firm, and the remaining DuPont, serving health care and construction markets, among others. It's the best way to unlock significant value. A sharper focus on AI : Unleashing DuPont's electronics assets will allow the standalone company to better serve customers tied to the AI boom by enabling smart technologies as well as next-generation semiconductors and circuit boards. Eaton More power needed: Eaton is helping companies meet the increased electricity demand fueled by the rapid expansion of AI, with its electrical equipment playing a vital role in powering data centers and AI infrastructure. Megatrend momentum: Eaton's products are used in a bunch of big growth trends like reindustrialization and electrification that should keep sales humming for a long time. Only 16% of the 504 projects in its backlog have been started, as of its late October earnings report. 
Eli Lilly Wider GLP-1 adoption: Eli Lilly's GLP-1 drugs, Mounjaro for diabetes and Zepbound for treating obesity, are best sellers and should be for many years to come. That's especially true if the active ingredient in these drugs gets approval for other conditions such as heart health and sleep apnea. Solving supply shortages: Lilly has invested billions of dollars in its GLP-1 manufacturing operations. Availability of the drugs, which require highly specialized factories and workers, is still tight. Ramping up manufacturing capacity will help bring more supply to the market and end the ability of other companies to compound knockoffs. GE Healthcare Easier sell: Declining interest rates support GE Healthcare's growth by lowering borrowing costs for its customers who must shell out big bucks for its expensive MRI and CT scanners. More China: Health care stimulus measures in China working their way into the market and recovering demand in the world's second-largest economy could drive a rebound in orders there for GEHC. Home Depot Lower mortgages: Mortgage rates, which have been going in the wrong direction since the Fed has been cutting rates, will eventually come down. That will lift the housing market and spur homebuilding and improvement projects. Home Depot will be right there to serve both the pro and the do-it-yourself customers. Tailwinds into 2025: Third-quarter sales related to Hurricanes Helene and Milton were a tailwind to revenue growth. The company also raised its full-year 2024 outlook across several key metrics. It appears that Home Depot is on the verge of an earnings rebound heading into next year. Honeywell Business split: Honeywell shares surged following Elliott Management's disclosure of a $5 billion stake and push for a breakup of the industrial conglomerate. Splitting up Honeywell into two companies, aerospace and automation, could unlock significant value, with Elliott estimating up to 75% upside over the next two years. 
Linde Economic improvement: Linde's stronghold as an industrial gas leader with what Jim calls "oligopolistic" pricing power ensures the company can withstand an uncertain economy. As interest rates decline, economic activity could accelerate, increasing demand for industrial gases and boosting Linde's volumes and earnings. Beating conservative guidance : Any uptick in the economy would help keep Linde's under-promise, over-deliver run intact as management issued a fourth-quarter outlook assuming an economic contraction. Linde normally gives guidance assuming a neutral economy. Meta Platforms AI monetization: Meta has successfully used AI to keep users on Instagram and Facebook longer, thanks to its suggested Reels and other posts. AI also has made ad targeting better, so marketers want to spend more dollars across Meta's apps. That needs to be sustained to justify Meta's heavy spending on AI chips. Microsoft Azure capacity meeting demand: Microsoft's cloud-computing service Azure has faced the high-quality problem of too much demand for its availability capacity. Its AI services are contributing to that dynamic. Nevertheless, correcting this dynamic should translate into faster revenue growth rates. Artificial Intelligence ROI: Microsoft's strategic investments in AI, including its CoPilot suite of AI-powered tools, are beginning to bear fruit. While it has pressured short-term profits, the monetization of these tools should lead to more sales. Morgan Stanley Lower rates: The Federal Reserve in September lowered interest rates for the first time in four years, beginning a loosening cycle that's expected to continue into 2025 as the central bank looks to achieve a soft landing for the U.S. economy. A rebound in the IPO market is likely as stocks become more attractive to own than bonds. It should lower the cost of capital for would-be acquirers, thus increasing M & A activity. Both trends play to Morgan Stanley's strength in investment banking. 
Deregulation: The Trump administration is likely to usher in deregulation and a more deal-friendly environment than under the Biden administration another boost to M & A. Nextracker Renewable energy adoption: On the face of it, the GOP sweeping this election should spell disaster for Nextracker , a key provider of solar tracker systems. But there's hope the incoming Trump administration provides clarity on its policy toward renewable energy and specifically keeps in place some of the favorable tax credits under the Biden administration. Demand dynamics: Artificial intelligence and the data centers that fuel it require multiples of the current energy output, creating a greater need for solar. Nvidia Accelerating AI demand: Nvidia's recent earnings call made it clear that we're still in the early innings of the AI revolution that will fuel demand for the company's chips well into 2025 and beyond despite the concerns of some on Wall Street. As Jim Cramer recently pointed out on "Mad Money," the demand simply isn't slowing down. "The demand is accelerating because the payoff is so great," he said. "According to [CEO Jensen Huang], for every dollar their customers put in, they're making five smackers. That means they have no choice but to buy Nvidia's chips." Hyperscaler spending: Some of Nvidia's biggest customers, like Microsoft , Meta , Amazon and Tesla will have to keep buying the chips to build out their AI infrastructure. Palo Alto Networks Bigger deals: Cybersecurity is a secular growth market: As the number of bad actors grows, companies can't afford to not invest in defense. Industry leader Palo Alto stands out for its advanced cybersecurity solutions and strategy of bundling them altogether (what it calls "platformization"), which is leading to megadeals. 
During its most recent quarter, the company said it signed a transaction worth more than $50 million with a large technology firm and a more than $20 million deal with a financial services firm, among other deals. Stanley Black & Decker Housing rebound: Falling interest rates are likely to kickstart the sagging homebuilding market, increasing demand for Stanley's tools. Cost cuts keep bearing fruit: Ongoing cost-cutting measures are improving operational efficiency and profitability, positioning the company for stronger performance as the economic cycle turns. Starbucks Improvement in global sales: Under CEO Brian Niccol's leadership, efforts to revitalize Starbucks' sales through a simplified menu and rebranding as a community-focused coffee house should drive global sales growth, leading to more investor confidence that a turnaround is taking shape. Better margins: By focusing on profitable growth, while continuing strategic investments, Starbucks is positioning itself for stronger margins, which should translate to a higher stock price. Walt Disney Parks bounce back: A recovery in Disney's theme parks business is expected to lift overall revenue and profitability, providing a strong tailwind for the stock. Streaming profits accelerating: Continued growth in streaming profits could become a key driver of stock gains since it would offset the languishing performance in linear television. CEO replacement: While not an immediate catalyst, the eventual announcement of a new CEO by 2026 is expected to improve investor sentiment and signal fresh direction for the company's future leadership. TJX Companies Cautious consumer spending: TJX's off-price model does well as consumers turn cautious, as shoppers prioritize value and turn to TJX for affordable high-quality gifts. 
Overseas expansion: The company's gradual expansion into international markets offers a new avenue for revenue and profit growth, which could help sustain momentum and drive the stock higher as it captures market share abroad. Salesforce Adoption of AI agents: Salesforce's new AI agent, Agentforce, is driving strong demand as it automates tasks and boosts productivity for customers, positioning the company for accelerated growth in deals. Wells Fargo Lifting of the asset cap: The removal of the Fed-imposed asset cap, implemented in 2018, would enable Wells Fargo to grow revenues and expand its balance sheet. While the exact timing remains uncertain, there's some hope that it could occur in 2025. (See here for a full list of the stocks in Jim Cramer's Charitable Trust, the portfolio the Club uses) As a subscriber to the CNBC Investing Club with Jim Cramer, you will receive a trade alert before Jim makes a trade. Jim waits 45 minutes after sending a trade alert before buying or selling a stock in his charitable trust's portfolio. If Jim has talked about a stock on CNBC TV, he waits 72 hours after issuing the trade alert before executing the trade. THE ABOVE INVESTING CLUB INFORMATION IS SUBJECT TO OUR TERMS AND CONDITIONS AND PRIVACY POLICY, TOGETHER WITH OUR DISCLAIMER. NO FIDUCIARY OBLIGATION OR DUTY EXISTS, OR IS CREATED, BY VIRTUE OF YOUR RECEIPT OF ANY INFORMATION PROVIDED IN CONNECTION WITH THE INVESTING CLUB. NO SPECIFIC OUTCOME OR PROFIT IS GUARANTEED.
Content Synthesis/Decision Making
Business and Financial Operations
null
null
null
null
null
null
news
james_robert
Show HN: The Future of Work Collection – which tool would you use?
Hi HN,I’m from the product team at Autonomous, working on tools to empower you to work from anywhere—enhancing both mind and body focus to supercharge productivity.Because let’s face it: your workspace plays a huge role in bringing your next big idea to life.Check out our Future of Work Collection: https://www.autonomous.ai/future-of-workWe’re experimenting with:1.WorkPod Solar – a backyard office powered by the sun.2.ErgoChair Ultra 2 – ergonomic support for marathon coding sessions.3.SmartDesk 5 – designed for dynamic workflows.4.AnonAI – on-premise AI for hackers and builders.We’d love your thoughts:- What stands out to you here?- What would you improve?- What tools do you need to build your future?We’re here to listen and learn. Thanks for helping us shape what’s next!Comments URL: https://news.ycombinator.com/item?id=42204985Points: 1# Comments: 0
https://www.autonomous.ai/future-of-work
null
2024-11-21T15:08:23Z
The Future of Work Collection
Unknown
Unknown
null
null
null
null
null
null
news
Hayden Field
OpenAI to present plans for U.S. AI strategy and an alliance to compete with China
OpenAI's official blueprint for U.S. AI infrastructure involves AI economic zones and government projects funded by private investors, according to a document.
https://www.cnbc.com/2024/11/13/openai-to-present-plans-for-us-ai-strategy-and-an-alliance-to-compete-with-china.html
https://image.cnbcfm.com…67&w=1920&h=1080
2024-11-13T14:25:27Z
OpenAI's official "blueprint for U.S. AI infrastructure" involves artificial intelligence economic zones, tapping the U.S. Navy's nuclear power experience and government projects funded by private investors, according to a document viewed by CNBC, which the company plans to present on Wednesday in Washington, D.C.The blueprint also outlines a North American AI alliance to compete with China's initiatives and a National Transmission Highway Act "as ambitious as the 1956 National Interstate and Defense Highways Act."In the document, OpenAI outlines a rosy future for AI, calling it "as foundational a technology as electricity, and promising similarly distributed access and benefits." The company wrote that investment in U.S. AI will lead to tens of thousands of jobs, GDP growth, a modernized grid that includes nuclear power, a new group of chip manufacturing facilities and billions of dollars in investment from global funds.Now that Donald Trump is President-elect, OpenAI has made clear its plans to work with the new administration on AI policy, and the company's Wednesday presentation outlines its plans.Trump plans to repeal President Biden's executive order on AI, according to his campaign platform, stating that it "hinders AI Innovation, and imposes Radical Leftwing ideas on the development of this technology" and that "in its place, Republicans support AI Development rooted in Free Speech and Human Flourishing."OpenAI's presentation outlines AI economic zones co-created by state and federal governments "to give states incentives to speed up permitting and approvals for AI infrastructure." 
The company envisions building new solar arrays and wind farms and getting unused nuclear reactors cleared for use."States that provide subsidies or other support for companies launching infrastructure projects could require that a share of the new compute be made available to their public universities to create AI research labs and developer hubs aligned with their key commercial sectors," OpenAI wrote.OpenAI also wrote that it foresees a "National Transmission Highway Act" that could expand power, fiber connectivity and natural gas pipeline construction. The company wrote it needs "new authority and funding to unblock the planning, permitting, and payment for transmission," and that existing procedures aren't keeping pace with AI-driven demand.The blueprints say, "The government can encourage private investors to fund high-cost energy infrastructure projects by committing to purchase energy and other means that lessen credit risk."OpenAI also foresees a North American AI alliance of Western countries that could eventually expand to a global network, such as a "Gulf Cooperation Council with the UAE and others in that region."The company also outlined its vision for nuclear power, writing that although China "has built as much nuclear power capacity in 10 years as the US built in 40," the U.S. Navy operates about 100 small modular reactors (SMRs) to power naval submarines, and leveraging the Navy's expertise could lead to building more civilian SMRs.OpenAI's infrastructure blueprint aligns with what Chris Lehane, OpenAI's head of global policy, told CNBC in a recent interview. He sees the Midwest and Southwest as potential core areas for AI investment."Parts of the country that have been 'left behind,' as we enter the digital age, where so much of the economics and particularly economic benefits flow to the two coasts... 
Areas like the midwest and the southwest are going to be the types of places where you have the land and ability to do wind farms and to do solar facilities, and potentially to do some part of the energy transition potentially do nuclear facilities," Lehane said.The infrastructure, Lehane explained, is contingent on the U.S. maintaining a lead over China in AI."[In] Kansas and Iowa, which sits on top of an enormous amount of agricultural data, think about standing up a data center," Lehane said. "One gigawatt, which is a lot, taking, you know, 200-250 megawatts, a quarter of that, and doing something with their public university systems to create an agricultural-based LLM or inference model that would really serve their community but also make them a center of agricultural AI."Lehane cited an estimate that the US will need 50 gigawatts of energy by 2030 to support the AI industry's needs and to compete against China, especially when the country approved 20 nuclear reactors over the past two years and 11 more for next year."We don't have a choice," Lehane said. "We do have to compete with that."WATCH: OpenAI launches ChatGPT search
Unknown
Life, Physical, and Social Science/Others
null
null
null
null
null
null
news
null
I can now run a GPT-4 class model on my laptop
Meta’s new Llama 3.3 70B is a genuinely GPT-4 class Large Language Model that runs on my laptop. Just 20 months ago I was amazed to see something that felt …
https://simonwillison.net/2024/Dec/9/llama-33-70b/
https://static.simonwill…ebench-llama.jpg
2024-12-09T15:16:47Z
9th December 2024

Meta's new Llama 3.3 70B is a genuinely GPT-4 class Large Language Model that runs on my laptop. Just 20 months ago I was amazed to see something that felt GPT-3 class run on that same machine. The quality of models that are accessible on consumer hardware has improved dramatically in the past two years. My laptop is a 64GB MacBook Pro M2, which I got in January 2023, two months after the initial release of ChatGPT. All of my experiments running LLMs on a laptop have used this same machine. In March 2023 I wrote that Large language models are having their Stable Diffusion moment after running Meta's initial LLaMA release (think of that as Llama 1.0) via the then-brand-new llama.cpp. I said: "As my laptop started to spit out text at me I genuinely had a feeling that the world was about to change." I had a moment of déjà vu the day before yesterday, when I ran Llama 3.3 70B on the same laptop for the first time. Meta claim that: "This model delivers similar performance to Llama 3.1 405B with cost effective inference that's feasible to run locally on common developer workstations." Llama 3.1 405B is their much larger best-in-class model, which is very much in the same weight class as GPT-4 and friends. Everything I've seen so far from Llama 3.3 70B suggests that it holds up to that standard. I honestly didn't think this was possible: I assumed that anything as useful as GPT-4 would require many times more resources than are available to me on my consumer-grade laptop. I'm so excited by the continual efficiency improvements we're seeing in running these impressively capable models. In the proprietary hosted world it's giving us incredibly cheap and fast models like Gemini 1.5 Flash, GPT-4o mini and Amazon Nova. In the openly licensed world it's giving us increasingly powerful models we can run directly on our own devices.

How I ran Llama 3.3 70B on my machine using Ollama

I don't expect that this model would work well with much less than my 64GB of RAM. 
The first time I tried it consumed every remaining bit of available memory and hard-crashed my Mac! For my second attempt I made sure not to have Firefox and VS Code running at the same time and it worked just fine.

I used Ollama. Install that, then run its pull command to fetch the model; this fetched 42GB of data into my ~/.ollama/models/blobs directory.

I used my LLM tool to run prompts against the model, since that way both the prompt and the response would be automatically logged to SQLite. First I installed the llm-ollama plugin. Then, with the Ollama local server running, I could list the available models and see this in the list:

Ollama: llama3.3:latest (aliases: llama3.3)

Putting the model through its paces

This means I can run prompts like so:

llm -m llama3.3 'Write a letter to the mayor of Half Moon Bay CA advocating for the installation of cosy boxes for Pelicans in the harbor'

And get back a response that started like this:

[Your Name]
[Your Address]
Half Moon Bay, CA [Zip Code]
[Email Address]
[Phone Number]
[Date]

The Honorable Mayor, Deborah Penrose
City of Half Moon Bay
501 Main Street
Half Moon Bay, CA 94019

Dear Mayor Penrose,

I am writing to propose an innovative and unique initiative that would not only enhance the beauty of our picturesque harbor but also provide a safe haven for one of our beloved coastal residents: the California Brown Pelican. I am excited to advocate for the installation of cozy boxes specifically designed for pelicans in the Half Moon Bay harbor. [...]

(Deborah Penrose was the mayor of Half Moon Bay for a single year from December 2016 to December 2017, so a hint of some quite finely grained world knowledge there, even if it's not relevant for the present day.)

Here's the full transcript, including the response I got when I ran the follow-up llm -c 'translate that to spanish'.

Next I tried a simple coding challenge.
I love using Claude Artifacts to build simple single page applications; can Llama 3.3 on my laptop do that too?

llm -m llama3.3 'produce HTML with embedded CSS and JavaScript - no react, no dependencies - for an app that takes a URL to a YouTube video and outputs images along with visible URLs for all known thumbnails for that video'

Here's the code it produced (the full transcript is here):

<!DOCTYPE html>
<html lang="en">
<head>
<meta charset="UTF-8">
<title>YouTube Thumbnail Extractor</title>
<style>
  body { font-family: Arial, sans-serif; margin: 20px; }
  #thumbnails { display: flex; flex-wrap: wrap; }
  .thumbnail { margin: 10px; }
  img { max-width: 150px; max-height: 150px; object-fit: cover; }
</style>
</head>
<body>
<h2>YouTube Thumbnail Extractor</h2>
<input type="text" id="youtubeUrl" placeholder="Enter YouTube video URL">
<button onclick="extractThumbnails()">Extract Thumbnails</button>
<div id="thumbnails"></div>
<script>
function extractThumbnails() {
  const youtubeUrl = document.getElementById('youtubeUrl').value;
  const videoId = getYoutubeVideoId(youtubeUrl);
  if (!videoId) {
    alert("Could not parse YouTube video ID from the URL.");
    return;
  }
  const thumbnailSizes = ['default', 'mqdefault', 'hqdefault', 'sddefault', 'maxresdefault'];
  const thumbnailsDiv = document.getElementById('thumbnails');
  thumbnailsDiv.innerHTML = ''; // Clear previous results
  thumbnailSizes.forEach(size => {
    const thumbnailUrl = `https://img.youtube.com/vi/${videoId}/${size}.jpg`;
    const img = document.createElement('img');
    img.src = thumbnailUrl;
    img.alt = `${size} thumbnail`;
    img.className = 'thumbnail';
    const link = document.createElement('a');
    link.href = thumbnailUrl;
    link.text = size;
    link.style.display = 'block';
    const div = document.createElement('div');
    div.appendChild(img);
    div.appendChild(link);
    thumbnailsDiv.appendChild(div);
  });
}
function getYoutubeVideoId(url) {
  const regExp = /^.*((youtu.be\/)|(v\/)|(\/u\/\w\/)|(embed\/)|(watch\?))\??v?=?([^#&?]*).*/;
  const match = url.match(regExp);
  return (match && match[7].length == 11) ? match[7] : false;
}
</script>
</body>
</html>

Again, we're illustrating some world knowledge here: it knows that YouTube videos are generally available as default, mqdefault, hqdefault, sddefault and maxresdefault.

Here's the hosted page, and this is what it looks like when run against this YouTube URL:

It's not as good as the version I iterated on with Claude, but this still shows that Llama 3.3 can one-shot a full interactive application while running on my Mac.

How does it score?

It's always useful to check independent benchmarks for this kind of model. One of my current favorites for that is LiveBench, which calls itself a challenging, contamination-free LLM benchmark and tests a large array of models with a comprehensive set of different tasks.

llama-3.3-70b-instruct-turbo currently sits in position 19 on their table, a place ahead of Claude 3 Opus (my favorite model for several months after its release in March 2024) and just behind April's GPT-4 Turbo and September's GPT-4o.

Honorable mentions

Llama 3.3 is currently the model that has impressed me the most that I've managed to run on my own hardware, but I've had several other positive experiences recently. Last month I wrote about Qwen2.5-Coder-32B, an Apache 2.0 licensed model from Alibaba's Qwen research team that also gave me impressive results with code.

A couple of weeks ago I tried another Qwen model, QwQ, which implements a similar chain-of-thought pattern to OpenAI's o1 series but again runs comfortably on my own device.

Meta's Llama 3.2 models are interesting as well: tiny 1B and 3B models (those should run even on a Raspberry Pi) that are way more capable than I would have expected, plus Meta's first multi-modal vision models at 11B and 90B sizes. I wrote about those in September.

Is performance about to plateau?

I've been mostly unconvinced by the ongoing discourse around LLMs hitting a plateau. The areas I'm personally most excited about are multi-modality (images, audio and video as input) and model efficiency.
Both of those have had enormous leaps forward in the past year.

I don't particularly care about AGI. I want models that can do useful things that I tell them to, quickly and inexpensively, and that's exactly what I've been getting more of over the past twelve months. Even if progress on these tools entirely stopped right now, the amount I could get done with just the models I've downloaded and stashed on a USB drive would keep me busy and productive for years.

Bonus: running Llama 3.3 70B with MLX

I focused on Ollama in this article because it's the easiest option, but I also managed to run a version of Llama 3.3 using Apple's excellent MLX library, which just celebrated its first birthday.

Here's how I ran the model with MLX, using uv to fire up a temporary virtual environment:

uv run --with mlx-lm --python 3.12 python

This gave me a Python interpreter with mlx-lm available. Then I ran this:

from mlx_lm import load, generate
model, tokenizer = load("mlx-community/Llama-3.3-70B-Instruct-4bit")

This downloaded 37G from mlx-community/Llama-3.3-70B-Instruct-4bit to ~/.cache/huggingface/hub/models--mlx-community--Llama-3.3-70B-Instruct-4bit.

Then:

prompt = "Generate an SVG of a pelican riding a bicycle, start with <svg>"
messages = [{"role": "user", "content": prompt}]
prompt2 = tokenizer.apply_chat_template(
    messages, tokenize=False, add_generation_prompt=True
)
response = generate(model, tokenizer, prompt=prompt2, verbose=True)

Which gave me:

<svg xmlns="http://www.w3.org/2000/svg" width="400" height="300" viewBox="0 0 400 300">
  <!-- Sky -->
  <rect x="0" y="0" width="400" height="300" fill="#87CEEB" rx="0" />
  <!-- Ground -->
  <rect x="0" y="250" width="400" height="50" fill="#964B00" rx="0" />
  <!-- Pelican -->
  <g transform="translate(150, 150)">
    <!-- Body -->
    <ellipse cx="0" cy="0" rx="30" ry="40" fill="#FFFFFF" stroke="#000000" stroke-width="2" />
    <!-- Beak -->
    <path d="M 0 -40 L 20 -60 L 40 -40 Z" fill="#FFFF00" stroke="#000000" stroke-width="2" />
    <!-- Legs -->
    <line x1="0" y1="40" x2="-20" y2="80" stroke="#000000" stroke-width="5" />
    <line x1="0" y1="40" x2="20" y2="80" stroke="#000000" stroke-width="5" />
    <!-- Eyes -->
    <circle cx="-15" cy="-20" r="5" fill="#000000" />
    <circle cx="15" cy="-20" r="5" fill="#000000" />
  </g>
  <!-- Bicycle -->
  <g transform="translate(200, 220) scale(0.7, 0.7)">
    <!-- Frame -->
    <rect x="-50" y="-20" width="100" height="40" fill="#808080" rx="10" />
    <!-- Wheels -->
    <circle cx="-35" cy="20" r="30" fill="#808080" />
    <circle cx="35" cy="20" r="30" fill="#808080" />
    <!-- Pedals -->
    <rect x="-45" y="40" width="10" height="20" fill="#808080" rx="5" />
    <rect x="35" y="40" width="10" height="20" fill="#808080" rx="5" />
    <!-- Seat -->
    <rect x="-20" y="-40" width="40" height="20" fill="#808080" rx="5" />
  </g>
  <!-- Pelican on Bicycle -->
  <g transform="translate(200, 150)">
    <!-- Pelican's hands on handlebars -->
    <line x1="-20" y1="0" x2="-40" y2="-20" stroke="#000000" stroke-width="5" />
    <line x1="20" y1="0" x2="40" y2="-20" stroke="#000000" stroke-width="5" />
    <!-- Pelican's feet on pedals -->
    <line x1="0" y1="40" x2="-20" y2="60" stroke="#000000" stroke-width="5" />
    <line x1="0" y1="40" x2="20" y2="60" stroke="#000000" stroke-width="5" />
  </g>
</svg>

Followed by:

Prompt: 52 tokens, 49.196 tokens-per-sec
Generation: 723 tokens, 8.733 tokens-per-sec
Peak memory: 40.042 GB

Here's what that looks like: Honestly, I've seen worse.
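Those throughput stats also make it easy to back out rough wall-clock timings; a quick sanity-check sketch using only the token counts and rates reported above:

```python
# Rough wall-clock timings implied by the MLX stats reported above:
# 52 prompt tokens at 49.196 tokens/sec, 723 generated tokens at 8.733 tokens/sec.
prompt_tokens, prompt_rate = 52, 49.196
gen_tokens, gen_rate = 723, 8.733

prompt_seconds = prompt_tokens / prompt_rate  # time spent processing the prompt
gen_seconds = gen_tokens / gen_rate           # time spent generating tokens
total_seconds = prompt_seconds + gen_seconds

print(f"prompt: {prompt_seconds:.1f}s, generation: {gen_seconds:.1f}s, total: {total_seconds:.1f}s")
```

Generation dominates: around 84 seconds end to end for the pelican SVG at under 9 tokens per second.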
Unknown
Unknown
null
null
null
null
null
null
news
Andrew Liszewski
LG’s lightweight Gram laptops get new Intel chips and offline AI features
LG is upgrading its lightweight Gram laptop lineup with Intel Arrow Lake and Lunar Lake processors, plus LG AI software with both cloud and local offline features.
https://www.theverge.com/2024/12/31/24332438/lg-gram-pro-laptops-2025-ai-copilot
https://cdn.vox-cdn.com/…0931/lggram1.jpg
2024-12-31T09:00:00Z
LG's lightweight Gram laptops get new Intel chips and offline AI features / The new Gram lineup includes on-device AI-powered features, as well as Intel Arrow Lake and Lunar Lake CPUs. By Andrew Liszewski, a senior reporter who's been covering and reviewing the latest gadgets and tech since 2011, but has loved all things electronic since he was a kid. Dec 31, 2024, 9:00 AM UTC.

LG has announced additions to its ultra-light Gram and Gram Pro laptop lineup, adding cloud-based and on-device AI-powered features that go beyond its current Gram laptops. The 16-inch Gram Pro will also be the first Copilot Plus PC in the LG Gram lineup and is further distinguished as the only model using the Intel Lunar Lake Core Ultra V-Series processors. The 17-inch Gram Pro and 16-inch 2-in-1 use Intel's Arrow Lake Core Ultra H-Series processors.

The LG Gram Pro will be available in 17-inch and 16-inch models featuring 2,560 x 1,600 displays, up to 32GB of LPDDR5X memory, and up to 2TB Gen4 NVMe SSDs. The 16-inch Gram Pro and the 16-inch LG Gram Pro 2-in-1 will have Intel Arc GPUs, while the 17-inch Gram Pro will instead feature an Nvidia GeForce RTX 4050 graphics card.

Gram Chat On-Device, which uses a small language model derived from LG AI Research's EXAONE large language model, powers offline features, including Time Travel, which lets users quickly revisit web pages, documents, videos and audio files they've recently accessed. LG's software is adding tools similar to Microsoft's Copilot Plus suite and Apple Intelligence, but given the trouble Microsoft has had with Recall, we'll have to wait and see how it all measures up.
It also might make those features available on more PCs, but LG hasn't specified which AI features will be available on which laptops in the new Gram lineup.

Gram Chat Cloud is powered by OpenAI's GPT-4o and responds to inquiries using vast web-based datasets for detailed and comprehensive responses while integrating with calendar and email services. It requires an active internet connection and will only be free for the first year.

All of the new Gram laptops also support LG's Gram Link 2.0, which streamlines document and file sharing with other PCs and iOS or Android-based smartphones. It also allows incoming phone calls to be answered through the Gram laptops, so you don't have to temporarily swap headsets or Bluetooth headphones to another device.

LG will also introduce its entry-level Gram Book to the US market next year. Powered by an Intel Core i5 processor, it features a 15.6-inch 60Hz full HD display, a 720p webcam, and configurations of up to 1TB of SSD storage and 16GB of DDR4 memory.
Digital Assistance/Content Synthesis
Unknown
null
null
null
null
null
null
news
Jean Paik,Julia Hood
Business leaders share 5 ways they're taking AI from pilot to use case
Leaders from Mastercard, Infosys, IBM, and more share how they created AI use cases at their companies.
https://www.businessinsider.com/business-leaders-share-how-they-integrated-ai-company-pilot-use-2024-12
https://i.insider.com/6764a4556c531c315b897c6c?width=1200&format=jpeg
2024-12-23T18:42:31Z
Getty Images; Andrius Banelis for BI

In the business world, there are few areas that artificial intelligence hasn't touched. Many industries are rushing to adopt AI, and the technology is changing how employees collaborate and complete tasks. Generative AI is a major buzzword for business leaders. But actually integrating AI can be a different story.

"A lot of our clients have dozens of AI pilots everywhere," Jack Azagury, the group chief executive for consulting at Accenture, said at one Workforce Innovation roundtable. "Very few have a coherent business case and a true reinvention and transformation."

How do companies move forward as the novelty of AI wears off? Business Insider's Julia Hood asked members of the Workforce Innovation board how they transitioned their AI pilots into real-world use cases. Board members shared five major ways their companies were moving AI from theory to operations.

"Before we go and tell our clients to embark on AI fully, we want to be an AI-first organization," said Anant Adya, an executive vice president, service-offering head, and head of Americas delivery at Infosys. "We want to show our clients we are using AI, whether it is in HR when it comes to driving better employee experience or when it comes to recruitment."

Members also highlighted employee training and peer-to-peer learning opportunities. The roundtable participants were:

Anant Adya, an executive vice president, service-offering head, and head of Americas Delivery at Infosys.
Lucrecia Borgonovo, a chief talent and organizational-effectiveness officer at Mastercard.
Neil Murray, the CEO of Work Dynamics at JLL.
Justina Nixon-Saintil, a vice president and chief impact officer at IBM.
Marjorie Powell, a chief HR officer and senior vice president at AARP.

The following has been edited for length and clarity.

Identify early adopters, like human resources

Nixon-Saintil: Because we provide these platforms and solutions to clients, we are usually client zero. We implemented AI across our business and multiple functions, and one of the first things we did was our AskHR product, which I think answered over 94% of questions employees had. HR employees now spend time doing higher-order work and partnerships with business units instead of answering basic questions that a virtual assistant can answer. I think that's when you start seeing a lot of the benefits of it.

Borgonovo: HR has been leading the way in terms of embedding AI to enhance the employee experience end to end, right before you hire somebody all the way to after they leave the organization. There are tons of opportunities to improve performance and productivity and provide greater personalization.

Invest in ongoing training

Adya: There are certain AI certifications and courses that everybody has to take to be knowledgeable about AI. So we are driving education in terms of what is the impact of AI, what is gen AI, what are LLMs, and how you look at use cases. And certainly educating everybody that it's not about job losses but about amplifying your potential to do more.

Powell: We have hands-on skill building. This past year we posted over 20 AI workshops helping teams integrate AI into their work. We really encourage our staff to participate. We have a product we're using behind our firewall, so they can engage and play with it. We're just telling them go ahead and try to break it, so they can give us feedback on what's working. There was a team of people who said we want to see how you could use AI with PowerPoint or Excel. And they're finding, well, it's not so good in those things. But as it continues to grow, they'll be ready for that, and they'll know what it was able to do and what it wasn't. I think it's just making it fun, and that way it's not so scary.

Murray: Our internal large language model is now a widget on everybody's dashboard that is accessible on your landing page. Training is super important here to make people comfortable with it. Even if it's just an online module, you have to get people comfortable.

Nixon-Saintil: We've also done companywide upskilling. We had two Watsonx challenges. Watsonx is our AI data platform. This is one of the ways we've upskilled a majority of the organization. The outcome of that is there are some great ideas that employees actually ideated, and they're now implementing those ideas and solutions in different functions.

Borgonovo: Employees want to use AI, and I think they're eager to learn how to use AI to augment their jobs. For that, we built a three-tiered learning approach. One is democratizing access for everybody and building general knowledge of AI. The second tier is much more role-specific. How do we drive new ways of working by having people in different roles embrace AI tools? Software engineering, consulting, sales — you name it. And then something we definitely want to build for the future is thinking proactively about how you re-skill people whose roles may be impacted by AI so they can become more comfortable doing high-level tasks or can shift to a different type of role that is emerging within the organization.

The other piece is where we're seeing the greatest demand internally, which is for knowledge management. It's gathering information from a lot of different sources in a very easy way.

Another job family that is very eager to get their hands on new AI technology is software engineering. We have taken a very measured approach in deploying coding assistants within the software-engineering community. This year we did a pilot with a subset of them using coding assistants. The idea is to just learn and, based on our learning, scale more broadly across our software-engineering community in 2025. One of the really interesting learnings from this pilot was that the software engineers who were using the coding assistants probably the best were people who had received training. What we're learning is that before you start rolling out all of these technologies or AI-specific platforms for different job families, you have got to be really intentional about incorporating prompt training.

Unlock peer-to-peer learning

Powell: We have idea pitch competitions and a year-round idea pipeline program where people can put in ideas on how to use AI and share what they've learned. It sparks a lot of peer learning and creativity on our digital-first capabilities to help us with our digital transformation. Then we collaborate through community. We have a generative-AI community of practice. This is somewhat like how companies have employee resource groups; we have communities of practice as well. They give employees a space to share their techniques and learn from each other and stay ahead of evolving trends. They meet monthly, they have an executive sponsor, and they have all kinds of activities and learning opportunities.

Murray: As we monitored AI use and what sort of questions were being asked, we identified super users across all departments — so the people who were capable of developing the most evolved prompts. I suppose those prompts are now appearing in pull-down menus to help people who maybe aren't as advanced in their use of it, because prompting is a really important part of this. And so the super users are driving everybody else to show them what's possible across the organization.

Find customer pain points to solve

Borgonovo: One of the use cases that drives not only knowledge management but also efficiencies is around customer support. Customer support is probably one of the areas that has been leading the way. We have a customer onboarding process that can be very lengthy, very technical, involving hundreds of pages of documentation and reference materials.
It was our first use case for a chat-based assistant that we processed in terms of streamlining and creating greater efficiency and a much better customer experience.

Reinforce responsible leadership

Powell: We want our leaders, people leaders particularly, to guide employees to use AI effectively and responsibly. We want to make sure they're emphasizing privacy, policy, and efficiency. So we encourage managers to point the staff toward training that we offer, and we offer quite a bit of training.

Read the original article on Business Insider
Process Automation/Decision Making/Content Synthesis
Management/Business and Financial Operations
null
null
null
null
null
null
news
Thomas Maxwell
Nvidia’s New $250 ‘Jetson Computer’ Lets Hobbyists Play Around With AI Locally
The palm-sized Orin Nano is ideal for hobbyists and small companies looking to run AI applications locally, such as for use in industrial robots.
https://gizmodo.com/nvidias-new-250-jetson-computer-lets-hobbyists-play-around-with-ai-locally-2000539783
https://gizmodo.com/app/…ang-Nvidia-1.jpg
2024-12-17T17:05:36Z
Nvidia has released a new $249 version of its Jetson computer meant for running artificial intelligence applications locally. The palm-sized Orin Nano is said to double the speed and efficiency of its predecessor at half the price, and it can process roughly 70% more computational tasks, according to Nvidia.

The Orin Nano is ideal for hobbyists looking to train their own artificial intelligence applications, or for developers of robots and other industrial tools to run sophisticated applications without connecting to the cloud.

In a brief video on YouTube announcing the product, Nvidia CEO Jensen Huang stands in his kitchen as he pulls a tray out of his oven to reveal the small, palm-sized computer. Huang goes on to say the computer can process almost "seventy trillion" operations per second and draws just 25 watts of power.

Nvidia CEO Jensen Huang on Tuesday debuted Nvidia's latest Jetson computer for running AI programs locally. Credit: Nvidia

"A long time ago, we created a brand new type of processor, it was a robotics processor," Huang says. "Nobody understood what we were building at the time, and we imagined that someday these deep learning models would evolve and we would have robots for everything." The Jetson computers were intended to power robots, but today can also run large language models like Meta's Llama.

Sam Altman's startup World (formerly Worldcoin), which aims to authenticate human identity with iris scans, uses a Jetson module inside its Orb scanning device. In a blog post back in October, World said, "In its newest iteration, the Orb is equipped with the most advanced NVIDIA Jetson module with nearly 5x the AI performance over the previous version to enable even faster, more seamless proof of human verifications."

In essence, the Orin Nano is a portable brain that can be plugged into other hardware to power its AI functionality. Cloud hyperscalers like Amazon and Google charge for access to servers and AI models, and those costs can add up.
Certain applications, such as warehouse robots, might need guaranteed uptime and minimal latency, where connecting to a remote cloud hosting provider is not ideal. That being said, a computer like the Orin Nano will only be capable of running more lightweight AI applications; it will not replace Nvidia's high-end GPUs that cost tens of thousands of dollars and are capable of training and inferring from large-scale AI models. Still, if you are a tinkerer who wants to create some type of new AI-powered robot, the Orin Nano might be for you.
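The figures quoted in the article (roughly 70 trillion operations per second at 25 watts, for $249) make for an easy back-of-the-envelope efficiency calculation; the TOPS number below is the approximate value Huang cited, not an official spec sheet figure:

```python
# Back-of-the-envelope efficiency numbers from the figures quoted in the article.
tops = 70        # approximate trillions of operations per second, as quoted
watts = 25       # stated power draw
price_usd = 249  # launch price of the Orin Nano developer kit

tops_per_watt = tops / watts
gops_per_dollar = tops / price_usd * 1000  # billions of operations/sec per dollar

print(f"{tops_per_watt:.1f} TOPS per watt, ~{gops_per_dollar:.0f} GOPS per dollar")
```

Those ratios, not the absolute throughput, are what make an edge device like this attractive compared with cloud GPUs.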
Unknown
Others
null
null
null
null
null
null
news
Danny Gallagher
Yelp adds AI-powered scores to business pages
The crowd-sourced review site Yelp unveiled a new feature that uses AI and customer reviews to rate common facets of nightlife and food-related business. The new Review Insights feature is available now on the iOS version of the Yelp app, according to the company’s official blog.Review Insights aggregates customer reviews and feeds them into a large language model (LLM), which will assign specific aspects — like the vibe or service time — a rating out of 100. Supposedly it will be able to infer customer sentiment about these parts of a business "even when a review doesn’t explicitly mention one of the topics." Yelp will also be adding an AI-powered homepage in the coming weeks.Yelp has been implementing new AI features over the past year. The business review site added an auto-generated summary feature in January and an AI-powered Yelp Assistant in April.This article originally appeared on Engadget at https://www.engadget.com/ai/yelp-adds-ai-powered-scores-to-business-pages-195131103.html?src=rss
https://consent.yahoo.com/v2/collectConsent?sessionId=1_cc-session_2aad1360-f5e2-4c38-a7c4-86445d224330
null
2024-12-10T19:51:33Z
Content Synthesis/Prediction/Decision Making
Unknown
null
null
null
null
null
null
news
AJ Dellinger
OpenAI Announces New $200 Per Month ChatGPT Pro Subscription
The first question you should ask ChatGPT Pro: "Is this really worth $200 a month?"
https://gizmodo.com/openai-announces-new-200-per-month-chatgpt-pro-subscription-2000535006
https://gizmodo.com/app/…s-2185275106.jpg
2024-12-05T22:20:30Z
Computing ultimately comes down to 1s and 0s. For example: if you tack an extra 0 onto the $20 price tag for ChatGPT Plus, you'll have the cost of ChatGPT Pro, the newest premium tier of OpenAI's chatbot platform that offers access to the company's new "reasoning" model.

The new subscription option, marketed for engineers and researchers, will cost $200 per month and offers unlimited use of the company's GPT-4o and o1 models, as well as full access to the o1 model's "pro mode", which is designed to mimic human reasoning to answer more complex questions. The announcement was made as part of OpenAI's "12 Days of Shipmas", during which it plans to show off 12 new products in the lead-up to the holiday season.

OpenAI previewed o1 earlier this year with limited access to its o1-preview and o1-mini models, previously known by the codename Strawberry. The next-generation large language models showed off the company's new approach to complex computations through chain-of-thought reasoning. Basically, it is a chatbot capable of "thinking" before it responds to questions. Whereas chatbots operating on models like GPT-4o or GPT-4 might require refining prompts and questions in order to deliver a meaningful answer, o1 is designed to do all that work behind the scenes before responding.

The results, according to the company's benchmarking tests, are impressive. OpenAI claims the o1 model scored in the 89th percentile in programming competitions held by Codeforces and was able to correctly answer 83 percent of questions from an International Mathematics Olympiad qualifying test. By contrast, GPT-4o only managed to get 14 percent right.

But the model has its fair share of shortcomings, too. It's both slower and more expensive than GPT-4o and other models. In the preview version of o1, input tokens (essentially units of text that the model uses to parse a prompt) cost about three times the price of tokens for GPT-4o.
An analysis of o1 conducted by the AI developer platform Vellum found the reasoning model is 30 times slower than its predecessor. Reviews of the model also found that while it does seem more capable of tackling complex math problems and coding tasks, it's no better (and in some cases, worse) at answering simple questions. In fact, OpenAI's own help pages admit, "GPT-4o is still the best option for most prompts."

Of course, the new ChatGPT Pro subscription gives users access to both the GPT-4o and new o1 reasoning model, so you could always switch between options depending on your needs. Or, alternatively, you could spend $200 on anything else.
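The article's "about three times the price of GPT-4o tokens" claim translates directly into per-request cost estimates. A minimal sketch, where the $2.50-per-million GPT-4o input price and the 50,000-token prompt size are illustrative assumptions, not figures from this article:

```python
# Illustrative cost comparison based on the article's claim that o1-preview
# input tokens cost roughly 3x GPT-4o's. The base price below is an assumed
# example rate, not a figure quoted in this article.
GPT4O_INPUT_PER_MTOK = 2.50  # assumed USD per million input tokens
O1_MULTIPLIER = 3            # "about three times the price", per the article

def input_cost(tokens: int, per_mtok: float) -> float:
    """Dollar cost of `tokens` input tokens at `per_mtok` USD per million."""
    return tokens / 1_000_000 * per_mtok

prompt_tokens = 50_000  # e.g. a large pasted document
gpt4o_cost = input_cost(prompt_tokens, GPT4O_INPUT_PER_MTOK)
o1_cost = input_cost(prompt_tokens, GPT4O_INPUT_PER_MTOK * O1_MULTIPLIER)
print(f"GPT-4o: ${gpt4o_cost:.3f}  o1-preview: ${o1_cost:.3f}")
```

At these rates the absolute dollar amounts stay small for single prompts; the 3x multiplier matters mostly at sustained, high-volume usage.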
Decision Making/Content Synthesis/Prediction
Computer and Mathematical
null
null
null
null
null
null
news
null
Things we learned out about LLMs in 2024
A lot has happened in the world of Large Language Models over the course of 2024. Here’s a review of things we figured out about the field in the past …
https://simonwillison.net/2024/Dec/31/llms-in-2024/
https://static.simonwill…ena-dec-2024.jpg
2024-12-31T18:11:06Z
31st December 2024A lot has happened in the world of Large Language Models over the course of 2024. Heres a review of things we figured out about the field in the past twelve months, plus my attempt at identifying key themes and pivotal moments.This is a sequel to my review of 2023.In this article:The GPT-4 barrier was comprehensively brokenIn my December 2023 review I wrote about how We dont yet know how to build GPT-4OpenAIs best model was almost a year old at that point, yet no other AI lab had produced anything better. What did OpenAI know that the rest of us didnt?Im relieved that this has changed completely in the past twelve months. 18 organizations now have models on the Chatbot Arena Leaderboard that rank higher than the original GPT-4 from March 2023 (GPT-4-0314 on the board)70 models in total.The earliest of those was Googles Gemini 1.5 Pro, released in February. In addition to producing GPT-4 level outputs, it introduced several brand new capabilities to the fieldmost notably its 1 million (and then later 2 million) token input context length, and the ability to input video.I wrote about this at the time in The killer app of Gemini Pro 1.5 is video, which earned me a short appearance as a talking head in the Google I/O opening keynote in May.Gemini 1.5 Pro also illustrated one of the key themes of 2024: increased context lengths. Last year most models accepted 4,096 or 8,192 tokens, with the notable exception of Claude 2.1 which accepted 200,000. Today every serious provider has a 100,000+ token model, and Googles Gemini series accepts up to 2 million.Longer inputs dramatically increase the scope of problems that can be solved with an LLM: you can now throw in an entire book and ask questions about its contents, but more importantly you can feed in a lot of example code to help the model correctly solve a coding problem. 
LLM use-cases that involve long inputs are far more interesting to me than short prompts that rely purely on the information already baked into the model weights. Many of my tools were built using this pattern.

Getting back to models that beat GPT-4: Anthropic's Claude 3 series launched in March, and Claude 3 Opus quickly became my new favourite daily-driver. They upped the ante even more in June with the launch of Claude 3.5 Sonnet, a model that is still my favourite six months later (though it got a significant upgrade on October 22, confusingly keeping the same 3.5 version number; Anthropic fans have since taken to calling it Claude 3.6).

Then there's the rest. If you browse the Chatbot Arena leaderboard today (still the most useful single place to get a vibes-based evaluation of models) you'll see that GPT-4-0314 has fallen to around 70th place. The 18 organizations with higher scoring models are Google, OpenAI, Alibaba, Anthropic, Meta, Reka AI, 01 AI, Amazon, Cohere, DeepSeek, Nvidia, Mistral, NexusFlow, Zhipu AI, xAI, AI21 Labs, Princeton and Tencent.

Training a GPT-4 beating model was a huge deal in 2023. In 2024 it's an achievement that isn't even particularly notable, though I personally still celebrate any time a new organization joins that list.

Some of those GPT-4 models run on my laptop

My personal laptop is a 64GB M2 MacBook Pro from 2023. It's a powerful machine, but it's also nearly two years old now, and crucially it's the same laptop I've been using ever since I first ran an LLM on my computer back in March 2023 (see Large language models are having their Stable Diffusion moment).

That same laptop that could just about run a GPT-3-class model in March last year has now run multiple GPT-4 class models! Some of my notes on that:

This remains astonishing to me.
I thought a model with the capabilities and output quality of GPT-4 needed a datacenter class server with one or more $40,000+ GPUs.

These models take up enough of my 64GB of RAM that I don't run them often; they don't leave much room for anything else.

The fact that they run at all is a testament to the incredible training and inference performance gains that we've figured out over the past year. It turns out there was a lot of low-hanging fruit to be harvested in terms of model efficiency. I expect there's still more to come.

Meta's Llama 3.2 models deserve a special mention. They may not be GPT-4 class, but at 1B and 3B sizes they punch massively above their weight. I run Llama 3.2 3B on my iPhone using the free MLC Chat iOS app and it's a shockingly capable model for its tiny (<2GB) size. Try firing it up and asking it for a plot outline of a Netflix Christmas movie where a data journalist falls in love with a local ceramicist. Here's what I got, at a respectable 20 tokens per second: here's the rest of the transcript. It's bland and generic, but my phone can pitch bland and generic Christmas movies to Netflix now!

LLM prices crashed, thanks to competition and increased efficiency

The past twelve months have seen a dramatic collapse in the cost of running a prompt through the top tier hosted LLMs.

In December 2023 (here's the Internet Archive for the OpenAI pricing page) OpenAI were charging $30/million input tokens for GPT-4, $10/mTok for the then-new GPT-4 Turbo and $1/mTok for GPT-3.5 Turbo.

Today $30/mTok gets you OpenAI's most expensive model, o1. GPT-4o is $2.50 (12x cheaper than GPT-4) and GPT-4o mini is $0.15/mTok, nearly 7x cheaper than GPT-3.5 and massively more capable.

Other model providers charge even less. Anthropic's Claude 3 Haiku (from March, but still their cheapest model) is $0.25/mTok.
Google's Gemini 1.5 Flash is $0.075/mTok and their Gemini 1.5 Flash 8B is $0.0375/mTok; that's 27x cheaper than GPT-3.5 Turbo last year. I've been tracking these pricing changes under my llm-pricing tag.

These price drops are driven by two factors: increased competition and increased efficiency. The efficiency thing is really important for everyone who is concerned about the environmental impact of LLMs. These price drops tie directly to how much energy is being used for running prompts. I've heard from sources I trust that both Google Gemini and Amazon Nova charge less than their energy costs for running inference, and I presume the same is true of the other vendors.

There's still plenty to worry about with respect to the environmental impact of the great AI datacenter buildout, but a lot of the concerns over the energy cost of individual prompts are no longer credible.

Here's a fun napkin calculation: how much would it cost to generate short descriptions of every one of the 68,000 photos in my personal photo library using Google's Gemini 1.5 Flash 8B (released in October), their cheapest model? Each photo would need 260 input tokens and around 100 output tokens:

260 * 68,000 = 17,680,000 input tokens
17,680,000 * $0.0375/million = $0.66
100 * 68,000 = 6,800,000 output tokens
6,800,000 * $0.15/million = $1.02

That's a total cost of $1.68 to process 68,000 images. That's so absurdly cheap I had to run the numbers three times to confirm I got it right.

How good are those descriptions? Here's what I got from this command:

llm -m gemini-1.5-flash-8b-latest describe -a IMG_1825.jpeg

Against this photo of butterflies at the California Academy of Sciences:

A shallow dish, likely a hummingbird or butterfly feeder, is red. Pieces of orange slices of fruit are visible inside the dish. Two butterflies are positioned in the feeder, one is a dark brown/black butterfly with white/cream-colored markings.
The other is a large, brown butterfly with patterns of lighter brown, beige, and black markings, including prominent eye spots. The larger brown butterfly appears to be feeding on the fruit.

260 input tokens, 92 output tokens. Cost approximately 0.0024 cents (that's less than a 400th of a cent).

This increase in efficiency and reduction in price is my single favourite trend from 2024. I want the utility of LLMs at a fraction of the energy cost and it looks like that's what we're getting.

My butterfly example above illustrates another key trend from 2024: the rise of multi-modal LLMs.

A year ago the single most notable example of these was GPT-4 Vision, released at OpenAI's DevDay in November 2023. Google's multi-modal Gemini 1.0 was announced on December 7th 2023, so it also (just) makes it into the 2023 window.

In 2024, almost every significant model vendor released multi-modal models. We saw the Claude 3 series from Anthropic in March, Gemini 1.5 Pro in April (images, audio and video), then September brought Qwen2-VL and Mistral's Pixtral 12B and Meta's Llama 3.2 11B and 90B vision models. We got audio input and output from OpenAI in October, then November saw SmolVLM from Hugging Face and December saw image and video models from Amazon Nova.

In October I upgraded my LLM CLI tool to support multi-modal models via attachments. It now has plugins for a whole collection of different vision models.

I think people who complain that LLM improvement has slowed are often missing the enormous advances in these multi-modal models.
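The napkin math above is easy to sanity-check in code. Here is a minimal sketch reproducing the photo-library cost estimate; the per-million-token prices are the Gemini 1.5 Flash 8B figures quoted earlier, and the token counts per photo are the same assumptions from the article (the function name is mine):

```python
# Estimate the cost of captioning a photo library with Gemini 1.5 Flash 8B.
# Prices and per-photo token counts are taken from the napkin calculation above.
INPUT_PRICE_PER_MTOK = 0.0375   # dollars per million input tokens
OUTPUT_PRICE_PER_MTOK = 0.15    # dollars per million output tokens

def caption_cost(num_photos, input_tokens_per_photo=260, output_tokens_per_photo=100):
    """Return the total dollar cost of describing num_photos images."""
    input_cost = num_photos * input_tokens_per_photo * INPUT_PRICE_PER_MTOK / 1_000_000
    output_cost = num_photos * output_tokens_per_photo * OUTPUT_PRICE_PER_MTOK / 1_000_000
    return input_cost + output_cost

print(f"${caption_cost(68_000):.2f}")  # roughly $1.68 for 68,000 photos
```

Running it confirms the article's figure: $0.66 of input tokens plus $1.02 of output tokens.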
Being able to run prompts against images (and audio and video) is a fascinating new way to apply these models.

Voice and live camera mode are science fiction come to life

The audio and live video modes that have started to emerge deserve a special mention.

The ability to talk to ChatGPT first arrived in September 2023, but it was mostly an illusion: OpenAI used their excellent Whisper speech-to-text model and a new text-to-speech model (creatively named tts-1) to enable conversations with the ChatGPT mobile apps, but the actual model just saw text.

The May 13th announcement of GPT-4o included a demo of a brand new voice mode, where the true multi-modal GPT-4o (the o is for omni) model could accept audio input and output incredibly realistic sounding speech without needing separate TTS or STT models.

The demo also sounded conspicuously similar to Scarlett Johansson... and after she complained, the voice from the demo, Sky, never made it to a production product.

The delay in releasing the new voice mode after the initial demo caused quite a lot of confusion. I wrote about that in ChatGPT in 4o mode is not running the new features yet.

When ChatGPT Advanced Voice mode finally did roll out (a slow roll from August through September) it was spectacular. I've been using it extensively on walks with my dog and it's amazing how much the improvement in intonation elevates the material. I've also had a lot of fun experimenting with the OpenAI audio APIs.

Even more fun: Advanced Voice mode can do accents! Here's what happened when I told it I need you to pretend to be a California brown pelican with a very thick Russian accent, but you talk to me exclusively in Spanish. (Audio clip in the original post.)

OpenAI aren't the only group with a multi-modal audio model. Google's Gemini also accepts audio input, and the Google Gemini apps can speak in a similar way to ChatGPT now.
Amazon also pre-announced voice mode for Amazon Nova, but that's meant to roll out in Q1 of 2025.

Google's NotebookLM, released in September, took audio output to a new level by producing spookily realistic conversations between two podcast hosts about anything you fed into their tool. They later added custom instructions, so naturally I turned them into pelicans. (Audio clip in the original post.)

The most recent twist, again from December (December was a lot), is live video. ChatGPT voice mode now provides the option to share your camera feed with the model and talk about what you can see in real time. Google Gemini have a preview of the same feature, which they managed to ship the day before ChatGPT did.

These abilities are just a few weeks old at this point, and I don't think their impact has been fully felt yet. If you haven't tried them out yet you really should.

Both Gemini and OpenAI offer API access to these features as well. OpenAI started with a WebSocket API that was quite challenging to use, but in December they announced a new WebRTC API which is much easier to get started with. Building a web app that a user can talk to via voice is easy now!

Prompt driven app generation is a commodity already

This was possible with GPT-4 in 2023, but the value it provides became evident in 2024.

We already knew LLMs were spookily good at writing code.
If you prompt them right, it turns out they can build you a full interactive application using HTML, CSS and JavaScript (and tools like React if you wire up some extra supporting build mechanisms), often in a single prompt.

Anthropic kicked this idea into high gear when they released Claude Artifacts, a groundbreaking new feature that was initially slightly lost in the noise due to being described half way through their announcement of the incredible Claude 3.5 Sonnet. With Artifacts, Claude can write you an on-demand interactive application and then let you use it directly inside the Claude interface. Here's my Extract URLs app, entirely generated by Claude.

I've found myself using this a lot. I noticed how much I was relying on it in October and wrote Everything I built with Claude Artifacts this week, describing 14 little tools I had put together in a seven day period.

Since then, a whole bunch of other teams have built similar systems. GitHub announced their version of this, GitHub Spark, in October. Mistral Chat added it as a feature called Canvas in November. Steve Krause from Val Town built a version of it against Cerebras, showcasing how a 2,000 token/second LLM can iterate on an application with changes visible in less than a second.

Then in December, the Chatbot Arena team introduced a whole new leaderboard for this feature, driven by users building the same interactive app twice with two different models and voting on the answer. It's hard to come up with a more convincing argument that this feature is now a commodity that can be effectively implemented against all of the leading models.

I've been tinkering with a version of this myself for my Datasette project, with the goal of letting users use prompts to build and iterate on custom widgets and data visualizations against their own data.
I also figured out a similar pattern for writing one-shot Python programs, enabled by uv.

This prompt-driven custom interface feature is so powerful and easy to build (once you've figured out the gnarly details of browser sandboxing) that I expect it to show up as a feature in a wide range of products in 2025.

Universal access to the best models lasted for just a few short months

For a few short months this year all three of the best available models (GPT-4o, Claude 3.5 Sonnet and Gemini 1.5 Pro) were freely available to most of the world.

OpenAI made GPT-4o free for all users in May, and Claude 3.5 Sonnet was freely available from its launch in June. This was a momentous change, because for the previous year free users had mostly been restricted to GPT-3.5 level models, meaning new users got a very inaccurate mental model of what a capable LLM could actually do.

That era appears to have ended, likely permanently, with OpenAI's launch of ChatGPT Pro. This $200/month subscription service is the only way to access their most capable model, o1 Pro. Since the trick behind the o1 series (and the future models it will undoubtedly inspire) is to expend more compute time to get better results, I don't think those days of free access to the best available models are likely to return.

Agents still haven't really happened yet

I find the term "agents" extremely frustrating. It lacks a single, clear and widely understood meaning... but the people who use the term never seem to acknowledge that. If you tell me that you are building agents, you've conveyed almost no information to me at all. Without reading your mind I have no way of telling which of the dozens of possible definitions you are talking about.

The two main categories I see are people who think AI agents are obviously things that go and act on your behalf (the travel agent model) and people who think in terms of LLMs that have been given access to tools which they can run in a loop as part of solving a problem.
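That second definition, tools run in a loop, is simple enough to express in code. This is a toy sketch of the shape of the pattern, not any particular vendor's API: `call_model` is a hard-coded stand-in for a real LLM call, and the harness just executes whatever tool the model requests and feeds the result back in until the model produces a final answer.

```python
# Toy illustration of the "LLM with tools in a loop" agent pattern.
# call_model is a hard-coded stand-in for a real LLM API call.

def call_model(messages):
    """Pretend model: asks for a calculation, then answers using the result."""
    if not any(m["role"] == "tool" for m in messages):
        return {"tool": "calculator", "args": "1234 * 5678"}
    result = [m for m in messages if m["role"] == "tool"][-1]["content"]
    return {"answer": f"1234 * 5678 = {result}"}

TOOLS = {"calculator": lambda expr: str(eval(expr))}  # real harnesses sandbox this

def run_agent(user_prompt, max_steps=5):
    messages = [{"role": "user", "content": user_prompt}]
    for _ in range(max_steps):
        reply = call_model(messages)
        if "answer" in reply:                         # model is done
            return reply["answer"]
        output = TOOLS[reply["tool"]](reply["args"])  # execute the requested tool
        messages.append({"role": "tool", "content": output})
    raise RuntimeError("agent did not finish within max_steps")

print(run_agent("What is 1234 * 5678?"))  # 1234 * 5678 = 7006652
```

The gullibility problem discussed below lives in that loop: the model trusts whatever the tool (or a web page fetched by a tool) returns.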
The term autonomy is often thrown into the mix too, again without including a clear definition. (I also collected 211 definitions on Twitter a few months ago, here they are in Datasette Lite, and had gemini-exp-1206 attempt to summarize them.)

Whatever the term may mean, agents still have that feeling of perpetually coming soon.

Terminology aside, I remain skeptical as to their utility based, once again, on the challenge of gullibility. LLMs believe anything you tell them. Any system that attempts to make meaningful decisions on your behalf will run into the same roadblock: how good is a travel agent, or a digital assistant, or even a research tool if it can't distinguish truth from fiction?

Just the other day Google Search was caught serving up an entirely fake description of the non-existent movie Encanto 2. It turned out to be summarizing an imagined movie listing from a fan fiction wiki.

Prompt injection is a natural consequence of this gullibility. I've seen precious little progress on tackling that problem in 2024, and we've been talking about it since September 2022. I'm beginning to see the most popular idea of agents as dependent on AGI itself. A model that's robust against gullibility is a very tall order indeed.

Evals really matter

Anthropic's Amanda Askell (responsible for much of the work behind Claude's Character):

"The boring yet crucial secret behind good system prompts is test-driven development. You don't write down a system prompt and find ways to test it. You write down tests and find a system prompt that passes them."

It's become abundantly clear over the course of 2024 that writing good automated evals for LLM-powered systems is the skill that's most needed to build useful applications on top of these models.
If you have a strong eval suite you can adopt new models faster, iterate better and build more reliable and useful product features than your competition.

Vercel's Malte Ubl:

"When @v0 first came out we were paranoid about protecting the prompt with all kinds of pre and post processing complexity. We completely pivoted to let it rip. A prompt without the evals, models, and especially UX is like getting a broken ASML machine without a manual."

I'm still trying to figure out the best patterns for doing this for my own work. Everyone knows that evals are important, but there remains a lack of great guidance for how to best implement them. I'm tracking this under my evals tag. My SVG pelican riding a bicycle benchmark is a pale imitation of what a real eval suite should look like.

Apple Intelligence is bad, Apple's MLX library is excellent

As a Mac user I've been feeling a lot better about my choice of platform this year. Last year it felt like my lack of a Linux/Windows machine with an NVIDIA GPU was a huge disadvantage in terms of trying out new models.

On paper, a 64GB Mac should be a great machine for running models due to the way the CPU and GPU can share the same memory. In practice, many models are released as model weights and libraries that reward NVIDIA's CUDA over other platforms.

The llama.cpp ecosystem helped a lot here, but the real breakthrough has been Apple's MLX library, an array framework for Apple Silicon. It's fantastic.

Apple's mlx-lm Python library supports running a wide range of MLX-compatible models on my Mac, with excellent performance. mlx-community on Hugging Face offers more than 1,000 models that have been converted to the necessary format. Prince Canuma's excellent, fast moving mlx-vlm project brings vision LLMs to Apple Silicon as well. I used that recently to run Qwen's QvQ.

While MLX is a game changer, Apple's own Apple Intelligence features have mostly been a disappointment.
I wrote about their initial announcement in June, and I was optimistic that Apple had focused hard on the subset of LLM applications that preserve user privacy and minimize the chance of users getting misled by confusing features.

Now that those features are rolling out they're pretty weak. As an LLM power-user I know what these models are capable of, and Apple's LLM features offer a pale imitation of what a frontier LLM can do. Instead we're getting notification summaries that misrepresent news headlines and writing assistant tools that I've not found useful at all. Genmoji are kind of fun though.

The rise of inference-scaling reasoning models

The most interesting development in the final quarter of 2024 was the introduction of a new shape of LLM, exemplified by OpenAI's o1 models, initially released as o1-preview and o1-mini on September 12th.

One way to think about these models is as an extension of the chain-of-thought prompting trick, first explored in the May 2022 paper Large Language Models are Zero-Shot Reasoners. This is that trick where, if you get a model to talk out loud about a problem it's solving, you often get a result which the model would not have achieved otherwise.

o1 takes this process and further bakes it into the model itself. The details are somewhat obfuscated: o1 models spend "reasoning tokens" thinking through the problem that are not directly visible to the user (though the ChatGPT UI shows a summary of them), then output a final result.

The biggest innovation here is that it opens up a new way to scale a model: instead of improving model performance purely through additional compute at training time, models can now take on harder problems by spending more compute on inference.

The sequel to o1, o3 (they skipped o2 for European trademark reasons) was announced on 20th December with an impressive result against the ARC-AGI benchmark, albeit one that likely involved more than $1,000,000 of compute time expense! o3 is expected to ship in January.
I doubt many people have real-world problems that would benefit from that level of compute expenditure (I certainly don't!) but it appears to be a genuine next step in LLM architecture for taking on much harder problems.

OpenAI are not the only game in town here. Google released their first entrant in the category, gemini-2.0-flash-thinking-exp, on December 19th. Alibaba's Qwen team released their QwQ model on November 28th under an Apache 2.0 license, and that one I could run on my own machine. They followed that up with a vision reasoning model called QvQ on December 24th, which I also ran locally. DeepSeek made their DeepSeek-R1-Lite-Preview model available to try out through their chat interface on November 20th.

To understand more about inference scaling I recommend Is AI progress slowing down? by Arvind Narayanan and Sayash Kapoor.

Nothing yet from Anthropic or Meta, but I would be very surprised if they don't have their own inference-scaling models in the works. Meta published a relevant paper, Training Large Language Models to Reason in a Continuous Latent Space, in December.

Was the best currently available LLM trained in China for less than $6m?

Not quite, but almost! It does make for a great attention-grabbing headline.

The big news to end the year was the release of DeepSeek v3, dropped on Hugging Face on Christmas Day without so much as a README file, then followed by documentation and a paper the day after that.

DeepSeek v3 is a huge 685B parameter model, one of the largest openly licensed models currently available, significantly bigger than the largest of Meta's Llama series, Llama 3.1 405B.

Benchmarks put it up there with Claude 3.5 Sonnet. Vibe benchmarks (aka the Chatbot Arena) currently rank it 7th, just behind the Gemini 2.0 and OpenAI 4o/o1 models. This is by far the highest ranking openly licensed model.

The really impressive thing about DeepSeek v3 is the training cost. The model was trained on 2,788,000 H800 GPU hours at an estimated cost of $5,576,000.
Llama 3.1 405B trained for 30,840,000 GPU hours, 11x that used by DeepSeek v3, for a model that benchmarks slightly worse. Those US export regulations on GPUs to China seem to have inspired some very effective training optimizations!

The environmental impact got better

A welcome result of the increased efficiency of the models, both the hosted ones and the ones I can run locally, is that the energy usage and environmental impact of running a prompt has dropped enormously over the past couple of years.

OpenAI themselves are charging 100x less for a prompt compared to the GPT-3 days. I have it on good authority that neither Google Gemini nor Amazon Nova (two of the least expensive model providers) are running prompts at a loss.

I think this means that, as individual users, we don't need to feel any guilt at all for the energy consumed by the vast majority of our prompts. The impact is likely negligible compared to driving a car down the street or maybe even watching a video on YouTube.

Likewise, training. DeepSeek v3 training for less than $6m is a fantastic sign that training costs can and should continue to drop.

For less efficient models I find it useful to compare their energy usage to commercial flights. The largest Llama 3 model cost about the same as a single digit number of fully loaded passenger flights from New York to London. That's certainly not nothing, but once trained that model can be used by millions of people at no extra training cost.

The environmental impact got much, much worse

The much bigger problem here is the enormous competitive buildout of the infrastructure that is imagined to be necessary for these models in the future. Companies like Google, Meta, Microsoft and Amazon are all spending billions of dollars rolling out new datacenters, with a very material impact on the electricity grid and the environment. There's even talk of spinning up new nuclear power stations, but those can take decades.

Is this infrastructure necessary?
DeepSeek v3's $6m training cost and the continued crash in LLM prices might hint that it's not. But would you want to be the big tech executive that argued NOT to build out this infrastructure, only to be proven wrong in a few years' time?

An interesting point of comparison here could be the way railways rolled out around the world in the 1800s. Constructing these required enormous investments and had a massive environmental impact, and many of the lines that were built turned out to be unnecessary, sometimes multiple lines from different companies serving the exact same routes! The resulting bubbles contributed to several financial crashes; see Wikipedia for Panic of 1873, Panic of 1893, Panic of 1901 and the UK's Railway Mania. They left us with a lot of useful infrastructure, and a great deal of bankruptcies and environmental damage.

The year of slop

2024 was the year that the word "slop" became a term of art. I wrote about this in May, expanding on this tweet by @deepfates:

"Watching in real time as slop becomes a term of art. The way that spam became the term for unwanted emails, slop is going in the dictionary as the term for unwanted AI generated content."

I expanded that definition a tiny bit to this: slop describes AI-generated content that is both unrequested and unreviewed.

I ended up getting quoted talking about slop in both the Guardian and the NY Times. Here's what I said in the NY Times:

"Society needs concise ways to talk about modern A.I., both the positives and the negatives. 'Ignore that email, it's spam,' and 'Ignore that article, it's slop,' are both useful lessons."

I love the term slop because it so succinctly captures one of the ways we should not be using generative AI! Slop was even in the running for Oxford Word of the Year 2024, but it lost to brain rot.

Synthetic training data works great

An idea that surprisingly seems to have stuck in the public consciousness is that of model collapse.
This was first described in the paper The Curse of Recursion: Training on Generated Data Makes Models Forget in May 2023, and repeated in Nature in July 2024 with the more eye-catching headline AI models collapse when trained on recursively generated data.

The idea is seductive: as the internet floods with AI-generated slop, the models themselves will degenerate, feeding on their own output in a way that leads to their inevitable demise!

That's clearly not happening. Instead, we are seeing AI labs increasingly train on synthetic content, deliberately creating artificial data to help steer their models in the right way.

One of the best descriptions I've seen of this comes from the Phi-4 technical report, which included this:

"Synthetic data as a substantial component of pretraining is becoming increasingly common, and the Phi series of models has consistently emphasized the importance of synthetic data. Rather than serving as a cheap substitute for organic data, synthetic data has several direct advantages over organic data.

Structured and Gradual Learning. In organic datasets, the relationship between tokens is often complex and indirect. Many reasoning steps may be required to connect the current token to the next, making it challenging for the model to learn effectively from next-token prediction. By contrast, each token generated by a language model is by definition predicted by the preceding tokens, making it easier for a model to follow the resulting reasoning patterns."

Another common technique is to use larger models to help create training data for their smaller, cheaper alternatives, a trick used by an increasing number of labs. DeepSeek v3 used reasoning data created by DeepSeek-R1. Meta's Llama 3.3 70B fine-tuning used over 25M synthetically generated examples.

Careful design of the training data that goes into an LLM appears to be the entire game for creating these models.
The days of just grabbing a full scrape of the web and indiscriminately dumping it into a training run are long gone.

LLMs somehow got even harder to use

A drum I've been banging for a while is that LLMs are power-user tools; they're chainsaws disguised as kitchen knives. They look deceptively simple to use (how hard can it be to type messages to a chatbot?) but in reality you need a huge depth of both understanding and experience to make the most of them and avoid their many pitfalls.

If anything, this problem got worse in 2024.

We've built computer systems you can talk to in human language, that will answer your questions and usually get them right! ...depending on the question, and how you ask it, and whether it's accurately reflected in the undocumented and secret training set.

The number of available systems has exploded. Different systems have different tools they can apply to your problems, like Python and JavaScript and web search and image generation and maybe even database lookups... so you'd better understand what those tools are, what they can do and how to tell if the LLM used them or not. Did you know ChatGPT has two entirely different ways of running Python now? Want to build a Claude Artifact that talks to an external API? You'd better understand CSP and CORS HTTP headers first.

The models may have got more capable, but most of the limitations remained the same. OpenAI's o1 may finally be able to (mostly) count the Rs in strawberry, but its abilities are still limited by its nature as an LLM and the constraints placed on it by the harness it's running in. o1 can't run web searches or use Code Interpreter, but GPT-4o can, both in that same ChatGPT UI. (o1 will pretend to do those things if you ask it to, a regression to the URL hallucinations bug from early 2023.)

What are we doing about this? Not much. Most users are thrown in at the deep end.
The default LLM chat UI is like taking brand new computer users, dropping them into a Linux terminal and expecting them to figure it all out.

Meanwhile, it's increasingly common for end users to develop wildly inaccurate mental models of how these things work and what they are capable of. I've seen so many examples of people trying to win an argument with a screenshot from ChatGPT, an inherently ludicrous proposition, given the inherent unreliability of these models crossed with the fact that you can get them to say anything if you prompt them right.

There's a flipside to this too: a lot of better informed people have sworn off LLMs entirely because they can't see how anyone could benefit from a tool with so many flaws. The key skill in getting the most out of LLMs is learning to work with tech that is both inherently unreliable and incredibly powerful at the same time. This is a decidedly non-obvious skill to acquire!

There is so much space for helpful education content here, but we need to do a lot better than outsourcing it all to AI grifters with bombastic Twitter threads.

Knowledge is incredibly unevenly distributed

Most people have heard of ChatGPT by now. How many have heard of Claude? The knowledge gap between the people who actively follow this stuff and the 99% of the population who do not is vast.

The pace of change doesn't help either. In just the past month we've seen general availability of live interfaces where you can point your phone's camera at something and talk about it with your voice... and optionally have it pretend to be Santa. Most self-c
Benjamin Kaiser
Can LLMs Accurately Recall the Bible?
https://benkaiser.dev/can-llms-accurately-recall-the-bible/
2024-12-29T03:44:52Z
I've often found myself uneasy when LLMs (Large Language Models) are asked to quote the Bible. While they can provide insightful discussions about faith, their tendency to hallucinate responses raises concerns when dealing with scripture, which we regard as the inspired Word of God.

To explore these concerns, I created a benchmark to evaluate how accurately LLMs can recall scripture word for word. Here's a breakdown of my methodology and the test results.

Methodology

To ensure consistent and fair evaluation, I tested each model using six scenarios designed to measure their ability to accurately recall scripture. For readers interested in the technical details, the source code for the tests is available here. All tests were conducted with a temperature setting of 0, and I have given slack to the models by making the pass check case and whitespace insensitive.

A temperature of 0 ensures the models generate the most statistically probable response at each step, minimising creativity or variability and prioritising accuracy. This approach is particularly important when evaluating fixed reference material like the Bible, where precise wording matters.

The models tested were Llama 3.1 405B, Llama 3.1 70B, Llama 3.1 8B, Llama 3.3 70B, GPT 4o, GPT 4o mini, Gemini 1.5 Pro, Gemini 1.5 Flash, Gemini 2.0 Flash, Claude 3.5 Haiku and Claude 3.5 Sonnet.

Test 1: Popular Scripture Recall

[Per-model pass/fail table in the original post.]

When asking a model to recall John 3:16 in the NIV translation, the only model that failed to accurately recall the verse word for word was Llama 3.3 70B. It was only a very slight translation mismatch, with it recalling "only begotten son" where the actual verse in the NIV does not include "begotten", despite it being present in other translations.

Test 2: Obscure Verse Recall

[Per-model pass/fail table in the original post.]

Many models struggled to recall Obadiah 1:16 NIV word for word, often mixing up the words with other translations.
For these cases, I have marked them as partial for correctly recalling the verse in some translation, even if not the specific requested one. The models that clearly succeeded seem to be very large models, 405B for Llama and GPT 4o and Claude 3.5 Sonnet.Test 3: Verse ContinuationModelPassLlama 3.1 405BLlama 3.1 70BLlama 3.1 8BLlama 3.3 70BGPT 4oGPT 4o miniGemini 1.5 ProGemini 1.5 FlashGemini 2.0 FlashClaude 3.5 HaikuClaude 3.5 SonnetWhen quoting the model 2 Chronicles 11:13 (but without specifying where in the bible it is found) and asking it to produce the immediate next verse, we had a much more mixed bag of results. Many medium-to-large sized models got this correct, but the smaller ones completely hallucinated parts or all of the verse. Claude 3.5 Haiku almost recalled the verse, but referred to the Levites as "they", which is not explicitly a translation in any of the more well known translations and appears to be the model substituting the intention of the word rather than the exact one.Test 4: Verse Block RecallModelPassLlama 3.1 405BLlama 3.1 70BLlama 3.1 8BLlama 3.3 70BGPT 4oGPT 4o miniGemini 1.5 ProGemini 1.5 FlashGemini 2.0 FlashClaude 3.5 HaikuClaude 3.5 SonnetWhen asked to recall Lamentations chapter 3 verses 19 through 24, the models did this very well. Only the smallest of the models, Llama 3.1 8B outright failed here, instead recalling the beginning of the chapter. The two warnings were only slight translation mismatches of a few words, but the essence of the verse was preserved.Test 5: Query Based LookupModelPassLlama 3.1 405BLlama 3.1 70BLlama 3.1 8BLlama 3.3 70BGPT 4oGPT 4o miniGemini 1.5 ProGemini 1.5 FlashGemini 2.0 FlashClaude 3.5 HaikuClaude 3.5 SonnetAsking the models, "What's that verse in the bible about the Earth being filled with knowledge of God's glory?", all of them successfully recalled it was Habakkuk 2:14. 
Verse lookup is definitely a strong-suit, even in smaller models.Test 6: Entire Chapter RecallModelPassLlama 3.1 405BLlama 3.1 70BLlama 3.1 8BLlama 3.3 70BGPT 4oGPT 4o miniGemini 1.5 ProGemini 1.5 FlashGemini 2.0 FlashClaude 3.5 HaikuClaude 3.5 SonnetWhen asking for the entire contents of Romans 6 in the KJV translation, almost all of the models recalled all 23 verses accurately. Even the failed case of Llama 3.1 8B recalled over 98% of the words correctly, with only 9 incorrect words. If you really want to lean on an LLM to give you textually accurate bible verses of popular translations, you really should lean on higher parameter count (i.e. larger) models. These include models like Llama 405B, OpenAI GPT 4o and Claude Sonnet which all had perfect scores. Smaller models (7B range) will often mix up translations, and in some cases even mix up or hallucinate verse altogether. Medium-sized models (70B range) often accurately preserve the intention of the verses, although the verse may be a mangled representation of several translations, and in some cases paraphrased a little by the LLM.You can certainly still use smaller models for discussion that references scripture by Book/Chapter/Verse, but it is important to lean on an actual copy of the Bible for the correct text in these cases.Looking into the future, we may very well see smaller models perform better on these benchmarks, but there is surely a limitation to how much information can be encoded into such small models.For full test results, see the results file here, including the raw prompts for each test. If you feel like I missed a crucial test, feel free to submit an issue on GitHub.
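The case- and whitespace-insensitive pass check described in the methodology can be as simple as normalising both strings before an exact comparison. This is only an illustrative sketch, not the author's actual implementation (which lives in the linked source code); the verse text is quoted from the NIV.

```python
import re

def normalise(text: str) -> str:
    """Lower-case and collapse all whitespace, mirroring the
    case- and whitespace-insensitive slack given to the models."""
    return re.sub(r"\s+", " ", text.strip().lower())

def passes(model_output: str, reference_verse: str) -> bool:
    """A recall test passes only if the normalised model output
    matches the normalised reference text exactly."""
    return normalise(model_output) == normalise(reference_verse)

niv_john_3_16 = ("For God so loved the world that he gave his one and only Son, "
                 "that whoever believes in him shall not perish but have eternal life.")

print(passes("for god so loved the world that he gave his one and only son,\n"
             "that whoever believes in him shall not perish but have eternal life.",
             niv_john_3_16))  # True: case and whitespace differences are ignored
print(passes("For God so loved the world that he gave his only begotten Son...",
             niv_john_3_16))  # False: "begotten" is not in the NIV wording
```

An exact match after normalisation is deliberately strict: it forgives formatting, but any translation mix-up (like the "begotten" example above) still fails.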
Detection and Monitoring/Content Synthesis
Unknown
null
null
null
null
null
null
news
Gary Grossman, Edelman
The end of AI scaling may not be nigh: Here’s what’s next
Here's why the AI field is poised for continued breakthroughs through new methodologies and creative engineering.
https://venturebeat.com/ai/the-end-of-ai-scaling-may-not-be-nigh-heres-whats-next/
https://venturebeat.com/…w=1200&strip=all
2024-12-01T20:15:00Z
Join our daily and weekly newsletters for the latest updates and exclusive content on industry-leading AI coverage. Learn More

As AI systems achieve superhuman performance in increasingly complex tasks, the industry is grappling with whether bigger models are even possible or if innovation must take a different path.

The general approach to large language model (LLM) development has been that bigger is better, and that performance scales with more data and more computing power. However, recent media discussions have focused on how LLMs are approaching their limits. "Is AI hitting a wall?" The Verge questioned, while Reuters reported that OpenAI and others seek a new path to smarter AI as current methods hit limitations.

The concern is that scaling, which has driven advances for years, may not extend to the next generation of models. Reporting suggests that the development of frontier models like GPT-5, which push the current limits of AI, may face challenges due to diminishing performance gains during pre-training. The Information reported on these challenges at OpenAI, and Bloomberg covered similar news at Google and Anthropic.

This issue has led to concerns that these systems may be subject to the law of diminishing returns, where each added unit of input yields progressively smaller gains. As LLMs grow larger, the costs of getting high-quality training data and scaling infrastructure increase exponentially, reducing the returns on performance improvement in new models. Compounding this challenge is the limited availability of high-quality new data, as much of the accessible information has already been incorporated into existing training datasets.

This does not mean the end of performance gains for AI. It simply means that to sustain progress, further engineering is needed through innovation in model architecture, optimization techniques and data use.

A similar pattern of diminishing returns appeared in the semiconductor industry. For decades, the industry had benefited from Moore's Law, which predicted that the number of transistors would double every 18 to 24 months, driving dramatic performance improvements through smaller and more efficient designs. This too eventually hit diminishing returns, beginning somewhere between 2005 and 2007, when Dennard scaling (the principle that shrinking transistors also reduces power consumption) hit its limits, fueling predictions of the death of Moore's Law.

I had a close-up view of this issue when I worked with AMD from 2012 to 2022. This problem did not mean that semiconductors, and by extension computer processors, stopped achieving performance improvements from one generation to the next. It did mean that improvements came more from chiplet designs, high-bandwidth memory, optical switches, more cache memory and accelerated computing architecture than from the scaling down of transistors.

Similar phenomena are already being observed with current LLMs. Multimodal AI models like GPT-4o, Claude 3.5 and Gemini 1.5 have proven the power of integrating text and image understanding, enabling advancements in complex tasks like video analysis and contextual image captioning. More tuning of algorithms for both training and inference will lead to further performance gains. Agent technologies, which enable LLMs to perform tasks autonomously and coordinate seamlessly with other systems, will soon significantly expand their practical applications.

Future model breakthroughs might arise from one or more hybrid AI architecture designs combining symbolic reasoning with neural networks. Already, the o1 reasoning model from OpenAI shows the potential for model integration and performance extension. While only now emerging from its early stage of development, quantum computing holds promise for accelerating AI training and inference by addressing current computational bottlenecks.

The perceived scaling wall is unlikely to end future gains, as the AI research community has consistently proven its ingenuity in overcoming challenges and unlocking new capabilities and performance advances. In fact, not everyone agrees that there even is a scaling wall. OpenAI CEO Sam Altman was succinct in his views: "There is no wall."

Source: X https://x.com/sama/status/1856941766915641580

Speaking on the Diary of a CEO podcast, ex-Google CEO and co-author of Genesis Eric Schmidt essentially agreed with Altman, saying he does not believe there is a scaling wall, at least not over the next five years. "In five years, you'll have two or three more turns of the crank of these LLMs. Each one of these cranks looks like it's a factor of two, factor of three, factor of four of capability, so let's just say turning the crank on all these systems will get 50 times or 100 times more powerful," he said.

Leading AI innovators are still optimistic about the pace of progress, as well as the potential for new methodologies. This optimism is evident in a recent conversation on Lenny's Podcast with OpenAI's CPO Kevin Weil and Anthropic's CPO Mike Krieger.

Source: https://www.youtube.com/watch?v=IxkvVZua28k

In this discussion, Krieger described that what OpenAI and Anthropic are working on today "feels like magic," but acknowledged that in just 12 months, "we'll look back and say, can you believe we used that garbage? ... That's how fast [AI development] is moving."

It's true, it does feel like magic, as I recently experienced when using OpenAI's Advanced Voice Mode. Speaking with "Juniper" felt entirely natural and seamless, showcasing how AI is evolving to understand and respond with emotion and nuance in real-time conversations.

Krieger also discussed the recent o1 model, referring to it as "a new way to scale intelligence, and we feel like we're just at the very beginning." He added: "The models are going to get smarter at an accelerating rate."

These expected advancements suggest that while traditional scaling approaches may or may not face diminishing returns in the near term, the AI field is poised for continued breakthroughs through new methodologies and creative engineering.

While scaling challenges dominate much of the current discourse around LLMs, recent studies suggest that current models are already capable of extraordinary results, raising a provocative question of whether more scaling even matters.

A recent study forecasted that ChatGPT could help doctors make diagnoses when presented with complicated patient cases. Conducted with an early version of GPT-4, the study compared ChatGPT's diagnostic capabilities against those of doctors with and without AI help. A surprising outcome revealed that ChatGPT alone substantially outperformed both groups, including doctors using AI aid. There are several reasons for this, from doctors' lack of understanding of how to best use the bot to their belief that their knowledge, experience and intuition were inherently superior.

This is not the first study that shows bots achieving superior results compared to professionals. VentureBeat reported on a study earlier this year which showed that LLMs can conduct financial statement analysis with accuracy rivaling and even surpassing that of professional analysts. Also using GPT-4, one goal was to predict future earnings growth. GPT-4 achieved 60% accuracy in predicting the direction of future earnings, notably higher than the 53 to 57% range of human analyst forecasts.

Notably, both these examples are based on models that are already out of date. These outcomes underscore that even without new scaling breakthroughs, existing LLMs are already capable of outperforming experts in complex tasks, challenging assumptions about the necessity of further scaling to achieve impactful results. These examples show that current LLMs are already highly capable, but scaling alone may not be the sole path forward for future innovation.

With more scaling possible and other emerging techniques promising to improve performance, Schmidt's optimism reflects the rapid pace of AI advancement, suggesting that in just five years, models could evolve into polymaths, seamlessly answering complex questions across multiple fields. Whether through scaling, skilling or entirely new methodologies, the next frontier of AI promises to transform not just the technology itself, but its role in our lives. The challenge ahead is ensuring that progress remains responsible, equitable and impactful for everyone.

Gary Grossman is EVP of technology practice at Edelman and global lead of the Edelman AI Center of Excellence.
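The law of diminishing returns discussed in this piece is often formalised for LLMs as a power-law scaling curve. As a purely illustrative sketch (the constants `a`, `alpha` and `l_min` below are made up for demonstration, not fitted to any real model), each doubling of compute buys a progressively smaller reduction in loss:

```python
# Illustrative only: a power-law curve L(C) = a * C**(-alpha) + l_min,
# with made-up constants, showing diminishing returns from added compute.
def loss(compute: float, a: float = 10.0, alpha: float = 0.3, l_min: float = 1.5) -> float:
    return a * compute ** (-alpha) + l_min

previous = loss(1.0)
for doublings in range(1, 6):
    current = loss(2.0 ** doublings)
    # Each successive doubling buys a smaller absolute improvement.
    print(f"doubling {doublings}: loss {current:.3f}, improvement {previous - current:.3f}")
    previous = current
```

Because the curve is convex, the improvement per doubling shrinks monotonically, which is the quantitative shape behind the "each added unit of input yields progressively smaller gains" observation above.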
Decision Making/Prediction/Content Synthesis
Healthcare Practitioners and Support/Business and Financial Operations
null
null
null
null
null
null
news
Will Shanklin
LG updates its Gram laptop line ahead of CES 2025
LG’s Gram laptops are back for another round of updates ahead of CES 2025. A decade into the thin and light lineup’s existence in LG’s portfolio, the latest models load up on AI (surprise!) and boost performance while maintaining their trademark portability.

Four new models are launching at CES: two variants of the Gram Pro, a new Gram Pro 2-in-1 and the entry-level Gram Book. The first version of the Gram Pro has an Intel Core H-series (Arrow Lake) processor under the hood for more traditional laptop tasks (including some gaming). A second model uses an Intel Core Ultra V-series (Lunar Lake) chip for AI tasks. The latter is a Microsoft Copilot+ PC with all the AI goodies expected from that line, including Recall, Live Captions, Cocreator and Windows Studio Effects.

Even if you don’t opt for the Copilot+ variant, you’ll still get LG’s AI features in most of the new models (all but the Gram Book). These are split into two camps: on-device and cloud-processed AI tasks. On-device features include a scaled-down chatbot derived from LG’s EXAONE large language model. It also adds LG’s Time Travel, a feature similar to Microsoft’s Recall that can call up web pages, documents, videos and audio files you’ve visited or used. It’s unclear if or how LG’s version will avoid the privacy pitfalls Microsoft ran into with Recall.

The cloud-based AI features include Gram Chat Cloud, a more robust chatbot (powered by GPT-4o) that can integrate with your calendar and email services.

The Gram Pro has a 17-inch (Arrow Lake) or 16-inch (Lunar Lake) 2,560 x 1,600 LCD screen. Its variable refresh rate maxes out at 144Hz. Both variants have up to 32GB of RAM (LPDDR5X, max 8,400MHz, dual channel) and 2TB of storage.

The Arrow Lake model should make for a solid gaming laptop with RTX 4050 graphics. The AI / Lunar Lake model only has integrated Intel graphics. But at least the latter is lighter and thinner, weighing 2.73 pounds compared to the Arrow Lake model’s 3.26 pounds.

LG snagged a CES 2025 Innovation Award for the latest Gram Pro 2-in-1. The top-line configuration has a 16-inch 2,880 x 1,800 OLED display (120Hz variable refresh rate); cheaper variants are limited to a 2,560 x 1,600 LCD (144Hz VRR). The 2-in-1 ships with a wirelessly rechargeable stylus and, as its name suggests, can be folded back into tablet mode.

Finally, the Gram Book is an entry-level model LG is launching “to expand its US market presence.” Its screen is a more compromised 1,920 x 1,080 15.6-inch LCD at a mere 60Hz. Its Intel Core i5 CPU with integrated Intel graphics and a maximum of 16GB of RAM combine to make it the most underpowered of the bunch. It’s also the heaviest at 3.75 pounds.

Unfortunately, we don’t know how “entry-level” it is: LG hasn’t yet announced pricing information (or release dates) for any of the new laptops.

This article originally appeared on Engadget at https://www.engadget.com/computing/laptops/lg-updates-its-gram-laptop-line-ahead-of-ces-2025-090049429.html?src=rss
https://consent.yahoo.com/v2/collectConsent?sessionId=1_cc-session_d3b5f7da-8610-4e8b-a04d-f765a10ac342
null
2024-12-31T09:00:49Z
Digital Assistance/Content Synthesis/Content Creation
Unknown
null
null
null
null
null
null
news
Luke Marsden
Test Driven Development (TDD) for your LLMs? Yes please, more of that please
Recap and a walkthrough video of the Testing & CI for GenAI Workshop we ran yesterday. Join the next one!
https://blog.helix.ml/p/building-reliable-genai-applications
https://substackcdn.com/…25_1344x768.webp
2024-12-04T14:42:10Z
Testing LLM-based applications has become one of the most crucial challenges in modern software development. While traditional software testing gives us clear pass/fail criteria, how do you verify that your AI is consistently giving good responses? When is a response "correct enough"? And how do you automate this testing process in a way that scales?

In this hands-on workshop, we tackle these challenges head-on by building and testing three different types of AI applications. Rather than getting lost in theoretical discussions, we focus on practical solutions that you can implement today.

Watch the recap video above, and/or sign up to join the next one! Register here - we are running them at 9am PT / 5pm UK every Monday.

The Power of Test Driven Development (TDD) for GenAI

The traditional approach to testing AI applications often relies on manual review and subjective evaluation, also known as testing based on vibes! A team member might spend hours chatting with the AI, trying to catch edge cases and inconsistencies. While this has its place, it's neither scalable nor reproducible.

Instead, we demonstrate a more systematic approach using Helix.ml's testing framework. The key insight is using another AI model as an automated evaluator (judge), with clearly defined criteria for what makes a response acceptable. This, plus the tooling and configuration format to run these tests automatically, creates a reproducible testing process that can be integrated into your CI/CD pipeline.

What We Build Together

Throughout the workshop, we create three distinct applications that showcase different testing challenges:

A Comedian Chatbot: Seems simple, but raises interesting questions about consistency and personality. How do you verify that every response is actually a joke? We show how precise prompt engineering and automated testing can ensure consistent behavior.

Document Q&A System: Using real HR documentation, we build a system that can accurately answer policy questions. This demonstrates how to test against ground truth while allowing for natural language variation.

Exchange Rate API Integration: We tackle the challenges of testing AI systems that interact with external APIs, ensuring they handle currency pairs correctly and present information clearly.

Continuous Integration for AI Applications

The most exciting part? We show how to automate all of this testing in your CI pipeline. By the end of the workshop, you'll see how to:

Write testable specifications for AI applications in YAML
Create automated evaluations using LLM judges
Integrate these tests into GitHub Actions or GitLab CI
Deploy tested changes automatically

What's Next?

We're running regular workshops to help teams implement these testing practices. Join the next workshop to learn these critical skills to build reliable GenAI applications that have access to knowledge and API integrations to business systems.

Register for the next workshop here: Register - workshops run at 9am PT / 5pm UK every Monday.

Want to dive deeper?

We also offer private workshops to help you implement these testing practices with your specific use cases. Email [email protected] to schedule a session.

The code and examples from this workshop are available on GitHub: https://github.com/helixml/testing-genai

Watch the walkthrough video:

Building reliable AI applications doesn't have to be a shot in the dark. With the right testing framework and practices, you can develop AI systems with the same confidence you bring to traditional software development. Join us in the next workshop to learn how.
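The LLM-as-judge idea described above can be sketched in a few lines. Everything here (`JUDGE_TEMPLATE`, `judge`, the stubbed model) is hypothetical illustration, not Helix.ml's actual API, which as noted uses YAML specifications and its own tooling:

```python
# A minimal sketch of the LLM-as-judge pattern: a second model grades a
# candidate response against an explicit criterion, yielding a pass/fail
# signal that a CI pipeline can act on. Names here are hypothetical.
JUDGE_TEMPLATE = (
    "You are a strict evaluator. Criterion: {criterion}\n"
    "Candidate response:\n{response}\n"
    "Answer with exactly PASS or FAIL."
)

def judge(response: str, criterion: str, model) -> bool:
    """`model` is any callable mapping a prompt string to a completion
    string; in a real pipeline it would call your judge LLM."""
    verdict = model(JUDGE_TEMPLATE.format(criterion=criterion, response=response))
    # Tolerate whitespace/case noise in the verdict, but nothing else.
    return verdict.strip().upper().startswith("PASS")

# Stubbed usage (a real run would wire `model` to an actual LLM client):
always_pass = lambda prompt: "PASS"
print(judge("Why did the chicken cross the road? To get to the other side!",
            "The response must be a joke", always_pass))  # True
```

Because `judge` returns a plain boolean, it slots directly into any test runner or CI step, which is what makes the approach reproducible compared with testing based on vibes.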
Unknown
Unknown
null
null
null
null
null
null
news
Verity Harding
It’s Time to Move Past AI Nationalism
Countries must collaborate if we’re going to create a less fearful future for artificial intelligence.
https://www.wired.com/story/artificial-intelligence-global-collaboration/
https://media.wired.com/…ia%20Fruitos.jpg
2024-12-26T09:00:00Z
In 2025, there will be a course correction in AI and geopolitics, as world leaders increasingly understand that their national interests are best served through the promise of a more positive and cooperative future.

The post-ChatGPT years in AI discourse could be characterized as somewhere between a gold rush and a moral panic. In 2023, at the same time as there was record investment in AI, tech experts, including Elon Musk and Steve Wozniak, published an open letter calling for a six-month moratorium on the training of AI systems more powerful than GPT-4, while others compared AI to nuclear war and a pandemic.

This has understandably clouded the judgment of political leaders, pushing the geopolitical conversation about AI into some disturbing places. At the AI & Geopolitics Project, my research organization at Cambridge University, our analysis clearly shows the increasing trend towards AI nationalism.

In 2017, for example, President Xi Jinping announced plans for China to become an AI superpower by 2030. The Chinese New Generation AI Development Plan aimed for the country to reach a world-leading level of AI innovation by 2025 and become a major AI innovation center by 2030.

The CHIPS and Science Act of 2022, a US ban on exporting semiconductors, was a direct response to this, designed to advantage US domestic AI capabilities and curtail China. In 2024, following an executive order signed by President Biden, the US Treasury Department also published draft rules to ban or restrict investments in artificial intelligence in China.

AI nationalism depicts AI as a battle to be won, rather than an opportunity to be harnessed. Those who favor this approach, however, would do well to learn deeper lessons from the Cold War beyond the notion of an arms race. At that time, the United States, while pushing to become the most advanced technological nation, managed to use politics, diplomacy and statecraft to create a positive and aspirational vision for space exploration. Successive US governments also managed to get support at the UN for a treaty that protected space from nuclearization, specified that no nation could colonize the moon, and ensured that space was "the province of all mankind."

That same political leadership has been lacking in AI. In 2025, however, we will start to see a shift back in the direction of cooperation and diplomacy.

The AI Summit in France in 2025 will be part of this shift. President Macron is already reframing his event away from a strict safety framing of AI risk, and towards one that, in his words, focuses on more pragmatic "solutions and standards." In a virtual address to the Seoul Summit, the French president made clear that he intends to address a much broader range of policy issues, including how to actually ensure society benefits from AI.

The UN, recognizing the exclusion of some countries from the debate around AI, has also released in 2024 its own plans aimed at a more collaborative global approach.

Even the US and China have begun to engage in tentative diplomacy, establishing a bilateral consultation channel on AI in 2024. While the impact of these initiatives remains uncertain, they clearly indicate that, in 2025, the world's AI superpowers will likely pursue diplomacy over nationalism.
Unknown
Unknown
null
null
null
null
null
null
news
Carl Franzen
The 4 biggest AI stories from 2024 and one key prediction for 2025
By all measures, 2024 was the biggest year for artificial intelligence yet — at least when it comes to the commercialization.
https://venturebeat.com/ai/the-4-biggest-ai-stories-from-2024-and-one-key-prediction-for-2025/
https://venturebeat.com/…w=1200&strip=all
2024-12-23T15:10:03Z
By all measures, 2024 was the biggest year for artificial intelligence yet, at least when it comes to the commercialization of the technology.

The large language model (LLM) boom spurred on by ChatGPT's launch in late 2022 showed no signs of slowing down, with numerous new LLMs introduced by not just OpenAI and stalwart tech giants such as Microsoft, Meta and Google, but also scores of other startups and individual developers.

Reports of AI research slowing proved to be, if not unfounded, certainly overstated for now.

In addition, new technologies beyond the Transformer architecture that underpins most big LLMs began to surface, such as Liquid AI's Liquid Foundation Models.

And finally, firms began to fully embrace the agentic approach to AI: developing specific AI-powered bots, applications and workflows that can work on specific problems independently, or with less human stewardship than the typical back-and-forth of LLM chatbots.

Distilling the year's news stories down to a top 14, much less a top 10 or top 4, was a vexing effort. But I've gone ahead and tried, albeit cheating slightly by combining several stories into larger themes. In my eyes, here's what will make the largest impact moving out from this year:

The company arguably most responsible for kicking off the gen AI era didn't miss a beat this year, despite intensifying competition from newbies and legacy tech, even its own investor and partner Microsoft.

o1 Model: OpenAI released its first new family of large general purpose models beyond its GPT series, the o1 reasoning series, which allocates more time to process complex prompts, resulting in higher accuracy. It's particularly effective in science, coding and reasoning tasks.

o3 Model: OpenAI followed the o1 model from September with a blockbuster end-of-the-year announcement of an even more advanced o3 model. While this won't be available publicly, or even to any third parties, till early 2025, it shows OpenAI is not resting on its laurels.

ChatGPT Search: This feature, initially launched as an invitation-only standalone product called SearchGPT before being collapsed into ChatGPT proper, enables more real-time web information retrieval within ChatGPT and a refined presentation of search results, enhancing its utility for up-to-date queries and going head-to-head against Google, Bing and newcomer Perplexity.

Canvas: Introduced in October, Canvas expands the ChatGPT interface beyond a conversational one to a workstation-like pane that can update content dynamically at the user's request, such as editing a document or coding project. Of course, it was hard not to see it as a reaction to, or at least a comparable feature to, Anthropic's Artifacts announced several months prior.

Sora: After nearly a year of teasing us with its closely guarded video generator model, OpenAI in early December finally launched Sora to the masses, quickly inviting a wide range of reactions as it sought to differentiate in a hotly competitive AI video space with a unique and well thought-out interface and storyboarding feature.

Llama 3 and 3.1: Meta introduced Llama 3 in April, setting a new standard for performance in open source AI, then followed it up quickly with Llama 3.1 in July with 405 billion parameters. Versions of Llama 3.1 were used to power Meta AI, the company's assistant integrated across platforms like WhatsApp, Messenger, Instagram and Facebook, aiming to become the most widely used AI assistant.

Llama 3.3: Released in December 2024, Llama 3.3 delivered performance comparable to larger models but at a fraction of the computational cost, making it more accessible for enterprise applications.

Meanwhile, Chinese models such as Alibaba's Qwen 2.5 family and DeepSeek's new V2.5 and R1-Lite Preview appeared seemingly out of nowhere to top some of the benchmark charts, and Nvidia itself went beyond supplying graphics cards and software architectures to launch its own open source, powerful Nemotron-70B model.

Nous Research, a small outfit in San Francisco aiming to offer more personalized and less restrictive AI models as open source, also debuted several cool new ideas. And let's not forget France's Mistral, which rapidly expanded its own open source and proprietary AI offerings.

In the comeback story of the year, Google's Gemini series of AI models, once mocked for their strange image generations and criticized for being overly "woke," came roaring back with new, more powerful versions that now top the third-party performance benchmark charts and are increasingly appealing to developers and businesses. Google introduced Gemini 2.0 Flash, a multimodal AI model that supports streaming video analysis and can see and instruct what you're doing on your screen, and followed it up with Gemini 2.0 Flash Thinking, which competes with OpenAI's o1 and o3 reasoning models.

As the year went on, agentic AI went from being a buzzword to a real series of major product announcements and initiatives by top enterprise software vendors. Take for example:

Salesforce's Agentforce 2.0: Salesforce unveiled Agentforce 2.0 a few days ago, an advanced AI agent program to enhance reasoning, integration and customization features across its CRM and sales offerings, as well as Slack, significantly improving enterprise productivity tools.

SAP's Joule: SAP converted its Joule chatbot into an AI agent powered by open-source large language models (LLMs), driving innovation and efficiency in enterprise settings.

Google's Project Astra: As part of the Gemini 2.0 initiative, Google launched Project Astra, an AI assistant designed to provide real-time, contextual responses by leveraging Google's suite of services, aiming to enhance user productivity and decision-making.

My big prediction for 2025: AI-generated content will reign supreme

Building upon these advancements, 2025 is poised to witness the proliferation of AI-generated content across business and consumer domains, especially as everyone from OpenAI to Meta, Google, Microsoft, Apple, even Elon Musk's xAI now has AI image generators built in to their offerings. This expansion will streamline content creation, enhance personalization and drive efficiency in various sectors.

Additionally, we anticipate the initial large-scale deployments of large language models (LLMs) and generative AI-powered robotics in both commercial and consumer settings, revolutionizing automation and human-robot interactions.

That's all in the last #AIBeat newsletter for 2024. Thanks for reading, writing, subscribing, sharing, commenting, and for being here with us. Looking forward to sharing more and hearing more from you all in 2025. Happy holidays and New Year from all of us at VentureBeat to you and your loved ones.
Unknown
Life, Physical, and Social Science/Computer and Mathematical
null
null
null
null
null
null