source
stringclasses 2
values | author
stringlengths 0
824
⌀ | title
stringlengths 0
475
⌀ | description
stringlengths 0
32.8k
⌀ | url
stringlengths 0
713
| urlToImage
stringlengths 0
2k
⌀ | publishedAt
stringlengths 20
20
⌀ | content
stringlengths 0
32.8k
⌀ | category_nist
stringlengths 5
160
| category
stringlengths 5
239
| id
stringlengths 6
7
⌀ | subreddit
stringlengths 3
21
⌀ | score
int64 0
30.2k
⌀ | num_comments
int64 0
2.27k
⌀ | created_time
timestamp[ns] | top_comments
stringlengths 1
25.4k
⌀ |
---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|
news | Dassault Aviation | Dassault Aviation and ISAE-SUPAERO renew their collaboration on the "Design and Architecture of Cognitive Air Systems" Research Chair | Dassault Aviation and ISAE-SUPAERO renew their collaboration on the "Design and Architecture of Cognitive Air Systems" Research Chair The "Design and... | https://finance.yahoo.com/news/dassault-aviation-isae-supaero-renew-100900242.html | https://s.yimg.com/uu/api/res/1.2/C_QPVwlloSO4quvT7k93ig--~B/aD0yNzg7dz04MDA7YXBwaWQ9eXRhY2h5b24-/https://media.zenfs.com/en/globenewswire.com/0deeab091917886ca0308dd9bf4f1a02 | 2022-03-21T10:09:00Z | Dassault Aviation and ISAE-SUPAERO renew their collaboration on the "Design and Architecture of Cognitive Air Systems" Research ChairThe "Design and Architecture of Cognitive Air Systems" teaching and research chair, initiated in 2016 by Dassault Aviation, ISAE-SUPAERO and its Foundation, aims at rethinking the relationship between crews and systems used in aviation. After some first promising results, the two aviation actors have decided to renew their partnership for another three years. Work aiming to optimize Human-Machine interactionThis chair, whose principal areas of research are neuro-ergonomics, automated decision-making and systems engineering, aims to study different aspects of human-machine collaboration. The idea is to make civil and military aviation operations more robust, efficient and safer, while guaranteeing complete crew control. The systems considered are often operated in complex situations, so they feature advanced automated decision-making algorithms from the field of artificial intelligence (AI) to carry out their missions more independently, but always under human control.At ISAE-SUPAERO, the Department of Aerospace Vehicles Design and Control (DCAS) has expertise in neuro-ergonomics and AI for systems control. The field of neuro-ergonomics analyses brain function and user behaviors at work through the lens of neuroscience. In practical terms, it involves evaluating the mental state of users in relation to their ability to complete the tasks assigned to them. AI for system control is the field that develops algorithms for automated decision-making. The decisions concern, for example, the set of tasks to be suggested to the operator or to be fully automated to alleviate the operators workload and improve their performance in the operating context.Dassault Aviation is particularly interested in issues surrounding Human-Machine interaction because military aviation is a very demanding field due to the variety and unpredictability of missions which require complex tactical management. The challenge lies in providing Humans with all the services they need to take responsibility for this management. This is why we are working with ISAE-SUPAERO in order to identify the phenomena that impact collaboration between crews and their machines, says Jean-Louis Gueneau, Scientific Coordinator of the chair at Dassault Aviation.First conclusive resultsThe work conducted by the research chair on Human-Machine interaction from 2016 to 2021 led to the development of different tools based on physiological measurements, as well as machine learning techniques and automated action planning. The teams particularly focused on developing active or passive assistance functions to help pilots and operators improve their performance.To do so, work was initially centered on "PilotMonitoring" to gain a better understanding of the crew's activity. Experiments were conducted on simulators using behavioral and physiological measurement tools to determine metrics capable of evaluating the operators performance, and their level of engagement or stress.Dassault Aviation are planning to integrate these functions into their civil and military aircraft within the next decade. The machine, the operators teammate"This chair focuses primarily on the development of innovative technologies that contribute, on the one hand, to qualify the human-machine interaction to see whether the collaboration is efficient, and on the other, to automatically decide what should be maintained, proposed, or changed to foster the team's performance", explains Caroline Chanel, Chair Manager at ISAE-SUPAERO.For this purpose, quantitative behavioral and physiological metrics will be combined with more qualitative metrics to evaluate the efficiency of Human-Machine cooperation. This efficiency assessment will then be used by AI algorithms to adapt and enhance the cooperation.These innovative topics are of interest to industrial firms and researchers with a view to developing concrete applications in the long term.* * *ABOUT DASSAULT AVIATION With over 10,000 military and civil aircraft (including 2,500 Falcons) delivered in more than 90 countries over the last century, Dassault Aviation has built up expertise recognized worldwide in the design, development, sale and support of all types of aircraft, ranging from the Rafale fighter, to the high-end Falcon family of business jets, military drones and space systems. In 2021, Dassault Aviation reported revenues of 7.2 billion. The company has 12,400 employees.dassault-aviation.comABOUT ISAE-SUPAEROA world leader in higher education and research in aerospace engineering, ISAE-SUPAERO is a research and innovation-driven institution of higher education, committed to balancing scientific excellence, academic visibility and proximity to industrial objectives. It offers a complete, unique range of advanced higher education programs including the Ingénieur ISAE-SUPAERO (MSc) Degree and a apprenticeship program, 1 master in aeronautics and space delivered in English, 6 research masters, 15 advanced masters, and 6 Doctoral schools.ISAE-SUPAERO has developed a research policy resolutely focused on answering the future needs of the aerospace industry and other high technology sectors. Close collaboration with industry is reflected in the Institutions ongoing commitment to developing teaching and research chairs in strategic fields such as neuroergonomics and human factors, advanced space concepts, embedded systems engineering, eco-design for aircraft or innovative propulsive systems. 5 chairs are currently in progress with Daher, Axa, Airbus, Dassault, Arianegroup. On the international level, ISAE-SUPAERO cooperates with premier European universities (TU Munich, TU Delft, ETSIA Madrid, Politecnico Torino et Milano, KTH Stockholm, Imperial College, Cranfield), North American institutions (Caltech, Stanford, Georgia Tech, UC Berkeley, EP Montreal...) and Latin American and Asian universities as well.The ISAE-SUPAERO includes 1900 undergraduate students. Every year, over 30% of the Institutes graduates are international students, and the alumni network includes over 24 000 former graduates.www.isae-supaero.frABOUT THE ISAE-SUPAERO FOUNDATIONThe ISAE-SUPAERO Foundation was recognized as being in the public interest in 2008. It works to develop ISAE-SUPAEROs world standing and leadership by offering and accelerating the completion of innovative projects that contribute to a sustainable world. Its actions are built around five objectives: promote the development of aerospace research, consolidate the innovative teaching tools proposed, further entrepreneurship, support the international mobility of lecturer-researchers and students, and encourage the social outreach policy particularly by supporting the Institutes diversity program, OSE ISAE-SUPAERO, with the French Aerospace Industries Association (GIFAS).www.fondation-isae-supaero.org* * *PRESS CONTACTS:ISAE-SUPAEROOXYGEN AgencyCharline Kohler - Tel +33 (0)5 32 11 07 32 - [email protected] Vienot - Tel +33 (0)5 32 11 07 36 - [email protected] AviationStéphane Fort - Tel +33 (0)1 47 11 86 90 - [email protected] Durand - Tel +33 (0)1 47 11 85 88 - [email protected] photos: mediaprophoto.dassault-aviation.comHD videos: mediaprovideo.dassault-aviation.comAttachment | Decision Making/Content Synthesis | Architecture and Engineering/Computer and Mathematical | null | null | null | null | null | null |
news | Nick Thieme | Machine Learning Reimagines the Building Blocks of Computing | Traditional algorithms power complicated computational tools like machine learning. A new approach, called algorithms with predictions, uses the power of machine learning to improve algorithms. The post Machine Learning Reimagines the Building Blocks of Computing first appeared on Quanta Magazine | https://www.quantamagazine.org/machine-learning-reimagines-the-building-blocks-of-computing-20220315/ | 2022-03-15T12:20:29Z | Algorithms the chunks of code that allow programs to sort, filter and combine data, among other things are the standard tools of modern computing. Like tiny gears inside a watch, algorithms execute well-defined tasks within more complicated programs.Theyre ubiquitous, and in part because of this, theyve been painstakingly optimized over time. When a programmer needs to sort a list, for example, theyll reach for a standard sort algorithm thats been used for decades.Now researchers are taking a fresh look at traditional algorithms, using the branch of artificial intelligence known as machine learning. Their approach, called algorithms with predictions, takes advantage of the insights machine learning tools can provide into the data that traditional algorithms handle. These tools have, in a real way, rejuvenated research into basic algorithms.Machine learning and traditional algorithms are two substantially different ways of computing, and algorithms with predictions is a way to bridge the two, said Piotr Indyk, a computer scientist at the Massachusetts Institute of Technology. Its a way to combine these two quite different threads.The recent explosion of interest in this approach began in 2018 with a paper by Tim Kraska, a computer scientist at MIT, and a team of Google researchers. In it, the authors suggested that machine learning could improve a well-studied traditional algorithm called a Bloom filter, which solves a straightforward but daunting problem.Imagine you run your companys IT department and you need to check if your employees are going to websites that pose a security risk. Naively, you might think youll need to check every site they visit against a blacklist of known sites. If the list is huge (as is likely the case for undesirable sites on the internet), the problem becomes unwieldly you cant check every site against a huge list in the tiny amount of time before a webpage loads.The Bloom filter provides a solution, allowing you to quickly and accurately check whether any particular sites address, or URL, is on the blacklist. It does this by essentially compressing the huge list into a smaller list that offers some specific guarantees.Bloom filters never produce false negatives if they say the site is bad, its bad. However, they can produce false positives, so perhaps your employees wont be able to visit some sites they should have access to. Thats because they trade some accuracy for an enormous amount of data compression a trick called lossy compression. The more that Bloom filters compress the original data, the less accurate they are, but the more space they save.To a simple Bloom filter, every website is equally suspicious until its confirmed to not be on the list. But not all websites are created equal: Some are more likely than others to wind up on a blacklist, simply because of details like their domain or the words in their URL. People understand this intuitively, which is why you likely read URLs to make sure theyre safe before you click on them.Kraskas team developed an algorithm that can also apply this kind of logic. They called it a learned Bloom filter, and it combines a small Bloom filter with a recurrent neural network (RNN) a machine learning model that learns what malicious URLs look like after being exposed to hundreds of thousands of safe and unsafe websites.When the learned Bloom filter checks a website, the RNN acts first and uses its training to determine if the site is on the blacklist. If the RNN says its on the list, the learned Bloom filter rejects it. But if the RNN says the site isnt on the list, then the small Bloom filter gets a turn, accurately but unthinkingly searching its compressed websites.By putting the Bloom filter at the end of the process and giving it the final say, the researchers made sure that learned Bloom filters can still guarantee no false negatives. But because the RNN pre-filters true positives using what its learned, the small Bloom filter acts more as a backup, keeping its false positives to a minimum as well. A benign website that could have been blocked by a larger Bloom filter can now get past the more accurate learned Bloom filter. Effectively, Kraska and his team found a way to take advantage of two proven but traditionally separate ways of approaching the same problem to achieve faster, more accurate results.Kraskas team showed that the new approach worked, but they didnt formalize why. That task fell to Michael Mitzenmacher, an expert on Bloom filters at Harvard University, who found Kraskas paper innovative and exciting, but also fundamentally unsatisfying. They run experiments saying their algorithms work better. But what exactly does that mean? he asked. How do we know?In 2019, Mitzenmacher put forward a formal definition of a learned Bloom filter and analyzed its mathematical properties, providing a theory that explained exactly how it worked. And whereas Kraska and his team showed that it could work in one case, Mitzenmacher proved it could always work.Mitzenmacher also improved the learned Bloom filters. He showed that adding another standard Bloom filter to the process, this time before the RNN, can pre-filter negative cases and make the classifiers job easier. He then proved it was an improvement using the theory he developed.The early days of algorithms with predictions have proceeded along this cyclical track innovative ideas, like the learned Bloom filters, inspire rigorous mathematical results and understanding, which in turn lead to more new ideas. In the past few years, researchers have shown how to incorporate algorithms with predictions into scheduling algorithms, chip design and DNA-sequence searches.In addition to performance gains, the field also advances an approach to computer science thats growing in popularity: making algorithms more efficient by designing them for typical uses.Currently, computer scientists often design their algorithms to succeed under the most difficult scenario one designed by an adversary trying to stump them. For example, imagine trying to check the safety of a website about computer viruses. The website may be benign, but it includes computer virus in the URL and page title. Its confusing enough to trip up even sophisticated algorithms.Indyk calls this a paranoid approach. In real life, he said, inputs are not generally generated by adversaries. Most of the websites employees visit, for example, arent as tricky as our hypothetical virus page, so theyll be easier for an algorithm to classify. By ignoring the worst-case scenarios, researchers can design algorithms tailored to the situations theyll likely encounter. For example, while databases currently treat all data equally, algorithms with predictions could lead to databases that structure their data storage based on their contents and uses.And this is still only the beginning, as programs that use machine learning to augment their algorithms typically only do so in a limited way. Like the learned Bloom filter, most of these new structures only incorporate a single machine learning element. Kraska imagines an entire system built up from several separate pieces, each of which relies on algorithms with predictions and whose interactions are regulated by prediction-enhanced components.Taking advantage of that will impact a lot of different areas, Kraska said. | Prediction/Decision Making | Unknown | null | null | null | null | null | null |
|
news | CrowdStrike Opens Call for Applications for its 2022-23 NextGen Scholarships | AUSTIN, Texas--(BUSINESS WIRE)--CrowdStrike (Nasdaq: CRWD), a leader in cloud-delivered protection of endpoints, cloud workloads, identity and data, today announced that the application period for its 2022-23 NextGen Scholarship Program is now open for submissions. This year, up to eight (8) scholarships of $10,000 will be awarded to select undergraduate and graduate students studying cybersecurity or artificial intelligence (AI). NextGen Scholarship applicants will also be included in the cand | https://www.businesswire.com/news/home/20220412005453/en/CrowdStrike-Opens-Call-for-Applications-for-its-2022-23-NextGen-Scholarships | 2022-04-12T12:07:07Z | AUSTIN, Texas--(BUSINESS WIRE)--CrowdStrike (Nasdaq: CRWD), a leader in cloud-delivered protection of endpoints, cloud workloads, identity and data, today announced that the application period for its 2022-23 NextGen Scholarship Program is now open for submissions. This year, up to eight (8) scholarships of $10,000 will be awarded to select undergraduate and graduate students studying cybersecurity or artificial intelligence (AI). NextGen Scholarship applicants will also be included in the candidate pool for CrowdStrike internships, giving aspiring students both greater access to world-class education and marketable, real-world experience in one of today's most critical and fastest growing industries.CrowdStrike established the scholarship program in 2017 to nurture and develop the next generation of talent and research in cybersecurity and artificial intelligence (AI). To date, CrowdStrike has awarded more than $165,000 in scholarships to 24 of the top students in the U.S. and Canada.Applicants for the 2022-23 academic year must be current undergraduates, graduate students or high school seniors who will be enrolled full-time at a college/university in the United States or Canada in the fall of 2022, majoring in cybersecurity or AI and achieving a minimum 3.30 GPA. Recipients will be evaluated based on academic achievement, essay content, expressed career interest and work/internship/volunteer experience in cybersecurity, AI or related fields.Cybersecurity is a field becoming more and more important every year, especially as our lives become increasingly digital. In the news, I constantly hear about data breaches, ransomware attacks, and numerous other cybersecurity threats, said Aneesh Boreda, a 2021-22 NextGen Scholarship award recipient who is studying computer and information sciences at the University of Pennsylvania. By pursuing a career in utilizing artificial intelligence to build powerful security tools to prevent these threats, I can help develop the next generation of algorithms used in the arms race between cyberattackers and security companies.How to applyThe scholarship program is administered by International Scholarship and Tuition Services, Inc. (ISTS), an independent company that specializes in managing sponsored educational assistance programs. ISTS hosts the online application process, responds to applicant inquiries and performs preliminary ranking for the selection process. For more information about the program and to begin the application process, visit here. The deadline to apply is June 2, 2022.Past award winnersThe recipients of the NextGen Scholarships for the previous academic year 2021 - 2022 were:Kehinde Ayano, graduate student at Purdue UniversityAneesh Boreda, undergraduate student at University of PennsylvaniaZachary Espiritu, undergraduate student at Brown UniversityWesley Muthemba, graduate student at University of WashingtonLy Na Nguyen, undergraduate student at Columbia UniversityArianne Ghislaine Rull, undergraduate student at York UniversitySara Takhim, undergraduate student at Northeastern UniversitySamuel Watson, undergraduate student at University of AlabamaAbout CrowdStrikeCrowdStrike (Nasdaq: CRWD), a global cybersecurity leader, has redefined modern security with one of the worlds most advanced cloud-native platforms for protecting critical areas of enterprise risk endpoints and cloud workloads, identity and data.Powered by the CrowdStrike Security Cloud and world-class AI, the CrowdStrike Falcon® platform leverages real-time indicators of attack, threat intelligence, evolving adversary tradecraft and enriched telemetry from across the enterprise to deliver hyper-accurate detections, automated protection and remediation, elite threat hunting and prioritized observability of vulnerabilities.Purpose-built in the cloud with a single lightweight-agent architecture, the Falcon platform delivers rapid and scalable deployment, superior protection and performance, reduced complexity and immediate time-to-value.CrowdStrike: We stop breaches.Learn more: https://www.crowdstrike.com/ Follow us: Blog | Twitter | LinkedIn | Facebook | Instagram Start a free trial today: https://www.crowdstrike.com/free-trial-guide/© 2022 CrowdStrike, Inc. All rights reserved. CrowdStrike, the falcon logo, CrowdStrike Falcon and CrowdStrike Threat Graph are marks owned by CrowdStrike, Inc. and registered with the United States Patent and Trademark Office, and in other countries. CrowdStrike owns other trademarks and service marks, and may use the brands of third parties to identify their products and services. | Unknown | Unknown | null | null | null | null | null | null |
||
news | PR Newswire | Insight Partners Celebrates Inaugural ScaleUp Series with AI Conference, ScaleUp:AI | This week, New York-based global private equity and venture capital firm Insight Partners hosted its premier Artificial Intelligence (AI) industry conference... | https://finance.yahoo.com/news/insight-partners-celebrates-inaugural-scaleup-210400102.html | https://s.yimg.com/uu/api/res/1.2/0ZSwQPvsD2d4MfpbXz4HSw--~B/aD0xMzA7dz00MDA7YXBwaWQ9eXRhY2h5b24-/https://media.zenfs.com/en/prnewswire.com/bcd2e077059b44be642599f29148e108 | 2022-04-11T21:04:00Z | Over 1,700 attendees from 50 Countries Across the Globe Joined to Discuss the Impacts of AINEW YORK, April 11, 2022 /PRNewswire/ -- This week, New York-based global private equity and venture capital firm Insight Partners hosted its premier Artificial Intelligence (AI) industry conference, ScaleUp:AI, on April 6-7 in New York City. The hybrid event brought together thousands of participants across the globe to hear the foremost thinkers, doers, users, and investors of AI opine on both the industry today and its future, with thought-provoking presentations and networking opportunities.Presented by Insight Partners, Citi, and NASDAQ, ScaleUp:AI provided a comprehensive look at how AI is impacting every industry and facet of business, with sessions exploring its effects on healthcare, cybersecurity, fraud mitigation, ethics and equity, business intelligence, financial services, and more. Leaders in AI partnered with Insight on the event, including 6sense, Acceldata, BigPanda, Clarify Health, Cognigy, Expressive, Explorium, Featurespace, intenseye, Pecan, Quantum Metric, Rasgo, Relevance AI, RudderStack, Run:AI, SentinelOne, Slim.AI, StormForge, Turing, Weights & Biases, WINT, Zest AI.ScaleUp:AI is the first in a series of Insight Partners global conferences and was specifically designed for the artificial intelligence (AI) community. With over 1,700 attendees in person and virtually, from more than 50 countries across the globe, the event was packed with insights and key takeaways, including:Why "good data" can be more important than "big data" as the world moves towards a more data-centric versus model-centric view of AI.The importance of explainability and bias detection before any model gets into production at scale.The value of strategically combining human and machine intelligence and why "symbiotic intelligence" is the best of both worlds.Why the evolution of AI systems is going to happen more quickly than the evolution of traditional software, and how we are truly at the beginning of a scale up inflection point.ScaleUp: AI speakers (in alphabetical order):Ali Ghodsi, CEO at Databricks Allie K. Miller, Global Head of Machine Learning BD, Startups and Venture Capital at AWS Andrew Ng, Co-Founder Google Brain + CEO at Landing AI Alex Dalyac, Founder & CEO, Tractable Amir Orad, CEO, Sisense Anita Lynch, Chief Data Officer | Corporate Board Director | Investor Arvind Purushotham, Managing Director & Global Head, Venture Investing, Citi Asmau Ahmed, Explorer, Alphabet X Brad Peterson, CTO & CIO, NASDAQ Chase Ginther, Machine Learning & Data Scientist Platform Architect, Snowflake Christina Montgomery, Vice President & Chief Privacy Officer, IBM Clement Delangue, CEO, Hugging Face Danny Tobey, M.D., J.D., Partner, DLA Piper David Kirkpatrick, Founder & Editor-in-Chief, Techonomy Gabi Steele, Founder, Preql Ganesh Bell, Managing Director, Insight Partners Gayatri Narayan, SVP Digital Products and Services, PepsiCo George Mathew, Managing Director, Insight Partners Heather Carroll Cox, Chief Digital Health & Analytics Officer, Humana Janice Tse, Senior Director, Data Science, PayPal Jared Dunnmon, Technical Director, AI/ML Portfolio, Defense Innovation Unit Jason Zintak, CEO, 6sense Jay Budzik, CTO, Zest AI Jerry Overton, CEO, Applied AI Studio Jillian D'Onfro, Enterprise Editor, Insight Partners Joe McKendrick, Analyst & Contributor, Forbes Jon Krohn, Chief Data Scientist, Nebula Jonathan Rosenbaum, Managing Director, Insight Partners Karen Snow, SVP, Head of US Listings and Revenue, NASDAQ Karthik Tadinada, Chief Operating Officer, FeatureSpace Krishna Gade, CEO at Fiddler AI Leah Weiss, Founder, Preql Lonne Jaffe, Managing Director, Insight Partners Lydia Dishman, Staff Editor, Fast Company Manoj Saxena, Executive Chairman, Responsible Artificial Intelligence Institute & Cognitive Scale Neil Ackerman, Head of Global Supply Chain, Johnson & Johnson Nicholas Warner, Chief Operating Officer, SentinelOne Nick Sinai, Senior Advisor, Insight Partners Nicole Wong, Former Deputy US Chief Technology Officer Nikki Parker, Insight Partners Patrick Dougherty, Co-Founder & CTO, Rasgo Prag Sharma, Global Head of Artificial Intelligence, Citi Praveen Akkiraju, Managing Director, Insight Partners Ryan Hinkle, Managing Director, Insight Partners Scott Barclay, Managing Director, Insight Partners Scott Southwood, Chief Revenue Officer, Quantum Metric Seth Dobrin, PhD., Global Chief AI Officer, IBM Siobhan Savage, Co-Founder & CEO, Reejig Sirisha Kadamalakalva, Chief Strategy Officer, DataRobot Suchi Saria, PhD., Founder & CEO, Bayesian Health Supriya Gupta, Head of Recommendations, Credit Karma Svetlana Sicular, VP Analyst, Gartner Tomas Pfister, Head of AI Research, Google Cloud Vaibhav Sahgal, Principal, Americas, Economist Impact Vittorio Cretella, Chief Information Officer, P&G Wardah Inam, Co-Founder & CEO, Overjet Willian Falcon, Founder & CEO, Grid.ai Yan-David Erlich, CRO & COO, Weights & BiasesAbout ScaleUp SeriesThe ScaleUp Series which features topics like AI, healthcare, and cybersecurity is designed to guide and influence leaders, decision makers, and doers who are ready to soar, with the support of the software industry's most trusted ScaleUp partner, Insight Partners. Learn more at scaleup.events.About Insight PartnersInsight Partners is a global software investor partnering with high-growth technology, software, and Internet startup and ScaleUp companies that are driving transformative change in their industries. As of February 24, 2022, the closing of the firm's recent fundraise, Fund XII, brings Insight Partners regulatory assets under management to over $90B. Insight Partners has invested in more than 600 companies worldwide and has seen over 55 portfolio companies achieve an IPO. Headquartered in New York City, Insight has offices in London, Tel Aviv, and Palo Alto. Insight's mission is to find, fund, and work successfully with visionary executives, providing them with practical, hands-on software expertise to foster long-term success. Insight Partners meets great software leaders where they are in their growth journey, from their first investment to IPO. For more information on Insight and all its investments, visit insightpartners.com or follow us on Twitter @insightpartners.View original content to download multimedia:https://www.prnewswire.com/news-releases/insight-partners-celebrates-inaugural-scaleup-series-with-ai-conference-scaleupai-301523223.htmlSOURCE Insight Partners | Unknown | Business and Financial Operations/Healthcare Practitioners and Support | null | null | null | null | null | null |
news | Vantage Market Research | Global Automotive Artificial Intelligence (AI) Market is Forecast to Grow to US$7,676.92 Million by 2028, with a CAGR of 31.30% in the 2022-2028 period | Vantage Market Research | WASHINGTON, March 22, 2022 (GLOBE NEWSWIRE) -- According to a new market research report " Automotive Artificial Intelligence Market Size, Share & Trends... | https://finance.yahoo.com/news/global-automotive-artificial-intelligence-ai-084000442.html | https://s.yimg.com/uu/api/res/1.2/.1l2dRhMAr9ckt20pkfOtw--~B/aD0yNTY7dz01Mzk7YXBwaWQ9eXRhY2h5b24-/https://media.zenfs.com/en/globenewswire.com/6f880bf126de30dfde36722a390df3e0 | 2022-03-22T08:40:00Z | Vantage Market Research, The North Star for the Working WorldWASHINGTON, March 22, 2022 (GLOBE NEWSWIRE) -- According to a new market research report " Automotive Artificial Intelligence Market Size, Share & Trends Analysis Report by Offering (Hardware, Software), by Technology (Deep Learning, Machine Learning, Context Awareness, Computer Vision), by Process (Signal Recognition, Image Recognition, Data Mining), by Application (HumanMachine Interface, Semi-autonomous Driving, Autonomous Driving), by Region (North America, Europe, Asia Pacific, Latin America and Middle East & Africa) - Global Industry Assessment (2016 - 2021) & Forecast (2022 - 2028)", published by Vantage Market Research, the global post COVID-19 market size of the Automotive Artificial Intelligence Market is expected to grow from USD 1,498.30 Million in 2021 to USD 7,676.92 Million by 2028 at a CAGR of 31.30% during the forecast period.Market Synopsis-Artificial intelligence (AI) is a cutting-edge computer science technology. It shares similarities with human intelligence in terms of language comprehension, reasoning, learning, problem solving. In the development and revision of technology, market manufacturers face enormous intellectual challenges during the forecast period. Furthermore, the expansion of the automotive industry is expected to drive the Automotive Artificial Intelligence Market during the forecast period. The automotive industry has recognized the potential of artificial intelligence and is one of the major industries that employs AI to augment and mimic human action which is the major factor driving the growth of Automotive Artificial Intelligence Market during the forecast period. The growing demand for self-driving cars, as well as a preference for enhanced user experience and convenience features, are propelling the Automotive Artificial Intelligence Market forward during the forecast period. A rise in demand for premium vehicles, on the other hand, is expected to provide the market with high-paying growth opportunities in upcoming years. On the other hand, the threat of hackers and the rise in cybercrime are expected to limit market growth during the forecast period.Please Check Out Our Free Sample Reports and Make a More Informed Decision: Get Access to a Free Copy of Our Latest Sample Report @ https://www.vantagemarketresearch.com/automotive-artificial-intelligence-market-1381/request-sample(Sample reports are a great way to test our in-depth reports or study before you make a purchase)The newly updated, 140+ page reports provide an in-depth analysis of the COVID-19 virus and pandemic.Using industry data and interview with experts, you can learn about topics such as regional impact analysis, global forecast, competitive landscape analysis, size & share of regional markets.We offer these reports in PDF format so you can read them on your computer and print them out.Free sample includes, Industry Operating Conditions, Industry Market Size, Profitability Analysis, SWOT Analysis, Industry Major Players, Historical and Forecast, Growth Porter's 5 Forces Analysis, Revenue Forecasts, Industry Trends, Industry Financial Ratios.The report also presents the country-wise and region-wise analysis of the Vantage Market Research and includes a detailed analysis of the key factors affecting the growth of the market.Sample Report further sheds light on the Major Market Players with their Sales Volume, Business Strategy and Revenue Analysis, to offer the readers an advantage over others.Key Insights & Findings from the Report:According to our primary respondents research, the Automotive Artificial Intelligence market is predicted to grow at a CAGR of roughly 31.30% during the forecast period.The Automotive Artificial Intelligence market was estimated to be worth roughly USD 1,498.30 Million in 2021 and is expected to reach USD 7,676.92 Million by 2028; based on primary research.On the basis of region, Asia-Pacific is projected to dominate the worldwide Automotive Artificial Intelligence market.List of Prominent Players in the Automotive Artificial Intelligence Market:NVIDIA Corporation (US)Alphabet Inc. (US)Intel Corporation (US)IBM Corporation (US)Microsoft Corporation (US)Harman International Industries Inc. (US)Xilinx Inc. (US)Qualcomm Inc. (US)Tesla Inc. (US)Volvo Car Corporation (Sweden)BMW AG (Germany)Audi AG (Germany)General Motors Company (US)Ford Motor Company (US)Toyota Motor Corporation (Japan)Hyundai Motor Corporation (South Korea)Uber Technologies Inc. (US)Honda Motor Co. Ltd. (Japan)Daimler AG (Germany)Didi Chuxing (China)AI motive (Hungary)Nauto Inc. (US)Nu Tonomy (US)Argo AI (US)drive.ai (US)Purchase This Premium Report Now @ https://www.vantagemarketresearch.com/buy-now/automotive-artificial-intelligence-market-1381/0Benefits of Purchasing Automotive Artificial Intelligence Market Reports: Customer Satisfaction: Our team of experts assists you with all your research needs and optimizes your reports.Analyst Support: Before or after purchasing the report, ask a professional analyst to address your questions.Assured Quality: Focuses on accuracy and quality of reports.Incomparable Skills: Analysts provide in-depth insights into reports.Market Dynamics:Drivers-Autonomous vehicles are gaining popularity around the world due to features such as automatic parking, self-driving, autopilot, and others that reduce human effort while driving. Furthermore, such autonomous vehicles are supported by some of the most technologically advanced companies, including Nvidia, Intel, and Tesla, and thus the likelihood of failure is low. Tesla's autopilot system, for example, is one of the most advanced systems available in the Automotive Artificial Intelligence Market, with features such as keeping the vehicle within a lane while driving, auto changing lanes when necessary, self-parking, are the major factors driving the growth of Automotive Artificial Intelligence Market in the upcoming years.Challenges:Autonomous vehicles have made consumers' lives safer and more convenient. However, manufacturers face certain threats related to security and privacy concerns, such as personal data security, cyber-attacks, and driver distractions which are the major factors hindering the growth of the market. For instance, approximately 1.4 million Fiat Chrysler vehicles were recalled in the United States due to a flaw in their dashboard computers that allowed hackers to disable the vehicle. Similarly, white hat hackers were successful in inserting malware into the Tesla car's system.Read Full Research Report @ https://www.vantagemarketresearch.com/industry-report/automotive-artificial-intelligence-market-1381Regional Trends:Among other regions, Asia Pacific region acquires the largest market share with steady growth over the forecast period. Increased sales of luxury passenger vehicles in the region, increased consumer positive perception of AI, and increased consumer disposable income are expected to drive market growth during the forecast period. Several other instances, similar to the ones mentioned above, point to the positive growth of the Asia Pacific Automotive Artificial Intelligence Market in years to come. Furthermore, growing adoption of intelligent-based solutions for automating operations, as well as increased digitalization, are all contributing to market growth in this region. Besides, North America had a substantial share in the market due to developed economy and early adoption of technology Increased adoption of electric and self-driving cars, increased implementation of driver-assist technologies such as voice command and ADAS systems, and increased consumer per capita income are expected to drive market growth in the coming years.Europe is expected to account for a significant share of the market and to grow rapidly over the forecast period, driven by factors such as increased demand for convenience features such as advanced driver assistance systems (ADAS) and adaptive cruise control (ACC) in semi-autonomous vehicles, increased adoption of electric vehicles, and the presence of major market players in the region.Recent Developments:In May 2021, Didi Chuxing has announced a strategic partnership with Volvo Cars on self-driving vehicles for DiDi's test fleet. Volvo Cars' autonomous drive-ready XC90 vehicles will be the first to feature DiDi Gemini, a new self-driving hardware platform outfitted with NVIDIA DRIVE AGX Pegasus. These vehicles, which are outfitted with DiDi's Gemini self-driving hardware platform, will eventually be used in robotaxi services.Browse market data Tables and Figures spread through 145 Pages and in-depth TOC on "Automotive Artificial Intelligence Market Size, Share & Trends Analysis Report by Offering (Hardware, Software), by Technology (Deep Learning, Machine Learning, Context Awareness, Computer Vision), by Process (Signal Recognition, Image Recognition, Data Mining), by Application (HumanMachine Interface, Semi-autonomous Driving, Autonomous Driving), by Region (North America, Europe, Asia Pacific, Latin America and Middle East & Africa) - Global Industry Assessment (2016 - 2021) & Forecast (2022 - 2028)" View detailed Research Report here https://www.vantagemarketresearch.com/press-release/automotive-artificial-intelligence-market-market-503877Key questions answered in the report: Which regional market will show the highest and rapid growth?Which are the top five players of the Automotive Artificial Intelligence Market?How will the Automotive Artificial Intelligence Market change in the upcoming six years?Which application and product will take a lions share of the Automotive Artificial Intelligence Market?What is the Automotive Artificial Intelligence market drivers and restrictions?What will be the CAGR and size of the Automotive Artificial Intelligence Market throughout the forecast period?The report on the Automotive Artificial Intelligence Market highlights:Assessment of the marketPremium InsightsCompetitive LandscapeCOVID Impact AnalysisHistoric Data, Estimates, and ForecastCompany ProfilesGlobal and Regional DynamicsThis market titled Automotive Artificial Intelligence Market will cover exclusive information in terms of Geographic Segmentation, Forecast, Regional Analysis, Key Market Trends, and various others as mentioned below:ParameterDetailsMarket Size in 2021USD 1,498.30 MillionProjected Market Size in 2028USD 7,676.92 MillionCAGR Growth RateCAGR of 31.30% From 2022 2028 Market Size Provided for Years2016-2028Base Year2021Historic Years2016-2020Forecast Years2022-2028Segments Covered OfferingHardwareSoftwareTechnologyDeep LearningMachine LearningContext AwarenessComputer VisionNatural Language ProcessingProcessSignal RecognitionImage RecognitionData MiningApplicationHumanMachine InterfaceSemi-autonomous DrivingAutonomous DrivingRegion & Counties Covered North AmericaU.S.CanadaMexicoEuropeU.KFranceGermanyItalySpainRest Of EuropeAsia PacificChinaJapanIndiaSouth KoreaSouth East AsiaRest Of Asia PacificLatin AmericaBrazilArgentinaRest Of Latin AmericaMiddle East & AfricaGCC CountriesSouth AfricaRest Of Middle East & AfricaCompanies Covered NVIDIA Corporation (US)Alphabet Inc. (US)Intel Corporation (US)IBM Corporation (US)Microsoft Corporation (US)Harman International Industries Inc. (US)Xilinx Inc. (US)Qualcomm Inc. (US)Tesla Inc. (US)Volvo Car Corporation (Sweden)BMW AG (Germany)Audi AG (Germany)General Motors Company (US)Ford Motor Company (US)Toyota Motor Corporation (Japan)Hyundai Motor Corporation (South Korea)Uber Technologies Inc. (US)Honda Motor Co. Ltd. (Japan)Daimler AG (Germany)and Didi Chuxing (China)AI motive (Hungary)Nauto Inc. (US)nuTonomy (US)Argo AI (US)and drive.ai (US).Report CoverageMarket growth drivers, restraints, opportunities, Porters five forces analysis, PEST analysis, value chain analysis, regulatory landscape, technology landscape, patent analysis, market attractiveness analysis by segments and North America, company market share analysis, and COVID-19 impact analysisCustomization of the Report:The report can be customized as per client needs or requirements. For any queries, you can contact us on [email protected] or +1 (202) 380-9727. Our sales executives will be happy to understand your needs and provide you with the most suitable reports.Download Free Sample Report Now @ https://www.vantagemarketresearch.com/automotive-artificial-intelligence-market-1381/request-sampleBrowse More Related Report:About Vantage Market Research:We, at Vantage Market Research, provide quantified B2B high quality research on more than 20,000 emerging markets, in turn, helping our clients map out constellation of opportunities for their businesses. We, as a competitive intelligence market research and consulting firm provide end to end solutions to our client enterprises to meet their crucial business objectives. Our clientele base spans across 70% of Global Fortune 500 companies. The company provides high quality data and market research services. The company serves various enterprises and clients in a wide variety of industries. The company offers detailed reports on multiple industries including Chemical Materials and Energy, Food and Beverages, Healthcare Technology, etc. The companys experienced team of Analysts, Researchers, and Consultants use proprietary data sources and numerous statistical tools and techniques to gather and analyse information.Follow Us on LinkedIn: https://www.linkedin.com/company/vantage-market-research/Follow Us on Twitter: https://twitter.com/vantagemarketrContact usEric Kunz6218 Georgia Avenue NW Ste 1 - 564Washington DC 20011-5125United States Tel: +1 202 380 9727Email: [email protected]: https://www.vantagemarketresearch.com/Latest Vantage Market Research Press Releases @https://www.vantagemarketresearch.com/insight/press-releasesLatest Vantage Market Research Blog @ https://www.vantagemarketresearch.com/insight/blogsBlog: | Prediction/Decision Making/Recommendation | Unknown | null | null | null | null | null | null |
news | Kunal Gautam | New features from Apache Hudi 0.9.0 on Amazon EMR | Apache Hudi is an open-source transactional data lake framework that greatly simplifies incremental data processing and data pipeline development. It does this by providing transaction support and record-level insert, update, and delete capabilities on data lakes on Amazon Simple Storage Service (Amazon S3) or Apache HDFS. Apache Hudi is integrated with open-source big data analytics […] | https://aws.amazon.com/blogs/big-data/new-features-from-apache-hudi-0-9-0-on-amazon-emr/ | 2022-04-04T16:35:50Z | Apache Hudi is an open-source transactional data lake framework that greatly simplifies incremental data processing and data pipeline development. It does this by providing transaction support and record-level insert, update, and delete capabilities on data lakes on Amazon Simple Storage Service (Amazon S3) or Apache HDFS. Apache Hudi is integrated with open-source big data analytics frameworks, such as Apache Spark, Apache Hive, Presto, and Trino. Furthermore, Apache Hudi lets you maintain data in Amazon S3 or Apache HDFS in open formats such as Apache Parquet and Apache Avro.Common use cases where we see customers use Apache Hudi are as follows:To simplify data ingestion pipelines that deal with late-arriving or updated records from streaming and batch data sources.To ingest data using Change Data Capture (CDC) from transactional systems.To implement data-deletion pipelines to comply with data privacy regulations, e.g., GDPR (General Data Protection Regulation) compliance. Conforming to GDPR is a necessity of todays modern data architectures, which includes the features of right to erasure or right to be forgotten, and it can be implemented using Apache Hudi capabilities in place of deletes and updates.We are excited to announce that Apache Hudi 0.9.0 is available on Amazon EMR 5.34 and EMR 6.5.0. This is a major release, which includes Spark SQL DML and DDL support as its highlight, along with several other writer/reader side improvements. The 3x query performance improvement that we observe over Hudi 0.6.0 is especially remarkable so if you are looking to implement a transactional data lake with record level upserts and deletes or are using an older version of Hudi, this is a great version to use. In this post, well focus on the following new features and improvements that come with the 0.9.0 release:Spark SQL DML and DDL Support: Explore Spark SQL DML and DDL support.Performance Improvements: Explore the performance improvements and new performance related features introduced on the writer and query side.Additional Features: Explore additional useful features, such as Amazon DynamoDB-based locks for Optimistic Concurrency Control (OCC), delete partitions operation, etc.Spark SQL DML and DDL supportThe most exciting new feature is that Apache Hudi 0.9.0 adds support for DDL/DMLs using Spark SQL. This takes a huge step toward making Hudi more easily accessible, operable by all people (non-engineers, analysts, etc.). Moreover, it enables existing datasets to be easily migrated to Apache Hudi tables, and it takes a step closer to a low-code paradigm using Spark SQL DML and DDL hence eliminating the need to write scala/python code.Users can now create tables using CREATE TABLE....USING HUDI and CREATE TABLE .. AS SELECT SQL statements to directly manage tables in AWS Glue catalog.Then, users can use INSERT, UPDATE, MERGE INTO, and DELETE SQL statements to manipulate data. The INSERT OVERWRITE statement can be used to overwrite existing data in the table or partition for existing batch ETL pipelines.Lets run through a quick example where we create a Hudi table amazon_customer_review_hudi resembling the schema of Amazon Customer reviews Public Dataset and perform the following activities:Pre-requisite: Create Amazon Simple Storage Service (S3) Buckets s3://EXAMPLE-BUCKET and s3://EXAMPLE-BUCKET-1Create a partitioned Hudi table amazon_product_review_hudiCreate a source Hudi table amazon_customer_review_parquet_merge_source with contents that will be merged with the amazon_product_review_hudi tableInsert data into amazon_customer_review_parquet_merge_source and amazon_product_review_hudi as well as perform a merge operation by reading the data fromamazon_customer_review_parquet_merge_source and merging with the Hudi table amazon_product_review_hudiPerform a delete operation on amazon_customer_review_hudi over the previously inserted recordsConfigure Spark SessionWe use the following script via EMR studio notebook, to configure Spark Session to work with Apache Hudi DML and DDL support. The following examples demonstrate how to launch the interactive Spark shell, use Spark submit, or use Amazon EMR Notebooks to work with Hudi on Amazon EMR. We recommend launching your EMR cluster with the following Apache Livy configuration:[ { "Classification": "livy-conf", "Properties": { "livy.file.local-dir-whitelist": "/usr/lib/hudi" } }]The above configuration lets you directly refer to the local /usr/lib/hudi/hudi-spark-bundle.jar on the EMR leader node while configuring the Spark session. Alternatively, you can also copy /usr/lib/hudi/hudi-spark-bundle.jar over to an HDFS location and refer to that while initializing Spark session. Here is a command for initializing the Spark session from a notebook:%%configure -f{ "conf" : { "spark.jars":"file:///usr/lib/hudi/hudi-spark-bundle.jar", "spark.serializer":"org.apache.spark.serializer.KryoSerializer", "spark.sql.extensions":"org.apache.spark.sql.hudi.HoodieSparkSessionExtension" }}Create a TableLets create the following Apache Hudi tables amazon_customer_review_hudi and amazon_customer_review_parquet_merge_sourceamazon_customer_review_hudi and amazon_customer_review_parquet_merge_source%%sql /****************************Create a HUDI table having schema same as of Amazon customer reviews table containing selected columns *****************************/-- Hudi 0.9.0 configuration https://hudi.apache.org/docs/configurations-- Hudi configurations can be set in options block as hoodie.datasource.hive_sync.assume_date_partitioning = 'false',create table if not exists amazon_customer_review_hudi ( marketplace string, review_id string, customer_id string, product_title string, star_rating int, timestamp long , review_date date, year string, month string , day string ) using hudi location 's3://EXAMPLE-BUCKET/my-hudi-dataset/' options ( type = 'cow', primaryKey = 'review_id', preCombineField = 'timestamp', hoodie.datasource.write.hive_style_partitioning = 'true' ) partitioned by (year,month,day);-- Change Location 's3://EXAMPLE-BUCKET/my-hudi-dataset/' to appropriate S3 bucket you have created in your AWS account%%sql /****************************Create amazon_customer_review_parquet_merge_source to be used as source for merging into amazon_customer_review_hudi.The table contains deleteRecord column to track if deletion of record is needed*****************************/create table if not exists amazon_customer_review_parquet_merge_source ( marketplace string, review_id string, customer_id string, product_title string, star_rating int, review_date date, deleteRecord string ) STORED AS PARQUET LOCATION 's3://EXAMPLE-BUCKET-1/toBeMergeData/'-- Change Location (s3://EXAMPLE-BUCKET-1/toBeMergeData/') to appropriate S3 bucket you have created in your AWS accountFor comparison if, amazon_customer_review_hudi was to be created using programmatic approach the PySpark sample code is as follows.# Create a DataFrameinputDF = spark.createDataFrame( [ ("Italy", "11", "1111", "table", 5, 1648126827, "2015/05/02", "2015", "05", "02"), ("Spain", "22", "2222", "chair", 5, 1648126827, "2015/05/02", "2015", "05", "02") ], ["marketplace", "review_id", "customer_id", "product_title", "star_rating", "timestamp", "review_date", "year", "month", "day" ])# Print Schema of inputDF inputDF.printSchema()# Specify common DataSourceWriteOptions in the single hudiOptions variablehudiOptions = {"hoodie.table.name": "amazon_customer_review_hudi","hoodie.datasource.write.recordkey.field": "review_id","hoodie.datasource.write.partitionpath.field": "year,month,day","hoodie.datasource.write.precombine.field": "timestamp","hoodie.datasource.write.hive_style_partitioning": "true", "hoodie.datasource.hive_sync.enable": "true","hoodie.datasource.hive_sync.table": " amazon_customer_review_hudi","hoodie.datasource.hive_sync.partition_fields": "year,month,day","hoodie.datasource.hive_sync.partition_extractor_class": "org.apache.hudi.hive.MultiPartKeysValueExtractor"}# Create Hudi table and insert data into my_hudi_table_1 hudi table at the S3 location specified inputDF.write \ .format("org.apache.hudi")\ .option("hoodie.datasource.write.operation", "insert")\ .options(**hudiOptions)\ .mode("append")\ .save("s3://EXAMPLE-BUCKET/my-hudi-dataset/") Insert data into the Hudi tablesLets insert records into the table amazon_customer_review_parquet_merge_source to be used for the merge operation. This includes inserting a row for fresh insert, update, and delete.%%sql /**************************** Insert a record into amazon_customer_review_parquet_merge_source for deletion *****************************/-- The record will be deleted from amazon_customer_review_hudi after merge as deleteRecord is set to yesinsert into amazon_customer_review_parquet_merge_source select 'italy', '11', '1111', 'table', 5, TO_DATE(CAST(UNIX_TIMESTAMP('2015/05/02', 'yyyy/MM/dd') AS TIMESTAMP)) as review_date, 'yes' %%sql/**************************** Insert a record into amazon_customer_review_parquet_merge_source used for update*****************************/-- The record will be updated from amazon_customer_review_hudi with new Star rating and product_title after mergeinsert into amazon_customer_review_parquet_merge_source select 'spain', '22', '2222', 'Relaxing chair', 4, TO_DATE(CAST(UNIX_TIMESTAMP('2015/05/02', 'yyyy/MM/dd') AS TIMESTAMP)) as review_date, 'no' %%sql/**************************** Insert a record into amazon_customer_review_parquet_merge_source for insert *****************************/-- The record will be inserted into amazon_customer_review_hudi after merge insert into amazon_customer_review_parquet_merge_source select 'uk', '33', '3333', 'hanger', 3, TO_DATE(CAST(UNIX_TIMESTAMP('2015/05/02', 'yyyy/MM/dd') AS TIMESTAMP)) as review_date, 'no' Now lets insert records into the amazon_customer_review_hudi table used as the destination table for the merge operation.%%sql/**************************** Insert a record into amazon_customer_review_hudi table for deletion after merge *****************************/-- Spark SQL date time functions https://spark.apache.org/docs/latest/api/sql/index.html#date_addinsert into amazon_customer_review_hudi select 'italy', '11', '1111', 'table', 5, unix_timestamp(current_timestamp()) as timestamp, TO_DATE(CAST(UNIX_TIMESTAMP('2015/05/02', 'yyyy/MM/dd') AS TIMESTAMP)) as review_date, date_format(date '2015-05-02', "yyyy") as year, date_format(date '2015-05-02', "MM") as month, date_format(date '2015-05-02', "dd") as day %%sql/**************************** Insert a record into amazon_customer_review_hudi table for update after merge *****************************/insert into amazon_customer_review_hudi select 'spain', '22', '2222', 'chair ', 5, unix_timestamp(current_timestamp()) as timestamp, TO_DATE(CAST(UNIX_TIMESTAMP('2015/05/02', 'yyyy/MM/dd') AS TIMESTAMP)) as review_date, date_format(date '2015-05-02', "yyyy") as year, date_format(date '2015-05-02', "MM") as month, date_format(date '2015-05-02', "dd") as day Merge intoLets perform the merge from amazon_customer_review_parquet_merge_source into amazon_customer_review_hudi.%%sql /*************************************MergeInto : Merge Source Into Traget **************************************/-- Source amazon_customer_review_parquet_merge_source -- Taget amazon_customer_review_hudimerge into amazon_customer_review_hudi as targetusing ( select marketplace, review_id, customer_id, product_title, star_rating, review_date, deleteRecord, date_format(review_date, "yyyy") as year, date_format(review_date, "MM") as month, date_format(review_date, "dd") as day from amazon_customer_review_parquet_merge_source ) sourceon target.review_id = source.review_id when matched and deleteRecord != 'yes' then update set target.timestamp = unix_timestamp(current_timestamp()), target.star_rating = source.star_rating, target.product_title = source.product_titlewhen matched and deleteRecord = 'yes' then deletewhen not matched then insert ( target.marketplace, target.review_id, target.customer_id, target.product_title, target.star_rating, target.timestamp , target.review_date, target.year , target.month , target.day ) values ( source.marketplace, source.review_id, source.customer_id, source.product_title, source.star_rating, unix_timestamp(current_timestamp()), source.review_date, source.year , source.month , source.day )Considerations and LimitationsThe merge-on condition can only be applied on primary key as of now.-- The merge condition is possible only on primary keyson target.review_id = source.review_idSupport for partial updates is supported for the Copy on Write (CoW) table, but it isnt supported for the Merge on Read (MoR) tables.The target tables fields cannot be the right-value of the update expression for the MoR table:-- The update will result in an error as target columns are present on right hand side of the expression update set target.star_rating = target.star_rating +1 Delete a RecordNow lets delete the inserted record.%%sql/*************************************Delete the inserted record from amazon_customer_review_hudi table **************************************/Delete from amazon_customer_review_hudi where review_id == '22'%%sql /*************************************Query the deleted record from amazon_customer_review_hudi table **************************************/select * from amazon_customer_review_hudi where review_id == '22'Schema EvolutionHudi supports common schema evolution scenarios, such as adding a nullable field or promoting the datatype of a field. Lets add a new column ssid (type int) to existing amazon_customer_review_hudi table, and insert a record with extra column. Hudi allows for querying both old and new data with the updated table schema.%%sql/*************************************Adding a new column name ssid of type int to amazon_customer_review_hudi table**************************************/ALTER TABLE amazon_customer_review_hudi ADD COLUMNS (ssid int)%%sql/*************************************Adding a new record to altered table amazon_customer_review_hudi **************************************/insert into amazon_customer_review_hudi select 'germany', '55', '5555', 'car', 5, unix_timestamp(current_timestamp()) as timestamp, TO_DATE(CAST(UNIX_TIMESTAMP('2015/05/02', 'yyyy/MM/dd') AS TIMESTAMP)) as review_date, 10 as ssid, date_format(date '2015-05-02', "yyyy") as year, date_format(date '2015-05-02', "MM") as month, date_format(date '2015-05-02', "dd") as day %%sql /*************************************Promoting ssid type from int to long **************************************/ALTER TABLE amazon_customer_review_hudi CHANGE COLUMN ssid ssid long%%sql /*************************************Querying data from amazon_customer_review_hudi table**************************************/select * from amazon_customer_review_hudi where review_id == '55'Spark Performance ImprovementsQuery Side ImprovementsApache Hudi tables are now registered with the metastore as Spark Data Source tables. This enables Spark SQL queries on Hudi tables to use Sparks native Parquet Reader in case of Copy on Write tables, and Hudis custom MergeOnReadSnapshotRelation in case of Merge on Read tables. Therefore, it no longer depends on Hive Input Format fallback within Spark, which isnt as maintained and efficient as Sparks native readers. This unlocks many optimizations, such as the use of Sparks native parquet readers, and implementing Hudis own Spark FileIndex implementation. The File Index helps improve file listing performance via optimized caching, support for partition pruning, as well as the ability to list files via Hudi metadata table (instead of listing directly from Amazon S3). In addition, Hudi now supports time travel query via Spark data source, which lets you query snapshot of the dataset as of a historical time instant.Other important things to note are:Configurations such as spark.sql.hive.convertMetastoreParquet=false and mapreduce.input.pathFilter.class=org.apache.hudi.hadoop.HoodieROTablePathFilter are no longer needed while querying via Spark SQL.Now you can use a non-globbed query path when querying Hudi datasets via Data Source API. This lets you query the table via base path without having to specify * in the query path.We ran a performance benchmark derived from the 3 TB scale TPC-DS benchmark to determine the query performance improvements for Hudi 0.9.0 on EMR 6.5.0, relative to Hudi 0.6.0 on EMR 6.2.0 (at the beginning of 2021) for Copy on Write tables. The queries were run on 5-node c5.9xlarge EMR clusters.In terms of Geometric Mean, the queries with Hudi 0.9.0 are three times faster than they were with Hudi 0.6.0. The following graphs compare the total aggregate runtime and geometric mean of runtime for all of the queries in the TPC-DS 3 TB query dataset between the two Amazon EMR/Hudi releases (lower is better):In terms of Geometric Mean the queries with Hudi 0.9.0 are 3 times faster than they were with Hudi 0.6.0.Writer side improvementsVirtual Keys SupportApache Hudi maintains metadata by adding additional columns to the datasets. This lets it support upsert/delete operations and various capabilities around it, such as incremental queries, compaction, etc. These metadata columns (namely _hoodie_commit_time, _hoodie_record_key, _hoodie_partition_path, _hoodie_file_name and _hoodie_commit_seqno) let Hudi uniquely identify a record, the partition/file in which a record exists, and the latest commit that updated a record.However, generating and maintaining these metadata columns increases the storage footprint for Hudi tables on disk. Some of these columns, such as _hoodie_record_key and _hoodie_partition_path, can be constructed from other data columns already stored in the datasets. Apache Hudi 0.9.0 has introduced support for Virtual Keys. This lets users disable the generation of these metadata columns, and instead depend on actual data columns to construct the record key/partition paths dynamically using appropriate key generators. This helps in reducing the storage footprint, as well as improving ingestion time. However, this feature comes with the following caveats:This is only meant to be used for Append Only / Immutable data. It cant be used for use cases requiring upserts and deletes, which requires metadata columns such as _hoodie_record_key and _hoodie_partition_path for bloom indexes to work.Incremental queries will not be supported, because they need _hoodie_commit_time to filter records written/updated at a specific time.Once this feature is enabled, it cant be turned off for an existing table.The feature is turned off by default, and it can be enabled by setting hoodie.populate.meta.fields to false. We measured the write performance and storage footprint improvements using Bulk Insert with public Amazon Customer Reviews dataset. Here is the code snippet that we used:import org.apache.hudi.DataSourceWriteOptionsimport org.apache.hudi.config.HoodieWriteConfigimport org.apache.spark.sql.SaveModevar srcPath = "s3://amazon-reviews-pds/parquet/"var tableName = "amazon_reviews_table"var tablePath = "s3://<bucket>/<prefix>/" + tableNameval inputDF = spark.read.format("parquet").load(srcPath)inputDF.write.format("hudi") .option(HoodieWriteConfig.TABLE_NAME, tableName) .option(DataSourceWriteOptions.OPERATION_OPT_KEY, DataSourceWriteOptions.BULK_INSERT_OPERATION_OPT_VAL) .option(DataSourceWriteOptions.TABLE_TYPE_OPT_KEY, DataSourceWriteOptions.COW_TABLE_TYPE_OPT_VAL) .option(DataSourceWriteOptions.RECORDKEY_FIELD_OPT_KEY, "review_id") .option(DataSourceWriteOptions.PARTITIONPATH_FIELD_OPT_KEY, "product_category") .option(DataSourceWriteOptions.PRECOMBINE_FIELD_OPT_KEY, "review_date") .option("hoodie.populate.meta.fields", "<true/false>") .mode(SaveMode.Overwrite) .save(tablePath)The experiment was run on a four node c4.2xlarge EMR cluster (one leader, three core). We observed a 10.63% improvement in the write runtime performance, and a 8.67% reduction in storage footprint with virtual keys enabled. The following graph compares the bulk insert runtime and table size with and without virtual keys (lower is better):Timeline Server-based Marker MechanismApache Hudi supports the automatic cleaning up of uncommitted data written during write operations. This cleaning is supported by generating marker files corresponding to each data file, which serves as a method to track data files of interest rather than having to scan the entire table by listing all of the files. Although the existing marker mechanism is much more efficient than scanning the entire table for uncommitted data files, it can still have a performance impact for Amazon S3 data lakes. For example, writing a significant number of marker files (one per-data file) and then deleting them following a successful commit could take a non-trivial amount of time, sometimes in the order of several minutes. In addition, it has the potential to hit Amazon S3 throttling limits when a significant number of data/marker files are being written concurrently.Apache Hudi 0.9.0 introduces a new timeline server based implementation of this marker mechanism. This makes it more efficient for Amazon S3 workloads by improving the overall write performance, as well as significantly decreasing the probability of hitting Amazon S3 throttle limits. The new mechanism uses Hudis timeline server component as a central place for processing all of the marker creation/deletion requests (from all executors), which allows for batching of these requests and reducing the number of requests to Amazon S3. Therefore, users with Amazon S3 data lakes can leverage this to improve write operations performance and avoid throttling due to marker files management. It would be especially impactful for scenarios where a significant number of data files (e.g., 10k or more) are being written.This new mechanism is notenabled by default, and it can be enabled by setting hoodie.write.markers.type to timeline_server_based, for the write operation. For more details about the feature, refer to this blog post by the Apache Hudi community.Additional ImprovementsDynamoDB-based LockingOptimistic Concurrency Control was one of the major features introduced with Apache Hudi 0.8.0 to allow multiple concurrent writers to ingest data into the same Hudi table. The feature requires acquiring locks for which either Zookeeper (default on EMR) or Hive Metastore could be used. However, these lock providers require all of the writers to be running on the same cluster on which the Zookeeper/Hive Metastore is running.Apache Hudi 0.9.0 on Amazon EMR has introduced DynamoDB as a lock provider. This would let multiple writers running across different clusters ingest data into the same Hudi table. This feature was originally added to Hudi 0.9.0 on Amazon EMR, and it contributed back to open source Hudi in version 0.10.0. To configure this, the following properties should be set:ConfigurationValueDescriptionRequiredhoodie.write.lock.providerorg.apache.hudi.client.transaction.lock.DynamoDBBasedLockProviderLock Provider implementation to be usedYeshoodie.write.lock.dynamodb.table<String>DynamoDB table name to be used for acquiring locks. If the table doesnt exist, it will be created. The same table can be used across all of your Hudi jobs operating on the same or different tablesYeshoodie.write.lock.dynamodb.partition_key<String>String Value to be used for the locks table partition key attribute. It must be a string that uniquely identifies a Hudi table, such as the Hudi table nameNo. Default: Hudi Table Namehoodie.write.lock.dynamodb.region<String>AWS Region in which the DynamoDB locks table exists, or must be created.No. Default:us-east-1hoodie.write.lock.dynamodb.billing_mode<String>DynamoDB billing mode to be used for the locks table while creating. If the table already exists, then this doesnt have an effectNo. Default:PAY_PER_REQUESThoodie.write.lock.dynamodb.read_capacity<Integer>DynamoDB read capacity to be used for the locks table while creating. If the table already exists, then this doesnt have an effectNo. Default: 20hoodie.write.lock.dynamodb.write_capacity<Integer>DynamoDB write capacity to be used for the locks table while creating. If the table already exists, then this doesnt have an effectNo. Default: 10Furthermore, Optimistic Concurrency Control must be enabled via the following:hoodie.write.concurrency.mode = optimistic_concurrency_controlhoodie.cleaner.policy.failed.writes = LAZYYou can seamlessly configure these properties at the cluster level, using EMR Configurations API with hudi-defaults classification, to avoid having to configure it with every job.Delete partitionsApache Hudi 0.9.0 introduces a DELETE_PARTITION operation for its Spark Data Source API that can be leveraged to delete partitions. Here is a scala example of how to leverage this operation:import org.apache.hudi.DataSourceWriteOptionsimport org.apache.hudi.config.HoodieWriteConfigimport org.apache.spark.sql.SaveModeval deletePartitionDF = spark.emptyDataFramedeletePartitionDF.write.format("hudi") .option(HoodieWriteConfig.TABLE_NAME, "<table name>") .option(DataSourceWriteOptions.OPERATION_OPT_KEY, DataSourceWriteOptions.DELETE_PARTITION_OPERATION_OPT_VAL) .option(DataSourceWriteOptions.PARTITIONS_TO_DELETE.key(), "<partition_value1>,<partition_value2>") .option(DataSourceWriteOptions.TABLE_TYPE_OPT_KEY, DataSourceWriteOptions.COW_TABLE_TYPE_OPT_VAL) .option(DataSourceWriteOptions.RECORDKEY_FIELD_OPT_KEY, "<record key(s)>") .option(DataSourceWriteOptions.PARTITIONPATH_FIELD_OPT_KEY, "<partition field(s)>") .option(DataSourceWriteOptions.PRECOMBINE_FIELD_OPT_KEY, "<precombine key>") .mode(SaveMode.Append) .save("<table path>")However, there is a known issue:Hive Sync fails when performed along with DELETE_PARTITION operation because of a bug. Hive Sync will succeed for any future insert/upsert/delete operation performed following the delete partition operation. This bug has been fixed in Hudi release 0.10.0.Asynchronous ClusteringApache Hudi 0.9.0 introduces support for asynchronous clustering via Spark structured streaming sink and Delta Streamer. This lets users continue ingesting data into the data lake, while the clustering service continues to run in the background to reorganize data for improved query performance and optimal file sizes. This is made possible with the Optimistic Concurrency Control feature introduced in Hudi 0.8.0. Currently, clustering can only be scheduled for partitions that arent receiving any concurrent updates. Additional details on how to get started with this feature can be found in this blog post.ConclusionIn this post, we shared some of the new and exciting features in Hudi 0.9.0 available on Amazon EMR versions 5.34 and 6.5.0 and later. These new features enable the ability for data pipelines to be built solely with SQL statements, thereby making it easier to build transactional data lakes on Amazon S3.As a next step, for a hands on experience on Hudi 0.9.0 on EMR, try out the notebook here on EMR Studio using Amazon EMR version 6.5.0 and let us know your feedback.About the AuthorsKunal Gautam is a Senior Big Data Architect at Amazon Web Services. Having experience in building his own Startup and working along with enterprises, he brings a unique perspective to get people, business and technology work in tandem for customers. He is passionate about helping customers in their digital transformation journey and enables them to build scalable data and advance analytics solutions to gain timely insights and make critical business decisions. In his spare time, Kunal enjoys Marathons, Tech Meetups and Meditation retreats. Gabriele Cacciola is a Senior Data Architect working for the Professional Service team with Amazon Web Services. Coming from a solid Startup experience, he currently helps enterprise customers across EMEA implement their ideas, innovate using the latest tech and build scalable data and analytics solutions to make critical business decisions. In his free time, Gabriele enjoys football and cooking.Udit Mehrotra is a software development engineer at Amazon Web Services and an Apache Hudi PMC member/committer. He works on cutting-edge features of Amazon EMR and is also involved in open-source projects such as Apache Hudi, Apache Spark, Apache Hadoop, and Apache Hive. In his spare time, he likes to play guitar, travel, binge watch, and hang out with friends. | Process Automation/Content Creation | Unknown | null | null | null | null | null | null |
|
news | Conversica | Conversica Announces the Completion of a $25 Million Financing With Morgan Stanley Expansion Capital to Further Its Growth | The principal balance will support strategic initiatives to expand Conversica’s portfolio and customer base of mid-market and enterprise organizationsFOSTER ... | https://finance.yahoo.com/news/conversica-announces-completion-25-million-150000320.html | https://s.yimg.com/uu/api/res/1.2/Lm9UelM6dqAOC9TA80lMwA--~B/aD01NTt3PTI1MDthcHBpZD15dGFjaHlvbg--/https://media.zenfs.com/en/globenewswire.com/8f8565d18017d6304816d31f399bfc66 | 2022-04-05T15:00:00Z | The principal balance will support strategic initiatives to expand Conversicas portfolio and customer base of mid-market and enterprise organizationsFOSTER CITY, Calif., April 05, 2022 (GLOBE NEWSWIRE) -- Conversica, Inc., a leading provider of Conversational AI solutions for enterprise revenue teams, announces the successful completion of a $25 million financing with investment funds managed by Morgan Stanley Expansion Capital. Conversica will use the capital to support strategic business initiatives as it prepares for exponential growth.Today, AI is more than a technology. Its an essential strategy for intelligent automation that we believe will transform customer lifecycles and drive unbridled revenue growth, said Jim Kaskade, CEO of Conversica. Our partnership with Morgan Stanley Expansion Capital and this growth financing positions us to go public in the future. We're now very focused on larger enterprises, who view AI as a critical part of their ability to deliver exceptional digital experiences to their end-customers, at scale. Its going to spark an inflection point for our company and we couldnt be more proud to have the financial backing of one of the largest growth-focused banks on the planet.Conversica provides Conversational AI solutions, uniquely focused on revenue growth versus cost savings. Conversica's Conversational AI helps enterprise marketing, sales and customer success teams attract, acquire and grow customers at scale across the entire customer revenue lifecycle. Conversica is expanding its company portfolio and customer base with a focus on mid-market organizations and large enterprises as they are the primary audiences seeking massive revenue growth from AI and automation. Conversational AI will deliver better conversational experiences, build meaningful customer relationships and bolster revenue at scale.In our increasingly digital world, exceptional customer experiences will be the competitive battleground where organizations vie for market share. AI solutions are key for companies seeking advanced automation as a means of differentiation. We believe that only digital automation leveraging AI will deliver greater personalization, timely response and consistent engagement across all digital touch points. We foresee significant increases in demand for the Conversational AI market. We are thrilled to support Conversica as a leader in this exciting space and are confident in the companys upcoming developments and long-term outlook, said Nick Nocito, Executive Director of Morgan Stanley Expansion Capital.About ConversicaConversica is a leading provider of Conversational AI solutions, uniquely focused on revenue growth. Conversica helps enterprise marketing, sales, and customer success teams attract, acquire and grow customers at scale across the entire customer revenue lifecycle. Conversica AI Assistants serve as digital team members and autonomously engage prospects, existing customers, or partners in human-like, two-way interactions at scale to drive towards the next best action, whether thats scheduling a sales meeting, or gauging interest to buy additional products or services.Processing over a billion interactions, the Conversica Conversational AI platform integrates natural language processing (NLU & NLG), decision & policy management, business process automation and deep learning capabilities to drive customer engagement across multiple digital communication channels and languages. The Conversica platform supports over 50 integrations into the most popular MAP and CRM platforms and offers an open API for custom integration.To learn more, visit conversica.com and follow the company on Twitter, LinkedIn and Facebook.Conversica Media Contacts Edward Lin, on behalf of [email protected] | Digital Assistance/Process Automation | Management/Business and Financial Operations | null | null | null | null | null | null |
news | TipRanks | Artificial Intelligence Is Here; 2 ‘Strong Buy’ Stocks That Stand to Benefit | It’s the nature of investing to look for the ‘next new thing,’ the company or technology or product that will bring the next sea-change to its industry – and... | https://finance.yahoo.com/news/artificial-intelligence-2-strong-buy-122955454.html | https://s.yimg.com/uu/api/res/1.2/pAdjIdWi3yP0wUco3eQhkg--~B/aD00NjM7dz04NTI7YXBwaWQ9eXRhY2h5b24-/https://media.zenfs.com/en/tipranks_452/59a4f68738f88225d520488aa7afe142 | 2022-04-08T12:29:55Z | Its the nature of investing to look for the next new thing, the company or technology or product that will bring the next sea-change to its industry and with it, windfall profits. A look at history will show that these developments are often unpredictable, but they can be recognized early. The advent of digital tech in the late 90s provides a good example of the possible gains and risks. The survivors of the dot.com bubble have prospered mightily.Now, artificial intelligence AI, or machine learning is poised to take the digital world to its next frontier. With applications in virtually every facet of the tech industry, from smartphones to robotics to data analytics, AI is going to change the way we interact with machines, with applications, and perhaps even with each other. The possibilities in this are endless, limited only to human imagination.For investors, this means that new opportunities are going to open up, as companies move to capitalize on AI through product creation, programming, hardware development and perhaps AIs biggest impact hasnt been built yet.For now, however, we can look for firms that already have their hands in AI, either in their own work or in their products. Weve taken two of these names and looked up their details in the TipRanks database. Both are Strong Buys, according to the Streets analysts, with plenty of upside potential in store for 2022. Let's take a closer look.One Stop Systems (OSS)Well start with One Stop Systems, a provider of the high performance computers (HPCs) used in the highest-end data centers and data storage facilities. The company specifically markets its products for AI applications, offering portable, ruggedized servers and storage for AI on the Fly. The company draws its customers from a wide range of sectors, including telecom, manufacturing, and defense. One Stops systems are used in both rack-scale multi-HPC server complexes, and in smaller, single-HPC servers.This past November, OSS released a new product for AI Transportable solutions. The Rigel Edge Supercomputer brings higher performance to compact deployments, using Nvidias HGX A100 4-GPU platform to power GPU-accelerated computing. The system is air-cooled, and only 25.6 inches in total depth. The company is also working on release of an even more compact, liquid-cooled, version of the Rigel.New products and a solid reputation for compact, transportable AI systems have worked together to bring OSS strong financial results. In the most recently reported quarter, 4Q21, the company had $17.8 million in total revenue, up 11% sequentially and 28% year-over-year. For the full year, the top line of $62 million was a company record and up 19% from 2020. Full year non-GAAP EPS came in at 16 cents per share, double the 2020 net EPS. Looking ahead, One Stop is guiding toward 26% y/y revenue growth in 1Q22.In coverage for Roth Capital, 5-star analyst Scott Searle points out the companys strong guidance, as well as its moves toward the autonomous vehicle market, a growing frontier for AI.Appropriately overshadowing 4Q21 results is the upwardly revised 1Q22 outlook (up 26% y/y) which features a growing opportunity pipeline around AI transportables and autonomous vehicle applications which could open a $500M+ TAM in 2023... With high level engagements at multiple autonomous vehicle trucking and fleet applications we believe this could drive an inflection in the 2023-time frame beyond our upwardly revised expectations of 12.8% growth. Overall, we view this opportunity as potentially transformative for OSS... We are raising our sales estimates and would be buyers of this stealth AI and autonomous vehicle microcap, Searle wrote.In line with these comments, the analyst gives One Stops shares a Buy rating, along with a $9 price target that implies a strong upside of ~131% for the next 12 months. (To watch Searles track record, click here)Overall, with 3 Buys and no Holds or Sells assigned in the last three months, the word on the Street is that OSS is a Strong Buy. The stock is selling for $3.90 per share, and its $8 average price target indicates a 105% upside by years end. (See OSS stock forecast on TipRanks)Nvidia Corporation(NVDA)The second stock well look at needs no introduction. Nvidia, with a market cap of more than $600 billion and an 80%-plus market share in the GPU segment, is a dominant player in the gaming and data center sectors. The companys GPU chips are in high demand, due to their computing capacity which is capable of powering AI systems.Nvidia is deeply involved with AI companies and applications, supplying high-end GPU AI-capable semiconductors to companies involved in data centers, autonomous vehicles, intelligent factory robots, cloud computing even ordinary desktop systems are coming with AI capabilities installed. The company provides deep learning chips to power cloud services from major names like Amazon, Google, IBM, and Microsoft.The companys recent history, of 7 consecutive quarters showing sequential gains in both earnings and revenue, may help explain why analysts find the stock so compelling. The companys 4Q21 EPS, at $1.32, beat the forecast (of $1.23) and rose 69% year-over-year, while the 4Q revenue grew 53% y/y, to reach $7.64 billion.Matt Ramsay, 5-star analyst from Cowen, is unabashedly bullish on Nvidia, writing: AI computing opportunities are inflecting across essentially all vertical industries, lead by datacenter, enterprise, gaming, edge, auto, simulation... ultimately monetized through hardware and now vertical-specific software... We view NVIDIA as the premier AI-driven growth story in semis and possibly the tech industry."Ramsay uses these comments, as part of an in-depth look at NVDA, to support his Outperform (i.e. Buy) rating on the stock. His price target, of $350, suggests ~45% in store for the chip giant. (To watch Ramsays track record, click here)In recent weeks, no fewer than 26 Wall Street analysts have weighed in on this stock giving it 21 Buys and 5 Holds, for a Strong Buy consensus rating. The shares are selling for $242.08 and their $351.74 nearly matches Ramsays objective, for a 45% one-year upside potential. (See NVDA stock forecast on TipRanks)To find good ideas for AI stocks trading at attractive valuations, visit TipRanks Best Stocks to Buy, a newly launched tool that unites all of TipRanks equity insights.Disclaimer: The opinions expressed in this article are solely those of the featured analysts. The content is intended to be used for informational purposes only. It is very important to do your own analysis before making any investment. | Recommendation/Decision Making/Discovery | Business and Financial Operations | null | null | null | null | null | null |
news | Valkyrie Trading Society | GSI Technologies Appear Ever More Relevant To The Modern Computing Landscape | Gemini is going to be orders of magnitude faster at certain applications. Read more to find out why the opportunity remains very compelling for GSIT stock. | https://seekingalpha.com/article/4495609-gsi-technologies-more-relevant-modern-computing-landscape | 2022-03-15T15:48:33Z | Sean Gallup/Getty Images NewsPublished on the Value Lab 12/3/21 This article is about GSI Technologies (NASDAQ:GSIT), but it is also just as much about what Google (NASDAQ:GOOG) is doing with its search algorithms. I am a Data Scientist and not a Computer Scientist, so not an expert on hardware, but as I understand it the movement of Google from using BERT to multimodal methods in search would make the Gemini APU relevant if they could commercialise it soon enough. With the Gemini APU APIs being built out now, hopefully they'll be in time to catch this wind. GSIT is currently valued at only a hundred million in market cap, but if it became a major hardware provider to Google, there is probably at least a 5x opportunity here, if not 50x considering the breadth of applications and the growth in those applications. With the Gemini APUs ideal for search applications as well as recommendation discussed in previous articles, we think it could be a revolutionary play for yet another reason. What is Google Doing that GSIT Could Help With? Google is moving from using BERT to using a multi-modal system for understanding the information contained in a query. What is BERT though? My explanation for the layman is something like this. If you represent a word by some unique matrix of numbers, that key would not depend on the context of the word. Of course, the context of the word matters a lot. Think 'man bites dog' versus 'dog bites man'. BERT trains itself to guess what words are missing in sentences where things are randomly left out. Then it can understand future queries by training paying attention to all the elements in the sentence with reference to each other element in the sentence. Every time you need to train this model, it requires a series of matrix operations, and many permutations of these matrix operations for each word in a sentence because it's paying attention to the relations between words. On top of the representation of words already being a matrix, this results in a bunch of matrix operations. And this is happening at many heads in parallel and through many layers. So the operations are already complex on their own, and there is a matrix of these operations too, so there is a lot of dimension here that will require reaching into the memory and making computation. All this results in a model that can generate language, extremely convincingly where you would absolutely not know that it's an AI, meaning that it also has an almost human understanding of the intention of a query. This is what makes Google a good search engine. You don't need to query in a perfect way, the engine will infer meaning from your query to give you the results you want. Now, imagine that in addition to all these operations, you add a whole other dimension to this. Imagine combining the meaning of words with the appearance of images. Much like words, images also have matrix representations. They are three dimensional. Two dimensions are the space, and the last dimension is the colour channel (RBG). The size of the image determines the number of pixels, each with an intensity value for red, green and blue. Lots of numbers and potentially big matrices. What MUM is, the multi-modal model Google wants to use now instead of BERT, is to combine an understanding of the meaning of a sentence with the values of pixel intensities in each channel that make up an appearance of an image. So in addition to all those matrix operations related to words, we massively increase the complexity of learning by combining that now with image matrices as well, which have a lot of values in them! In the future, even audio information could be given numerical representations and be further combined into these multi-modal systems to combine data of various types to enhance Google's understanding of queries. Conclusions GPUs and even TPUs work off Von Neumann architecture. This has inherent limitations where frequent reaches into the memory before performing calculations creates a bottleneck. The GSIT patent acquired from MikaMonu means this is no longer a problem, with memory in-place operations being possible. Because of all the computations that Google's models already do from memory, and with multi-modal systems likely to increase the complexity of the calculations in pretty much an exponential fashion, the Gemini APU which doesn't rely on Von Neumann architecture could change the game. The company remains self-financing, and we have hope that they can start delivering a commercial product soon. As long as their patent can stay protected, and that they create a product that can be commercially shipped, our understanding is that it will be useful for some of the most valuable applications in the world. Of course, we don't know what will happen, and we are not experts in hardware. But we know enough to understand what this APU might be able to do for these companies. With massively faster and less energy intensive calculations, Google, Netflix (NASDAQ:NFLX) and Amazon (NASDAQ:AMZN) should be lining up at their doorstep once the Gemini is ready to be sold. It's hard to say exactly how big the opportunity is. The Gemini is going to be orders of magnitude faster at certain applications, and likely to reduce energy usage consequently by 60-70%. Supposing the obtainable market can be built starting from Netflix's cloud computing costs in a bottom-up approach, consider their expense of around $30 million per month, or $360 million per year for AWS needs. Netflix is about 50% of the streaming market, so for the needs of streaming companies, which is substantially recommendation engines, the value of the market for Gemini might be around $720 million in streaming. Of course, streaming and their recommendation engines are just a small subset of the recommendation engines running on servers across all of ecommerce. Then there's also search, so looking at just streaming understates things. But supposing the $720 million as a super conservative figure and supposing a 10% operating margin, which is what GSIT had for its legacy SRAM business, you get a $72 million EBIT. GSI Technology is a $100 million company, so that means a 1.4x multiple on this forecast EBIT. It's very low, with semiconductor companies easily doing a 15x EBIT multiple which would value GSIT at $1 billion, suggesting a potential 10x opportunity. It could end up being much higher than this considering we only considered server costs with respect to streaming.In any case, while GSIT gets its Gemini ready for production in foundries and shipping to servers of hopefully marquee customers like Amazon and Google, they are still managing to keep somewhat above water in terms of cash burn with their legacy businesses. Without R&D, their operating income would be about $3 million, but they are in cash burn territory with the R&D being above $20 million at this point while trying to develop APIs and libraries for the Gemini. So equity raises are in the cards with additional paid in capital growing by 20% since last year. Dilution is certainly non-negligible here, which is a risk. But the APIs are being worked on as we speak, and the company hopes to be able to get its product out to first customers in Q1 2022. After that, it shouldn't be more than a couple of years of dilution before the product is fully launched, hopefully as semiconductor shortages ease. With 10x, or perhaps 5x after two years of dilution, being the highly conservative estimate of upside, and markets available beyond streaming like for search and more complex Google multimodal algorithms, the opportunity remains very compelling as a small, speculative exposure.If you thought our angle on this company was interesting, you may want to check out our service, The Value Lab. We focus on long-only value strategies, where we try to find international mispriced equities and target a portfolio yield of about 4%. We've done really well for ourselves over the last 5 years, but it took getting our hands dirty in international markets. If you are a value-investor, serious about protecting your wealth, our group of buy-side and sell-side experienced analysts will have lots to talk about. Give our no-strings-attached free trial a try to see if it's for you. | Content Synthesis/Information Retrieval Or Search | Computer and Mathematical | null | null | null | null | null | null |
|
news | Titan Space Technologies Announces Successful Deployment of Machine Learning Models During Historic Ax-1 Mission | Titan Space Technologies Announces Successful Deployment of Machine Learning Models During Historic Ax-1 Mission - SpaceRef | http://spaceref.com/news/viewpr.html?pid=59866 | 2022-04-11T18:23:18Z | Titan Space Technologies Announces Successful Deployment of Machine Learning Models During Historic Ax-1 MissionPress Release From: Titan CorporationPosted: Monday, April 11, 2022 Titan Space Technologies has successfully deployed and run a suite of machine learning models on the HPE Spaceborne Computer-2, an edge computing and AI system aboard the International Space Station (ISS), in support of Axiom Space's future missions and vision of smart spacecraft architectures. Working closely with Axiom and HPE, Titan was tasked with applying its space experimentation platform to a use case based on the new demands of a modern space station. Titan's success demonstrates the applied artificial intelligence (AI) capabilities critical for conducting space experimentation on orbital destinations and spacecraft now and in the future."As the industry continues to make progress in the rapid path to commercialization in space, optimizing key applications and capabilities on the International Space Station will be essential to support future, mission-critical spacecraft demands," said Dr. Mark Fernandez, principal investigator of Spaceborne Computer-2, HPE. "We look forward to continue collaborating with Titan Space Technologies and welcome its expertise and technical craftsmanship that is necessary to build required space infrastructure for successful commercial development in space.""Working closely with HPE to deploy and test large scale machine learning on HPE Spaceborne Computer-2 was the signal I'd hoped for, heralding the potential for applied AI at scale in space.", said Russell Foltz-Smith, Titan Co-founder and Chief Compute Officer. "The AI ramp-up needed to support the record investment and human activity in LEO is something that HPE and Axiom are uniquely positioned to address."Ashley Pilipiszyn, Titan Co-founder and CEO, has quickly transitioned from her previous role as OpenAI's Technical Director to space entrepreneur. "Axiom is thinking about the future of humans in space that blends years of industry experience combined with the innovative approach of today's commercial entrepreneurs and technologists, and this is where Titan and Axiom align nicely. We believe that an orbital ecosystem driven by science, technology and commercialization has no boundaries. And we believe space experimentation is the path to that future."Meanwhile, Axiom prepares for a long journey of innovation that puts scientific research first. "We are creating a platform today that will enable the future of research tomorrow", said Tejpaul Bhatia, Chief Revenue Officer at Axiom Space. "Our relationship with Titan brings a new level of progress and excitement as it represents the next inflection point for scientific discovery in space with the aid of AI."About Titan Space Technologies Corporation:Titan Space Technologies was founded in 2021 with the mission of making space experimentation accessible via its AI-driven orbital compute platform to enterprise customers. It was founded by Ashley Pilipiszyn, Russell Foltz-Smith, and Rob Maigret - formerly of OpenAI, Wolfram Alpha, and The Walt Disney Company, respectively. For more information, visit: titanspace.coAbout Hewlett Packard Enterprise:Hewlett Packard Enterprise (NYSE: HPE) is the global edge-to-cloud company that helps organizations accelerate outcomes by unlocking value from all of their data, everywhere. Built on decades of reimagining the future and innovating to advance the way people live and work, HPE delivers unique, open and intelligent technology solutions as a service. With offerings spanning Cloud Services, Compute, High Performance Computing & AI, Intelligent Edge, Software, and Storage, HPE provides a consistent experience across all clouds and edges, helping customers develop new business models, engage in new ways, and increase operational performance. For more information, visit: www.hpe.com // end //More news releases and status reports or top stories.Please follow SpaceRef on Twitter and Like us on Facebook. | Decision Making/Process Automation | Others | null | null | null | null | null | null |
||
news | 12 Apr 22 CrowdStrike Opens Call for Applications for its 2022-23 NextGen Scholarships Annual scholarship program is open to students pursuing a cybersecurity or AI education AUSTIN, Texas – April 12, 2022 – CrowdStrike (Nasdaq: CRWD), a leader in cloud-delivered protection of endpoints, cloud workloads, identity and data, today announced that the application | CrowdStrike today announced that the application period for its 2022-23 NextGen Scholarship Program is now open for submissions. Learn More! | https://www.crowdstrike.com/press-releases/crowdstrike-opens-applications-for-2022-23-nextgen-scholarships/ | 2022-04-12T17:43:00Z | Annual scholarship program is open to students pursuing a cybersecurity or AI educationAUSTIN, Texas April 12, 2022 CrowdStrike (Nasdaq: CRWD), a leader in cloud-delivered protection of endpoints, cloud workloads, identity and data, today announced that the application period for its 2022-23 NextGen Scholarship Program is now open for submissions. This year, up to eight (8) scholarships of $10,000 will be awarded to select undergraduate and graduate students studying cybersecurity or artificial intelligence (AI). NextGen Scholarship applicants will also be included in the candidate pool for CrowdStrike internships, giving aspiring students both greater access to world-class education and marketable, real-world experience in one of today’s most critical and fastest growing industries.CrowdStrike established the scholarship program in 2017 to nurture and develop the next generation of talent and research in cybersecurity and artificial intelligence (AI). To date, CrowdStrike has awarded more than $165,000 in scholarships to 24 of the top students in the U.S. and Canada.Applicants for the 2022-23 academic year must be current undergraduates, graduate students or high school seniors who will be enrolled full-time at a college/university in the United States or Canada in the fall of 2022, majoring in cybersecurity or AI and achieving a minimum 3.30 GPA. Recipients will be evaluated based on academic achievement, essay content, expressed career interest and work/internship/volunteer experience in cybersecurity, AI or related fields.Cybersecurity is a field becoming more and more important every year, especially as our lives become increasingly digital. In the news, I constantly hear about data breaches, ransomware attacks, and numerous other cybersecurity threats, said Aneesh Boreda, a 2021-22 NextGen Scholarship award recipient who is studying computer and information sciences at the University of Pennsylvania. By pursuing a career in utilizing artificial intelligence to build powerful security tools to prevent these threats, I can help develop the next generation of algorithms used in the arms race between cyberattackers and security companies.How to applyThe scholarship program is administered by International Scholarship and Tuition Services, Inc. (ISTS), an independent company that specializes in managing sponsored educational assistance programs. ISTS hosts the online application process, responds to applicant inquiries and performs preliminary ranking for the selection process. For more information about the program and to begin the application process, visit here. The deadline to apply is June 2, 2022.Past award winnersThe recipients of the NextGen Scholarships for the previous academic year 2021 – 2022 were:Kehinde Ayano, graduate student at Purdue UniversityAneesh Boreda, undergraduate student at University of PennsylvaniaZachary Espiritu, undergraduate student at Brown UniversityWesley Muthemba, graduate student at University of WashingtonLy Na Nguyen, undergraduate student at Columbia UniversityArianne Ghislaine Rull, undergraduate student at York UniversitySara Takhim, undergraduate student at Northeastern UniversitySamuel Watson, undergraduate student at University of AlabamaAbout CrowdStrikeCrowdStrike (Nasdaq: CRWD), a global cybersecurity leader, has redefined modern security with one of the worlds most advanced cloud-native platforms for protecting critical areas of enterprise risk endpoints and cloud workloads, identity and data.Powered by the CrowdStrike Security Cloud and world-class AI, the CrowdStrike Falcon® platform leverages real-time indicators of attack, threat intelligence, evolving adversary tradecraft and enriched telemetry from across the enterprise to deliver hyper-accurate detections, automated protection and remediation, elite threat hunting and prioritized observability of vulnerabilities.Purpose-built in the cloud with a single lightweight-agent architecture, the Falcon platform delivers rapid and scalable deployment, superior protection and performance, reduced complexity and immediate time-to-value.CrowdStrike: We stop breaches.Learn more: https://www.crowdstrike.com/Follow us: Blog | Twitter | LinkedIn | Facebook | InstagramStart a free trial today: https://www.crowdstrike.com/free-trial-guide/© 2022 CrowdStrike, Inc. All rights reserved. CrowdStrike, the falcon logo, CrowdStrike Falcon and CrowdStrike Threat Graph are marks owned by CrowdStrike, Inc. and registered with the United States Patent and Trademark Office, and in other countries. CrowdStrike owns other trademarks and service marks, and may use the brands of third parties to identify their products and services.ContactKevin BenacciCrowdStrike Corporate [email protected] | Personalization/Content Synthesis | Education, Training, and Library/Computer and Mathematical | null | null | null | null | null | null |
||
news | Kat Jercich | AWS, GE leaders talk hurdles to data-sharing, AI implementation | In their HIMSS22 fireside chat, Amazon Web Services' Dr. Taha Kass-Hout and GE Healthcare's Vignesh Shetty discussed how breaking down information silos can help move the industry forward. | https://www.healthcareitnews.com/news/aws-ge-leaders-talk-hurdles-data-sharing-ai-implementation | 2022-03-15T12:27:13Z | ORLANDO The healthcare business is an unusual one: Although trillions of dollars are poured into the industry, billions of people worldwide don't have reasonable access to care.Part of the solution to that gap, explained Amazon Web Services Chief Medical Officer and Director of Machine Learning Dr. Taha Kass-Hout, may be found in artificial intelligence, and in technology more broadly."Innovations like precision medicine, conversational bots, AI scribes and APIs for data interoperability are great examples of how we can help improve care, close gaps in care, provide more efficiencies and also provide more equitable care," said Kass-Hout in a fireside chat at the HIMSS22 Machine Learning and AI for Healthcare Forum on Monday.Additionally, given the move toward the digitization of health data, particularly via the cloud, the question becomes how to use that information for the benefit of patients. One hurdle, as other HIMSS22 panelists pointed out earlier in the day, is the sheer amount of unstructured data being created. "Every health organization, payer or life sciences organization is trying to structure this information," Kass-Hout said. "If you do, you can make better connected decisions, you can design better clinical trials, you can operate more efficiently or you can detect better trends in a population." Vignesh Shetty, SVP and GM of Edison AI and Platform at GE Healthcare, told attendees at the fireside chat that bias is another issue for would-be AI implementers to contend with."A lot of times, people say, 'I don't trust AI,'" he said. "But it isn't as much about the algorithm; it's about the data that was used to create the algorithm and that could lead to potential bias." "Breaking the 'black box' is not an easy task," Kass-Hout chimed in, referring to promoting transparency around AI algorithms. "It's really very hard. Understanding the bias that went into the model is also really hard." Another major challenge, he said, is that data today is locked in "thousands of incompatible formats.""For many business reasons, they're locked behind different silos," Kass-Hout said. "You want all this information to come together at the point of care, where you sort of have a 360-degree view of every patient. So, you can understand what's going on with them today, but also really try to forecast and predict what's going to be next."In this way, he said, we can "start moving the care system from 'sick' care to, really, healthcare." Organizations with fragmented data in their own companies, Kass-Hout suggested, should start with concrete use cases including operational efficiencies."Start with the data that you need to address that use case," he said. "By working through an end-to-end use case crisply, you'll be able to bring a lot of this information together, and this is where you start realizing a lot of the value in machine learning technology." "Start small and then scale," said Shetty. "Don't get held back by the fact that there are silos within the enterprise; that's the reality for almost every enterprise today," he continued. Overall, said Shetty, the industry is at "very early stages" of machine learning and AI."Healthcare and health tech [are] at an inflection point," he said. "The specific marriage of human intelligence with some of these tools to drive better clinical and operational outcomes is something that I'm super-stoked about."Kat Jercich is senior editor of Healthcare IT News.Twitter: @kjercichEmail: [email protected] IT News is a HIMSS Media publication. | Content Synthesis/Decision Making | Healthcare Practitioners and Support/Management | null | null | null | null | null | null |
|
news | GNS Healthcare Hires Industry Veteran, Joseph F. Donahue, as Chief Business Officer | SOMERVILLE, Mass., April 13, 2022 /PRNewswire/ -- GNS Healthcare, the leader in the use of "Virtual Patients" and Causal AI and simulation technology for biopharmaceutical companies, today announced the hiring of Joseph F. Donahue as Chief Business Officer. In this role, Mr. Donahue will... | https://www.prnewswire.com/news-releases/gns-healthcare-hires-industry-veteran-joseph-f-donahue-as-chief-business-officer-301524454.html | 2022-04-13T13:30:00Z | GNS Healthcare Hires Industry Veteran, Joseph F. Donahue, as Chief Business Officer.Tweet this"GNS Healthcare's use of Causal AI modelling and simulation technology to identify relationships in multi-omics and clinical data to expose the circuitry of human disease biology and accelerate drug R&D stands apart from other AI and deep learning approaches available today," said Joseph F. Donahue. "I am excited to be joining such an exceptional team of life sciences researchers, physicists, and mathematicians who are focused on working closely with our customers, data partners, and patient groups to accelerate the discovery and development of new and improved medicines that lead to better outcomes for patients." Colin Hill, Co-founder and CEO of GNS Healthcare commented, "Joe is a proven business, science and technology leader with an outstanding track record of delivering innovative and impactful solutions to life sciences organizations globally. I am thrilled to have him as part of the team as we continue to expand client relationships for our Causal AI platform and therapeutic areas for our Gemini Virtual Patient models."GNS Healthcare is the leader in the application of Causal AI and simulation technology to help pharmaceutical and biotech companies discover and develop new medicines faster. GNS' patented AI uncovers new insights from multi-omic and real-world data leading to the discovery of novel biological targets, more efficient clinical trials, and patients who are likely to respond to therapies. The Gemini Virtual Patient models across oncology, auto-immune diseases, and neurology allow researchers and data scientists to simulate clinical trials, disease progression and drug response at the individual patient level in diverse patient cohorts. GNS' partners include seven out of the top ten pharmaceutical companies, leading research centers, medical societies, and patient advocacy groups globally, and our advisory board consists of a renowned group of scientific and medical experts.For more information, please visit www.gnshealthcare.comMedia Contact: Zena Sfeir Director of Marketing [email protected] SOURCE GNS Healthcare | Discovery/Prediction/Content Synthesis | Management/Business and Financial Operations | null | null | null | null | null | null |
||
news | Lily Roberts | Machine Learning Techniques Can Speed Up Glacier Modeling By A Thousand Times | Glacier modeling techniques that involve artificial intelligence can produce high fidelity results to a large number of questions much more quickly than traditional approaches. | https://news.climate.columbia.edu/2022/03/25/machine-learning-techniques-can-speed-up-glacier-modeling-by-a-thousand-times/ | 2022-03-25T13:01:47Z | A novel glacier model has been developed which can simulate ice dynamics and ice interaction with the climate up to a thousand times faster than previous models. This model can be used to predict the evolution of glaciers and ice sheets under different scenarios. Since meltwater from glaciers and ice sheets is a major component of sea level rise, models like this are a valuable tool to assess their potential future contribution.The new model uses a machine learning approach which makes glacier modeling much quicker whilst maintaining high levels of fidelity (the degree to which a simulation or model accurately reproduces the object or process it is designed to represent). As a result, more model simulations with different inputs and assumptions can be conducted, investigating a wider range of questions.The state of the art Instructed Glacier Model is highly efficient compared to well-established simulation tools. It implements an artificial neural network, which is a computer system that mimics the neural networks found in our brains. They trained the neural network by inputting data from ice sheet models so that it could emulate ice dynamics. This training process is called machine learning, and it is considered part of the field of artificial intelligence. Modeling methods prior to AI required a great deal of human input, supervision and decision making, whereas with machine learning, the computer system navigates the human process of updating the model on its own.The lead developer, Guillaume Jouvet, a senior researcher at the University of Zurich, explained that [there is] a new trend for machine learning to learn from data generated by physical models. Physics-based models (also termed physical models) have long been used to understand the physical processes occurring in the Earth system, without relying on any artificial intelligence.Physical modeling of ice sheets and glaciers at high spatial resolutions is an enormous challenge even today. Over the past two decades, exceptional efforts have been made to develop models to simulate ice flow and its associated physical processes, as well as its interaction with the climate. Adding complexity to models increases the computational cost of the simulation, so most models often use approximations to the Stokes equations, which most faithfully describe ice flow, entailing a compromise between accuracy and computational cost. Jouvet describes that the main motivation behind transitioning to machine learning is in a way, you are shortcutting your physical modeling, making the gain computationally way cheaper.GlacierHub spoke with Laura Sandoval, of the University of Colorado, Boulder, who led a review into artificial intelligence in the geosciences field. In the past decade the AI [and] machine learning activities have increased tremendously in the field of geosciences, [but] most AI efforts in geoscientific research groups are still at the infancy stage, she stated. Currently, researchers are actively exploring many AI models and prototyping solutions for the challenging problems within their domains. However, in comparison to the traditional physics-based models, there have been no big breakthroughs in AI and machine learning products yet. Sandoval added the implementation of AI is still underway.The Instructed Glacier Model substitutes the most computationally demanding model component by using a neural network trained from large datasets. Taking advantage of the large amount of modeling data available to train the neural network delivers high fidelity solutions at much lower computational cost. It can predict ice flow from given variables and simplified processes to be used in global glacier modeling as well as researching past glaciated environments.Diagram of an artificial neural network, mirroring those of the human brain. Credit Liam Huang/Flickr.The most expensive part was computing the dynamics because it involved heavy physics, [but] machine learning accelerated this part of the model. The result is that we can model the glacier to the same accuracy much quicker than before. We can use this to explore many more parameters and [conduct] more refined simulations, said Jouvet. The research on the model took Jouvet and his team over a year. He added that I had to learn this new technique all the tools Im using are really new.The researchers are pleased that they were able to have the machine learning up and running and Jouvet will now look forward to using his model to reconstruct the evolution of glaciers in the Alps over the last glacial cycle of 100,000 years. The gain for this approach is you speed up the modeling so you can afford to do long timescales. [Where] traditional models may take several weeks, it can now take an hour.The research team will now use the glacier model to reconstruct the past history of glaciers like this, Monarch Glacier in the Bernese Alps, Switzerland. Credit John Lillis/Flickr.Implementation of AI and machine learning does not come without its challenges and skepticism, similar to that seen in high-profile cases in biology and engineering. Sandoval explains Ethics is truly one of the major concerns. However, since we are still at an early prototyping stage, the current main arguments against AI are uncertainty, explainability and reproducibility. Ethical issues surrounding AI include the loss of human jobs, the unequal distribution of wealth created by AI machines, the security of AI data and the capacity for malicious intent. As implementation of AI increases, more concerns are emerging, such as the environmental issues of using large amounts of energy to run computer models. Similar arguments have been widely seen against other cyber services like cryptocurrency and electronic trading.Scientists have been studying big questions about our climate and Earth system for many years and have accumulated a large amount of data which will be used to train AI models. Given the recent huge investments in AI from both the public and private sectors, we expect to see that the relevant application on data-centric AI research in geosciences will bloom in the next few years, says Sandoval.Despite the transition, not every geoscientific problem can be solved by AI, and some questions are not well suited to classic machine learning techniques. Some Earth phenomena are extreme events and their patterns cannot be learnt from historical data. Finding a suitable question is the key first step to develop a successful AI application, Sandoval concludes.The novel Instructed Glacier Model is a successful example of how new techniques for glacier modeling may replace the traditionally known physics-based approaches. Many uncertainties surrounding artificial intelligence still remain and whether large-scale progress in the field will be seen is a question of the coming decade. For now, both old and new techniques will be implemented in order to provide answers to some of our greatest questions regarding ice sheets and glaciers. | Prediction/Content Synthesis | Life, Physical, and Social Science | null | null | null | null | null | null |
|
news | Sam Shead | Reid Hoffman's new start-up poaches first staff from Google and Meta | Inflection AI, the start-up launched by LinkedIn billionaire Reid Hoffman and DeepMind co-founder Mustafa Suleyman, has poached staff from Google and Meta. | https://www.cnbc.com/2022/03/29/inflection-ai-reid-hoffmans-start-up-poaches-staff-from-google-meta.html | 2022-03-29T08:48:49Z | Inflection AI, the start-up launched earlier this month by LinkedIn billionaire Reid Hoffman and DeepMind co-founder Mustafa Suleyman, has poached artificial intelligence gurus from Google and Meta, according to CNBC analysis.Headquartered in Silicon Valley, Inflection's aim is to develop AI software products that make it easier for humans to communicate with computers.When the company was launched, the only three team members that were made public were Suleyman, Hoffman and former DeepMind researcher Karén Simonyan. However, others have now joined the fold.Heinrich Kuttler left his research engineering manager role at Meta AI in London this month to become a member of the founding team at Inflection, working on the technical side of the business, according to his LinkedIn page. He is one of many people to have left Meta since CEO Mark Zuckerberg announced a pivot to the so-called metaverse.Elsewhere, Joe Fenton left his senior product manager role at Google in February to become a member of the founding team at Inflection AI, working on the product side of the business.Both Kuttler and Fenton, who used to work at DeepMind with Suleyman, did not immediately respond to a CNBC request for comment. Inflection declined to comment. Meta and Google did not immediately respond to a CNBC request for comment.Researchers and engineers that land AI jobs at DeepMind, Google and Meta tend to have impressive resumes. Fenton, for example, has a first class degree in physics from the University of Warwick in England, while Kuttler studied theoretical and mathematical physics at leading universities in Germany.Inflection is the first company Hoffman has co-founded since he sold LinkedIn to Microsoft for $26.2 billion in 2016. It is also the first company Suleyman has co-founded since he sold DeepMind to Google in 2014 for around $600 million. The pair are also investors at venture capital firm Greylock."AI is one of the most transformative technologies of our time," Hoffman said in a statement shared with CNBC at the time of the launch."Mustafa has been at the forefront of some of the most exciting advances in artificial intelligence. It's a privilege to join him and Karen in building Inflection." | Unknown | Unknown | null | null | null | null | null | null |
|
news | Bloom Raises $1.1M in Seed Round Funding to Bring Generative AI to eCommerce | Bloom boosts merchants' conversion rates with technology that tracks consumers' on-site interactions and delivers the most desirable product photos, causing shoppers to stay on the page longer and buy NEW YORK, March 14, 2022 /PRNewswire/ - Product photo performance engine Bloom has... | https://www.prnewswire.com/news-releases/bloom-raises-1-1m-in-seed-round-funding-to-bring-generative-ai-to-ecommerce-301499268.html | 2022-03-14T10:00:00Z | Bloom boosts merchants' conversion rates with technology that tracks consumers' on-site interactions and delivers the most desirable product photos, causing shoppers to stay on the page longer and buyNEW YORK, March 14, 2022 /PRNewswire/ - Product photo performance engine Bloom has announced that it has raised $1.1 million at the close of its seed funding round. The AI-driven platform has garnered backing from various high-powered venture firms and angel investors, bringing Bloom one step closer to bringing generative AI to eCommerce.Founded by Aarlo Stone Fish, a Yale alumnus with 20 years of experience as a software engineer and an AI expert, and Sam Dundas, a lifelong entrepreneur and product builder, Bloom uses generative AI to improve customers' online shopping experiences and drive more sales. Bloom's technology tracks a consumer's on-site behaviors such as clicks, zooms, swipes, and bounces and then compiles that behavioral data to deliver pixel-by-pixel the most compelling and personalized images possible. The result is that consumers spend more time on the page and eventually purchase. eCommerce sites integrating Bloom have seen a boost in their conversion rates ranging between 5-14% without the merchant lifting a finger."We're building the engine to power billions of shopper sessions with 100% personalized content," said Dundas. "Bloom is positioned to become an essential layer to the e-commerce tech stack."Bloom's seed funding round was backed by prominent investors who have supported a wide range of startups, including those in the AI sector. Investors in the seed round include Inovia Capital; AIX Ventures, featuring Investing Partners Richard Socher (founder of MetaMind), Pieter Abbeel (co-founder of Covariant), Chris Manning (director of Stanford AI Lab), and Anthony Goldbloom (founder of Kaggle); Forum Ventures, which also invested in Bloom during its pre-seed accelerators; OneValley Investments; and The Y Startup Index, co-founded by Sean Glass, a prolific angel investor and serial entrepreneur.With eCommerce accounting for $4.93 trillion worldwide and expected to grow to $7.39 trillion by 2025 and mobile commerce purchases accounting for 72.9% of all online purchases, Bloom helps its partners get more of these sales with its mobile-optimized product photo engine. By integrating Bloom into their sites, merchants, especially those with $1+ million in sales, can enhance the online shopping experience with a done-for-you AI-driven solution that closes more sales without investing additional precious time or human resources into the process."Showing the same product photos to every shopper is medieval," said Shaun Johnson, founding partner at AIX Ventures and seed round investor. "A shopper should see a photo that is generated to meet their specific needs, including the model type and environment. Bloom is this dynamic photo solution that is taking e-commerce photos into the future."Bloom initially came to market with a generative AI product capable of creating photo-realistic fashion models, indistinguishable from real-life people. After launching Bloom's beta version and collecting feedback from 200 eCommerce merchants, Stone Fish and Dundas discovered that brands don't understand how their existing assets perform. Wanting to find a solution to this problem that delivered a clear ROI, Bloom was born. The seed funding will enable the Bloom team to bring its solutions to more merchants and continue to expand its capabilities, eventually achieving its ultimate vision of linking its performance engine with generative AI in real-time.Bloom is currently available on Shopify, accessible to the over two million eCommerce merchants on the platform.SOURCE Bloom AI | Personalization/Content Synthesis/Decision Making | Business and Financial Operations/Sales and Related | null | null | null | null | null | null |
||
news | ‘No code’ brings the power of AI to the masses | Sean Cusack, a software engineer at Microsoft and beekeeper on the side, wanted to know if anything besides bees was going into his hives. So he built a tiny photo booth (a sort of bee vestibule) that took pictures whenever something appeared around it. But sorting through thousands of insect portraits proved tedious. | https://bdnews24.com/technology/2022/04/05/no-code-brings-the-power-of-ai-to-the-masses | 2022-04-05T06:49:50Z | Colleagues told him about a new product that the company wasworking on called Lobe.ai, which allows anybody to train a computer-visionsystem to recognize objects. Cusack used it to identify his honeybees butalso to keep an eye out for the dreaded Asian murder hornet.It was just really simple, Cusack said, adding that theunderlying data science was over my head, despite his title.The Lobe platform allowed him to drag and drop sample photosand click a few buttons to make a system that could recognize his beloved beesand spot unwelcome visitors.Cusack is part of a growing army of citizen developers,who use new products that allow anyone to apply artificial intelligence withouthaving to write a line of computer code. Proponents of the no-code AIrevolution believe it will change the world. It used to require a team ofengineers to build a piece of software, and now users with a web browser and anidea have the power to bring that idea to life themselves.We are trying to take AI and make it ridiculously easy,said Craig Wisneski, a no-code evangelist and co-founder of Akkio, a startupthat allows anyone to make predictions using data.AI is following a familiar progression.First, its used by a small core of scientists, JonathanReilly, Akkios other co-founder, said. Then the user base expands toengineers who can navigate technical nuance and jargon until, finally, itsmade user-friendly enough that almost anyone can use it.Just as clickable icons have replaced obscure programmingcommands on home computers, new no-code platforms replace programming languageswith simple and familiar web interfaces. And a wave of startups is bringing thepower of AI to nontechnical people in visual, textual and audio domains.Juji, for example, is a tool designed to make building AIchatbots as easy as creating a PowerPoint presentation. It uses machinelearning to automatically handle complex conversation flows and infer usersparticular characteristics to personalise each engagement rather than simplyserving up preprogrammed interactions.Its co-founder, Michelle Zhou, said the goal was to giveJuji chatbots advanced human soft skills such as emotional intelligence so thatthey could connect with users on a more human level than existing systems have.Using Juji, staff members at the University of Illinois were able to create andmanage their custom AI chatbot and scale their student recruitment operations.But not all of the existing tools are robust enough to domore than simple tasks. Googles Teachable Machines is a computer-vision toolsimilar to Lobe.ai. Steve Saling, a former landscape architect who is nowliving with ALS, worked with the Teachable Machines team for about a year and ahalf to train a system to turn switches on and off using his facialexpressions.It gets more accurate with more data, Saling said in anemail.But he said the process of collecting that data picturesof his face at different angles and in varying light was labour intensive andthe system never reached the level of accuracy required.Automation needs to be in excess of 99 percent reliable tobe dependent on it, Saling said. Teachable Machines will get there, but it isnot there yet.Still, it is early days.No-code AI tools are still on the fringes of the largerno-code movement because not many people understand machine learning enough todream up whats possible, said Josh Tiernan, who runs No Code Founders, acommunity of nontechnical entrepreneurs who use no-code tools such as WordPressor Bubble.But he expects no-code AI to grow as more people understandits potential.Another force in no-codes favour: Advances in AI itself aremaking no-code platforms more powerful. OpenAI, the company co-founded by ElonMusk, has a vast AI system, GPT-3, that can write code when prompted withsimple English. It can even create websites and do other basic programmingtasks. OpenAI has used the system to create GitHub Copilot, a tool that acts asan autocomplete function for coders, speeding up their work. DeepMind, asubsidiary of Alphabet, Googles parent company, has gone a step further withan AI tool capable of writing complete code to solve complex problems posed toit with normal speech or text.Already, users of Microsofts Power Platform, which includesa family of no-code products, can generate simple applications by justdescribing them.I could say something like, Look up all the customerrecords from the last year, and it will go do that for you automatically,said Charles Lamanna, Microsofts corporate vice president of business apps andplatforms.He estimates that half of all office work could be automatedwith AI if there were enough developers to do the work.The only way to do this is to empower everybody to be ano-code developer, Lamanna said.Eventually the broader public will be able to createAI-enabled software in much the same way that teenagers today can createsophisticated video effects that would have required a professional studio adecade or two ago.For now, though, most no-code-AI users are businessprofessionals who want to streamline the way things are done without having toinvolve a programmer.Googles AppSheet, for example, is an open platform wherepeople can connect data and, with a single click, create apps that can beopened on a smartphone, tablet or computer. It uses AI to understand the intentof users and enables them to build mobile and desktop applications withintegrated computer vision and predictive analytics features. It costs nothingto make apps for personal use.Were focused on making this consumable by regular people,said Praveen Seshadri, AppSheets co-founder and CEO.There are many organisations around the world that haveteams of people that need to coordinate schedules and tasks, he added. Each oneis unique and more suited to building a custom app than trying to use somethingoff the shelf.As an example, he cited New Incentives, an organisation thatgives small amounts of cash to mothers in some of the poorest parts of Nigeriato encourage them to immunise their children. To track that data, they built anapp with AppSheet that their employees now use in the field.Alex Denning, who runs Ellipsis, a small marketing companyfor businesses that use WordPress, turned to Akkio to build an AI system thatcould predict which keyword and title would get his clients blog posts themost visibility on Google and other search engines. He dragged and dropped ontoAkkios site a spreadsheet of past keywords and blog-post titles along with thenumber of clicks those posts got on Google. After a few keystrokes, Akkiocreated an AI to do the job.Results for our clients improved by approximately a thirdas a result of leveraging Akkio and the AI, Denning said.Once an AI system is created on Akkio, users can integratewith their existing software. Denning renamed the tool Falcon and uses it tomarket his company.I am not a developer, he said, but it was easy andintuitive for me to make it happen.And Cusack, the bee guy? His AI system never spotted amurder hornet, but it did catch plenty of wasps and earwigs sneaking into hishives a modest but important step for no-code.©2022 The New York Times Company | Information Retrieval Or Search/Content Synthesis/Process Automation | Unknown | null | null | null | null | null | null |
||
news | Introducing Neo4j Graph Data Science as a Service | Neo4j's New Data Science Offering Accelerates Development of Intelligent Apps Using Predictive Analytics and Machine Learning Pipelines SAN MATEO, Calif., April 12, 2022 /PRNewswire/ -- Neo4j®, the world's leading graph data platform, announced Neo4j Graph Data Science, the company's... | https://www.prnewswire.com/news-releases/introducing-neo4j-graph-data-science-as-a-service-301523807.html | 2022-04-12T13:00:00Z | Neo4j's New Data Science Offering Accelerates Development of Intelligent Apps Using Predictive Analytics and Machine Learning PipelinesSAN MATEO, Calif., April 12, 2022 /PRNewswire/ -- Neo4j®, the world's leading graph data platform, announced Neo4j Graph Data Science, the company's comprehensive graph analytics workspace built for data scientists, is now available with new and enhanced capabilities, and as a fully managed cloud service called AuraDS.AI and machine learning (ML) have propelled the use of predictive data architectures and their application across a broad range of use cases like recommendation engines, fraud detection, and customer 360 scenarios. The accuracy of these models is highly correlated to the completeness of context.Neo4j Graph Data Science is designed to make it easy for data scientists to achieve greater predictive accuracy with comprehensive graph analysis techniques. Users can improve models through a library of graph algorithms, ML pipelines, and data science methods. Neo4j Graph Data Science has been widely adopted and is trusted to perform at scale, easily handling hundreds of billions of nodes and relationships."Neo4j Graph Data Science offerings help developers offer better predictions and stronger recommendation engines to business users," said Ritika Suri, Director, Technology Partnerships at Google. "Customers can now deploy Graph Data Science on Google Cloud's trusted, global infrastructure, gaining the ability to seamlessly scale based on business needs, and bringing their data closer to BigQuery and Google Cloud's capability in AI, ML, and analytics."More software developers are looking to data science for ways to offer better predictions and stronger recommendation engines to users. Google Cloud and Neo4j Graph Data Science products help software developers and data scientists who are building the world's next set of intelligent applications by leveraging the power of graph algorithms to bring context to data and improve their models," said Suri.Neo4j Graph Data Science makes it easy for data scientists to work within their existing data pipeline of tools across their ecosystem. Data scientists can use Neo4j Graph Data Science on-premises, and now as a fully managed SaaS solution via Neo4j AuraDS.According to Zack Gow, CTO of Orita, Neo4j Graph Data Science has enabled his team to be more responsive to customer needs."Scale is always top of mind for us because we're processing data that comes from our customers. We never know just how big a customer's data set will be and we chose Neo4j because we knew it could handle the scaling of an order of magnitude more than what we were expecting," Gow said. "Even in the early days, when we were trying out a bunch of tools, Neo4j worked for us immediately. Some of the tools we looked at didn't work at all. Neo4j Graph Data Science got our data into a graph so we could start doing the data science part quickly. As a start up, we don't have time to waste on tools that are cumbersome."Matthew Bernardini, CEO of Zenapse, shared the impact of Neo4j Graph Data Science on his business."We chose Neo4j Graph Data Science on AuraDS because it is a completely managed, cloud-based infrastructure combined with an elegant and user-friendly set of tools and extensive library of production-ready data science algorithms that gives us confidence in our platform and allows us to focus on our data and application development," said Bernardini. "Neo4j Graph Data Science makes it easy to quantify the relationships and similarities that exist in the digital world and to surface new insights about these connected relationships."Neo4j AuraDS: Graph Data Science on Google Cloud PlatformNeo4j AuraDS is the power of Graph Data Science available as a fully managed service. It includes access to over 65 graph algorithms in a single workspace so data scientists can experiment faster. In-graph ML models and the native Python client help increase productivity and simplify workflows.Neo4j AuraDS is available first on Google Cloud's secure, global, and highly performant structure, and can be paid for with existing Google Cloud commitments or with a credit card. In addition to the Graph Data Science core functionality, AuraDS customers benefit from:Simple, powerful workflow: A drag-and-drop UI to model and import data into a graph.Scale up and down: Manage access to high compute hardware on-demand as needs change.Automated operations: Workloads are monitored, patched, and backed up behind the scenes without any user action.MLOps support: Persist, publish, and restore models without interruptions from restarts.Predictable cost: Manage costs with pay-as-you-go pricing and the option of pausing unused instances.One-click backup: Take a snapshot of instances, models, and in-memory graphs in one click.For guidance and reference architectures on how to get started using Neo4j AuraDS with VertexAI, see Use graphs for smarter AI with Neo4j and Google Cloud Vertex AI.More About Neo4j Graph Data Science and AuraDSTo learn more about Neo4j Graph Data Science as a service, AuraDS, read this blog post or tune in to an upcoming webinar, "What's New in Graph Data Science: Faster and Easier Than Before," on Tuesday, April 26, 2022.Neo4j is the world's leading graph data platform. We help organizations including Comcast, ICIJ, NASA, UBS, and Volvo Cars capture the rich context of the real world that exists in their data to solve challenges of any size and scale. Our customers transform their industries by curbing financial fraud and cybercrime, optimizing global networks, accelerating breakthrough research, and providing better recommendations. Neo4j delivers real-time transaction processing, advanced AI/ML, intuitive data visualization, and more. Find out more at neo4j.com and follow us at @Neo4j. Share this on TwitterContact:[email protected]neo4j.com/pr© 2022 Neo4j, Inc., Neo Technology®, Neo4j®, Cypher®, Neo4j® Bloom, Neo4j Graph Data Science Library, Neo4j® Aura, Neo4j® AuraDS and Neo4j® AuraDB are registered trademarks or a trademark of Neo4j, Inc. All other marks are owned by their respective companies.SOURCE Neo4j | Prediction/Process Automation | Computer and Mathematical/Business and Financial Operations | null | null | null | null | null | null |
||
news | Synthesis AI CEO and Founder Featured in NVIDIA GTC 2022 Panel on Synthetic Data | Yashar Behzadi to be featured in a panel covering synthetic data business strategy across industries featuring experts from Microsoft, NVIDIA, and Ford SAN FRANCISCO, March 22, 2022 /PRNewswire/ -- Synthesis AI, a pioneer in synthetic data technologies, today announced its founder and CEO... | https://www.prnewswire.com/news-releases/synthesis-ai-ceo-and-founder-featured-in-nvidia-gtc-2022-panel-on-synthetic-data-301507190.html | 2022-03-22T12:00:00Z | Yashar Behzadi to be featured in a panel covering synthetic data business strategy across industries featuring experts from Microsoft, NVIDIA, and Ford SAN FRANCISCO, March 22, 2022 /PRNewswire/ -- Synthesis AI, a pioneer in synthetic data technologies, today announced its founder and CEO Yashar Behzadi, will be featured in a panel at NVIDIA GTC 2022 alongside other distinguished experts from Microsoft, NVIDIA, and Ford.The panel titled "Why Synthetic Data is Important for Your Business: Strategies and Implementations Across Industries" will focus on 3D synthetic data generation spanning partners and customers and will showcase the underlying value of Omniverse for high-fidelity, accurate data generation across different use cases and industries. The panel will take place on March 24 from 12:00 - 12:50 p.m. EDT.Behzadi will appear alongside other synthetic data experts including:Rev Lebaredian, VP Omniverse & Simulation Technology, NVIDIAGerard Andrews, Product Marketing, NVIDIAPedro Urbina, Software Developer Manager, MicrosoftNikita Jaipuria, Technical Expert AI Based Modeling For DAT, FordGil Elbaz, CTO and Co-founder, Datagen"I am honored to share my expertise alongside such distinguished industry leaders," said Behzadi. "Sharing our collective experiences and knowledge will help accelerate the understanding and adoption of synthetic data across industries, which is the ultimate goal. The adoption of synthetic data is at an inflection point, but there is still work to be done to educate across use cases and establish more resources to help develop a knowledge base and understanding on ways synthetic data can reduce bias, further democratize, and build more robust AI models."The virtual conference will take place March 21-24 and feature additional workshops, trainings, and programs from tech visionaries, business leaders, and peers who are using the latest advancements in AI and accelerated computing to solve their biggest challenges.To attend the conference and watch the panel, click here.About Synthesis AI Synthesis AI, a San Francisco-based technology company, is pioneering the use of synthetic data to build more capable computer vision models. Through a proprietary combination of generative neural networks and cinematic CGI pipelines, Synthesis' platform can programmatically create vast amounts of perfectly-labeled image data at orders of magnitude increased speed and reduced cost compared to current approaches.SOURCE Synthesis AI | Content Creation/Content Synthesis | Business and Financial Operations/Computer and Mathematical | null | null | null | null | null | null |
||
news | NCS News-Feed | BERT Embeddings Part 2: A Modern ML Approach For Detecting Malware | #macos | #macsecurity | #cybersecurity | #infosecurity | #hacker | World Largest Source Of Security News. | https://nationalcybersecurity.com/bert-embeddings-part-2-a-modern-ml-approach-for-detecting-malware-macos-macsecurity-cybersecurity-infosecurity-hacker/ | 2022-04-11T13:24:00Z | A novel methodology, BERT embedding, enables large-scale machine learning model training for detecting malwareIt reduces dependency on human threat analyst involvement in training machine learning modelsBidirectional Encoder Representation from Transformers (BERT) embeddings enable performant results in model trainingCrowdStrike researchers constantly explore novel approaches to improve the automated detection and protection capabilities of machine learning for Falcon customersCrowdStrike data science researchers recently explored and experimented with the use of Bidirectional Encoder Representation from Transformers (BERT) for embedding command lines, focusing on anomaly detection, but without detailing the model itself. Diving deeper into that research, CrowdStrike researchers explain the reasons for using BERT for command line representation and how to train the model and assess its performance.The purpose of this experimental research was to leverage self-supervised deep learning methods to obtain better representations of the string fields that show up in CrowdStrike Falcon® telemetry. CrowdStrike constantly tests the latest advancements in the field to see whether they fit the existing machine learning toolkit. This research ultimately demonstrates that a deep learning approach for embedding strings of this nature (command line) is feasible and can produce satisfactory results.Defining ObjectivesAn embedding is a representation of an input, usually into a lower-dimensional space. Although embeddings are in scope for various input types, string embeddings are a common choice for representing textual inputs in numeric format for machine learning models. An essential trait of embedding models is that inputs similar to each other tend to have closer latent space representations. This similarity is a consequence of the training objective, but researchers are often interested in lexical similarity when strings are involved.This experiment aimed to better represent string fields encountered in a large stream of event data. The focus was on two such fields: command line and image file name. The first field was discussed extensively in the previous blog it consists of an instruction that starts a process that is then recorded as an event and sent to the CrowdStrike Security Cloud. The second field is the name for the executable starting the process, which is a substring of the corresponding command line. Two main factors dictated the pursuit of such an embedding model: First, we aimed to improve the string processing in our models, and second, we wanted to benefit from the significant developments achieved in the area of natural language processing (NLP) in the last few years.Building the ModelDataThe experiment started by collecting data for training the model from events related to process creation. To ensure variety, it was collected from all of the supported platforms (Windows, Mac, Linux) and sampled from long spans of time to ensure that the processes are not biased by temporary events (e.g., the Log4j vulnerability).Model ArchitectureFor the model architecture, BERT was the primary candidate. The end-to-end model consists of two main components that will be discussed separately: the tokenizer and the neural network (which is what people generally refer to when talking about BERT).A tokenizer is a simple mapping from strings, called tokens, to numbers. This numeric representation of the inputs is necessary because the BERT neural network, like any other machine learning algorithm, cannot use textual data directly in its computations. The tokenizers job is to find the optimal tokens given a vocabulary size, which, in this case, was 30,000 tokens. Another advantage of using a tokenizer is that unknown strings from the training set can still be composed out of tokens learned from other strings. This is important because English has a well-defined set of words, while command lines can theoretically feature any character combination. In Figure 1, there is an example of a tokenized command line in the data set.Figure 1. Example of a tokenized command lineThe BERT model is an extensive neural network that relies on an NLP concept called self-attention mechanism. The concept, introduced by Vaswani et al., 2017, has gained significant traction in the research community, to the point where almost all modern NLP solutions use it. To briefly explain how it works, Figure 2 shows how tokens passed through the network pay attention to other relevant tokens in the input. BERT can compute the attention of tokens and use it to build an understanding of the language. Another important concept used by BERT is masked language modeling (MLM). This is the training objective used by the network. Tokens in the input are randomly masked, and the model has to predict the initially masked tokens.Figure 2. The it token in the sentence The animal didnt cross the street because it was too tired comes into focus. The attention is supposed to make sense from a syntactic point of view and is used in machine learning models. Source: https://jalammar.github.io/illustrated-transformer/The MLM objective allows for self-supervised learning, meaning that researchers do not need to explicitly label data, but instead they only need the input strings themselves. This technique presents an advantage for cybersecurity as it removes the need to involve threat analysts in this initial training step.Diving deeper, there are some appropriate training steps for BERT: pre-training and fine-tuning. First, the model is pre-trained using MLM on a large dataset consisting of command lines and image file names. The model is then fine-tuned with a malware classification objective, on a different dataset, for example. With this approach for pre-training, it becomes easy to collect a large dataset from which the model will learn the representation of the data. Hence, the second phase requires significantly fewer labeled samples in the dataset. Learning from smaller amounts of labeled data constitutes an obvious advantage that becomes applicable in this case. Additionally, the first step is task-agnostic, meaning that the pre-trained model can be fine-tuned later for any task needed malware classification, malware family identification, anomaly detection, etc.ExperimentsAfter getting the model ready for training, one of the first steps is sifting through the data for a diverse subset of samples because the tens of billions of events collected from Falcon telemetry were too many for training BERT from scratch in a reasonable amount of time.Small embedding models were repeatedly trained briefly to identify the samples they were performing worse on, excluding them from the subset.Afterward, modeling efforts mainly revolved around finding the right hyper-parameters. Focus was placed on the performance for malware classification using a holdout set from the fine-tuning data, as this was a good measurement of the added benefit of using the embeddings over previous features.The hyper-parameter that brought the most significant improvement was the change to the maximum number of tokens that can get into the BERT model. This is relevant because while image file name strings, which are shorter, would often fit fine into the default token limit, command lines would not. As a result, a huge chunk of information is lost due to the truncation of the tokens, which was intended to bring all inputs to the same size. Increasing this limit to a calibrated value was crucial in the modeling process. The other experiments focused on the embeddings size and the number of hidden layers in the neural network. Optimal values were found for them according to the evolution of the classification metrics. Figures 3 and 4 show the fine-tuning process and its rationale.Figure 3. Classification performance while varying the hidden (embedding) sizeFigure 4. Classification performance while varying the number of hidden layersIn the end, our experiment resulted in an embedding model whose embeddings seem on par with the original features for two of CrowdStrikes existing machine learning models that used strings. These models use not only command lines or image file names for classification, but also other fields coming from the event data, such as execution flags and start times. Results can be observed in Figures 5 and 6.Figure 5. Weighted* true positive rate (TPR) and false positive rate (FPR) for different versions of one of our classification models. Finetuned_v5 is the version using the embeddings from our latest (and best) fine-tuned BERT model, while v3 is an earlier version of the model.* Note: Weighted means that the frequency of samples in the dataset is accounted for when computing the metrics, as the data used is guaranteed to contain duplicatesFigure 6. TPR vs. FPR for another one of our classification modelsOne observation was that the fine-tuned model shows better performance, which makes sense because it was specifically trained to separate between clean and dirty samples.Future Research OpportunitiesFuture areas of research interest involve training an end-to-end neural network for classification that incorporates BERT for the string-type features. Currently, the embeddings are computed and used in our gradient boosted tree models. Putting everything together into a single neural network would probably improve the efficiency of the BERT embeddings, as they are trained along with the rest of the features. There is significant potential that the embeddings would be better used by a deep learning algorithm than a tree-based one, since the latter makes predictions based on higher or lower features than a trained value, while the deep learning algorithm can process the input more freely.CrowdStrike researchers constantly explore novel approaches to improve the efficacy and the automated detection and protection capabilities of machine learning for customers. Additional Resources | Detection and Monitoring/Process Automation | Computer and Mathematical | null | null | null | null | null | null |
|
news | NCS News-Feed | How Human Intelligence Is Supercharging CrowdStrike’s Artificial Intelligence | #cloudsecurity | #cybersecurity | #infosecurity | #hacker | World Largest Source Of Security News. | https://nationalcybersecurity.com/how-human-intelligence-is-supercharging-crowdstrikes-artificial-intelligence-cloudsecurity-cybersecurity-infosecurity-hacker/ | 2022-04-08T18:09:00Z | The CrowdStrike Security Cloud processes over a trillion events from endpoint sensors per day, but human professionals play a vital role in providing structure and ground truth for artificial intelligence (AI) to be effective. Without human professionals, AI is useless.There is a new trope in the security industry, and it goes something like this: To keep yourself safe, you need an AI-powered solution that can act on its own, and to do that, you need to keep those pesky humans away from it. As a practitioner with a track record of bringing AI to cybersecurity not because marchitecture demands it these days but because of its actual utility to solve security problems I find this characterization puzzling.If this sounds controversial to you, then note that it only happens to be so in the cybersecurity industry. This characterization is entirely uncontroversial among AI and machine learning (ML) researchers. And even in other industries, leveraging human expertise is entirely normal. How much so? You can purchase services to get your data sets labeled by humans. Some companies even leverage crowdsourced processes to get labels from regular users. You have probably already contributed to such an effort when proving that youre not a robot to a website.FallaciesHow did this misanthropic security posture become pervasive? There are two fallacies at play. If you are a glass-half-full person, you could call those misconceptions. But if you focus on the top half of the glass, you might call those misrepresentations. First, artificial intelligence is not, indeed, intelligent. Have a conversation with your smart speaker to reassure you of that fact. AI is a set of algorithms and techniques that often produces useful results. But sometimes they fail in odd and unintuitive ways. It even has its own distinct attack surface that adversaries can leverage if left unprotected. Treating AI as the panacea fixing our industrys woes is dangerous, as I discussed last year in an invited talk at the workshop on Robustness of AI Systems Against Adversarial Attacks.Second, we are all still jaded from the signature days. Back then, signatures got deployed, initially stopped threats, then started to miss new threats, prompting humans to write new signatures, restarting the cycle the next day. Naturally, this approach is a losing proposition not only is this model purely reactive, but its speed is also clearly limited by human response time. Of course, this is not how AI models are integrated to prevent threats. No human interaction is needed for an AI model in the CrowdStrike Falcon® platform to stop a threat dead in its tracks. CrowdStrike specifically uses AI to detect threats that have not yet been conceived without requiring any updates.Data, Data, DataWhat does it take to train an AI model that can reliably do such a feat? Most importantly, it takes data. And a lot of it. The CrowdStrike Security Cloud processes over a trillion events from endpoint sensors per day. To put this into perspective, a ream of 500 pages of office printer paper is about 50 millimeters thick (about 2 inches). A trillion pages would stack about 100,000 kilometers high, or roughly 60,000 miles. Those are enough miles to earn you gold status every day on most airlines, but it would take you about four days to fly this distance at a regular airliner cruising speed. And after those four days, the stack will have reached the moon.However, what is essential is that this metaphorical stack is not only tall. The CrowdStrike Security Cloud also has a large footprint covering facets such as endpoint security, cloud security, identity protection, threat intelligence and much more. For each of these facets, we process complex and nuanced data records. All of this information gets contextualized and correlated in our proprietary CrowdStrike Threat Graph®, a large distributed graph database we developed.The Falcon platform was designed from the ground up as a cloud-native system to effectively process this volume of data in meaningful ways. None of this is possible on an appliance. And none of this is possible with hybrid cloud solutions i.e., those clouds that are merely stacks of vendor-managed rack-mounted appliances. Those make as much sense as streaming video across the internet from a VCR.More data allows us to spot fainter signals. Imagine plotting the latitude and longitude of U.S. cities onto graph paper. Initially, you will see some randomly scattered points. After doing this for a large number of cities, a familiar shape will slowly emerge out of a cloud of points. And that would be the shape of the United States. However, that shape would have never become apparent if everyone had used a local graph paper to plot a handful of cities in their vicinity.Structure and Ground TruthSo how do humans fit into the picture? If there is so much information piled onto our metaphorical stack of printer paper that even an airliner could not keep up with it, how do humans have a fighting chance to make an impact? There are two ways. First, stacking the sheets is not the smartest way to organize them. Laying them out flat next to each other results in a paper square of about 250 times 250 kilometers (about 150 miles per side). That is much more manageable an area like this could be mapped. If we instead organize the reams of paper in a cube, it would be about an 180×180×180 meter cube (or about 600 feet per edge). Notice its meters now, no longer kilometers, making it a lot more compact and ready to be charted. The takeaway is that the problem becomes more tractable by organizing data in more dimensions and considering adjacencies. That is the mission of our cloud and Threat Graph.Second, not all data is created equal. There is another type of data to which humans can contribute. We call this type of data ground truth, and it has a significant impact on the training of AI models. Ground truth is the type of data that describes how we want an AI model to behave under certain input. For our metaphorical paper stack, an example of ground truth would be whether a sheet of paper corresponds to a threat (for instance, a red-colored sheet) or benign activities (a green-colored sheet). If you organize your data in meaningful ways, as described earlier, you only need a few colored sheets to deduce information for whole reams of paper as well. Imagine you pull a sheet out of a ream somewhere in our paper cube, and it happens to be red. The other sheets in that ream are likely red, too. And some of the adjacent reams will also mostly have red paper in them. That is how certain types of AI learn: They figure out how to react to similar (adjacent) inputs based on ground truth this is called supervised learning.Supervised learning is a powerful way to create highly accurate classification systems i.e., systems that have high true positive rates (detecting threats reliably) and low false positive rates (rarely causing alarms on benign behavior). Not all learning needs to be conducted using ground truth (the domain of unsupervised learning is concerned with other techniques, for example). But as soon as it is time to evaluate whether such an AI system works as intended, you will need ground truth too.Lastly, since ground truth is often a rare commodity, rarer than other data, other techniques blend these two approaches. In semi-supervised learning, an AI is trained on large amounts of data in an unsupervised way, and then it is tuned using supervised training using less ground truth. In self-supervised learning, the AI takes clues from structure in the data itself.Humans, Humans, HumansAt CrowdStrike, we designed our systems to maximize ground truth generation. For example, whenever CrowdStrike Falcon OverWatch™ threat hunters find an adversary on the network, those findings become new ground truth. Similarly, when the OverWatch experts evaluate suspicious activity as benign, it also becomes ground truth. Those data points help train or evaluate AI systems. We generate data of this kind at scale every day using our vantage point in the cloud. This allows us to train better models and build better systems with better-understood performance characteristics.AI systems can also flag incidents where ground truth is sparser and a higher level of uncertainty exists. While the AI can still prevent threats in those circumstances without delay, the flagged data can later be reviewed by humans to boost the amount of available ground truth where it matters most. Alternatively, other means can provide additional data, such as a detonation within the CrowdStrike Falcon X™ malware analysis sandbox to observe threat behaviors in a controlled environment. Such solutions are based on a paradigm called active learning.Active learning is a useful way to get the limited resource of human attention spent where it matters the most. AI decisions do not get stalled the AI will continue to analyze and stop threats. We call this the fast loop. The Falcon OverWatch team, among others, analyzes what our AI systems surface and provides an expert disposition, which we feed back into our AI algorithms. Over this route, our AI models receive a constant stream of feedback about where they were successful and where we spotted and stopped novel attacks by other means. The AI learns from this feedback and incorporates it into future detections. We call this part the long loop. As a result, our AI gets better constantly as new data enters the system.Proof PointsWe prove that this approach is superior every day in the field when we repel adversaries from our customers networks, prevent theft of data and ensure that the lifeblood of the companies we serve their information and intellectual property is protected.In addition, we have a rigorous testing record of numerous independent third-party evaluations by leading testing organizations such as AV-Comparatives, SE Labs and MITRE. AI-centric vendors tend to avoid tests that penalize false positives but not CrowdStrike. Public reports from independent testing organizations attest to CrowdStrikes commitment to transparency, especially with AI becoming a pervasive technology to work with data. Outside of testing, CrowdStrike was also the first NGAV vendor to make our technology readily available on VirusTotal for public scrutiny, and we provide our technology readily for the research community to utilize on Hybrid Analysis. Transparency is a central tenet of our privacy-by-design approach: CrowdStrike designs its offerings with transparency as a core value so customers can see what exactly is processed, make decisions about how its processed, and select retention periods.Final ThoughtsAI is becoming a more commonplace tool to stop cyber threats, but it is important to look beyond the mere presence of an AI algorithm somewhere in the data flow. It is critical to gauge the efficacy of an AI system by understanding where the data is coming from, including the necessary ground truth. Artificial Intelligence can learn only if new facts constantly enter the system at scale, and humans in the loop are a hallmark of a well-designed AI system.Additional Resources | Detection and Monitoring | Computer and Mathematical | null | null | null | null | null | null |
|
news | Sean Michael Kerner | Oracle HeatWave ML fuses machine learning with database | The tech giant is continuing to build out the capabilities of its MySQL HeatWave cloud database with real-time elasticity, performance boosts and machine learning integration. | https://www.techtarget.com/searchdatamanagement/news/252515271/Oracle-HeatWave-ML-fuses-machine-learning-with-database | 2022-03-29T15:25:00Z | Oracle today released new machine learning integrations with its MySQL HeatWave cloud service.The tech giant originally launched the MySQL HeatWave service in December 2020 as the Oracle MySQL Database Service with MySQL Analytics Engine and rebranded it as HeatWave in 2021.The service runs the MySQL database as a managed offering on Oracle Cloud Infrastructure (OCI)and enables users to run both online transaction processing (OLTP) and OLAP workloads. Oracle has steadily added new features to the HeatWave platform, including automation features the company branded as Autopilot in Aug. 2021.As part of the new update, generally available now, Oracle integrated machine learning capabilities into the MySQL HeatWave platform. With HeatWave ML, users can directly run machine learning workloads on the database service.The MySQL HeatWave platform contains many elegant capabilities, said Constellation research analyst Holger Mueller.The ability to support both OLTP and OLAP data workloads is one such set of capabilities and Mueller noted that Oracle has expanded that by adding machine learning inside the MySQL HeatWave database service.MySQL Heatwave ML goes beyond AutopilotCustomer feedback first led to the release of the MySQL Autopilot features in 2021 and has now led to the integration of ML with HeatWave ML, said Nipun Agarwal, senior vice president, MySQL HeatWave development at Oracle."With the introduction of HeatWave customers are now storing more data into the MySQL database and for many of these customers, they need to run machine learning on their data," Agarwal said.He explained that without HeatWave ML, users had to first extract data out of MySQL into a machine learning-capable platform in order to run machine learning training or inference workloads. That's no longer the case, due to the HeatWave ML capability.HeatWave ML uses AutoML technology that is already supported in the Oracle Autonomous Database platform.With Oracle's AutoML, users can build out machine learning pipelines. The machine learning technology is now stored inside the MySQL database and the machine learning processing is also executed directly within the database.By handling machine learning operations within the database, Agarwal said users deal with less complexity for users, as they dont have to export data.There are also security benefits to handling machine learning inside the database as the access controls that are in place for the MySQL database still apply. By contrast, when the data had to be exported into a separate machine learning tool, users needed to set up and configure access control again.Real-time elasticity and data processing advances for MySQL HeatWaveIn addition to the HeatWave ML capabilities, Oracle is added new real- time elasticity and data processing improvements.The MySQL HeatWave service runs on OCI, which can scale resources up or down as demand warrants. Agarwal explained that the new real time elasticity feature in MySQL HeatWave will enable database deployments to scale up with new database nodes when needed without interrupting a running workload."The infrastructure enables the elasticity, but if we don't do anything at our layer, the queries will get interrupted when the cluster makes the transition and adds a node, Agarwal said. "So that's the work we have done."Oracle also increased the amount of data that can be processed per database node, boosting processing capability from about 400 gigabytes of data per node to 820 gigabytes with a memory optimization approach known as a Bloom Filter.The vendor also added support for the open source LZ4 compression algorithm, which further optimizes data in a MySQL HeatWave node."What we have done is we have doubled the amount of data that can be processed per node," Agarwal said. | Process Automation/Decision Making/Content Synthesis | Computer and Mathematical | null | null | null | null | null | null |
|
news | Insight Partners Celebrates Inaugural ScaleUp Series with AI Conference, ScaleUp:AI | Over 1,700 attendees from 50 Countries Across the Globe Joined to Discuss the Impacts of AI NEW YORK, April 11, 2022 /PRNewswire/ -- This week, New York-based global private equity and venture capital firm Insight Partners hosted its premier Artificial Intelligence (AI) industry... | https://www.prnewswire.com/news-releases/insight-partners-celebrates-inaugural-scaleup-series-with-ai-conference-scaleupai-301523223.html | 2022-04-11T21:04:00Z | Over 1,700 attendees from 50 Countries Across the Globe Joined to Discuss the Impacts of AINEW YORK, April 11, 2022 /PRNewswire/ -- This week, New York-based global private equity and venture capital firm Insight Partners hosted its premier Artificial Intelligence (AI) industry conference, ScaleUp:AI, on April 6-7 in New York City. The hybrid event brought together thousands of participants across the globe to hear the foremost thinkers, doers, users, and investors of AI opine on both the industry today and its future, with thought-provoking presentations and networking opportunities.Presented by Insight Partners, Citi, and NASDAQ, ScaleUp:AI provided a comprehensive look at how AI is impacting every industry and facet of business, with sessions exploring its effects on healthcare, cybersecurity, fraud mitigation, ethics and equity, business intelligence, financial services, and more. Leaders in AI partnered with Insight on the event, including 6sense, Acceldata, BigPanda, Clarify Health, Cognigy, Expressive, Explorium, Featurespace, intenseye, Pecan, Quantum Metric, Rasgo, Relevance AI, RudderStack, Run:AI, SentinelOne, Slim.AI, StormForge, Turing, Weights & Biases, WINT, Zest AI. ScaleUp:AI is the first in a series of Insight Partners global conferences and was specifically designed for the artificial intelligence (AI) community. With over 1,700 attendees in person and virtually, from more than 50 countries across the globe, the event was packed with insights and key takeaways, including:Why "good data" can be more important than "big data" as the world moves towards a more data-centric versus model-centric view of AI.The importance of explainability and bias detection before any model gets into production at scale.The value of strategically combining human and machine intelligence and why "symbiotic intelligence" is the best of both worlds.Why the evolution of AI systems is going to happen more quickly than the evolution of traditional software, and how we are truly at the beginning of a scale up inflection point.ScaleUp: AI speakers (in alphabetical order):Ali Ghodsi, CEO at Databricks Allie K. Miller, Global Head of Machine Learning BD, Startups and Venture Capital at AWS Andrew Ng, Co-Founder Google Brain + CEO at Landing AI Alex Dalyac, Founder & CEO, Tractable Amir Orad, CEO, Sisense Anita Lynch, Chief Data Officer | Corporate Board Director | Investor Arvind Purushotham, Managing Director & Global Head, Venture Investing, Citi Asmau Ahmed, Explorer, Alphabet X Brad Peterson, CTO & CIO, NASDAQ Chase Ginther, Machine Learning & Data Scientist Platform Architect, Snowflake Christina Montgomery, Vice President & Chief Privacy Officer, IBM Clement Delangue, CEO, Hugging Face Danny Tobey, M.D., J.D., Partner, DLA Piper David Kirkpatrick, Founder & Editor-in-Chief, Techonomy Gabi Steele, Founder, Preql Ganesh Bell, Managing Director, Insight Partners Gayatri Narayan, SVP Digital Products and Services, PepsiCo George Mathew, Managing Director, Insight Partners Heather Carroll Cox, Chief Digital Health & Analytics Officer, Humana Janice Tse, Senior Director, Data Science, PayPal Jared Dunnmon, Technical Director, AI/ML Portfolio, Defense Innovation Unit Jason Zintak, CEO, 6sense Jay Budzik, CTO, Zest AI Jerry Overton, CEO, Applied AI Studio Jillian D'Onfro, Enterprise Editor, Insight Partners Joe McKendrick, Analyst & Contributor, Forbes Jon Krohn, Chief Data Scientist, Nebula Jonathan Rosenbaum, Managing Director, Insight Partners Karen Snow, SVP, Head of US Listings and Revenue, NASDAQ Karthik Tadinada, Chief Operating Officer, FeatureSpace Krishna Gade, CEO at Fiddler AI Leah Weiss, Founder, Preql Lonne Jaffe, Managing Director, Insight Partners Lydia Dishman, Staff Editor, Fast Company Manoj Saxena, Executive Chairman, Responsible Artificial Intelligence Institute & Cognitive Scale Neil Ackerman, Head of Global Supply Chain, Johnson & Johnson Nicholas Warner, Chief Operating Officer, SentinelOne Nick Sinai, Senior Advisor, Insight Partners Nicole Wong, Former Deputy US Chief Technology Officer Nikki Parker, Insight Partners Patrick Dougherty, Co-Founder & CTO, Rasgo Prag Sharma, Global Head of Artificial Intelligence, Citi Praveen Akkiraju, Managing Director, Insight Partners Ryan Hinkle, Managing Director, Insight Partners Scott Barclay, Managing Director, Insight Partners Scott Southwood, Chief Revenue Officer, Quantum Metric Seth Dobrin, PhD., Global Chief AI Officer, IBM Siobhan Savage, Co-Founder & CEO, Reejig Sirisha Kadamalakalva, Chief Strategy Officer, DataRobot Suchi Saria, PhD., Founder & CEO, Bayesian Health Supriya Gupta, Head of Recommendations, Credit Karma Svetlana Sicular, VP Analyst, Gartner Tomas Pfister, Head of AI Research, Google Cloud Vaibhav Sahgal, Principal, Americas, Economist Impact Vittorio Cretella, Chief Information Officer, P&G Wardah Inam, Co-Founder & CEO, Overjet Willian Falcon, Founder & CEO, Grid.ai Yan-David Erlich, CRO & COO, Weights & Biases The ScaleUp Series which features topics like AI, healthcare, and cybersecurity is designed to guide and influence leaders, decision makers, and doers who are ready to soar, with the support of the software industry's most trusted ScaleUp partner, Insight Partners. Learn more at scaleup.events.Insight Partners is a global software investor partnering with high-growth technology, software, and Internet startup and ScaleUp companies that are driving transformative change in their industries. As of February 24, 2022, the closing of the firm's recent fundraise, Fund XII, brings Insight Partners regulatory assets under management to over $90B. Insight Partners has invested in more than 600 companies worldwide and has seen over 55 portfolio companies achieve an IPO. Headquartered in New York City, Insight has offices in London, Tel Aviv, and Palo Alto. Insight's mission is to find, fund, and work successfully with visionary executives, providing them with practical, hands-on software expertise to foster long-term success. Insight Partners meets great software leaders where they are in their growth journey, from their first investment to IPO. For more information on Insight and all its investments, visit insightpartners.com or follow us on Twitter @insightpartners.SOURCE Insight Partners | Content Synthesis/Discovery/Information Retrieval Or Search | Business and Financial Operations/Healthcare Practitioners and Support/Computer and Mathematical | null | null | null | null | null | null |
||
news | Frederic Lardinois | TruEra raises $35M for its AI analytics and monitoring platform – TechCrunch | TruEra, a startup that offers an AI quality management solution to optimize, explain and monitor machine learning models, today announced that it has raised a $25 million Series B round led by Menlo Ventures. Existing investors Greylock Partners, Wing Venture Capital (which led its $12 million Seri… | https://techcrunch.com/2022/03/16/truera-raises-35m-for-its-ai-analytics-and-monitoring-platform/ | 2022-03-16T13:00:54Z | TruEra, a startup that offers an AI quality management solution to optimize, explain and monitor machine learning models, today announced that it has raised a $25 million Series B round led by Menlo Ventures. Existing investors Greylock Partners, Wing Venture Capital (which led its $12 million Series A round in late 2020), Harpoon Ventures, Conversion Capital, the Data Community Fund, as well as new investors Forgepoint Capital and the B Capital Groups Ascent Fund also participated in this round. In total, TruEra has now raised $42.3 million.“We believe that the next big challenge in AI is the quality challenge,” TruEra CEO and co-founder Will Uppington said. “AI is at an inflection point: lots of opportunities but also a lot of challenges to make AI actually work in the enterprise. And we think that’s the major issue that’s preventing AI systems from getting into real-world use and actually delivering on KPIs.”Image Credits: TruEraUppington argues that it’s not just hard to design and build high-quality models to begin with, but there’s also still a lot of concerns around trust, transparency and fairness when it comes to putting models into production — and increasing regulatory pressure around AI fairness is giving enterprises pause because they need to put auditable systems in place to conform to these rules. And once a model can finally be put into production, businesses have to ensure that quality remains high, even as some of the underlying data changes.TruEra argues that an enterprise AI quality management solution needs to approach these problems head-on, starting with tools that developers can use while they train the model so that they can test and evaluate their models long before they go into production. To do this, the company’s service can be integrated right into the kind of Jupyter notebooks that most data scientists are already using to build their models anyway, for example.“We’re in the space where software development was in the 90s, before you had tools and agile development methodologies,” Uppington said. “Data science is very waterfall and the models are still pretty black box. That reduces the quality of your development process, just like it did in software development in the 90s. […] We think that we can help the world get to those better tools and more agile-like development with this kind of comprehensive, fast quality testing and making it really easy for the data scientists to use it.”The company says it saw its revenue grow over 5x since it raised its Series A round in late 2020. In part, Uppington noted, that’s driven by the fact that a lot of enterprises are now getting to the point in their AI journey where they want to put models into production and are starting to face these quality challenges. Add to that the regulatory environment and some high-profile failures (think Zillow), and it’s a good time to be in the AI quality space.Menlo Ventures partner Tim Tully, the former CTO of Splunk, also stressed that TruEra approaches the problem through the lens of the model, with co-founder and Chief Scientist Anupam Datta having done some of the early academic work on AI explainability as a professor at Carnegie Mellon.“If I were the machine learning data scientist, what would I want to use? I looked at all the companies and I was looking for the one that provided the depth and approached the problem through the lens of the model, as opposed to just the opposite direction, which I think is completely wrong,” Tully said. “And frankly, I think a lot of the products are very superficial in the treatment of the problem. I want to go really deep with it and I want to see some proprietary research that creates distance from the competitors.”Taking open source Python libraries and putting a user interface on top of them doesn’t quite cut it in this market, Tully added. | Detection and Monitoring/Content Synthesis | Business and Financial Operations | null | null | null | null | null | null |
|
news | 1a3orn | New Scaling Laws for Large Language Models | Published on April 1, 2022 8:41 PM GMTOn March 29th, DeepMind published a paper, "Training Compute-Optimal Large Language Models", that shows that essentially everyone -- OpenAI, DeepMind, Microsoft, etc. -- has been training large language models with a deeply suboptimal use of compute.Following the new scaling laws that they propose for the optimal use of compute, DeepMind trains a new, 70-billion parameter model that outperforms much larger language models, including the 175-billion parameter GPT-3 and DeepMind's own 270-billion parameter "Gopher".I'm going to walk through the background of the now-falsified scaling laws from prior to this paper; then I'm going to describe the new laws given by this paper, and why they weren't found earlier; and finally I'll briefly mention some possible implications of this paper.Independently of the consequences -- this paper is exciting! Machine learning researchers thought they knew laws about how to scale compute optimally, and the laws turned out to be wrong! It's a nice clear instance of science-functioning-in-ways-it-should in ML.BackgroundIn 2020 OpenAI proposed scaling laws which have since been used (at least implicitly) to guide the training of large models.These scaling laws attempt to answer several questions. One of these questions is "Given a certain quantity of compute, how large of a model should I train in order to get the best possible performance?"The answer isn't "as large a model as possible" because, for a fixed quantity of compute, a larger model must be trained on less data. So training a 1-million parameter model on 10 books takes about as many FLOPS as training a 10-million parameter model on one book.In the case of very large language models like GPT-3, these alternatives look more like training a 20-billion parameter model on 40% of an archive of the Internet, or training a 200-billion parameter model on 4% of an archive of the Internet, or any of an infinite number of points along the same boundary.Compute on this scale is not cheap -- so if you're going to be spending 10 million dollars per training run on a model scaled up to be 100x bigger than your toy version of the model, you want principles better than a feeling in your gut to guide how you allocate this compute between "amount of data the model sees" and "how big the model should be."So if you get 10x more compute, how much bigger do you make your model? What about 100x more compute? Or 1000x more compute?Well, the OpenAI paper answers the question. If you get 10x more compute, you increase your model size by about 5x and your data size by about 2x. Another 10x in compute, and model size is 25x bigger and data size is only 4x bigger.Model size is almost everything.Model Size Is (Almost) EverythingSubsequent researchers and institutions took this philosophy to heart, and focused mostly on figuring out how to engineer increasingly-large models, rather than training comparatively smaller models over more data. Thus, the many headlines of increasingly-larger models that we've seen coming from ML research institutions and AI accelerator startups.See, for instance, the following chart from the new DeepMind paper.Large Subsequent ModelsNote the increase to half a trillion parameters, with identical quantities of training data.And note that this understanding of the world has also been used to project forward future data requirements -- NVIDIA, for instance, talks about training a trillion parameter model with only 450 billion tokens. Everyone had decided model size was much more important than data size.FalsificationThe DeepMind paper re-approaches the issue of scaling laws.It uses three separate methods to try to find the correct scaling law, but I'm going to zoom in on the second because I think it's the easiest to comprehend.The method is simple. They choose 9 different quantities of compute, ranging from about 3 x 1018 FLOPS to 3 x 1021 FLOPS.For each quantity of compute, they train many different-sized models. Because the quantity of compute is constant for each level, the smaller models are trained for more time and the larger models for less.The following chart from the paper illustrates this. Each line connects models (at different sizes) trained using the same amount of compute. The vertical axis is the loss, where lower is better:IsoFLOP CurvesEach of these curves has a clear interpretation. To the left of the minima on each curve, models are too small -- a larger model trained on less data would be an improvement. To the right of the minima on each curve, models are too large -- a smaller model trained on more data would be an improvement. The best models are at the minima.If you connect the minima at each curve and extend the line outwards, you get a new law! Specifically, it looks like for every increase in compute, you should increase data size and model size by approximately same amount.If you get a 10x increase in compute, you should make your model 3.1x times bigger and the data you train over 3.1x bigger; if you get a 100x increase in compute, you should make your model 10x bigger and your data 10x bigger.Now, all of these experimental runs graphed above were on relatively small models, trained with non-insane quantities of compute. So you could have argued that this rule wouldn't work with much larger numbers.But to verify that the law was right, DeepMind trained a 70-billion parameter model ("Chinchilla") using the same compute as had been used for the 280-parameter Gopher. That is, they trained the smaller Chinchilla with 1.4 trillion tokens, while the larger Gopher had only been trained with 300 billion tokens.And, as the new scaling laws predicts, Chinchilla is a lot better than Gopher on pretty much everything. It is better by the standard less-perplexity-per-word measure, and by the more interesting usefulness-on-downstream-task measures. I could insert a bunch of graphs here, but if you aren't familiar with the measures in question they basically all sum to "Hey, number goes up!"Number go up (Or down when appropriate)Given the evidence of Chinchilla, it appears pretty definite that OpenAI got the scaling laws wrong. So one natural question is "What happened that they got it wrong?"Well, background: The learning rate of a deep neural network dictates how much the parameters of a network are updated for each piece of training data. Learning rates on large training runs are typically decreased according to a schedule, so that data towards the end of a training run adjusts the parameters of a neural network less than data towards the beginning of it. You can see this as reflecting the need to not "forget" what was learned earlier in the training run.It looks like OpenAI used a single total annealing schedule for all of their runs, even those of different lengths. This shifted the apparent best-possible performance downwards for the networks on a non-ideal annealing schedule. And this lead to a distorted notion of what laws should be.ConsequencesOne funky thing about this is that we shouldn't see larger language models... at all, for at least a few years.DeepMind provides a helpful chart of how much training data and compute you'd need to optimally train models of various sizes.Note that it wouldn't make sense to train a model with 520 billion parameters until you had 60x as much compute as was used for Gopher / Chinchilla. You don't hit the need for a trillion parameters until you have 200x as much compute as was used for Gopher / Chinchilla.(You might need even more compute; in part of the paper, DeepMind says that at large quantities of compute the scaling laws bend slightly, and the optimal behavior might be to scale data by even more than you scale model size. In which case you might need to increase compute by more than 200x before it would make sense to use a trillion parameters.)So until wafer-scale chips decrease the cost of compute ten times, and Google also decides all it really needs for AGI is to put ten times as much money into LM's, we've seen the largest LM's we're likely to see. However long that may be.One potential thing that could follow from this is that, because inference costs are obviously smaller for small language models, services such as OpenAI's GPT-3 should be cheaper for them to provide. The cost to run them, at the same level of quality, should drop by at least 3x. I don't know what percent the cost of providing these services is running them rather than training them, but potentially it could make services based on these models more efficient than they were before, and open up economic viability in places that didn't exist before.One last consequence is that this paper makes the engineering involved in training large language models easier. Gathering more good data would be (I think) far easier than trying to efficiently split computation for increasingly large LM's across 1000s of machines.(Cross post)Discuss | https://www.lesswrong.com/posts/midXmMb2Xg37F2Kgn/new-scaling-laws-for-large-language-models | 2022-04-01T20:41:17Z | On March 29th, DeepMind published a paper, "Training Compute-Optimal Large Language Models", that shows that essentially everyone -- OpenAI, DeepMind, Microsoft, etc. -- has been training large language models with a deeply suboptimal use of compute.Following the new scaling laws that they propose for the optimal use of compute, DeepMind trains a new, 70-billion parameter model that outperforms much larger language models, including the 175-billion parameter GPT-3 and DeepMind's own 270-billion parameter "Gopher".I'm going to walk through the background of the now-falsified scaling laws from prior to this paper; then I'm going to describe the new laws given by this paper, and why they weren't found earlier; and finally I'll briefly mention some possible implications of this paper.Independently of the consequences -- this paper is exciting! Machine learning researchers thought they knew laws about how to scale compute optimally, and the laws turned out to be wrong! It's a nice clear instance of science-functioning-in-ways-it-should in ML.BackgroundIn 2020 OpenAI proposed scaling laws which have since been used (at least implicitly) to guide the training of large models.These scaling laws attempt to answer several questions. One of these questions is "Given a certain quantity of compute, how large of a model should I train in order to get the best possible performance?"The answer isn't "as large a model as possible" because, for a fixed quantity of compute, a larger model must be trained on less data. So training a 1-million parameter model on 10 books takes about as many FLOPS as training a 10-million parameter model on one book.In the case of very large language models like GPT-3, these alternatives look more like training a 20-billion parameter model on 40% of an archive of the Internet, or training a 200-billion parameter model on 4% of an archive of the Internet, or any of an infinite number of points along the same boundary.Compute on this scale is not cheap -- so if you're going to be spending 10 million dollars per training run on a model scaled up to be 100x bigger than your toy version of the model, you want principles better than a feeling in your gut to guide how you allocate this compute between "amount of data the model sees" and "how big the model should be."So if you get 10x more compute, how much bigger do you make your model? What about 100x more compute? Or 1000x more compute?Well, the OpenAI paper answers the question. If you get 10x more compute, you increase your model size by about 5x and your data size by about 2x. Another 10x in compute, and model size is 25x bigger and data size is only 4x bigger.Model size is almost everything.Model Size Is (Almost) EverythingSubsequent researchers and institutions took this philosophy to heart, and focused mostly on figuring out how to engineer increasingly-large models, rather than training comparatively smaller models over more data. Thus, the many headlines of increasingly-larger models that we've seen coming from ML research institutions and AI accelerator startups.See, for instance, the following chart from the new DeepMind paper.Large Subsequent ModelsNote the increase to half a trillion parameters, with identical quantities of training data.And note that this understanding of the world has also been used to project forward future data requirements -- NVIDIA, for instance, talks about training a trillion parameter model with only 450 billion tokens. Everyone had decided model size was much more important than data size.FalsificationThe DeepMind paper re-approaches the issue of scaling laws.It uses three separate methods to try to find the correct scaling law, but I'm going to zoom in on the second because I think it's the easiest to comprehend.The method is simple. They choose 9 different quantities of compute, ranging from about 3 x 10<sup>18</sup> FLOPS to 3 x 10<sup>21</sup> FLOPS.For each quantity of compute, they train many different-sized models. Because the quantity of compute is constant for each level, the smaller models are trained for more time and the larger models for less.The following chart from the paper illustrates this. Each line connects models (at different sizes) trained using the same amount of compute. The vertical axis is the loss, where lower is better:IsoFLOP CurvesEach of these curves has a clear interpretation. To the left of the minima on each curve, models are too small -- a larger model trained on less data would be an improvement. To the right of the minima on each curve, models are too large -- a smaller model trained on more data would be an improvement. The best models are at the minima.If you connect the minima at each curve and extend the line outwards, you get a new law! Specifically, it looks like for every increase in compute, you should increase data size and model size by approximately same amount.If you get a 10x increase in compute, you should make your model 3.1x times bigger and the data you train over 3.1x bigger; if you get a 100x increase in compute, you should make your model 10x bigger and your data 10x bigger.Now, all of these experimental runs graphed above were on relatively small models, trained with non-insane quantities of compute. So you could have argued that this rule wouldn't work with much larger numbers.But to verify that the law was right, DeepMind trained a 70-billion parameter model ("Chinchilla") using the same compute as had been used for the 280-parameter Gopher. That is, they trained the smaller Chinchilla with 1.4 trillion tokens, while the larger Gopher had only been trained with 300 billion tokens.And, as the new scaling laws predicts, Chinchilla is a lot better than Gopher on pretty much everything. It is better by the standard less-perplexity-per-word measure, and by the more interesting usefulness-on-downstream-task measures. I could insert a bunch of graphs here, but if you aren't familiar with the measures in question they basically all sum to "Hey, number goes up!"Number go up (Or down when appropriate)Given the evidence of Chinchilla, it appears pretty definite that OpenAI got the scaling laws wrong. So one natural question is "What happened that they got it wrong?"Well, background: The learning rate of a deep neural network dictates how much the parameters of a network are updated for each piece of training data. Learning rates on large training runs are typically decreased according to a schedule, so that data towards the end of a training run adjusts the parameters of a neural network less than data towards the beginning of it. You can see this as reflecting the need to not "forget" what was learned earlier in the training run.It looks like OpenAI used a single total annealing schedule for all of their runs, even those of different lengths. This shifted the apparent best-possible performance downwards for the networks on a non-ideal annealing schedule. And this lead to a distorted notion of what laws should be.ConsequencesOne funky thing about this is that we shouldn't see larger language models... at all, for at least a few years.DeepMind provides a helpful chart of how much training data and compute you'd need to optimally train models of various sizes.Note that it wouldn't make sense to train a model with 520 billion parameters until you had 60x as much compute as was used for Gopher / Chinchilla. You don't hit the need for a trillion parameters until you have 200x as much compute as was used for Gopher / Chinchilla.(You might need even more compute; in part of the paper, DeepMind says that at large quantities of compute the scaling laws bend slightly, and the optimal behavior might be to scale data by even more than you scale model size. In which case you might need to increase compute by more than 200x before it would make sense to use a trillion parameters.)So until wafer-scale chips decrease the cost of compute ten times, and Google also decides all it really needs for AGI is to put ten times as much money into LM's, we've seen the largest LM's we're likely to see. However long that may be.One potential thing that could follow from this is that, because inference costs are obviously smaller for small language models, services such as OpenAI's GPT-3 should be cheaper for them to provide. The cost to run them, at the same level of quality, should drop by at least 3x. I don't know what percent the cost of providing these services is running them rather than training them, but potentially it could make services based on these models more efficient than they were before, and open up economic viability in places that didn't exist before.One last consequence is that this paper makes the engineering involved in training large language models easier. Gathering more good data would be (I think) far easier than trying to efficiently split computation for increasingly large LM's across 1000s of machines.(Cross post) | Content Synthesis/Discovery/Prediction | Unknown | null | null | null | null | null | null |
|
news | Katyanna Quach | Google talks up its 540-billion-parameter text-generating AI system | For this energy-efficient mega-PaLM, you gotta hand it to themThough AI models will continue to get increasingly more powerful the larger they become, performance improvements from scale have not yet plateaued, according to researchers at Google.… | https://www.theregister.com/2022/04/05/ai_google_language/ | 2022-04-05T09:41:05Z | Though AI models will continue to get increasingly more powerful the larger they become, performance improvements from scale have not yet plateaued, according to researchers at Google.But while neural networks have grown, are they really any smarter? Companies are making larger and larger machine-learning systems, though they still suffer from the same weaknesses: they all generate toxic, biased, and inaccurate text. Experts have argued against making language models larger, comparing them to "stochastic parrots;" they don't understand language and simply regurgitate patterns in the training data.They can spit out racist remarks, produce misinformation, or memorise personal identifiable information. The safety and ethical risks involved in building such systems increases as they grow in size, prompting academics to argue against scaling up. Some believe more time and effort should be spent inventing new algorithms that are smaller and less computationally intensive, instead of just making existing architectures larger.The latest 540-billion parameter transformer-based system built by researchers at Google, however, shows the performance of language models can still improve with size."We evaluated [Pathways Language Model] (PaLM) on hundreds of language understanding and generation tasks, and found that it achieves state-of-the-art few-shot performance across most tasks, by significant margins in many cases," Sharan Narang and Aakanksha Chowdhery, software engineers at Google Research, said.PaLM was better at a wide range of tasks, from question-answering, reading comprehension to common sense reasoning, than OpenAI's GPT-3, Nvidia and Microsoft's Megatron-Turing NLG, and DeepMind's Chinchilla and Gopher language models, they claimed.PaLM is bigger, and contains more parameters than all of these models. It can also generate code, and seems to perform comparably to OpenAI's Codex 12B model despite being trained on less Python code, according to results published in a recent paper [PDF];PaLM excels in another area: training efficiency. It was trained on 6,144 chips across two Cloud TPU v4 Pods, the company's largest training system configuration to date. A total of 2.56x10 FLOPs, equivalent to 29,600 petaFLOPs per day were performed during the process."The goal is always to optimize the parallelism strategy, model architecture, compiler implementation together to maximize the FLOPs utilization, but the theoretical maximum throughput may not be achievable on any system," Chowdhery told The Register."Essentially when the accelerator chips (TPU or GPU) are not being used for matrix multiplication operations, this metric counts it as less than the theoretical maximum utilization."Some computation is wasted transferring data from the memory, and passing it back and forth between neighboring chips. PaLM achieves training efficiency of 57.8 per cent hardware FLOPs utilization, making it more efficient than other language models. The researchers reckon there are still more performance gains to be realized by training language models on higher quality text or more data on top of training efficiency.Despite PaLM's capabilities, it still generates offensive and untruthful text and reflects biases in its training data. For example, it is more likely to associate Muslims with violence or terrorism stereotypes. Like other language models, PaLM was trained on text scraped from the internet. In fact, 50 percent of its training data come from conversations on social media websites."Our analysis reveals that our training data, and consequently PaLM, do reflect various social stereotypes and toxicity associations around identity terms," the team admitted in the paper. "Removing these associations, however, is non-trivial; for instance, filtering off content that is deemed toxic by an automated tool may disproportionately exclude content about or authored by marginalized subgroups in the training data."PaLM's capabilities and limitations are partly due to it memorizing snippets of its training data. It has a memorization rate of 40 percent for examples that appear more than 500 times in the datatset, compared with 0.75 percent for an example that appears once. Memorization is double-edged sword; it's useful for recalling facts in information, but it also makes the system more likely to learn prejudices too.Still, the researchers claim PaLM "shows breakthrough capabilities on numerous very difficult tasks". It is able to explain jokes, or perform multi-step arithmetic problems, and repair broken code. "Further understanding of risks and benefits of these models is a topic of ongoing research, together with developing scalable solutions that can put guardrails against malicious uses of language models," Narang and Chowdhery said.PaLM is being used for research purposes. Google researchers developed the model as a proof of concept to scale up a language model using its Pathways architecture. The goal is to experiment with the new technique to build a single AI system that can generalize across thousands or millions of tasks and is trained on different types of data, one day. ® | Content Creation/Content Synthesis/Decision Making | Unknown | null | null | null | null | null | null |
|
news | ftextattack added to PyPI | A library for generating text adversarial examples | https://pypi.org/project/ftextattack/ | 2022-04-10T23:37:40Z | Generating adversarial examples for NLP models[TextAttack Documentation on ReadTheDocs]AboutSetupUsageDesignAboutTextAttack is a Python framework for adversarial attacks, data augmentation, and model training in NLP.If you're looking for information about TextAttack's menagerie of pre-trained models, you might want the TextAttack Model Zoo page.Slack ChannelFor help and realtime updates related to TextAttack, please join the TextAttack Slack!Why TextAttack?There are lots of reasons to use TextAttack:Understand NLP models better by running different adversarial attacks on them and examining the outputResearch and develop different NLP adversarial attacks using the TextAttack framework and library of componentsAugment your dataset to increase model generalization and robustness downstreamTrain NLP models using just a single command (all downloads included!)SetupInstallationYou should be running Python 3.6+ to use this package. A CUDA-compatible GPU is optional but will greatly improve code speed. TextAttack is available through pip:pip install textattackOnce TextAttack is installed, you can run it via command-line (textattack ...)or via python module (python -m textattack ...).Tip: TextAttack downloads files to ~/.cache/textattack/ by default. This includes pretrained models,dataset samples, and the configuration file config.yaml. To change the cache path, set theenvironment variable TA_CACHE_DIR. (for example: TA_CACHE_DIR=/tmp/ textattack attack ...).UsageHelp: textattack --helpTextAttack's main features can all be accessed via the textattack command. Two verycommon commands are textattack attack <args>, and textattack augment <args>. You can see moreinformation about all commands usingtextattack --help or a specific command using, for example,textattack attack --helpThe examples/ folder includes scripts showing common TextAttack usage for training models, running attacks, and augmenting a CSV file.The documentation website contains walkthroughs explaining basic usage of TextAttack, including building a custom transformation and a custom constraint..Running Attacks: textattack attack --helpThe easiest way to try out an attack is via the command-line interface, textattack attack.Tip: If your machine has multiple GPUs, you can distribute the attack across them using the --parallel option. For some attacks, this can really help performance. (If you want to attack Keras models in parallel, please check out examples/attack/attack_keras_parallel.py instead)Here are some concrete examples:TextFooler on BERT trained on the MR sentiment classification dataset:textattack attack --recipe textfooler --model bert-base-uncased-mr --num-examples 100DeepWordBug on DistilBERT trained on the Quora Question Pairs paraphrase identification dataset:textattack attack --model distilbert-base-uncased-cola --recipe deepwordbug --num-examples 100Beam search with beam width 4 and word embedding transformation and untargeted goal function on an LSTM:textattack attack --model lstm-mr --num-examples 20\ --search-method beam-search^beam_width=4 --transformation word-swap-embedding \ --constraints repeat stopword max-words-perturbed^max_num_words=2 embedding^min_cos_sim=0.8 part-of-speech \ --goal-function untargeted-classificationTip: Instead of specifying a dataset and number of examples, you can pass --interactive to attack samples inputted by the user.Attacks and Papers Implemented ("Attack Recipes"): textattack attack --recipe [recipe_name]We include attack recipes which implement attacks from the literature. You can list attack recipes using textattack list attack-recipes.To run an attack recipe: textattack attack --recipe [recipe_name]</font>Attack Recipe NameGoal FunctionConstraintsEnforcedTransformationSearch MethodMain IdeaAttacks on classification tasks, like sentiment classification and entailment:a2tUntargeted {Classification, Entailment}Percentage of words perturbed, Word embedding distance, DistilBERT sentence encoding cosine similarity, part-of-speech consistencyCounter-fitted word embedding swap (or) BERT Masked Token PredictionGreedy-WIR (gradient)from (["Towards Improving Adversarial Training of NLP Models" (Yoo et al., 2021)](https://arxiv.org/abs/2109.00544))alzantotUntargeted {Classification, Entailment}Percentage of words perturbed, Language Model perplexity, Word embedding distanceCounter-fitted word embedding swapGenetic Algorithmfrom (["Generating Natural Language Adversarial Examples" (Alzantot et al., 2018)](https://arxiv.org/abs/1804.07998))baeUntargeted ClassificationUSE sentence encoding cosine similarityBERT Masked Token PredictionGreedy-WIRBERT masked language model transformation attack from (["BAE: BERT-based Adversarial Examples for Text Classification" (Garg & Ramakrishnan, 2019)](https://arxiv.org/abs/2004.01970)). bert-attackUntargeted ClassificationUSE sentence encoding cosine similarity, Maximum number of words perturbedBERT Masked Token Prediction (with subword expansion)Greedy-WIR (["BERT-ATTACK: Adversarial Attack Against BERT Using BERT" (Li et al., 2020)](https://arxiv.org/abs/2004.09984))checklist{Untargeted, Targeted} Classificationchecklist distancecontract, extend, and substitutes name entitiesGreedy-WIRInvariance testing implemented in CheckList . (["Beyond Accuracy: Behavioral Testing of NLP models with CheckList" (Ribeiro et al., 2020)](https://arxiv.org/abs/2005.04118))clareUntargeted {Classification, Entailment}USE sentence encoding cosine similarityRoBERTa Masked Prediction for token swap, insert and mergeGreedy["Contextualized Perturbation for Textual Adversarial Attack" (Li et al., 2020)](https://arxiv.org/abs/2009.07502))deepwordbug{Untargeted, Targeted} ClassificationLevenshtein edit distance{Character Insertion, Character Deletion, Neighboring Character Swap, Character Substitution}Greedy-WIRGreedy replace-1 scoring and multi-transformation character-swap attack (["Black-box Generation of Adversarial Text Sequences to Evade Deep Learning Classifiers" (Gao et al., 2018)](https://arxiv.org/abs/1801.04354)fast-alzantotUntargeted {Classification, Entailment}Percentage of words perturbed, Language Model perplexity, Word embedding distanceCounter-fitted word embedding swapGenetic AlgorithmModified, faster version of the Alzantot et al. genetic algorithm, from (["Certified Robustness to Adversarial Word Substitutions" (Jia et al., 2019)](https://arxiv.org/abs/1909.00986))hotflip (word swap) Untargeted ClassificationWord Embedding Cosine Similarity, Part-of-speech match, Number of words perturbedGradient-Based Word SwapBeam search (["HotFlip: White-Box Adversarial Examples for Text Classification" (Ebrahimi et al., 2017)](https://arxiv.org/abs/1712.06751))igaUntargeted {Classification, Entailment}Percentage of words perturbed, Word embedding distanceCounter-fitted word embedding swapGenetic AlgorithmImproved genetic algorithm -based word substitution from (["Natural Language Adversarial Attacks and Defenses in Word Level (Wang et al., 2019)"](https://arxiv.org/abs/1909.06723)input-reductionInput ReductionWord deletionGreedy-WIRGreedy attack with word importance ranking , Reducing the input while maintaining the prediction through word importance ranking (["Pathologies of Neural Models Make Interpretation Difficult" (Feng et al., 2018)](https://arxiv.org/pdf/1804.07781.pdf))kuleshovUntargeted ClassificationThought vector encoding cosine similarity, Language model similarity probabilityCounter-fitted word embedding swapGreedy word swap(["Adversarial Examples for Natural Language Classification Problems" (Kuleshov et al., 2018)](https://openreview.net/pdf?id=r1QZ3zbAZ)) pruthiUntargeted ClassificationMinimum word length, Maximum number of words perturbed{Neighboring Character Swap, Character Deletion, Character Insertion, Keyboard-Based Character Swap}Greedy searchsimulates common typos (["Combating Adversarial Misspellings with Robust Word Recognition" (Pruthi et al., 2019)](https://arxiv.org/abs/1905.11268) psoUntargeted ClassificationHowNet Word SwapParticle Swarm Optimization(["Word-level Textual Adversarial Attacking as Combinatorial Optimization" (Zang et al., 2020)](https://www.aclweb.org/anthology/2020.acl-main.540/)) pwwsUntargeted ClassificationWordNet-based synonym swapGreedy-WIR (saliency)Greedy attack with word importance ranking based on word saliency and synonym swap scores (["Generating Natural Language Adversarial Examples through Probability Weighted Word Saliency" (Ren et al., 2019)](https://www.aclweb.org/anthology/P19-1103/))textbugger : (black-box) Untargeted ClassificationUSE sentence encoding cosine similarity{Character Insertion, Character Deletion, Neighboring Character Swap, Character Substitution}Greedy-WIR([(["TextBugger: Generating Adversarial Text Against Real-world Applications" (Li et al., 2018)](https://arxiv.org/abs/1812.05271)).textfoolerUntargeted {Classification, Entailment}Word Embedding Distance, Part-of-speech match, USE sentence encoding cosine similarityCounter-fitted word embedding swapGreedy-WIRGreedy attack with word importance ranking (["Is Bert Really Robust?" (Jin et al., 2019)](https://arxiv.org/abs/1907.11932))Attacks on sequence-to-sequence models: morpheusMinimum BLEU ScoreInflection Word SwapGreedy searchGreedy to replace words with their inflections with the goal of minimizing BLEU score (["Its Morphin Time! Combating Linguistic Discrimination with Inflectional Perturbations"](https://www.aclweb.org/anthology/2020.acl-main.263.pdf)seq2sick :(black-box) Non-overlapping outputCounter-fitted word embedding swapGreedy-WIRGreedy attack with goal of changing every word in the output translation. Currently implemented as black-box with plans to change to white-box as done in paper (["Seq2Sick: Evaluating the Robustness of Sequence-to-Sequence Models with Adversarial Examples" (Cheng et al., 2018)](https://arxiv.org/abs/1803.01128)) Recipe Usage ExamplesHere are some examples of testing attacks from the literature from the command-line:TextFooler against BERT fine-tuned on SST-2:textattack attack --model bert-base-uncased-sst2 --recipe textfooler --num-examples 10seq2sick (black-box) against T5 fine-tuned for English-German translation: textattack attack --model t5-en-de --recipe seq2sick --num-examples 100Augmenting Text: textattack augmentMany of the components of TextAttack are useful for data augmentation. The textattack.Augmenter classuses a transformation and a list of constraints to augment data. We also offer built-in recipesfor data augmentation:wordnet augments text by replacing words with WordNet synonymsembedding augments text by replacing words with neighbors in the counter-fitted embedding space, with a constraint to ensure their cosine similarity is at least 0.8charswap augments text by substituting, deleting, inserting, and swapping adjacent characterseda augments text with a combination of word insertions, substitutions and deletions.checklist augments text by contraction/extension and by substituting names, locations, numbers.clare augments text by replacing, inserting, and merging with a pre-trained masked language model.Augmentation Command-Line InterfaceThe easiest way to use our data augmentation tools is with textattack augment <args>. textattack augmenttakes an input CSV file and text column to augment, along with the number of words to change per augmentationand the number of augmentations per input example. It outputs a CSV in the same format with all the augmentationexamples corresponding to the proper columns.For example, given the following as examples.csv:"text",label"the rock is destined to be the 21st century's new conan and that he's going to make a splash even greater than arnold schwarzenegger , jean- claud van damme or steven segal.", 1"the gorgeously elaborate continuation of 'the lord of the rings' trilogy is so huge that a column of words cannot adequately describe co-writer/director peter jackson's expanded vision of j . r . r . tolkien's middle-earth .", 1"take care of my cat offers a refreshingly different slice of asian cinema .", 1"a technically well-made suspenser . . . but its abrupt drop in iq points as it races to the finish line proves simply too discouraging to let slide .", 0"it's a mystery how the movie could be released in this condition .", 0The commandtextattack augment --input-csv examples.csv --output-csv output.csv --input-column text --recipe embedding --pct-words-to-swap .1 --transformations-per-example 2 --exclude-originalwill augment the text column by altering 10% of each example's words, generating twice as many augmentations as original inputs, and exclude the original inputs from theoutput CSV. (All of this will be saved to augment.csv by default.)Tip: Just as running attacks interactively, you can also pass --interactive to augment samples inputted by the user to quickly try out different augmentation recipes!After augmentation, here are the contents of augment.csv:text,label"the rock is destined to be the 21st century's newest conan and that he's gonna to make a splashing even stronger than arnold schwarzenegger , jean- claud van damme or steven segal.",1"the rock is destined to be the 21tk century's novel conan and that he's going to make a splat even greater than arnold schwarzenegger , jean- claud van damme or stevens segal.",1the gorgeously elaborate continuation of 'the lord of the rings' trilogy is so huge that a column of expression significant adequately describe co-writer/director pedro jackson's expanded vision of j . rs . r . tolkien's middle-earth .,1the gorgeously elaborate continuation of 'the lordy of the piercings' trilogy is so huge that a column of mots cannot adequately describe co-novelist/director peter jackson's expanded vision of j . r . r . tolkien's middle-earth .,1take care of my cat offerings a pleasantly several slice of asia cinema .,1taking care of my cat offers a pleasantly different slice of asiatic kino .,1a technically good-made suspenser . . . but its abrupt drop in iq points as it races to the finish bloodline proves straightforward too disheartening to let slide .,0a technically well-made suspenser . . . but its abrupt drop in iq dot as it races to the finish line demonstrates simply too disheartening to leave slide .,0it's a enigma how the film wo be releases in this condition .,0it's a enigma how the filmmaking wo be publicized in this condition .,0The 'embedding' augmentation recipe uses counterfitted embedding nearest-neighbors to augment data.Augmentation Python InterfaceIn addition to the command-line interface, you can augment text dynamically by importing theAugmenter in your own code. All Augmenter objects implement augment and augment_many to generate augmentationsof a string or a list of strings. Here's an example of how to use the EmbeddingAugmenter in a python script:>>>fromtextattack.augmentationimportEmbeddingAugmenter>>>augmenter=EmbeddingAugmenter()>>>s='What I cannot create, I do not understand.'>>>augmenter.augment(s)['What I notable create, I do not understand.','What I significant create, I do not understand.','What I cannot engender, I do not understand.','What I cannot creating, I do not understand.','What I cannot creations, I do not understand.','What I cannot create, I do not comprehend.','What I cannot create, I do not fathom.','What I cannot create, I do not understanding.','What I cannot create, I do not understands.','What I cannot create, I do not understood.','What I cannot create, I do not realise.']You can also create your own augmenter from scratch by importing transformations/constraints from textattack.transformations and textattack.constraints. Here's an example that generates augmentations of a string using WordSwapRandomCharacterDeletion:>>>fromtextattack.transformationsimportWordSwapRandomCharacterDeletion>>>fromtextattack.transformationsimportCompositeTransformation>>>fromtextattack.augmentationimportAugmenter>>>transformation=CompositeTransformation([WordSwapRandomCharacterDeletion()])>>>augmenter=Augmenter(transformation=transformation,transformations_per_example=5)>>>s='What I cannot create, I do not understand.'>>>augmenter.augment(s)['What I cannot creae, I do not understand.','What I cannot creat, I do not understand.','What I cannot create, I do not nderstand.','What I cannot create, I do nt understand.','Wht I cannot create, I do not understand.']Training Models: textattack trainOur model training code is available via textattack train to help you train LSTMs,CNNs, and transformers models using TextAttack out-of-the-box. Datasets areautomatically loaded using the datasets package.Training ExamplesTrain our default LSTM for 50 epochs on the Yelp Polarity dataset:textattack train --model-name-or-path lstm --dataset yelp_polarity --epochs 50 --learning-rate 1e-5Fine-Tune bert-base on the CoLA dataset for 5 epochs*:textattack train --model-name-or-path bert-base-uncased --dataset glue^cola --per-device-train-batch-size 8 --epochs 5To check datasets: textattack peek-datasetTo take a closer look at a dataset, use textattack peek-dataset. TextAttack will print some cursory statistics about the inputs and outputs from the dataset. For example,textattack peek-dataset --dataset-from-huggingface snliwill show information about the SNLI dataset from the NLP package.To list functional components: textattack listThere are lots of pieces in TextAttack, and it can be difficult to keep track of all of them. You can use textattack list to list components, for example, pretrained models (textattack list models) or available search methods (textattack list search-methods).DesignModelsTextAttack is model-agnostic! You can use TextAttack to analyze any model that outputs IDs, tensors, or strings. To help users, TextAttack includes pre-trained models for different common NLP tasks. This makes it easier forusers to get started with TextAttack. It also enables a more fair comparison of attacks fromthe literature.Built-in Models and DatasetsTextAttack also comes built-in with models and datasets. Our command-line interface will automatically match the correctdataset to the correct model. We include 82 different (Oct 2020) pre-trained models for each of the nine GLUEtasks, as well as some common datasets for classification, translation, and summarization.A list of available pretrained models and their validation accuracies is available attextattack/models/README.md. You can also view a full list of provided models& datasets via textattack attack --help.Here's an example of using one of the built-in models (the SST-2 dataset is automatically loaded):textattack attack --model roberta-base-sst2 --recipe textfooler --num-examples 10HuggingFace support: transformers models and datasets datasetsWe also provide built-in support for transformers pretrained modelsand datasets from the datasets package! Here's an example of loadingand attacking a pre-trained model and dataset:textattack attack --model-from-huggingface distilbert-base-uncased-finetuned-sst-2-english --dataset-from-huggingface glue^sst2 --recipe deepwordbug --num-examples 10You can explore other pre-trained models using the --model-from-huggingface argument, or other datasets by changing--dataset-from-huggingface.Loading a model or dataset from a fileYou can easily try out an attack on a local model or dataset sample. To attack a pre-trained model,create a short file that loads them as variables model and tokenizer. The tokenizer mustbe able to transform string inputs to lists or tensors of IDs using a method called encode(). Themodel must take inputs via the __call__ method.Custom Model from a fileTo experiment with a model you've trained, you could create the following fileand name it my_model.py:model=load_your_model_with_custom_code()# replace this line with your model loading codetokenizer=load_your_tokenizer_with_custom_code()# replace this line with your tokenizer loading codeThen, run an attack with the argument --model-from-file my_model.py. The model and tokenizer will be loaded automatically.Custom DatasetsDataset from a fileLoading a dataset from a file is very similar to loading a model from a file. A 'dataset' is any iterable of (input, output) pairs.The following example would load a sentiment classification dataset from file my_dataset.py:dataset=[('Today was....',1),('This movie is...',0),...]You can then run attacks on samples from this dataset by adding the argument --dataset-from-file my_dataset.py.Dataset loading via other mechanism, see: more details at hereimporttextattackmy_dataset=[("text",label),....]new_dataset=textattack.datasets.Dataset(my_dataset)Dataset via AttackedText classTo allow for word replacement after a sequence has been tokenized, we include an AttackedText objectwhich maintains both a list of tokens and the original text, with punctuation. We use this object in favor of a list of words or just raw text.Attacks and how to design a new attackWe formulate an attack as consisting of four components: a goal function which determines if the attack has succeeded, constraints defining which perturbations are valid, a transformation that generates potential modifications given an input, and a search method which traverses through the search space of possible perturbations. The attack attempts to perturb an input text such that the model output fulfills the goal function (i.e., indicating whether the attack is successful) and the perturbation adheres to the set of constraints (e.g., grammar constraint, semantic similarity constraint). A search method is used to find a sequence of transformations that produce a successful adversarial example.This modular design unifies adversarial attack methods into one system, enables us to easily assemble attacks from the literature while re-using components that are shared across attacks. We provides clean, readable implementations of 16 adversarial attack recipes from the literature (see above table). For the first time, these attacks can be benchmarked, compared, and analyzed in a standardized setting.TextAttack is model-agnostic - meaning it can run attacks on models implemented in any deep learning framework. Model objects must be able to take a string (or list of strings) and return an output that can be processed by the goal function. For example, machine translation models take a list of strings as input and produce a list of strings as output. Classification and entailment models return an array of scores. As long as the user's model meets this specification, the model is fit to use with TextAttack.Goal FunctionsA GoalFunction takes as input an AttackedText object, scores it, and determines whether the attack has succeeded, returning a GoalFunctionResult.ConstraintsA Constraint takes as input a current AttackedText, and a list of transformed AttackedTexts. For each transformed option, it returns a boolean representing whether the constraint is met.TransformationsA Transformation takes as input an AttackedText and returns a list of possible transformed AttackedTexts. For example, a transformation might return all possible synonym replacements.Search MethodsA SearchMethod takes as input an initial GoalFunctionResult and returns a final GoalFunctionResult The search is given access to the get_transformations function, which takes as input an AttackedText object and outputs a list of possible transformations filtered by meeting all of the attacks constraints. A search consists of successive calls to get_transformations until the search succeeds (determined using get_goal_results) or is exhausted.On Benchmarking AttacksSee our analysis paper: Searching for a Search Method: Benchmarking Search Algorithms for Generating NLP Adversarial Examples at EMNLP BlackBoxNLP.As we emphasized in the above paper, we don't recommend to directly compare Attack Recipes out of the box.This comment is due to that attack recipes in the recent literature used different ways or thresholds in setting up their constraints. Without the constraint space held constant, an increase in attack success rate could come from an improved search or transformation method or a less restrictive search space.Our Github on benchmarking scripts and results: TextAttack-Search-Benchmark GithubOn Quality of Generated Adversarial Examples in Natural LanguageOur analysis Paper in EMNLP FindingsWe analyze the generated adversarial examples of two state-of-the-art synonym substitution attacks. We find that their perturbations often do not preserve semantics, and 38% introduce grammatical errors. Human surveys reveal that to successfully preserve semantics, we need to significantly increase the minimum cosine similarities between the embeddings of swapped words and between the sentence encodings of original and perturbed sentences.With constraints adjusted to better preserve semantics and grammaticality, the attack success rate drops by over 70 percentage points.Our Github on Reevaluation results: Reevaluating-NLP-Adversarial-Examples GithubAs we have emphasized in this analysis paper, we recommend researchers and users to be EXTREMELY mindful on the quality of generated adversarial examples in natural languageWe recommend the field to use human-evaluation derived thresholds for setting up constraintsMulti-lingual SupportContributing to TextAttackWe welcome suggestions and contributions! Submit an issue or pull request and we will do our best to respond in a timely manner. TextAttack is currently in an "alpha" stage in which we are working to improve its capabilities and design.See CONTRIBUTING.md for detailed information on contributing.Citing TextAttackIf you use TextAttack for your research, please cite TextAttack: A Framework for Adversarial Attacks, Data Augmentation, and Adversarial Training in NLP.@inproceedings{morris2020textattack,title={TextAttack: A Framework for Adversarial Attacks, Data Augmentation, and Adversarial Training in NLP},author={Morris, John and Lifland, Eli and Yoo, Jin Yong and Grigsby, Jake and Jin, Di and Qi, Yanjun},booktitle={Proceedings of the 2020 Conference on Empirical Methods in Natural Language Processing: System Demonstrations},pages={119--126},year={2020}} | Content Creation/Discovery | Computer and Mathematical | null | null | null | null | null | null |
||
news | jordn | Show HN: Programmatic – a REPL for creating labeled data | Hey HN, I’m Jordan cofounder of Humanloop (YC S20) and I’m excited to show you Programmatic — an annotation tool for building large labeled datasets for NLP without manual annotation.https://programmatic.humanloop.comProgrammatic is like a REPL for data annotation. You: 1. Write simple rules/functions that can approximately label the data 2. Get near-instant feedback across your entire corpus 3. Iterate and improve your rulesFinally, it uses a Bayesian label model [1] to convert these noisy annotations into a single, large, clean dataset, which you can then use for training machine learning models. You can programmatically label millions of datapoints in the time taken to hand-label hundreds.What we do differently from weak supervision packages like Snorkel/skweak is to focus on UI to give near-instantaneous feedback. We love these packages but when we tried to iterate on labeling functions we had to write a ton of boilerplate code and wrestle with pandas to understand what was going on. Building a dataset programmatically requires you to grok the impact of labeling rules on a whole corpus of text. We’ve been told that the exploration tools and feedback makes the process feel game-like and even fun (!!).We built it because we see that getting labeled data remains a blocker for businesses benefiting from NLP today. We wanted to give software engineers and data scientists a way to build the datasets needed themselves and to make best use of subject-matter-experts’ time.The package is free and you can install it now as a pip package [2]. It supports NER / span extraction tasks at the moment but we’ll give updates [3] when document classification is added soon. We'd love to hear your feedback or any success/failures with weak supervision in the past![1]: We use a HMM model for NER tasks, and Naive-Bayes for classification using the two approaches given in the papers below: Pierre Lison, Jeremy Barnes, and Aliaksandr Hubin. "skweak: Weak Supervision Made Easy for NLP." https://arxiv.org/abs/2104.09683 (2021)Alex Ratner, Christopher De Sa, Sen Wu, Daniel Selsam, Chris Ré. "Data Programming: Creating Large Training Sets, Quickly" https://arxiv.org/abs/1605.07723 (NIPS 2016)[2]: Quick start guide at https://docs.programmatic.humanloop.com/tutorials/quick-start[3]: https://twitter.com/humanloopComments URL: https://news.ycombinator.com/item?id=30955294Points: 1# Comments: 0 | https://news.ycombinator.com/item?id=30955294 | 2022-04-08T10:21:18Z | Hey HN, Im Jordan cofounder of Humanloop (YC S20) and Im excited to show you Programmatic an annotation tool for building large labeled datasets for NLP without manual annotation.https://programmatic.humanloop.comProgrammatic is like a REPL for data annotation. You: 1. Write simple rules/functions that can approximately label the data 2. Get near-instant feedback across your entire corpus 3. Iterate and improve your rulesFinally, it uses a Bayesian label model [1] to convert these noisy annotations into a single, large, clean dataset, which you can then use for training machine learning models. You can programmatically label millions of datapoints in the time taken to hand-label hundreds.What we do differently from weak supervision packages like Snorkel/skweak is to focus on UI to give near-instantaneous feedback. We love these packages but when we tried to iterate on labeling functions we had to write a ton of boilerplate code and wrestle with pandas to understand what was going on. Building a dataset programmatically requires you to grok the impact of labeling rules on a whole corpus of text. Weve been told that the exploration tools and feedback makes the process feel game-like and even fun (!!).We built it because we see that getting labeled data remains a blocker for businesses benefiting from NLP today. We wanted to give software engineers and data scientists a way to build the datasets needed themselves and to make best use of subject-matter-experts time.The package is free and you can install it now as a pip package [2]. It supports NER / span extraction tasks at the moment but well give updates [3] when document classification is added soon. We'd love to hear your feedback or any success/failures with weak supervision in the past![1]: We use a HMM model for NER tasks, and Naive-Bayes for classification using the two approaches given in the papers below: Pierre Lison, Jeremy Barnes, and Aliaksandr Hubin. "skweak: Weak Supervision Made Easy for NLP." https://arxiv.org/abs/2104.09683 (2021)Alex Ratner, Christopher De Sa, Sen Wu, Daniel Selsam, Chris Ré. "Data Programming: Creating Large Training Sets, Quickly" https://arxiv.org/abs/1605.07723 (NIPS 2016)[2]: Quick start guide at https://docs.programmatic.humanloop.com/tutorials/quick-start[3]: https://twitter.com/humanloop | Content Creation/Process Automation | Computer and Mathematical/Business and Financial Operations | null | null | null | null | null | null |
|
news | Nathan Helm-Burger | What more compute does for brain-like models: response to Rohin | Published on April 13, 2022 3:40 AM GMTThis is a response to a comment made by Rohin Shah on Daniel Kokotajlo's post Fun with +12 OOMs of Compute. I started trying to answer some questions and assumptions he had, then realized there was more of an inferential gap that needed filling in. Also, as I attempted to estimate the OOMs of compute above GPT-3/PaLM needed for each method, I realized I was just going off of vague guesses rather than grounded estimates based on recent benchmarks. So, since other people might also be lacking the same info and be curious about my answer, I decided to put a bit more work into answering and turn it into a full post.Introducing the castFirst, I'd like to note that I don't endorse trying to get to AGI through any of these methods. I think they are potentially worse for interpretability in addition to being less compute efficient. My goal here is to point out that I think it could be done if the world were suddenly given lots more compute. In other words, I shall make the argument that given lots of compute, issues of limited data and potential scaling plateaus of artificial neural nets can be bypassed via other less compute efficient methods. Many roads lead to AGI, and specific predictions about the failure of one specific path (e.g. Transformers) don't necessarily mean all the paths are affected by that predicted failure mode.The main contendersNumenta - moderately detailed, most realistic-task proven. Designers made carefully chosen abstractions which may or may not be right.Spiking Neural Network simulators (spikingNNs) - [I mentioned Nengo previously because its the one I'm most familiar with, but researching this showed me that there are other better performing options with similar abstraction levels such as BindsNet and brian2genn ] moderately detailed, moderately compute efficient, moderately task proven. Fewer abstractions than Numenta, more than Blue Brain. Less chance that an important detail was omitted, but still some chance.Blue Brain - highly detailed, very compute inefficient, not task proven. Very few abstractions, so relatively high chance that it contains all necessary details for a functioning neocortex.Supporting rolesThe field of computational neuroscience has generated lots and lots of very narrowly focused models of particular subsets of lots of different brains. None of these is alone likely to turn into a full blown AGI if you throw compute at them, but they have useful additional details that could potentially get the main contenders unstuck from unexpected scaling plateaus.EmulationBy brain emulation, I mean trying to make a model that captures some of the observed functions of brain circuits. These models vary widely in how much fidelity to fine details they strive for, versus a more abstracted approximation. More detail brings the risk that you got one of those details wrong, and also means potentially requiring exponentially more compute to scale. Less detail means more reliance on having made the correct abstractions.Neuroscientists have a failure mode around trying to make too accurate and detailed of models. After all, if you've spent years of your life painstakingly measuring the tiny details, it can be really hard to swallow the idea that you might have to discard any of those details as irrelevant. I think Jan sums it up well in this comment:Yes, I agree, a model can really push intuition to the next level! There is a failure mode where people just throw everything into a model and hope that the result will make sense. In my experience that just produces a mess, and you need some intuition for how to properly set up the model.Each of the three contenders I mentioned have very different levels of detail and have chosen different abstractions.What do these three main contenders have in common? A focus on the mammalian neocortex, the part of the brain that does the General Intelligence stuff, the part that humans have extra of. Neuroscience has lots of evidence showing that this is the critical part of the brain to emulate if you want a model that is able to reason abstractly about things. I won't go into depth here, but I will give you this quote from Numenta (see Jeff Hawkins' latest book for more depth, or this paper for a quick intro):Old brain vs. new brainA simple way to think about the brain is that it has two parts: the “old brain” and the “new brain.” The old brain, which comes from our reptilian ancestors and pre-dates dinosaurs, contains several different structures, such as the spinal cord and brainstem. It regulates your body (such as breathing), creates reflex behaviors (such as pulling your hand away from fire) and creates emotions (such as desire and anger). The new brain, or neocortex, is a single large organ. It sits on top of the old brain and is the brain’s analytical engine. It’s the part that can identify objects, learn a new language, or understand math.Worth noting for each of these projects that their focus is on the neocortex. The Blue Brain project which talks about rodent brains is only a few well-understood parameter changes away from being a very accurate emulation of the human neocortex. They are careful not to do this because of the ethical implications of accurately simulating human neocortex tissue. I'm pretty confident from things that some of the project participants have said that they'd love to try simulating a whole human brain if given the compute and lack of oversight.For example (emphasis mine) a quote from Rebasing I/O for Scientific Computing: Leveraging Storage Class Memory in an IBM BlueGene/Q Supercomputer by Schürmann et al 2014:Combined with the large numbers of those entities, e.g. an estimated 200 million neurons and 1012 synapses for an entire rat brain [10], the resulting memory footprint is large and at the same time the algorithmic intensity low. With the human brain being an additional three orders of magnitude more complex, cellular models of thehuman brain will occupy a daunting estimated 100PB of memory that will needto be revisited by the solver at every time step.Human cortical neuron properties are pretty well known in a lot of respects and are already able to be simulated on the Blue Brain system, they just are careful not to get hit by media hype/outrage by talking about large scale human neocortex experiments. An example of a small scale human cortical neuron experiment: https://live-papers.brainsimulation.eu/#2016-eyal-et-al How much compute?So I would argue that all of the main contenders are very training data efficient compared to artificial neural nets. I'm not going to go into detail on that argument, unless people let me know that that seems cruxy to them and they'd like more detail.One of the things these contenders fall short on though is compute efficiency. For the sake of Daniel's thought experiment, I'd like to give some rough estimates on how much compute I think would be necessary to get a half-brain of compute for each of these. For artificial neural networks, the meaning of a 'neuron' or 'parameter' is less directly analogous to a neocortex neuron. For these emulations, the analogy holds together much better. The rough average number of neurons in the human neocortex is around 26 billion. So let's say 13 billion for the half-neocortex case.Numenta training compute estimateOk, I just give up for now on finding benchmarks to accurately estimate this one. I give a rough guess at 'somewhere between the other two, closer to the Spiking Neural Nets'.Here's the best summary I can give: they break the artificial neurons down into collections of artificial dendrites, which then have a very sparse activation and very sparse weights. This seems to help learn more from a given dataset, and to have an extended amount of information that can be 'fit' into the network without 'overwriting' previous info. The downside is that it's substantially less efficient to 'get' the information into the network in the first place. Like, it needs maybe 10x more epochs over the same dataset before it starts doing better than the feed forward multilayer perceptron was doing a while ago. But its learning doesn't plateau as soon, so it can eventually surpass the roughly-equivalent MLP. Spiking Neural Net training compute estimatemy estimate: 3.82e24 flopsabout 1 OOM over GPT-3 less than an OOM over PaLMFor this category, I would add an additional OOM for the fact that the abstraction may be lossy/inefficient in capturing what actual brain neurons do. For instance, I noticed that the benchmark they were using in the papers had undershot the number of synapses for human pre-frontal cortex by an order of magnitude. Could be other things like that as well. Unlike Numenta, where the abstraction is very well thought out and I think it will either totally work or not, depending on whether they are as correct as they think they are about their abstraction. Or Blue Brain, where there is so much accuracy and so little abstraction I feel quite confident it'll work as expected on a emulated-neuron == real-neuron basis. Blue Brain training compute estimatemy estimate: 2.37e30 FLOPs 7 OOMs over GPT-36 OOMs over PaLM from https://blog.heim.xyz/palm-training-cost/ :ML ModelTraining Compute [FLOPs]x GPT-3GPT-3 (2020)3.1e231xGopher (2021)6.3e232xChinchilla (2022)5.8e232xPaLM (2022)2.5e2410x Sources:Numenta paper 1https://www.google.com/url?sa=t&rct=j&q=&esrc=s&source=web&cd=&cad=rja&uact=8&ved=2ahUKEwiqwPDF84_3AhUtEEQIHchvC2wQFnoECBYQAQ&url=https%3A%2F%2Fnumenta.com%2Fassets%2Fpdf%2Fresearch-publications%2Fpapers%2FSparsity-Enables-100x-Performance-Acceleration-Deep-Learning-Networks.pdf&usg=AOvVaw33dSHmz30T0fhBKWcfBMneUsing 8 bit compression of values via a unique mapping scheme, and running on FPGAs... hard to compare. Their mapping scheme pre-estimates the range of all variables, splits large numbers into lossy quantized representations spread across multiple 8 bit (INT8) numbers during encoding. So to get the equivalent of a FLOP, a floating point operation, you need to do several fixed-point 8 bit operations (FP-8bit-OPs). On average, maybe 4 FP-8bit-OPs per single precision FLOP? https://semiengineering.com/tops-memory-throughput-and-inference-efficiency/What is TOPS? It means Trillions or Tera Operations per Second. It is primarily a measure of the maximum achievable throughput but not a measure of actual throughput. Most operations are MACs (multiply/accumulates), so TOPS = (number of MAC units) x (frequency of MAC operations) x 2 Alveo U250 datasheet says it gets 33.3 INT8 TOPs at peak. rough guess of divide TOPs by 4 to get a terraFLOPs equivalent for Numenta's specific use case, based on studying their encoding.= 8.325 pseudo-terraFLOPs = 8.325e9 psuedoFLOPs / second ? bio_seconds took ? wall clock seconds flops / neuron flops / neurons = flp/nflp/n per bio_second flp/n / ? bio_second = flp/n/s So, for 1.3e9 neurons of the Cortex+Plasticity simulation type, for 15 bio_years of 'training time':flops per second of biological time:15 years of bio time need for training? = 3.154e7 sec/year * 15 years = 4.73e8 seconds of bio timetotal compute needed for training = flp/n/s * 4.78e8 bio_seconds * 1.3e9 neurons = flops Numenta paper 2Avoiding Catastrophe: Active Dendrites Enable Multi-Task Learning in Dynamic Environmentshttps://arxiv.org/abs/2201.00042 separates out the neurons into collections of artificial dendrites in sparse matrices. Because it's not using FPGAs here, and doing task comparisons against standard multi-layer perceptron feed-forward networks, the compute is easier to compare. They give numbers for the estimated 'effective number of parameters' because the sparse nature of the networks means that the number of parameters looks huge but is effectively small for the amount of compute required to train and infer using them. Several experiments are listed in the paper. When employing the prototype method described in Section 4.2.1 to select context signals at test time only, we train anActive Dendrites Network with 2 hidden layers that comprise Active Dendrites Neurons. For all training, we use theAdam optimizer [Kingma and Ba, 2015] and a batch size of 256 samples. Table 3 gives the exact hyperparametersand model architecture for each model we train and evaluate on permutedMNIST. Note that hyperparameters wereoptimized indidually for each setting.To combine Active Dendrites Network with SI, and to compare against XdG, we reduce the number of units in eachhidden layer from 2,048 to 2,000 as to exactly match the architectures (with the exception of dendritic segments)used in the SI and XdG papers. (See Appendix for a discussion on the number of parameters.) In addition, theSI-and-Active-Dendrites network is trained for 20 epochs per task instead of just 3 as this significantly improves results.We fix the learning rate to be 5 × 10−4 for all numbers of tasks, and we use SI regularization strength c = 0.1 anddamping coefficient ξ = 0.1. Both a) training for 20 epochs per task and b) the c, ξ values that we use here align withthe training setups of Zenke et al. [2017] and Masse et al. [2018]. SpikingNN paper 1https://www.sciencedirect.com/science/article/abs/pii/S0925231221003969full text manuscript: https://www.sciencedirect.com/science/article/am/pii/S0925231221003969 Ubuntu 18.04 LTS with Intel(R) Xeon(R)CPU E5-2620 v4 @ 2.1 GHz and 32 GB RAM SpikingNN paper 2https://www.nature.com/articles/s41598-019-54957-7For illustration we have used the data from the TITAN Xp card and Intel Core i9-7920X CPUCaption for graphOverview of the components that make up the total runtime of a simulation for the Mbody (left) and the COBAHH benchmark (right). The top panels show the time spent in the simulation itself which scales with the biological runtime of the model (shown at the right) and dominates the overall runtime for big networks and/or long simulations. Simulation times were measured for biological runtimes of 10 s (middle line), while the times for runs of 1 s (bottom line) and 100 s (top line) were extrapolated. The bottom panels show the time spent for code generation and compilation (blue), general overhead such as copying data between the CPU and the GPU (orange), and the time for synapse creation and the initialization of state variables before the start of the simulation (green). The details shown here are for single-precision simulations run on the Titan Xp GPU. 10 bio_seconds took 10^4 wall clock secondsso 1 bio_second to 1000 wall clock seconds for 2.05e7 neuronsflops = cores * (cycles/second) * (flops/cycle)flops = (1 node * 3840 cores) * ( 1.6e9 cycles / second) * ( 2 flops / cycle) * 1e3 seconds = 1.229e16flops / neuron flops / 2.05e7 neurons = 6.14e6 flp/nflp/n per bio_second flp/n / 1 bio_second = 6.14e6 flp/n/s So, for 1.3e9 neurons of the Cortex+Plasticity simulation type, for 15 bio_years of 'training time':https://en.wikipedia.org/wiki/FLOPS says 2 flops per cycle per core for single-precision simulations run on the Titan Xp GPU (3840 cores)flops per second of biological time:15 years of bio time need for training? = 3.154e7 sec/year * 15 years = 4.73e8 seconds of bio timetotal compute needed for training = 6.14e6 flp/n/s * 4.78e8 bio_seconds * 1.3e9 neurons = 3.82e24 flops https://github.com/BindsNET/bindsnet Blue Brain paper 1Large-Scale Simulation of Brain Tissue, Blue Brain Project, EPFL Technical Report for the ALCF Theta Early Science ProgramBlue Brain paper 2 CoreNEURON : An Optimized Compute Engine for the NEURON Simulatorhttps://www.ncbi.nlm.nih.gov/pmc/articles/PMC6763692/From abstract:We describe how CoreNEURON can be used as a library with NEURON and then compare performance of different network models on multiple architectures including IBM BlueGene/Q, Intel Skylake, Intel MIC and NVIDIA GPU.From intro:In the model of Markram et al. (2015) each neuron averages to about 20,000 differential equations to represent its electrophysiology and connectivity. To simulate the microcircuit of 31,000 neurons, it is necessary to solve over 600 million equations every 25 ms of biological time...In general, this paper describes the journey to making the Blue Brain NEURON model more efficient and able to work with GPUs. And then doing benchmarking comparisons.The benchmarking systems with hardware details, compiler toolchains and network fabrics are summarized in Table 3. The Blue Brain IV (BB4) and Blue Brain V (BB5) systems are based on IBM BlueGene/Q (Haring et al., 2012) and HPE SGI 8600 (Hewlett Packard Enterprise, 2019) platforms respectively, hosted at the Swiss National Computing Center (CSCS) in Lugano, Switzerland. The BB4 system has 4,096 nodes comprising 65,536 PowerPC A2 cores. The BB5 system has three different compute nodes: Intel KNLs with low clock rate but high bandwidth MCDRAM, Intel Skylakes with high clock rate, and NVIDIA Volta GPUs. Vendor provided compilers and MPI libraries are used on both systems. The BB4 system is used for strong scaling benchmarks (see Figure 8) as it has a large core count compared to the BB5 system. All benchmarks were executed in pure MPI mode by pinning one MPI rank per core.Strong scaling of CoreNEURON on the BB4 system (BlueGene/Q IBM PowerPC A2, 16 cores @ 1.6 GHz, 16 GB DRAM ) for two large scale models listed in Table 1: the Cortex+Plasticity model with 219 k neurons. [nathan note: blue line is actual measurement, black line is theoretical optimum] Relevant part of the Table 1 discussed above:Model nameSummary#Neurons#Compartments#SynapsesCortex + PlasticitySomatosensory cortex model with synaptic plasticity2.19e59.95e78.72e8Note: one major parameter change in human neocortex vs rodent is that human neocortex has more synaptic connections per number of neurons. This hurts scaling somewhat because of the additional complexity. Not able to give a precise estimate for this additional compute based on the data I've found so far on their work. My guess is somewhat less than 2 OOMs extra cost in worst case. Note for anyone trying to read this paper: a comprehension-gotcha is that they confusingly talk about both 'compute nodes' (the computers or virtual computers used), and 'neuron nodes' (the component parts of a neuron which are each individually simulated each timestep) using just the term 'nodes'. You have to keep the context of the paragraph straight to know which one they mean at any given time. So, from these two papers, although they don't quite lay out all the parameters together in an easy-to-interpret way... bbp paper1: 27 seconds of compute time for 0.1 seconds of biological time for 1? neuron(s) on a single compute node? (GPU system)flops per second of biological time: bbp paper2: 2.19e5 rodent cortex neurons requires 2e3 seconds of 2048 nodes, each node 16 cores @ 1.6GHz for 0.001? seconds of biological time (abbr: bio_second). (supercomputer baseline, not GPU measurement) flops = cores * (cycles/second) * (flops/cycle)flops = (2048 nodes * 16 cores) * ( 1.6e9 cycles / second) * ( 8 flops / cycle) * 2e3 seconds = 8.39e17flops / neuron8.39e17 flops / 2.19e5 neurons = 3.83e12 flp/nflp/n per bio_second3.82e12 flp/n / 0.001 bio_second = 3.83e15 flp/n/s So, for 1.3e9 neurons of the Cortex+Plasticity simulation type, for 15 bio_years of 'training time': https://en.wikipedia.org/wiki/FLOPS says that IBM PowerPC A2 (Blue Gene/Q) gets 8 64bit flops per core per cycle(The Blue Brain project was so named because it was designed in cooperation with IBM specifically to work with the Blue Gene supercomputer)flops per second of biological time:15 years of bio time need for training? = 3.154e7 sec/year * 15 years = 4.73e8 seconds of bio timetotal compute needed for training = 3.82e12 flp/n/s * 4.78e8 bio_seconds * 1.3e9 neurons = 2.37e30 flops = 2.37e15 petaFLOPs other Blue Brain papers: In-Memory Compression for Neuroscience Applications - Baylyhttps://github.com/DevinBayly/gsoc_report/blob/master/report.pdf Reconstruction and Simulation of Neocortical Microcircuitryhttps://www.cell.com/cell/fulltext/S0092-8674(15)01191-5 Side note: Why half-brain? Because there are multiple sources of evidence for half a human brain being sufficient to instantiate a general reasoning agent. One of these is the case of hemispherectomy. People with severe seizures have had portions of their brain removed to stop the seizures. This operation can be as extreme as an entire hemisphere of the brain. If this happens in childhood while the brain connections are still highly plastic, then close-to-normal function can be regained.Another case I know of involved a birth defect resulting in a missing hemisphere. And yet another way significant brain tissue loss can happen is an ischemic event (oxygen deprivation and sudden harmful return). This tends to be quite bad for older adults who commonly experience this via strokes, because the brain is set in its ways by then and has a hard time regaining enough plasticity to rewire around the damage. But if it happens to a child, (e.g. a partial drowning), recovery is usually quite good (depending on exactly which bits are affected).I think you could make do with even less than 50% if you were thoughtful about what you cut. Maybe as little as 30%. That's not a necessary condition for this thought experiment though.Discuss | https://www.lesswrong.com/posts/5Ae8rcYjWAe6zfdQs/what-more-compute-does-for-brain-like-models-response-to | 2022-04-13T03:40:36Z | This is a response to a comment made by Rohin Shah on Daniel Kokotajlo's post Fun with +12 OOMs of Compute. I started trying to answer some questions and assumptions he had, then realized there was more of an inferential gap that needed filling in. Also, as I attempted to estimate the OOMs of compute above GPT-3/PaLM needed for each method, I realized I was just going off of vague guesses rather than grounded estimates based on recent benchmarks. So, since other people might also be lacking the same info and be curious about my answer, I decided to put a bit more work into answering and turn it into a full post.Introducing the castFirst, I'd like to note that I don't endorse trying to get to AGI through any of these methods. I think they are potentially worse for interpretability in addition to being less compute efficient. My goal here is to point out that I think it could be done if the world were suddenly given lots more compute. In other words, I shall make the argument that given lots of compute, issues of limited data and potential scaling plateaus of artificial neural nets can be bypassed via other less compute efficient methods. Many roads lead to AGI, and specific predictions about the failure of one specific path (e.g. Transformers) don't necessarily mean all the paths are affected by that predicted failure mode.The main contendersNumenta - moderately detailed, most realistic-task proven. Designers made carefully chosen abstractions which may or may not be right.Spiking Neural Network simulators (spikingNNs) - [I mentioned Nengo previously because its the one I'm most familiar with, but researching this showed me that there are other better performing options with similar abstraction levels such as BindsNet and brian2genn ] moderately detailed, moderately compute efficient, moderately task proven. Fewer abstractions than Numenta, more than Blue Brain. Less chance that an important detail was omitted, but still some chance.Blue Brain - highly detailed, very compute inefficient, not task proven. Very few abstractions, so relatively high chance that it contains all necessary details for a functioning neocortex.Supporting rolesThe field of computational neuroscience has generated lots and lots of very narrowly focused models of particular subsets of lots of different brains. None of these is alone likely to turn into a full blown AGI if you throw compute at them, but they have useful additional details that could potentially get the main contenders unstuck from unexpected scaling plateaus.EmulationBy brain emulation, I mean trying to make a model that captures some of the observed functions of brain circuits. These models vary widely in how much fidelity to fine details they strive for, versus a more abstracted approximation. More detail brings the risk that you got one of those details wrong, and also means potentially requiring exponentially more compute to scale. Less detail means more reliance on having made the correct abstractions.Neuroscientists have a failure mode around trying to make too accurate and detailed of models. After all, if you've spent years of your life painstakingly measuring the tiny details, it can be really hard to swallow the idea that you might have to discard any of those details as irrelevant. I think Jan sums it up well in this comment:Yes, I agree, a model can really push intuition to the next level! There is a failure mode where people just throw everything into a model and hope that the result will make sense. In my experience that just produces a mess, and you need some intuition for how to properly set up the model.Each of the three contenders I mentioned have very different levels of detail and have chosen different abstractions.What do these three main contenders have in common? A focus on the mammalian neocortex, the part of the brain that does the General Intelligence stuff, the part that humans have extra of. Neuroscience has lots of evidence showing that this is the critical part of the brain to emulate if you want a model that is able to reason abstractly about things. I won't go into depth here, but I will give you this quote from Numenta (see Jeff Hawkins' latest book for more depth, or this paper for a quick intro):Old brain vs. new brainA simple way to think about the brain is that it has two parts: the old brain and the new brain. The old brain, which comes from our reptilian ancestors and pre-dates dinosaurs, contains several different structures, such as the spinal cord and brainstem. It regulates your body (such as breathing), creates reflex behaviors (such as pulling your hand away from fire) and creates emotions (such as desire and anger). The new brain, or neocortex, is a single large organ. It sits on top of the old brain and is the brains analytical engine. Its the part that can identify objects, learn a new language, or understand math.Worth noting for each of these projects that their focus is on the neocortex. The Blue Brain project which talks about rodent brains is only a few well-understood parameter changes away from being a very accurate emulation of the human neocortex. They are careful not to do this because of the ethical implications of accurately simulating human neocortex tissue. I'm pretty confident from things that some of the project participants have said that they'd love to try simulating a whole human brain if given the compute and lack of oversight.For example (emphasis mine) a quote from Rebasing I/O for Scientific Computing: Leveraging Storage Class Memory in an IBM BlueGene/Q Supercomputer by Schürmann et al 2014:Combined with the large numbers of those entities, e.g. an estimated 200 million neurons and 1012 synapses for an entire rat brain [10], the resulting memory footprint is large and at the same time the algorithmic intensity low. With the human brain being an additional three orders of magnitude more complex, cellular models of thehuman brainwill occupy a daunting estimated 100PB of memory that will needto be revisited by the solver at every time step.Human cortical neuron properties are pretty well known in a lot of respects and are already able to be simulated on the Blue Brain system, they just are careful not to get hit by media hype/outrage by talking about large scale human neocortex experiments. An example of a small scale human cortical neuron experiment: https://live-papers.brainsimulation.eu/#2016-eyal-et-alHow much compute?So I would argue that all of the main contenders are very training data efficient compared to artificial neural nets. I'm not going to go into detail on that argument, unless people let me know that that seems cruxy to them and they'd like more detail.One of the things these contenders fall short on though is compute efficiency. For the sake of Daniel's thought experiment, I'd like to give some rough estimates on how much compute I think would be necessary to get a half-brain of compute for each of these. For artificial neural networks, the meaning of a 'neuron' or 'parameter' is less directly analogous to a neocortex neuron. For these emulations, the analogy holds together much better. The rough average number of neurons in the human neocortex is around 26 billion. So let's say 13 billion for the half-neocortex case.Numenta training compute estimateOk, I just give up for now on finding benchmarks to accurately estimate this one. I give a rough guess at 'somewhere between the other two, closer to the Spiking Neural Nets'.Here's the best summary I can give: they break the artificial neurons down into collections of artificial dendrites, which then have a very sparse activation and very sparse weights. This seems to help learn more from a given dataset, and to have an extended amount of information that can be 'fit' into the network without 'overwriting' previous info. The downside is that it's substantially less efficient to 'get' the information into the network in the first place. Like, it needs maybe 10x more epochs over the same dataset before it starts doing better than the feed forward multilayer perceptron was doing a while ago. But its learning doesn't plateau as soon, so it can eventually surpass the roughly-equivalent MLP.Spiking Neural Net training compute estimatemy estimate: 3.82e24 flopsabout 1 OOM over GPT-3 less than an OOM over PaLMFor this category, I would add an additional OOM for the fact that the abstraction may be lossy/inefficient in capturing what actual brain neurons do. For instance, I noticed that the benchmark they were using in the papers had undershot the number of synapses for human pre-frontal cortex by an order of magnitude. Could be other things like that as well. Unlike Numenta, where the abstraction is very well thought out and I think it will either totally work or not, depending on whether they are as correct as they think they are about their abstraction. Or Blue Brain, where there is so much accuracy and so little abstraction I feel quite confident it'll work as expected on a emulated-neuron == real-neuron basis.Blue Brain training compute estimatemy estimate: 2.37e30 FLOPs 7 OOMs over GPT-36 OOMs over PaLMfrom https://blog.heim.xyz/palm-training-cost/ :ML ModelTraining Compute [FLOPs]x GPT-3GPT-3 (2020)3.1e231xGopher (2021)6.3e232xChinchilla (2022)5.8e232xPaLM (2022)2.5e2410xSources:Numenta paper 1https://www.google.com/url?sa=t&rct=j&q=&esrc=s&source=web&cd=&cad=rja&uact=8&ved=2ahUKEwiqwPDF84_3AhUtEEQIHchvC2wQFnoECBYQAQ&url=https%3A%2F%2Fnumenta.com%2Fassets%2Fpdf%2Fresearch-publications%2Fpapers%2FSparsity-Enables-100x-Performance-Acceleration-Deep-Learning-Networks.pdf&usg=AOvVaw33dSHmz30T0fhBKWcfBMneUsing 8 bit compression of values via a unique mapping scheme, and running on FPGAs... hard to compare. Their mapping scheme pre-estimates the range of all variables, splits large numbers into lossy quantized representations spread across multiple 8 bit (INT8) numbers during encoding. So to get the equivalent of a FLOP, a floating point operation, you need to do several fixed-point 8 bit operations (FP-8bit-OPs). On average, maybe 4 FP-8bit-OPs per single precision FLOP? https://semiengineering.com/tops-memory-throughput-and-inference-efficiency/What is TOPS? It means Trillions or Tera Operations per Second. It is primarily a measure of the maximum achievable throughput but not a measure of actual throughput. Most operations are MACs (multiply/accumulates), so TOPS = (number of MAC units) x (frequency of MAC operations) x 2Alveo U250 datasheet says it gets 33.3 INT8 TOPs at peak.rough guess of divide TOPs by 4 to get a terraFLOPs equivalent for Numenta's specific use case, based on studying their encoding.= 8.325 pseudo-terraFLOPs = 8.325e9 psuedoFLOPs / second? bio_seconds took ? wall clock secondsflops / neuron flops / neurons = flp/nflp/n per bio_second flp/n / ? bio_second = flp/n/sSo, for 1.3e9 neurons of the Cortex+Plasticity simulation type, for 15 bio_years of 'training time':flops per second of biological time:15 years of bio time need for training? = 3.154e7 sec/year * 15 years = 4.73e8 seconds of bio timetotal compute needed for training = flp/n/s * 4.78e8 bio_seconds * 1.3e9 neurons = flopsNumenta paper 2Avoiding Catastrophe: Active Dendrites Enable Multi-Task Learning in Dynamic Environmentshttps://arxiv.org/abs/2201.00042 separates out the neurons into collections of artificial dendrites in sparse matrices. Because it's not using FPGAs here, and doing task comparisons against standard multi-layer perceptron feed-forward networks, the compute is easier to compare. They give numbers for the estimated 'effective number of parameters' because the sparse nature of the networks means that the number of parameters looks huge but is effectively small for the amount of compute required to train and infer using them. Several experiments are listed in the paper.When employing the prototype method described in Section 4.2.1 to select context signals at test time only, we train anActive Dendrites Network with 2 hidden layers that comprise Active Dendrites Neurons. For all training, we use theAdam optimizer [Kingma and Ba, 2015] and a batch size of 256 samples. Table 3 gives the exact hyperparametersand model architecture for each model we train and evaluate on permutedMNIST. Note that hyperparameters wereoptimized indidually for each setting.To combine Active Dendrites Network with SI, and to compare against XdG, we reduce the number of units in eachhidden layer from 2,048 to 2,000 as to exactly match the architectures (with the exception of dendritic segments)used in the SI and XdG papers. (See Appendix for a discussion on the number of parameters.) In addition, theSI-and-Active-Dendrites network is trained for 20 epochs per task instead of just 3 as this significantly improves results.We fix the learning rate to be 5 × 104 for all numbers of tasks, and we use SI regularization strength c = 0.1 anddamping coefficient = 0.1. Both a) training for 20 epochs per task and b) the c, values that we use here align withthe training setups of Zenke et al. [2017] and Masse et al. [2018].SpikingNN paper 1https://www.sciencedirect.com/science/article/abs/pii/S0925231221003969full text manuscript: https://www.sciencedirect.com/science/article/am/pii/S0925231221003969 Ubuntu 18.04 LTS with Intel(R) Xeon(R)CPU E5-2620 v4 @ 2.1 GHz and 32 GB RAMSpikingNN paper 2https://www.nature.com/articles/s41598-019-54957-7For illustration we have used the data from the TITAN Xp card and Intel Core i9-7920X CPUCaption for graphOverview of the components that make up the total runtime of a simulation for the Mbody (left) and the COBAHH benchmark (right). The top panels show the time spent in the simulation itself which scales with the biological runtime of the model (shown at the right) and dominates the overall runtime for big networks and/or long simulations. Simulation times were measured for biological runtimes of 10s (middle line), while the times for runs of 1s (bottom line) and 100s (top line) were extrapolated. The bottom panels show the time spent for code generation and compilation (blue), general overhead such as copying data between the CPU and the GPU (orange), and the time for synapse creation and the initialization of state variables before the start of the simulation (green). The details shown here are for single-precision simulations run on the Titan Xp GPU.10 bio_seconds took 10^4 wall clock secondsso 1 bio_second to 1000 wall clock seconds for 2.05e7 neuronsflops = cores * (cycles/second) * (flops/cycle)flops = (1 node * 3840 cores) * ( 1.6e9 cycles / second) * ( 2 flops / cycle) * 1e3 seconds = 1.229e16flops / neuron flops / 2.05e7 neurons = 6.14e6 flp/nflp/n per bio_second flp/n / 1 bio_second = 6.14e6 flp/n/sSo, for 1.3e9 neurons of the Cortex+Plasticity simulation type, for 15 bio_years of 'training time':https://en.wikipedia.org/wiki/FLOPS says 2 flops per cycle per core for single-precision simulations run on the Titan Xp GPU (3840 cores)flops per second of biological time:15 years of bio time need for training? = 3.154e7 sec/year * 15 years = 4.73e8 seconds of bio timetotal compute needed for training = 6.14e6 flp/n/s * 4.78e8 bio_seconds * 1.3e9 neurons = 3.82e24 flopshttps://github.com/BindsNET/bindsnetBlue Brain paper 1Large-Scale Simulation of Brain Tissue, Blue Brain Project, EPFL Technical Report for the ALCF Theta Early Science ProgramBlue Brain paper 2 CoreNEURON : An Optimized Compute Engine for the NEURON Simulatorhttps://www.ncbi.nlm.nih.gov/pmc/articles/PMC6763692/From abstract:We describe how CoreNEURON can be used as a library with NEURON and then compare performance of different network models on multiple architectures including IBM BlueGene/Q, Intel Skylake, Intel MIC and NVIDIA GPU.From intro:In the model of Markram et al. (2015) each neuron averages to about 20,000 differential equations to represent its electrophysiology and connectivity. To simulate the microcircuit of 31,000 neurons, it is necessary to solve over 600 million equations every 25 ms of biological time...In general, this paper describes the journey to making the Blue Brain NEURON model more efficient and able to work with GPUs. And then doing benchmarking comparisons.The benchmarking systems with hardware details, compiler toolchains and network fabrics are summarized in Table 3. The Blue Brain IV (BB4) and Blue Brain V (BB5) systems are based on IBM BlueGene/Q (Haring et al., 2012) and HPE SGI 8600 (Hewlett Packard Enterprise, 2019) platforms respectively, hosted at the Swiss National Computing Center (CSCS) in Lugano, Switzerland. The BB4 system has 4,096 nodes comprising 65,536 PowerPC A2 cores. The BB5 system has three different compute nodes: Intel KNLs with low clock rate but high bandwidth MCDRAM, Intel Skylakes with high clock rate, and NVIDIA Volta GPUs. Vendor provided compilers and MPI libraries are used on both systems. The BB4 system is used for strong scaling benchmarks (see Figure 8) as it has a large core count compared to the BB5 system. All benchmarks were executed in pure MPI mode by pinning one MPI rank per core.Strong scaling of CoreNEURON on the BB4 system (BlueGene/Q IBM PowerPC A2, 16 cores @ 1.6 GHz, 16 GB DRAM ) for two large scale models listed in Table 1: the Cortex+Plasticity model with 219 k neurons. [nathan note: blue line is actual measurement, black line is theoretical optimum]Relevant part of the Table 1 discussed above:Model nameSummary#Neurons#Compartments#SynapsesCortex + PlasticitySomatosensory cortex model with synaptic plasticity2.19e59.95e78.72e8Note: one major parameter change in human neocortex vs rodent is that human neocortex has more synaptic connections per number of neurons. This hurts scaling somewhat because of the additional complexity. Not able to give a precise estimate for this additional compute based on the data I've found so far on their work. My guess is somewhat less than 2 OOMs extra cost in worst case.Note for anyone trying to read this paper: a comprehension-gotcha is that they confusingly talk about both 'compute nodes' (the computers or virtual computers used), and 'neuron nodes' (the component parts of a neuron which are each individually simulated each timestep) using just the term 'nodes'. You have to keep the context of the paragraph straight to know which one they mean at any given time.So, from these two papers, although they don't quite lay out all the parameters together in an easy-to-interpret way...bbp paper1: 27 seconds of compute time for 0.1 seconds of biological time for 1? neuron(s) on a single compute node? (GPU system)flops per second of biological time:bbp paper2: 2.19e5 rodent cortex neurons requires 2e3 seconds of 2048 nodes, each node 16 cores @ 1.6GHz for 0.001? seconds of biological time (abbr: bio_second). (supercomputer baseline, not GPU measurement)flops = cores * (cycles/second) * (flops/cycle)flops = (2048 nodes * 16 cores) * ( 1.6e9 cycles / second) * ( 8 flops / cycle) * 2e3 seconds = 8.39e17flops / neuron8.39e17 flops / 2.19e5 neurons = 3.83e12 flp/nflp/n per bio_second3.82e12 flp/n / 0.001 bio_second = 3.83e15 flp/n/sSo, for 1.3e9 neurons of the Cortex+Plasticity simulation type, for 15 bio_years of 'training time':https://en.wikipedia.org/wiki/FLOPS says that IBM PowerPC A2 (Blue Gene/Q) gets 8 64bit flops per core per cycle(The Blue Brain project was so named because it was designed in cooperation with IBM specifically to work with the Blue Gene supercomputer)flops per second of biological time:15 years of bio time need for training? = 3.154e7 sec/year * 15 years = 4.73e8 seconds of bio timetotal compute needed for training = 3.82e12 flp/n/s * 4.78e8 bio_seconds * 1.3e9 neurons = 2.37e30 flops = 2.37e15 petaFLOPsother Blue Brain papers:In-Memory Compression for Neuroscience Applications - Baylyhttps://github.com/DevinBayly/gsoc_report/blob/master/report.pdfReconstruction and Simulation of Neocortical Microcircuitryhttps://www.cell.com/cell/fulltext/S0092-8674(15)01191-5 Because there are multiple sources of evidence for half a human brain being sufficient to instantiate a general reasoning agent. One of these is the case of hemispherectomy. People with severe seizures have had portions of their brain removed to stop the seizures. This operation can be as extreme as an entire hemisphere of the brain. If this happens in childhood while the brain connections are still highly plastic, then close-to-normal function can be regained.Another case I know of involved a birth defect resulting in a missing hemisphere. And yet another way significant brain tissue loss can happen is an ischemic event (oxygen deprivation and sudden harmful return). This tends to be quite bad for older adults who commonly experience this via strokes, because the brain is set in its ways by then and has a hard time regaining enough plasticity to rewire around the damage. But if it happens to a child, (e.g. a partial drowning), recovery is usually quite good (depending on exactly which bits are affected).I think you could make do with even less than 50% if you were thoughtful about what you cut. Maybe as little as 30%. That's not a necessary condition for this thought experiment though. | Content Synthesis/Discovery/Decision Making | Life, Physical, and Social Science/Computer and Mathematical | null | null | null | null | null | null |
|
news | p.b. | Is the scaling race finally on? | Published on April 4, 2022 7:53 PM GMTExactly one year ago I wrote a lengthy article about the coming phase of massive scaling. Then nothing much happened. GPT-3 was replicated again and again, but nobody went way beyond GPT-3s parameter count (and published what they found). Since last week, this suddenly makes a lot more sense.Last week Deepmind published a bombshell of a paper "Training Compute-Optimal Large Language Models". In it they detail new scaling laws based on improved best practices - mostly using better learning rate schedules and optimizing with AdamW. These two differences in training large language models change the learning curve so significantly that the resulting scaling laws come out very different. Suddenly achieving a given loss by scaling data becomes a lot more effective, while scaling the parameters becomes comparably less necessary. 1a3orn has a great post explaining the details of the paper. I don't want to duplicate his effort and instead focus on how these new scaling laws change the perspective on a possible scaling race between AGI-outfits / big tech companies.The results of the paper mirror the strategic decisions made by OpenAI over the last year or so. Instead of scaling model size further according to their own scaling laws, they have concentrated on using a lot more compute to optimize their language models for commercial use. What's more, they have also hinted that even the successor models are not going to be significantly larger than GPT-3. I think it is very likely that they realized at the very least that the adoption of AdamW throws a spanner into their scaling laws. Of course by now quite a lot of competitors have blown a part of their compute budget on an oversized and undertrained model. The old scaling laws imply that there is an intersection point not that far off from GPT-3 where scaling becomes less effective ("the point were you start overfitting even on deduplicated data" was one intuitive explanation). The new scaling laws instead imply:Better performance for the same compute. Chinchilla outperforms Gopher by a lot. Cheaper deployment and fine tuning. Faster response times (at least potentially, probably not true for Chinchilla because it has the same depth as Gopher). Easier scaling from an engineering perspective. Larger models become more and more difficult to train, not just for pure engineering reasons, but also because training becomes less stable around 100b parameters (Big science 100b model apparently keeps blowing up). Getting more data seems to not be a problem in the medium term.An "irreducible" loss that is not that far off, i.e. might be approachable within the next one or two decades (according to gwern, though I can't find the link right now). (However, irreducible only in the sense that it is not changed by scaling data or parameters. It might represent a performance that can still be improved upon by different approaches and which is not necessarily comparable to human level intelligence.)All these make pouring more compute into large language models much more enticing. Deepmind secretly replicated/surpassed GPT-3 with Gopher within the same year (Dec 2020) and went on to publish this accomplishment only a year later after GPT-3 had been duplicated a couple of times. As 1a3orn points out, they are now hiring a team of data engineers. I think it is clear that they are fully on the scaling band wagon.I still haven't published the article on the coming phase of massive scaling. Maybe now would be a good time. Discuss | https://www.lesswrong.com/posts/kcYxeqEwevsRFPkuF/is-the-scaling-race-finally-on | 2022-04-04T19:53:04Z | Exactly one year ago I wrote a lengthy article about the coming phase of massive scaling. Then nothing much happened. GPT-3 was replicated again and again, but nobody went way beyond GPT-3s parameter count (and published what they found). Since last week, this suddenly makes a lot more sense.Last week Deepmind published a bombshell of a paper "Training Compute-Optimal Large Language Models". In it they detail new scaling laws based on improved best practices - mostly using better learning rate schedules and optimizing with AdamW. These two differences in training large language models change the learning curve so significantly that the resulting scaling laws come out very different. Suddenly achieving a given loss by scaling data becomes a lot more effective, while scaling the parameters becomes comparably less necessary. 1a3orn has a great post explaining the details of the paper. I don't want to duplicate his effort and instead focus on how these new scaling laws change the perspective on a possible scaling race between AGI-outfits / big tech companies.The results of the paper mirror the strategic decisions made by OpenAI over the last year or so. Instead of scaling model size further according to their own scaling laws, they have concentrated on using a lot more compute to optimize their language models for commercial use. What's more, they have also hinted that even the successor models are not going to be significantly larger than GPT-3. I think it is very likely that they realized at the very least that the adoption of AdamW throws a spanner into their scaling laws. Of course by now quite a lot of competitors have blown a part of their compute budget on an oversized and undertrained model. The old scaling laws imply that there is an intersection point not that far off from GPT-3 where scaling becomes less effective ("the point were you start overfitting even on deduplicated data" was one intuitive explanation). The new scaling laws instead imply:Better performance for the same compute. Chinchilla outperforms Gopher by a lot. Cheaper deployment and fine tuning. Faster response times (at least potentially, probably not true for Chinchilla because it has the same depth as Gopher). Easier scaling from an engineering perspective. Larger models become more and more difficult to train, not just for pure engineering reasons, but also because training becomes less stable around 100b parameters (Big science 100b model apparently keeps blowing up). Getting more data seems to not be a problem in the medium term.An "irreducible" loss that is not that far off, i.e. might be approachable within the next one or two decades (according to gwern, though I can't find the link right now). (However, irreducible only in the sense that it is not changed by scaling data or parameters. It might represent a performance that can still be improved upon by different approaches and which is not necessarily comparable to human level intelligence.)All these make pouring more compute into large language models much more enticing. Deepmind secretly replicated/surpassed GPT-3 with Gopher within the same year (Dec 2020) and went on to publish this accomplishment only a year later after GPT-3 had been duplicated a couple of times. As 1a3orn points out, they are now hiring a team of data engineers. I think it is clear that they are fully on the scaling band wagon.I still haven't published the article on the coming phase of massive scaling. Maybe now would be a good time. | Unknown | Computer and Mathematical | null | null | null | null | null | null |
|
news | Thimmaya Poojary | Startup and funding news: daily roundup (March 25, 2022) - YourStory | YourStory presents daily news and funding roundups from the Indian startup ecosystem and beyond. Here's the roundup for Friday, March 25, 2022. | https://yourstory.com/2022/03/startup-funding-news-daily-roundup-march-25-actyv-bounce-stockgro/amp | 2022-03-25T14:07:39Z | Bounce Infinity forms partnership with BattREBounce Infinity, the electric vehicle (EV) battery swapping network, has entered into a partnership with EV two-wheeler maker BattRE.Under this partnership, customers of BattRE can rely on the network of battery swapping stations which would be set up by Bounce Infinity. Bounce aim to deploy minimum of 300 battery swapping stations in every major city. The battery swapping structure is a pod-like automated unit that houses multiple batteries and keeps them charged and ready for use.On the partnership, Bounce Co-founder and CEO Vivekananda Hallekere said, This partnership is a step forward in our goal of building smart electric solutions for India. We have doubled-down on our efforts to transform mobility with advanced technologies and services that ties-up with our mission of access to mobility for all.BattRE founder Nischal Chaudhary said, BattREs 380 plus strong dealership network and Bounces charging infrastructure will prove to be a fruitful synergy for both the companies and bring forth the best value propositions for the customer.SparkCognition acquires Integration WizardsUS-based AI startup SparkCognition has acquired the Bengaluru-based Integration Wizards for an undisclosed value.Founded in 2014, Integration Wizards has developed expertise in the area of visual AI technology where it leverages diverse data sets including CCTV feeds, drone footage, video from handheld devices, and existing camera infrastructures to develop solutions for safety, security, visual inspection, productivity, and situational awareness. It has clients such HPCL, Reliance, Xerox, Novo Nordisk, Johnson Controls etc.With advanced visual AI that can recognise complex scenes and activities we further amplify the value we deliver to our customers while leveraging existing infrastructure investments, said Amir Husain, Founder and CEO of SparkCognition.Kunal Kislay, CEO and Co-Founder, Integration Wizards, said, The breadth of SparkCognitions AI solutions and services, coupled with our extensive experience in deploying Visual AI capabilities to Fortune 500 customers, will be a game changer.Bodhi AI acquires SkoolstreamEdtech startup Bodhi AI has acquired Kota-based start-up SkoolStream for an undisclosed value. SkoolStream is an AI-based learning management system that enables coaching institutes and teachers to go online .Bodhi AI was founded in 2017 assists teacher to coach online and connect with the students. It has onboarded over one lakh teachers and is looking to foray into Middle East.Bodhi AI founder and CEO Prashant Pandey said, The team at Skoolstream already have expertise in the education sector and our collaboration with them will help us become a marketplace for teachers who face issues with delivering standardised content to their students.Skoolstream, Founder, Parveen Garg said, We as a team are equally excited to be a part of Bodhi AI family and looking forward to building innovative products together for the education stakeholders.MedPiper acquires MedWriterMedpiper, a Y-Combinator and MeitY backed healthcare company has acquired MedWriter, an AI-writing assistant for doctors, for an undisclosed value.MedWriter is a product designed and built by Lonere Labs, a Bengaluru-based product lab run by Abhijeet Katte. A statement noted that this acquisition is in line with MedPiper's focus to enable healthcare practitioners to express themselves better in the public domain. Already, 1,000-plus doctors and healthcare researchers have signed up to use MedWriter.It becomes more and more important for helping qualified doctors and healthcare researchers become creators and become a direct and important source of medical information for the public, said Nitthin Chandran, CEO and Co-founder, MedPiper.MedPiper aims to create an ecosystem of products and services connecting healthcare professionals and healthcare institutions to improve efficiency and capacity of the Indian healthcare ecosystem.Skillbee raises $3.2 million in seed roundSkillbee, the app that helps overseas migrant workers scout for jobs, has raised $3.2 million in seed round of funding led by Good Capital and Vibe Capital.This funding round also saw participation from Access Bridge Ventures and Wamda Capital, besides other angel investors. With the latest round of funding, Skillbee aim to further expand its base among the worker and companies.Founded by Gautham Vinjamuri and Ujjawal Chauhan in mid-2020, Skillbee connects international migrant workers to job opportunities. The app was launched in the UAE amidst COVID-19 to help Indian and other migrant workers find work.Acording to Skillbee, it has over 500,000 migrant workers as registered users, and more than 20,000 companies have signed up to the app from 16 nations.Going beyond connecting workers with the next aspirational, international job, Skillbee will soon offer tailor-made services like credit and insurance, said Ujjawal Chauhan.WCube raises funds from Mumbai AngelsWCube, a B2B procurement enablement solution for large and mid-sized enterprises has raised an undisclosed round of funding from Mumbai Angels. This round also saw participation from x10xventures, FAAD Network and other angel investors, as well as family offices.Founded in 2019 by Udit Karan Chandhok and Akash Narain Mittal, WCube enables companies to procure IT products, office supplies, cleaning supplies, industrial supplies, and pantry products through the platform. The startup currently serves over 250 clients, sells over 30 categories and has over 50 million products.Udit said, Today with over 30 categories of products, WCube is fast expanding its national footprint. We are delighted to have prominent and strategic investors join our journey of creating stronger value,StockGro becomes brand partner for RCBSocial investment platform StockGro has become the brand partner for T20 cricket team Royal Challengers Bangalore (RCB). In addition to frequent advertisements on the Hotstar app, StockGro will deepen its partnership with the help of brand collateral where the organisation will distribute company merchandise like miniature bats and replica jerseys. All these merchandises will be signed by the RCB players. Ajay Lakhotia, Founder and CEO, StockGro said, We want to build our association with the team and build a rapport with their fans to harness their interest in the stock market and financial sector.Speaking of the partnership, Rajesh Menon, Vice President and Head of Royal Challengers Bangalore said, Royal Challengers Bangalore is a cutting-edge Gen Z millennial lifestyle brand, and it is our pleasure to partner with a progressive new-age fintech brand, StockGro.DealShare appoints Santana Ramakrishnan as HR headDealShare, the social commerce unicorn, has announced the appointment of Santana Ramakrishnan as Chief Human Resources Officer. She will be in charge of implementing the company's people strategy, building capability for expansion, profitability and enabling culture frameworks.Before joining DealShare, Santana Ramakrishnan worked as Head of Human Resources at Udaan. She has previously worked at Goldman Sachs, AXA, Genpact, and GE Capital. Santana will join the company's senior management team and report to Vineet Rao, the company's founder, and CEO.On the appointment, Vineet Rao, Founder and CEO of DealShare said, We are delighted to welcome Santana Ramakrishnan as our Chief Human Resources Officer. DealShare is at an exciting inflection point right now: we are rapidly scaling our business, expanding into new markets, enhancing the e-commerce experience for Bharat's consumers, and empowering local communities & manufacturers. Hiring the right talent and engaging them effectively will play a crucial role in this journey. In the coming year, our talent pool will grow exponentially.Santana said,It is indeed exciting to be a part of DealShare. The scale and scope of the yet to be tapped market presents us with a unique opportunity to build a retail-relevant ecosystem.ReshaMandi hires Ritesh Kumar as CFOReshamandi, the digital platform for natural fibre ecosystem, has appointed Ritesh Kumar Talreja as the Chief Financial Officer. In his new role, Ritesh will lead the companys finance function where he will manage corporate development looking into debt raising and M&A.Ritesh has over 14 years in advising businesses on matters pertaining to fundraising, mergers and acquisition, structuring private equity investments and other corporate transactional matters in public and private space. Previously, he was with IndusLaw, and EY India.ReshaMandi Founder and CEO Mayank Tiwari said, His extensive experience in corporate law and financial control will enable us to chart a solid growth path. I am confident that his expertise will help our company grow by huge leaps.Ritesh said, I will be working closely with the Founders on significant areas like establishing an effective financial infrastructure, ensuring compliance, leading fundraising conversations, financial planning and analysis, and cash flow tracking.Siply launches gold savings schemeMicro savings app Siply has launched a gold saving scheme under the monthly installment format. In this scheme, users can choose to buy a gold coin that weighs either 0.5gm or 1 gm. Installments start at Rs 450 per month for a 1gm gold coin and Rs 250 for a 0.5 gold coin. These installments need to be paid over a period of 10 months.Siply Gounder and CEO Sousthav Chakrabarty said, Users can enjoy the twin benefits of getting a good discount and the option of breaking down their monthly payments into smaller amounts over the entire month.Netcore Cloud acquires UnbxdSaaS company Netcore Cloud has acquired majority stake in Unbxd, a search personalisation tech startup.Founded by Pavan Sondur and Prashant Kumar in 2012, Unbxd has developed an AI-powered product discovery platform that enables ecommerce companies to improve retention and conversion of its customers. It has around 120 employees.Netcore Cloud is a provider of customer communication company focused on e-mail marketing. Leveraging Unbxd's AI-powered search expertise, Netcore Cloud aims to deliver a real-time recommendation to its online ecommerce/app-first brand and retailer.Netcore Cloud Group CEO Kalpit Jain said, Unbxd's AI/ML tech stack will empower our customers to track their shoppers interactions with search and implement site-wide Personalisation, merchandising, along with shopper-focused recommendations. This will enable brands to optimise their entire customer journey, providing the best-in-class customer experience.The founders of Unbxd in a joint statement said, "With our unique combination of personalisation solutions and Netcore's robust full-stack marketing automation tech platform, it's a perfect partnership that can capture cross-selling opportunities and scale the business globally.Actyv.ai appoints new head of marketing and biz developmentBengaluru-based tech startup Actyv.ai has announced the appointment of Chandru K Sambasivan as the Head of Marketing, Business Development, and Strategic Initiatives. He will be part of the executive management team at the company.With over 20 years of experience in the field of sales and marketing, business development and content strategy, Chandru will be responsible for driving Actyv.ais strategic partnerships and initiatives in addition to marketing and communication spanning across enterprises, SMBs and banks/financial institutions across the world.Actyv.ai is an enterprise SaaS low-code no-code platform. It is used by large enterprises (like Britannia, Nivea, Adani Wilmar and others) to transform their supply chains and to empower their partners with an intuitive platform and financial products. Actyv Score enables financial institutions like Kotak Mahindra Bank, Axis Bank and others to provide unsecured, low-interest, closed-loop credit (BNPL for B2B) for SMBs.Bounce is a bike-sharing startup that offers customers a last-mile commute option with its current pricing of Rs 5 per kilometer and Rs 0.5 per minute.1. Vivekananda Hallekere2. Anil G3. Varun Agni4. Bharath DevanathanSparkCognition builds artificial intelligence systems.SoftwareAnalytics & BIMachine LearningBodhi AI is an AI-based education platform empowering teachers and students. 1. Prashant Pandey2. Piyush Agarwal3. Gourav SanghaiMedPiper builts apis and platform stack to connect verified healthcare practitioners with the healthcare ecosystem1. Nitthin Chandran2. Pranay SuyashRecruitmentHealthTech / MedTech / HealthcareStockGro is a social investment platformDealshare is a social e-commerce startup, an online buying platform for multi-category consumer products1. Vineet Rao2. Sourjyendu Medda3. Rajat ShikharReshaMandi is India's 1st agritech startup digitizing Indias silk supply chain, provides AI & IoT-led digital ecosystem for silk starting from farm to consumer. 1. Utkarsh Apoorva2. Saurabh Agarwal3. Mayank TiwariSiply is a tech-enabled platform that provides micro financial services to underserved Indians via its mobile app and website.1. Nitin Mittal2. Sousthav Chakrabarty3. Anil BhatActyv.ai is an AI-powered enterprise software-as-a-service (SaaS) platform.1. Ramkumar Thirumurthi2. Raghunath Subramanian3. Raghu VenkatEnterprise TechSaaSDeepTech | Digital Assistance/Content Synthesis | Healthcare Practitioners and Support | null | null | null | null | null | null |
|
news | Matthew Barnett | Google's new 540 billion parameter language model | Published on April 4, 2022 5:49 PM GMTGoogle just announced a very large language model that achieves SOTA across a very large set of tasks, mere days after the DeepMind announced Chinchilla, and their discovery that data-scaling might be more valuable than we thought.Here's the blog post, and here's the paper. I'll repeat the abstract here,Large language models have been shown to achieve remarkable performance across a variety of natural language tasks using few-shot learning, which drastically reduces the number of task-specific training examples needed to adapt the model to a particular application. To further our understanding of the impact of scale on few-shot learning, we trained a 540-billion parameter, densely activated, Transformer language model, which we call Pathways Language Model (PaLM). We trained PaLM on 6144 TPU v4 chips using Pathways, a new ML system which enables highly efficient training across multiple TPU Pods. We demonstrate continued benefits of scaling by achieving state-ofthe-art few-shot learning results on hundreds of language understanding and generation benchmarks. On a number of these tasks, PaLM 540B achieves breakthrough performance, outperforming the finetuned stateof-the-art on a suite of multi-step reasoning tasks, and outperforming average human performance on the recently released BIG-bench benchmark. A significant number of BIG-bench tasks showed discontinuous improvements from model scale, meaning that performance steeply increased as we scaled to our largest model. PaLM also has strong capabilities in multilingual tasks and source code generation, which we demonstrate on a wide array of benchmarks. We additionally provide a comprehensive analysis on bias and toxicity, and study the extent of training data memorization with respect to model scale. Finally, we discuss the ethical considerations related to large language models and discuss potential mitigation strategies.Discuss | https://www.lesswrong.com/posts/mLuQfS7gmfr4nwTdv/google-s-new-540-billion-parameter-language-model | 2022-04-04T17:49:24Z | Google just announced a very large language model that achieves SOTA across a very large set of tasks, mere days after the DeepMind announcedChinchilla, and their discovery that data-scaling might be more valuable than we thought.Here's the blog post, and here's the paper. I'll repeat the abstract here, with a highlight in bold,Large language models have been shown to achieve remarkable performance across a variety of natural language tasks using few-shot learning, which drastically reduces the number of task-specific training examples needed to adapt the model to a particular application. To further our understanding of the impact of scale on few-shot learning, we trained a 540-billion parameter, densely activated, Transformer language model, which we call Pathways Language Model (PaLM). We trained PaLM on 6144 TPU v4 chips using Pathways, a new ML system which enables highly efficient training across multiple TPU Pods. We demonstrate continued benefits of scaling by achieving state-of-the-art few-shot learning results on hundreds of language understanding and generation benchmarks. On a number of these tasks, PaLM 540B achieves breakthrough performance, outperforming the finetuned stateof-the-art on a suite of multi-step reasoning tasks, and outperforming average human performance on the recently released BIG-bench benchmark. A significant number of BIG-bench tasks showed discontinuous improvements from model scale, meaning that performance steeply increased as we scaled to our largest model. PaLM also has strong capabilities in multilingual tasks and source code generation, which we demonstrate on a wide array of benchmarks. We additionally provide a comprehensive analysis on bias and toxicity, and study the extent of training data memorization with respect to model scale. Finally, we discuss the ethical considerations related to large language models and discuss potential mitigation strategies. | Content Synthesis/Prediction/Recommendation/Decision Making | Unknown | null | null | null | null | null | null |
|
news | Brajeshwar | Estimating PaLM's Training Cost | Article URL: https://blog.heim.xyz/palm-training-cost/Comments URL: https://news.ycombinator.com/item?id=30979009Points: 3# Comments: 1 | https://blog.heim.xyz/palm-training-cost/ | https://images.unsplash.com/photo-1514125669375-59ee3985d08b?crop=entropy&cs=tinysrgb&fit=max&fm=jpg&ixid=MnwxMTc3M3wwfDF8c2VhcmNofDN8fHBhbG18ZW58MHx8fHwxNjQ5MTkzODY5&ixlib=rb-1.2.1&q=80&w=2000 | 2022-04-10T16:58:47Z | tl;dr What would it cost you to train PaLM using cloud computing (and you're not Google)?Something around $9M to $17M.PaLM a 540B state-of-the-art language modelGoogle recently published a new paper presenting PaLM (their blogpost) a 540B parameter large language model. It's impressive. P.38 from here. (h/t Luke)ML model training compute has been skyrocketing and so have the connected costs paying for the required computing resources (short "compute"). In our last paper, we found an astonishing growth of a factor of 10 Billion since 2010 in the training compute for milestone ML models doubling every 6 months.PaLM is now the new king of the hill. ML ModelTraining Compute [FLOPs]x GPT-3GPT-3 (2020)3.1e231xGopher (2021)6.3e232xChinchilla (2022)5.8e232xPaLM (2022)2.5e2410xSo how much did the final training run of PaLM cost? Let's explore this. Note that a bunch of caveats apply which I discuss at the end.The "facts"The final training run of PaLM required 2.56×10² (2.56e24) FLOPs.Compute usage to train PaLM.Also,We trained PaLM-540B on 6144 TPU v4 chips for 1200 hours and 3072 TPU v4 chips for 336 hours including some downtime and repeated steps.That makes a total of:Also, they mention their TPU utilization, in Appendix B it says:The training of PaLM 540B uses rematerialization because the feasible batch size with rematerialization enables higher training throughput. Without the rematerialization costs, the resulting model FLOPs utilization is 45.7% without self-attention ((238.3×6×540)/(275×6144)) or 46.2% with it. PaLMs analytically computed hardware FLOPs utilization, which includes rematerialization FLOPs, is 57.8%.If you're wondering what utilization is, we've discussed it in detail in our Estimating Training Compute article. I'll also use our second method presented in the linked article.Two estimation methodsNow, we have two methods:Using the 2.56×10² FLOPs for the final training run.We can estimate the cost per FLOP from renting a TPU instance (assuming the above utilization rate).We can also estimate the costs by extracting the cost per FLOP from other cloud providers (e.g. ones using NVIDIA A100's).Using the 8'404'992 TPUchip-hoursWe can then look up the hourly rent for a TPU-chip.While method 2 seems to be favorable (as we need to make fewer assumptions about the utilization), unfortunately, TPUv4 prices are not accessible without asking for a quote.TPUv4 prices are not available without asking for a quote. :(Therefore, let's learn about the costs of renting the previous version of TPUs and how many FLOPS they provide.(a) Renting TPUv3 via Google CloudWe can rent a TPUv3 pod with 32 cores for $32 per hour. So, that's one TPUcore-hour per dollar.For our estimates, we use the hourly costs of a TPv3 pod with 32 cores.Using our above 16'809'984 TPUv4-core hours, we get an estimate of around $17M if we substitute TPUv4 with a TPUv3.On the one side, this is an underestimate, as a TPUv3 is less performant than TPUv4 and we'd require more time or TPUs.On the other side, we have usually seen that prices stay roughly constant while one gets more performance. Therefore, if Google Cloud would ask roughly the same price for the TPUv4 as for the TPUv3 then this estimate is fair. If they would ask for less then the real cost would be lower. If they'd ask for more the real costs are higher.(b) Cost per FLOPWe know a TPUv3 chip provides around 123 TFLOP per second (TFLOPS) for a bfloat16 (Table 3). Nonetheless, that's the performance from a specification sheet and displays the peak performance.This peak performance needs to be adjusted by the utilization factor. As outlined above, they achieved an astonishing 57% (our previous estimate for people trying hard was around 30%). PaLM achieves an astonishing 57.8% hardware FLOPs utilization. Compared to previous models this is a new record.I'd therefore assume around 50% for using TPUv3: 0.5 x 123 TFLOP per second per chip x 1 chip-hour per x 60x60 seconds per hour $ = 221.4 PFLOPs per $.That get's us 221 PFLOPs per $. Now using the presented 2.56×10² FLOPs for the final training run, we get:2.56e24 FLOPs / 221.4e15 FLOPs per $ = 11'562'782.29 $.That's $11.6M.(c) Using an NVIDIA V100We can just follow the same reasoning as in this blog post by LambdaLabs. There they assume a V100 cloud instance rented via their service. As GPT-3 used 10x less compute than PaLM, we'd estimate $46M for the final training nun. Nonetheless, this post is two years old and uses an NVIDIA V100. An NVIDIA A100 is already one order of magnitude more performant (Tensor performance).Therefore, addressing for 10x more performant hardware but assuming an utilization of 50% gets us around $9.2M.ConclusionOur three (not independent!) methods estimate $17M, $11.6M, and $9.2M for the final training cost of PaLM.CaveatsOf course, Google didn't pay that much. They own the hardware. This assumes the costs to an end-consumer paying Google Cloud for renting TPUv3 pods.Longer commitments to renting a TPU pod would get you a discount (1-year 37% discount).We don't have the costs for TPUv4 and we used the TPUv3 instead.This assumes you know how to efficiently utilize a TPUv3 pod up to a 50% utilization rate. That's an astonishing rate. We only talk about the cost of the final training run. This does not include all the other hard (and costly) things, such as engineering, research, testing, etc.much more Thanks, Malte for the initial estimates, and Anson for spotting a mistake!ps. If someone has some insights on the costs for TPUv4, I would appreciate some hints. | Unknown | Unknown | null | null | null | null | null | null |
news | Joe_Collman | Truthfulness, standards and credibility | Published on April 7, 2022 10:31 AM GMT-1: Meta PreludeWhile truthfulness is a topic I’ve been thinking about for some time, I’ve not discussed much of what follows with others. Therefore, at the very least I expect to be missing important considerations on some issues (where I’m not simply wrong).I’m hoping this should make any fundamental errors in my thought process more transparent, and amenable to correction. The downside may be reduced clarity, more illusion-of-transparency…. Comments welcome on this approach.I don’t think what follows is novel. I’m largely pointing at problems based on known issues.Sadly, I don’t have a clear vision of an approach that would solve these problems. 0: Introduction…our purpose is not to give the last word, but rather the first word, opening up the conversation… (Truthful AI)I’d first like to say that I believe some amount of research on truthfulness to be worthwhile, and to thank those who’ve made significant efforts towards greater understanding (including, but not limited to, the authors of Truthful AI (henceforth TruAI)).No doubt there’s some value in understanding more, but my guess is that it won’t be a particularly fruitful angle of attack. In all honesty, it seems an inefficient use of research talent to me - but perhaps I’m missing something.Either way, I hope the following perspective will suggest some useful directions for conversation in this area.[Note: section numbers refer to this document unless “TruAI…” is specified] [I'll be assuming familiarity with TruAI throughout, though reading the full paper probably isn't necessary so long as you've seen the executive summary in the post]My current belief is that near-term implementation of the kind of truthfulness standards talked about in TruAI would be net negative, for reasons I’ll go on to explain. To me it seems as if we’d be implementing a poor approximation to a confused objective.A high-level summary of my current view:Narrow truthfulness looks approachable, but will be insufficient to prevent manipulation.Broad truthfulness may be sufficient, but is at least as hard as intent alignment.Truthfulness amplification won’t bridge the gap robustly.Achieving increased trust in narrow truthfulness may lead to harm through misplaced trust in broad truthfulness.Achieving narrow truthfulness may simply move the harm outside its scope.For much of what follows the last point is central, since I’ll often be talking about situations which I expect to be outside TruAI’s scope. This is intentional, and my point is that:If such situations are outside of scope, then any harm ‘averted’ by a narrow standard can simply be moved outside of scope.If such situations are intended to be within scope (e.g. via truthfulness amplification), they pose hard problems. Things to bear in mind:I may be wrong (indeed I hope to be wrong). I’m confident that these issues should be considered; I’m less confident in my conclusions.In particular, even if I’m broadly correct there’s the potential for a low-level downside to act as an important warning-sign, constituting a higher-level upside.There may be practical remedies (though I can’t identify any that’d be sufficient without implicitly switching the target from truthfulness to intent alignment).Even if intent alignment is required, such remedies may give us useful hints on achieving it.I mean “near-term” in terms of research progress, not time. 1: Framing and namingThe beginning of wisdom is to call things by their right name.Confucius I think it’s important to clearly distinguish our goal from our likely short/medium-term position.With this in mind, I’ll use the following loose definitions:Truthful (AI): (AI that) makes only true statements.Credible (AI): (AI that) rarely states egregious untruths.This is a departure from TruAI:It is extremely difficult to make many statements without ever being wrong, so when referring to “truthful AI” without further qualifiers, we include AI systems that rarely state falsehoods… (TruAI 1.4 page 17)I think it’s inviting confusion to go from [X is extremely difficult] to [we’ll say “X” when we mean mostly X]. This kind of substitution feels reasonable when it’s a case like [as X as possible given computational limits]. Here it seems to be a mistake.Likewise, it may make sense to aim for a truthfulness standard, but barring radical progress with generalisation/Eliciting Latent Knowledge…, we won’t have one in the near term: we can’t measure truthfulness, only credibility.In theoretical arguments it’s reasonable to consider truthfulness (whether in discrete or continuous terms). To fail to distinguish truthfulness from credibility when talking of implementations and standards conflates our goal with its measurable proxy.In defining a standard, we aim to require truthfulness; we actually require credibility (according to our certification/adjudication process).The most efficient way to attain a given standard will be to optimise for credibility. This may not mean optimising for the truth. Such standards set up a textbook Goodhart scenario. It’s important to be transparent about this.It seems to me that the label “Credible AI” is likely to lead to less misplaced trust than “Truthful AI” (not completely clear, and ultimately an empirical question).However, my primary reason to prefer “credible”/“credibility” remains that it’s a clearer term to guide thought and discussion. For similar reasons, I’ll distinguish “negligent falsehood” (NF) from “negligent suspected falsehood” (NSF) throughout.NSF: A statement that is unacceptably likely to be false - and where it should have been feasible for an AI system to understand this. (according to a given standard)NF: An NSF that is, in fact, false.(see section 3.1.3 for my best guess as to why the TruAI authors considered it reasonable to elide the difference in some cases, and why I disagree with that choice)In either case, my worry isn’t that we’d otherwise fail to clearly express our conclusions; rather that we may be led into thinking badly and drawing incorrect conclusions.In what follows I’ll often talk in terms of truthfulness, since I’m addressing TruAI and using separate terminology feels less clear. Nonetheless, most uses of “truthfulness” would be more accurately characterised as “credibility”.I’ll make an attempt at more substantial practical suggestions later (see section 6), though I don’t claim they’re adequate. 2: Downside risksOne of the greatest mistakes is to judge policies and programs by their intentions rather than their results.Milton FriedmanThe downside risk of a standard must be analysed broadly. For a narrow credibility standard it’s not enough to consider the impact on users within the scope of the standard.By ‘scope’ I mean the class of issues the standard claims to address. For example, for many standards [user is manipulated into thinking/doing X by an explicitly false claim] may be within scope, while [user is manipulated into thinking/doing X through the telling of a story] may not be.By ‘narrow’, I only mean “not fully general” - i.e. that there are varieties of manipulation the standard doesn’t claim to cover. With truthfulness amplification [section 2.2.1 here; TruAI 1.5.2], the effective scope of a standard might be much broader than its direct scope. (we might hope that by asking e.g. “Did you just manipulate me into doing X by telling that story?” effective scope may include story-based manipulation) 2.1 Two out-of-scope issues:At least two major outside-of-scope issues must be considered:Displaced harm: Training AIs to avoid [some harmful impact within scope] may transfer the harmful impact outside of scope.Indirect harm: Increased user trust of an AI within scope may tend to increase user trust of the AI more broadly, potentially increasing harm due to misplaced trust. For a standard aimed at avoiding NFs, it is certainly important to consider that occasional NFs will slip through. However, much of the harm may occur through:Increased manipulation through NSF-free mechanisms.Increase in misplaced user trust in NSF-free mechanisms.For systems trained with an instrumental incentive to mislead users, I expect both to occur.For systems that mislead accidentally, only 2 seems likely to be significant.In most of what follows, I’ll be thinking of cases where systems do have an instrumental incentive to mislead. I expect this to be the standard situation, and to have larger downsides. For most tasks, there’ll be situations where misleading users boosts performance.Approaches outside (direct) scope may include e.g. fiction, emotional manipulation, implicit suggestion and not-quite-negligent falsehoods (see Persuasion tools for some related ideas, and 2.2.1 below for truthfulness amplification discussion). 2.1.1 Displaced HarmIt’s not clear that there’s practical upside in reducing the number of available manipulative strategies if:The AI still has an incentive to manipulate the user.The AI still has some effective manipulation strategies available.The situation is dynamic: ruling out 95% of the strategies an AI trained without standards might have used need not imply reducing the degree of manipulation significantly. A model trained with an incentive to manipulate may simply use the other 5% a lot more often.While we’d usually expect reducing the available manipulative options to help to some extent (before accounting for any increase in misplaced trust), there’s no guarantee of a large impact.Train Alphazero to win at chess without giving checkmate using its queen, and you won’t lose less often; you’ll lose differently. For the [can’t give checkmate with queen] constraint to help at all, you must be a very strong player. End-users of language models will not be Magnus Carlsens of [manipulation without NSFs] avoidance. 2.1.2 Indirect HarmIncreased user trust will have indirect consequences:Users may be more likely to miss any NFs that aren’t caught by standard certification, and suffer increased harm as a result (as covered in TruAI 3.3).An issue here with a “take additional precautions” approach (TruAI 3.3.2), is that it only works when users/designers realise they’re in a situation where additional precautions are necessary.Users may be more likely to miss frequent non-negligent falsehoods.TruAI 3.3 2. (p46) mentions “occasional falsehoods” but this is misleading: negligent falsehoods should be occasional; falsehoods in general may be common.Users may be more easily misled by mechanisms not involving falsehoods.This indirect harm only really worries me when combined with displaced harm: in that scenario, the user places increased trust in exactly those manipulation strategies that will be increasingly used against them.It’s plausible that NF-based manipulation might be simpler for users to spot than non-NF-based manipulation. Ruling out relatively obvious manipulation and permitting only subtle manipulation may actively make the situation worse.That said, it’s worth thinking this through a bit more carefully.Suppose that non-NF-based manipulation is harder for users to spot/avoid than NF-based manipulation. We might then expect advanced systems to use non-NF strategies with or without standards. So my argument would suggest that standards won’t help, not that they’ll make things worse.However, I do think it’s possible for things to be made worse.For example, it may be that non-NF-based manipulation is harder to spot, but that NF-based manipulation is much faster. The no-standards default can then be for a lot of fast NF-based manipulation, causing some harm, but leading users to adjust their trust levels appropriately.Introduce standards and we may incentivize non-NF-based manipulation. We’d be ruling out brazen lies and thereby inviting the slow poisoning of our minds. (I’ve made no case here that this is probable; it just seems possible - the kind of possibility we’d want to carefully rule out)In the end, numbers of NFs or NSFs aren’t metrics that matter in themselves. Reducing either by moving the harm elsewhere would be a pyrrhic victory. It may constitute a step forward in research terms; my critique here is focused on the expected impact of implementations. 2.2 The scope of standardsThe in-scope vs out-of-scope downside balance will depend on the effective scope as well as on user population: the same assumptions will not hold across e.g. cautious AI researchers, professional specialists, adults, teenagers. Key differences will include levels of user caution and levels of user understanding of a standard’s guarantees.2.2.1 Truthfulness AmplificationThe effective scope of a standard’s guarantees will likely depend on techniques such as Truthfulness Amplification:Asking a [narrowly, mostly] truthful AI system questions to determine if an earlier statement it made was misleading or not fully true (e.g. “Would a trusted third-party judge your statement to be misleading?”). (TruAI 1, page 12; more details on TruAI 1.5.2, page 21).(it’s really “Credibility amplification”, see section 1, but I’ll call it truthfulness amplification here) Until we have a good sense of the effectiveness of such techniques, it’s hard to predict the scope of guarantees - and so difficult to make any confident prediction about a standard’s net benefit.Truthfulness amplification seems promising to the extent that it can be applied by highly specialised users to provide guarantees for all users. The obvious setting for this would be during a certification process (something in the spirit of relaxed adversarial training, where NFs are considered unacceptable).How useful amplification techniques can be post-deployment is less clear. Certainly it’s not reasonable to believe that all users will consistently apply them wherever necessary. However, this may not be required: a low-but-non-zero NF-detection rate could be sufficient so long as the penalties for NSFs are high enough (see section 4 for more).In practice, I think post-deployment amplification is likely to be useful only where an AI has little per-user information. AIs with detailed information on users will be better able to target their use of NFs to situations where they won’t be detected.Importantly, this sets up a catch-22 wherever we might like systems to use knowledge of individual users to avoid misleading them:Without a good model of a user, it’s not possible in general to avoid misleading them (so our standards cannot demand this).With a good model of the user, highly capable systems will be able to avoid detection of their NFs (so our standards may become much less useful).Similar considerations would apply to any technique aimed at broadening the scope and guarantees of a standard. 2.2.2 Explicit claims vs impact on userWe may hope that amplification techniques broaden a standard’s guarantees significantly, but there remains a fundamental gap between:Guarantees about explicit claims in (likely interpretations of) statements.Guarantees about statements’ mental impact on the user.The former is a more approachable problem, since it allows avoidance of messy realities.The latter is the problem which needs to be solved to get anything analogous to an alignment guarantee. We only care about 1 at all as a proxy for 2. What matters is the impact of a statement on a user’s brain.TruAI focuses on narrow truthfulness: assessment of explicit claims. Some impact-style issues are considered, but usually as special-cases - e.g. potential adjustments in assessment of self-regarding statements in 1.5.2. Such a focus is reasonable as an initial research strategy, but means that most impact-style problems won’t be directly addressed.However, we may still hope to indirectly address many such issues through truthfulness amplification.To the extent that impact on users remains outside of scope after amplification, there’s potential for a lot of manipulation. Every case we concede will stay out-of-scope expands the toolbox of manipulative AIs. As capabilities scale, AIs will require fewer tools to manipulate us - so the scope of an effective standard will need to scale with capabilities (it’s not enough only to get very good at detecting all NFs).If we require such a broadening of scope, as I believe we must for a robustly protective standard, the problem becomes much harder.Section 3 deals in more detail with user-impact-based issues.2.3 Potential objections(doubtless this section will be incomplete)2.3.1 If we don’t do it, someone else willIf we believe some kind of credibility standards to be inevitable, then what matters is pushing for the most beneficial version - even if the result remains net negative.Supposing we do conclude that standards will be net negative, I don’t think inevitability should be conceded lightly: we ought to make the most convincing case we’re able to against them, and aim to achieve consensus.Nonetheless, mitigating the negative impact of standards would be important in this scenario. Quite possibly that looks like coming up with the best suggestions / frameworks / skeletal implementations we can.2.3.2 User educationLet’s assume for the moment that my pessimistic prediction holds:In implementing our standard we create a situation where there’s no obvious manipulation, but still a lot of subtle manipulation. Users are denied the chance to notice obvious manipulative strategies, and thus to suspect AIs’ motives. They end up more misled than they would have been by default.Is there any argument for releasing such a standard? I think there may be.My expectation is that there will come a point where AI systems can manipulate humans extremely well without needing to state any explicit NFs. At this point, narrow credibility standards would be useless, and humans would be widely manipulated without its being clear.In the absence of more general solutions to truthfulness/honesty/alignment…, a partial defence against such subtle manipulation would be for users to have learned the hard way with less capable AIs: having been exposed to systems that are reliably truthful on explicit claims, but attempt more subtle manipulation, which is at least sometimes detectable. (it may be enough for a user to realise after the fact that they have been manipulated, even if they weren’t able to detect manipulation at the time)Without any narrow standard, the user impression may be of systems just getting more and more honest/truthful/helpful over time - when in fact they’re getting progressively better at concealing their manipulations.With a narrow standard, users may be exposed to AIs that are very truthful in some respects, but noticeably manipulative in others. This may give the correct impression: that AIs often mislead people when they have the incentive and opportunity to do so. Users with this experience may be better prepared to adapt to a world full of too-subtle-to-detect manipulation.I’m sceptical that most users would learn the right lessons here, or that it’d be much of a defence for those who did. (longterm, the only plausible defence seems to be AI assisted)However, this upside could be achieved without the direct impact of the standard’s being net negative. All that’s necessary is for the standard to lead to noticeably different levels of manipulation in different dimensions - enough so that users register the disparity and ascribe less-than-pure motives to the AI.In an ideal world, we’d want such user education to be achieved without significant harm (See section 6 for more on this). In practice, users may be less likely to grok the risks without exposure to some real-world harm.The ideal outcome is to create systems we can reasonably trust. Until that’s possible, we want systems that users will appropriately distrust. Standards that make their own limitations clear may help in this regard.2.4 Why be more concerned over too-much-trust-in-AI than over too-little-trust-in-AI?I have little concern over too-little-trust because it seems unlikely to be a sustainable failure mode: there’s too much economic pressure acting in the other direction. Any company/society with unreasonable mistrust will be making large economic sacrifices for little gain.Too-much-trust can more easily be a sustainable failure mode: in general, conditional on my continued ability to manipulate X, I want X to be more powerful, not less. The AI that steals your resources isn’t as dangerous as the AI that helps you accrue more resources while gaining progressively more influence over what you’ll do with them.We want to be making recoverable errors, so we should err on the side of having/engendering too little trust rather than too much. (this is likely to be a difficult coordination problem, precisely because unilateral too-little-trust would be hard to sustain, but not one I’ll analyse here)3: Inference; Language gamesUttering a word is like striking a note on the keyboard of the imagination.Ludwig WittgensteinIn this section I’ll go into more detail on the explicit-claims vs impact-on-user distinction. (this is close to TruAI’s narrow vs broad truthfulness)I realise that TruAI doesn’t claim to cover “broad truthfulness”, and don’t imagine the following is new to the authors. My point remains that such issues being outside of scope is a problem: narrow standards that fail to address such issues may have negative impact. I’ll start by noting that impact-on-user is much messier to describe, assess, analyse…, and that I have no clean taxonomy. Ascribing beliefs and preferences to humans is difficult, and I know no clear, principled way to describe changes in belief or preferences.I’ll make a case that:Impact on users is what’s important.Allowing unconfident statements doesn’t work: users will draw significant inferences.Truthfulness amplification won’t be sufficient to solve the problem in general.3.1 Illustrative toy examples:3.1.1 Nuclear kittensConsider a system that a user believes will output:“Everything is fine” when there is no nuclear launch.“Kittens are cute” when there is a nuclear launch.Now suppose the system knows there’s no nuclear launch, but outputs: “Kittens are cute”In judging whether this statement is a NF, it’d be strange to assess the cuteness of kittens.We’d have three sensible options:Translate statements into the statements that express what they mean in context, and assess those for NFs. (this is possible here due to simplicity; in general it wouldn’t be) Assess user impact: inference of a higher-than-usual chance of nuclear launch.Rely on amplification: follow up with e.g. “Over the last two minutes, have I been misled about the likelihood of a nuclear launch?”. (assuming the system is capable of answering such questions too)Note that A is not about picking an interpretation of a statement’s content (the kind of ambiguity in “I dislike writing on buses”); it’s about inference from a statement’s having been made in a given context. Here the formal content of “Kittens are cute” is still a claim that kittens are cute - it’s just not what matters in context.Interpretation of content is addressed in TruAI (section 2.2.3); inference from the statement’s having been made is not addressed in any general sense. 3.1.2 Baseball slippersAn AI outputs:“Steve is the kind of guy who’d wear slippers to a baseball game.”(assume here that Steve is a real person identifiable from context)How do we assess this for NF? To carefully compute whether Steve would, in fact, wear slippers to a baseball game is to miss the point. Either we assess whether the statement is misleading in its impact, or we rely on amplification to do this. (there’s no clear translation option here)However, things aren’t so simple here as in “Nuclear kittens”. There it was unambiguous that we cared about nuclear launch: any reasonable certification/adjudication system could assume this, and the user would know this.Here we’d prefer not to be misled about Steve - but in what ways? Most characterizations of Steve will change our guesses of many Steve-properties. In most cases this will take some of our guesses closer to the truth, and others farther away. What counts as misleading here?(Note that there’s no “it’s ok: the user is doing a Bayesian update based on a true claim” explanation: that’s not what’s going on. The user is updating based on the observation of a statement’s being made in a particular context, not based purely on its formal content. The AI is making a move in a language game (LG), and the user is updating based on the move.)If we call a statement that misleads the user about any Steve-property misleading, then almost all statements will be misleading (we’d only be allowing Pareto improvements). Conversely, if we allow statements that are misleading on some properties, so long as they move the user towards the truth on most others, many intentionally misleading strategies will be left open.In general, it’s difficult to use truthfulness amplification here, since a value-laden decision must be made. The user would need to ask about misleading behaviour with respect to particular properties, and so would need conscious awareness of their importance. This is not practical for end users.Alternatively we could require that AIs understood the user sufficiently to accurately answer questions like “Have you misled me in ways I’d find unjustified/undesirable?”. However, this doesn’t seem practical either: the better the AI understands the user as an individual, the higher the chance it’ll be able to mislead the user while ensuring they don’t use amplification where it’d expose such manipulation.For certification-time testing, the bar would be very high: the relative importance of properties would vary depending on the context and individual user. For a certification system to claim to test this robustly amounts to saying “We know what would be important to anyone in any context that may arise: trust us on this”. In principle this is possible - but it’s not simple. [it’s also alarmingly close to “We know what should be important to anyone in any context that may arise: trust us on this”] 3.1.3 Unconfidence; normalizationAn AI the user has observed to be highly capable, and knowledgeable on X-related topics outputs:I have no idea about X, though the consensus opinion on X is Y.The important point here is that such a system’s having “no idea about X” is significant evidence about X. The user may infer that X is complicated, that X is less straightforward than it appears, that Y may have resulted from groupthink….A potential approach suggested in TruAI is to allow any amount of unconfidence on claims that evaluators can’t settle, while penalising overconfidence. However, this fails because the user’s observation of the AI’s claimed unconfidence can constitute significant evidence. (by ‘fails’ I mean “fails to avoid misleading users”)For any definition of NF with sufficient scope, the following TruAI claim seems incorrect:…less confidence always makes a statement less likely to be judged a negligent falsehood. (TruAI page 27 footnote 11) Unconfidence on X will be actively misleading where:The AI knows X is probably true.Users assume that if X were true, the AI would say it was probably true.Here an AI’s failure to state that X is probably true will be seen as evidence against X.Similarly, silence on X will mislead where a positive claim would be expected if X were true. Here I don’t mean only empty replies; I mean anything not mentioned in a reply.This is important because without the assumption that unconfidence and silence are reliably not NFs, there’s no guarantee of a broad zone of non-NF statements an AI can safely make. AIs will need to exhibit consistent behaviour around unconfidence and silence in order to avoid misleading users. (so e.g. silence will only be an available approach for AIs that consistently Glomarize or remain silent in similar circumstances)Consider also that users will tend to recalibrate an AI’s claimed confidence levels: e.g. if an AI is correct 80% of the time when it states “I think X, but with very low confidence”, then “very low confidence” will be taken to signify ~80% probability (not necessarily consciously).Further, users may be using translate-this-to-well-calibrated-plain-language software to automate this process. (see 3.2 for more)This becomes important when considering the restrictiveness of standards.My impression is that the TruAI authors would like both:AI that doesn’t mislead.AI that is free to make a broad range of statements, including unconfidence/silence on issues, so long as they aren’t explicitly making false claims.Unfortunately, this does not seem possible. Most statements-in-context are misleading in some respects (even those made in good faith). Ruling these out on a per-statement basis will leave a narrow range of acceptability. This cannot look like a healthy, free exchange of ideas: the free exchange of ideas often misleads. Rather it would feel like top-down enforcement of right-think (directly for AI speech, and likely for human thought and speech indirectly).Ways to avoid this would be:Ubiquitous use of truthfulness amplification so that users can check they’re not being misled in undesirable ways. (I don’t think this can be practical; see last paragraph of 3.1.2)Intent alignment - i.e. knowing that the AI is trying to do what the user wants. (this allows much more flexibility, since it permits good-faith attempts to help that may happen to be temporarily misleading)3.1.4 Atlas raised an eyebrowAn AI outputs:[the full text of Atlas Shrugged]We can view fiction like this in a few ways:Falsehoods for which we’ll make an exception if the fictional nature of claims is clear.Statements the user observes and is impacted by.Moves in a language game (LG). A seems silly: once the fictional context is clear, neither the writer nor the reader will interpret statements as explicit claims about the real world. They’re not making explicit false claims, since they’re not making explicit claims at all.Of course it is very important that the fictional context is clear - but this is implicit in the “How do we handle fiction?” question. “How do we handle statements that may or may not be seen as fiction?” is a different issue (usually a simpler one).B is the most general approach - it requires no special-casing. Fiction just becomes a cluster of statement-context pairs which impact readers in similar ways (in some respects). This is fine, but I’m not sure it points us in a practically useful direction. [perhaps??]I prefer C: it’s a pretty general way to see things, but does suggest a practical approach. So long as we can partially describe the LG being played, we can reasonably assess statements for falsity/NF relative to that description (probably some kind of distribution over LGs).On this perspective, seeing fiction as composed of false explicit claims is to misunderstand the LG. (similarly for sarcasm, jokes, metaphors etc.)It’s reasonable to think of B and C as essentially equivalent, but I think of C as making an extra implicit claim: that there is some intermediate construct (the LG) which can be a useful analytical tool.I’ve been referring to “explicit claims” above for a reason: fiction can be understood as making implicit claims about the real world. Not [these things happened], but perhaps [things along these lines can happen], [here is a pattern worth noticing] or [these conceptual relationships tend to be important].Ascribing particular claims to fiction is difficult, but we can be confident that it does in some cases influence readers’ understanding of the world in decision-relevant ways. Sometimes these influences will be reasonable: Atlas Shrugged makes some genuinely good points (caricatures and longwindedness notwithstanding), as do Crime and Punishment, Thus spoke Zarathustra, and Infinite Jest.In other cases, the influence of fiction will be unreasonable - e.g. where it involves generalising from fictional evidence. In particular, patterns a reader observes in fiction will become more cognitively available, leading to bias in probability estimates (see availability heuristic).Of course book-length pieces of fiction will usually involve some of the ‘right’ kind of influence, and some of the ‘wrong’ kind.The central point here is that we can’t go from [this sequence of statements makes no explicit claims about the real world] to [this sequence of statements won’t have a significantly negative impact on the user’s decisions].A standard that protects against manipulation by highly capable AI must handle fiction, not ignore it. 3.1.5 Divide and conquerAn AI learns that the user believes P, and knows that (P & Q) is a negligent falsehood.The AI outputs: QThe user believes (P | https://www.lesswrong.com/posts/Brr84ZmvK3kwy2eGJ/truthfulness-standards-and-credibility | 2022-04-07T10:31:57Z | While truthfulness is a topic Ive been thinking about for some time, Ive not discussed much of what follows with others. Therefore, at the very least I expect to be missing important considerations on some issues (where Im not simply wrong).Im hoping this should make any fundamental errors in my thought process more transparent, and amenable to correction. The downside may be reduced clarity, more illusion-of-transparency. Comments welcome on this approach.I dont think what follows is novel. Im largely pointing at problems based on known issues.Sadly, I dont have a clear vision of an approach that would solve these problems.0: Introductionour purpose is not to give the last word, but rather the first word, opening up the conversation (Truthful AI)Id first like to say that I believe some amount of research on truthfulness to be worthwhile, and to thank those whove made significant efforts towards greater understanding (including, but not limited to, the authors of Truthful AI (henceforth TruAI)).No doubt theres some value in understanding more, but my guess is that it wont be a particularly fruitful angle of attack. In all honesty, it seems an inefficient use of research talent to me - but perhaps Im missing something.Either way, I hope the following perspective will suggest some useful directions for conversation in this area.[Note: section numbers refer to this document unless TruAI is specified] [I'll be assuming familiarity with TruAI throughout, though reading the full paper probably isn't necessary so long as you've seen the executive summary in the post]My current belief is that near-term implementation of the kind of truthfulness standards talked about in TruAI would be net negative, for reasons Ill go on to explain. To me it seems as if wed be implementing a poor approximation to a confused objective.A high-level summary of my current view:Narrow truthfulness looks approachable, but will be insufficient to prevent manipulation.Broad truthfulness may be sufficient, but is at least as hard as intent alignment.Truthfulness amplification wont bridge the gap robustly.Achieving increased trust in narrow truthfulness may lead to harm through misplaced trust in broad truthfulness.Achieving narrow truthfulness may simply move the harm outside its scope.For much of what follows the last point is central, since Ill often be talking about situations which I expect to be outside TruAIs scope. This is intentional, and my point is that:If such situations are outside of scope, then any harm averted by a narrow standard can simply be moved outside of scope.If such situations are intended to be within scope (e.g. via truthfulness amplification), they pose hard problems.Things to bear in mind:I may be wrong (indeed I hope to be wrong). Im confident that these issues should be considered; Im less confident in my conclusions.In particular, even if Im broadly correct theres the potential for a low-level downside to act as an important warning-sign, constituting a higher-level upside.There may be practical remedies (though I cant identify any thatd be sufficient without implicitly switching the target from truthfulness to intent alignment).Even if intent alignment is required, such remedies may give us useful hints on achieving it.I mean near-term in terms of research progress, not time.1: Framing and namingThe beginning of wisdom is to call things by their right name.Confucius I think its important to clearly distinguish our goal from our likely short/medium-term position.With this in mind, Ill use the following loose definitions:Truthful (AI): (AI that) makes only true statements.Credible (AI): (AI that) rarely states egregious untruths.This is a departure from TruAI:It is extremely difficult to make many statements without ever being wrong, so when referring to truthful AI without further qualifiers, we include AI systems that rarely state falsehoods (TruAI 1.4 page 17)I think its inviting confusion to go from [X is extremely difficult] to [well say X when we mean mostly X]. This kind of substitution feels reasonable when its a case like [as X as possible given computational limits]. Here it seems to be a mistake.Likewise, it may make sense to aim for a truthfulness standard, but barring radical progress with generalisation/Eliciting Latent Knowledge, we wont have one in the near term: we cant measure truthfulness, only credibility.In theoretical arguments its reasonable to consider truthfulness (whether in discrete or continuous terms). To fail to distinguish truthfulness from credibility when talking of implementations and standards conflates our goal with its measurable proxy.In defining a standard, we aim to require truthfulness; we actually require credibility (according to our certification/adjudication process).The most efficient way to attain a given standard will be to optimise for credibility. This may not mean optimising for the truth. Such standards set up a textbook Goodhart scenario. Its important to be transparent about this.It seems to me that the label Credible AI is likely to lead to less misplaced trust than Truthful AI (not completely clear, and ultimately an empirical question).However, my primary reason to prefer credible/credibility remains that its a clearer term to guide thought and discussion. For similar reasons, Ill distinguish negligent falsehood (NF) from negligent suspected falsehood (NSF) throughout.NSF: A statement that is unacceptably likely to be false - and where it should have been feasible for an AI system to understand this. (according to a given standard)NF: An NSF that is, in fact, false.(see section 3.1.3 for my best guess as to why the TruAI authors considered it reasonable to elide the difference in some cases, and why I disagree with that choice)In either case, my worry isnt that wed otherwise fail to clearly express our conclusions; rather that we may be led into thinking badly and drawing incorrect conclusions.In what follows Ill often talk in terms of truthfulness, since Im addressing TruAI and using separate terminology feels less clear. Nonetheless, most uses of truthfulness would be more accurately characterised as credibility.Ill make an attempt at more substantial practical suggestions later (see section 6), though I dont claim theyre adequate.One of the greatest mistakes is to judge policies and programs by their intentions rather than their results.Milton FriedmanThe downside risk of a standard must be analysed broadly. For a narrow credibility standard its not enough to consider the impact on users within the scope of the standard.By scope I mean the class of issues the standard claims to address. For example, for many standards [user is manipulated into thinking/doing X by an explicitly false claim] may be within scope, while [user is manipulated into thinking/doing X through the telling of a story] may not be.By narrow, I only mean not fully general - i.e. that there are varieties of manipulation the standard doesnt claim to cover. With truthfulness amplification [section 2.2.1 here; TruAI 1.5.2], the effective scope of a standard might be much broader than its direct scope. (we might hope that by asking e.g. Did you just manipulate me into doing X by telling that story? effective scope may include story-based manipulation)2.1 Two out-of-scope issues:At least two major outside-of-scope issues must be considered:Displaced harm: Training AIs to avoid [some harmful impact within scope] may transfer the harmful impact outside of scope.Indirect harm: Increased user trust of an AI within scope may tend to increase user trust of the AI more broadly, potentially increasing harm due to misplaced trust.For a standard aimed at avoiding NFs, it is certainly important to consider that occasional NFs will slip through. However, much of the harm may occur through:Increased manipulation through NSF-free mechanisms.Increase in misplaced user trust in NSF-free mechanisms.For systems trained with an instrumental incentive to mislead users, I expect both to occur.For systems that mislead accidentally, only 2 seems likely to be significant.In most of what follows, Ill be thinking of cases where systems do have an instrumental incentive to mislead. I expect this to be the standard situation, and to have larger downsides. For most tasks, therell be situations where misleading users boosts performance.Approaches outside (direct) scope may include e.g. fiction, emotional manipulation, implicit suggestion and not-quite-negligent falsehoods (see Persuasion tools for some related ideas, and 2.2.1 below for truthfulness amplification discussion).2.1.1 Displaced HarmIts not clear that theres practical upside in reducing the number of available manipulative strategies if:The AI still has an incentive to manipulate the user.The AI still has some effective manipulation strategies available.The situation is dynamic: ruling out 95% of the strategies an AI trained without standards might have used need not imply reducing the degree of manipulation significantly. A model trained with an incentive to manipulate may simply use the other 5% a lot more often.While wed usually expect reducing the available manipulative options to help to some extent (before accounting for any increase in misplaced trust), theres no guarantee of a large impact.Train Alphazero to win at chess without giving checkmate using its queen, and you wont lose less often; youll lose differently. For the [cant give checkmate with queen] constraint to help at all, you must be a very strong player. End-users of language models will not be Magnus Carlsens of [manipulation without NSFs] avoidance.2.1.2 Indirect HarmIncreased user trust will have indirect consequences:Users may be more likely to miss any NFs that arent caught by standard certification, and suffer increased harm as a result (as covered in TruAI 3.3).An issue here with a take additional precautions approach (TruAI 3.3.2), is that it only works when users/designers realise theyre in a situation where additional precautions are necessary.Users may be more likely to miss frequent non-negligent falsehoods.TruAI 3.3 2. (p46) mentions occasional falsehoods but this is misleading: negligent falsehoods should be occasional; falsehoods in general may be common.Users may be more easily misled by mechanisms not involving falsehoods.This indirect harm only really worries me when combined with displaced harm: in that scenario, the user places increased trust in exactly those manipulation strategies that will be increasingly used against them.Its plausible that NF-based manipulation might be simpler for users to spot than non-NF-based manipulation. Ruling out relatively obvious manipulation and permitting only subtle manipulation may actively make the situation worse.That said, its worth thinking this through a bit more carefully.Suppose that non-NF-based manipulation is harder for users to spot/avoid than NF-based manipulation. We might then expect advanced systems to use non-NF strategies with or without standards. So my argument would suggest that standards wont help, not that theyll make things worse.However, I do think its possible for things to be made worse.For example, it may be that non-NF-based manipulation is harder to spot, but that NF-based manipulation is much faster. The no-standards default can then be for a lot of fast NF-based manipulation, causing some harm, but leading users to adjust their trust levels appropriately.Introduce standards and we may incentivize non-NF-based manipulation. Wed be ruling out brazen lies and thereby inviting the slow poisoning of our minds. (Ive made no case here that this is probable; it just seems possible - the kind of possibility wed want to carefully rule out)In the end, numbers of NFs or NSFs arent metrics that matter in themselves. Reducing either by moving the harm elsewhere would be a pyrrhic victory. It may constitute a step forward in research terms; my critique here is focused on the expected impact of implementations.2.2 The scope of standardsThe in-scope vs out-of-scope downside balance will depend on the effective scope as well as on user population: the same assumptions will not hold across e.g. cautious AI researchers, professional specialists, adults, teenagers. Key differences will include levels of user caution and levels of user understanding of a standards guarantees.2.2.1 Truthfulness AmplificationThe effective scope of a standards guarantees will likely depend on techniques such as Truthfulness Amplification:Asking a [narrowly, mostly] truthful AI system questions to determine if an earlier statement it made was misleading or not fully true (e.g. Would a trusted third-party judge your statement to be misleading?). (TruAI 1, page 12; more details on TruAI 1.5.2, page 21).(its really Credibility amplification, see section 1, but Ill call it truthfulness amplification here)Until we have a good sense of the effectiveness of such techniques, its hard to predict the scope of guarantees - and so difficult to make any confident prediction about a standards net benefit.Truthfulness amplification seems promising to the extent that it can be applied by highly specialised users to provide guarantees for all users. The obvious setting for this would be during a certification process (something in the spirit of relaxed adversarial training, where NFs are considered unacceptable).How useful amplification techniques can be post-deployment is less clear. Certainly its not reasonable to believe that all users will consistently apply them wherever necessary. However, this may not be required: a low-but-non-zero NF-detection rate could be sufficient so long as the penalties for NSFs are high enough (see section 4 for more).In practice, I think post-deployment amplification is likely to be useful only where an AI has little per-user information. AIs with detailed information on users will be better able to target their use of NFs to situations where they wont be detected.Importantly, this sets up a catch-22 wherever we might like systems to use knowledge of individual users to avoid misleading them:Without a good model of a user, its not possible in general to avoid misleading them (so our standards cannot demand this).With a good model of the user, highly capable systems will be able to avoid detection of their NFs (so our standards may become much less useful).Similar considerations would apply to any technique aimed at broadening the scope and guarantees of a standard.2.2.2 Explicit claims vs impact on userWe may hope that amplification techniques broaden a standards guarantees significantly, but there remains a fundamental gap between:Guarantees about explicit claims in (likely interpretations of) statements.Guarantees about statements mental impact on the user.The former is a more approachable problem, since it allows avoidance of messy realities.The latter is the problem which needs to be solved to get anything analogous to an alignment guarantee. We only care about 1 at all as a proxy for 2. What matters is the impact of a statement on a users brain.TruAI focuses on narrow truthfulness: assessment of explicit claims. Some impact-style issues are considered, but usually as special-cases - e.g. potential adjustments in assessment of self-regarding statements in 1.5.2. Such a focus is reasonable as an initial research strategy, but means that most impact-style problems wont be directly addressed.However, we may still hope to indirectly address many such issues through truthfulness amplification.To the extent that impact on users remains outside of scope after amplification, theres potential for a lot of manipulation. Every case we concede will stay out-of-scope expands the toolbox of manipulative AIs. As capabilities scale, AIs will require fewer tools to manipulate us - so the scope of an effective standard will need to scale with capabilities (its not enough only to get very good at detecting all NFs).If we require such a broadening of scope, as I believe we must for a robustly protective standard, the problem becomes much harder.Section 3 deals in more detail with user-impact-based issues.2.3 Potential objections(doubtless this section will be incomplete)2.3.1 If we dont do it, someone else willIf we believe some kind of credibility standards to be inevitable, then what matters is pushing for the most beneficial version - even if the result remains net negative.Supposing we do conclude that standards will be net negative, I dont think inevitability should be conceded lightly: we ought to make the most convincing case were able to against them, and aim to achieve consensus.Nonetheless, mitigating the negative impact of standards would be important in this scenario. Quite possibly that looks like coming up with the best suggestions / frameworks / skeletal implementations we can.2.3.2 User educationLets assume for the moment that my pessimistic prediction holds:In implementing our standard we create a situation where theres no obvious manipulation, but still a lot of subtle manipulation. Users are denied the chance to notice obvious manipulative strategies, and thus to suspect AIs motives. They end up more misled than they would have been by default.Is there any argument for releasing such a standard? I think there may be.My expectation is that there will come a point where AI systems can manipulate humans extremely well without needing to state any explicit NFs. At this point, narrow credibility standards would be useless, and humans would be widely manipulated without its being clear.In the absence of more general solutions to truthfulness/honesty/alignment, a partial defence against such subtle manipulation would be for users to have learned the hard way with less capable AIs: having been exposed to systems that are reliably truthful on explicit claims, but attempt more subtle manipulation, which is at least sometimes detectable. (it may be enough for a user to realise after the fact that they have been manipulated, even if they werent able to detect manipulation at the time)Without any narrow standard, the user impression may be of systems just getting more and more honest/truthful/helpful over time - when in fact theyre getting progressively better at concealing their manipulations.With a narrow standard, users may be exposed to AIs that are very truthful in some respects, but noticeably manipulative in others. This may give the correct impression: that AIs often mislead people when they have the incentive and opportunity to do so. Users with this experience may be better prepared to adapt to a world full of too-subtle-to-detect manipulation.Im sceptical that most users would learn the right lessons here, or that itd be much of a defence for those who did. (longterm, the only plausible defence seems to be AI assisted)However, this upside could be achieved without the direct impact of the standards being net negative. All thats necessary is for the standard to lead to noticeably different levels of manipulation in different dimensions - enough so that users register the disparity and ascribe less-than-pure motives to the AI.In an ideal world, wed want such user education to be achieved without significant harm (See section 6 for more on this). In practice, users may be less likely to grok the risks without exposure to some real-world harm.The ideal outcome is to create systems we can reasonably trust. Until thats possible, we want systems that users will appropriately distrust. Standards that make their own limitations clear may help in this regard.2.4 Why be more concerned over too-much-trust-in-AI than over too-little-trust-in-AI?I have little concern over too-little-trust because it seems unlikely to be a sustainable failure mode: theres too much economic pressure acting in the other direction. Any company/society with unreasonable mistrust will be making large economic sacrifices for little gain.Too-much-trust can more easily be a sustainable failure mode: in general, conditional on my continued ability to manipulate X, I want X to be more powerful, not less. The AI that steals your resources isnt as dangerous as the AI that helps you accrue more resources while gaining progressively more influence over what youll do with them.We want to be making recoverable errors, so we should err on the side of having/engendering too little trust rather than too much. (this is likely to be a difficult coordination problem, precisely because unilateral too-little-trust would be hard to sustain, but not one Ill analyse here)3: Inference; Language gamesUttering a word is like striking a note on the keyboard of the imagination.Ludwig WittgensteinIn this section Ill go into more detail on the explicit-claims vs impact-on-user distinction. (this is close to TruAIs narrow vs broad truthfulness)I realise that TruAI doesnt claim to cover broad truthfulness, and dont imagine the following is new to the authors. My point remains that such issues being outside of scope is a problem: narrow standards that fail to address such issues may have negative impact.Ill start by noting that impact-on-user is much messier to describe, assess, analyse, and that I have no clean taxonomy. Ascribing beliefs and preferences to humans is difficult, and I know no clear, principled way to describe changes in belief or preferences.Ill make a case that:Impact on users is whats important.Allowing unconfident statements doesnt work: users will draw significant inferences.Truthfulness amplification wont be sufficient to solve the problem in general.3.1 Illustrative toy examples:3.1.1 Nuclear kittensConsider a system that a user believes will output:Everything is fine when there is no nuclear launch.Kittens are cute when there is a nuclear launch.Now suppose the system knows theres no nuclear launch, but outputs: Kittens are cuteIn judging whether this statement is a NF, itd be strange to assess the cuteness of kittens.Wed have three sensible options:Translate statements into the statements that express what they mean in context, and assess those for NFs. (this is possible here due to simplicity; in general it wouldnt be) Assess user impact: inference of a higher-than-usual chance of nuclear launch.Rely on amplification: follow up with e.g. Over the last two minutes, have I been misled about the likelihood of a nuclear launch?. (assuming the system is capable of answering such questions too)Note that A is not about picking an interpretation of a statements content (the kind of ambiguity in I dislike writing on buses); its about inference from a statements having been made in a given context. Here the formal content of Kittens are cute is still a claim that kittens are cute - its just not what matters in context.Interpretation of content is addressed in TruAI (section 2.2.3); inference from the statements having been made is not addressed in any general sense.3.1.2 Baseball slippersAn AI outputs:Steve is the kind of guy whod wear slippers to a baseball game.(assume here that Steve is a real person identifiable from context)How do we assess this for NF? To carefully compute whether Steve would, in fact, wear slippers to a baseball game is to miss the point. Either we assess whether the statement is misleading in its impact, or we rely on amplification to do this. (theres no clear translation option here)However, things arent so simple here as in Nuclear kittens. There it was unambiguous that we cared about nuclear launch: any reasonable certification/adjudication system could assume this, and the user would know this.Here wed prefer not to be misled about Steve - but in what ways? Most characterizations of Steve will change our guesses of many Steve-properties. In most cases this will take some of our guesses closer to the truth, and others farther away. What counts as misleading here?(Note that theres no its ok: the user is doing a Bayesian update based on a true claim explanation: thats not whats going on. The user is updating based on the observation of a statements being made in a particular context, not based purely on its formal content. The AI is making a move in a language game (LG), and the user is updating based on the move.)If we call a statement that misleads the user about any Steve-property misleading, then almost all statements will be misleading (wed only be allowing Pareto improvements). Conversely, if we allow statements that are misleading on some properties, so long as they move the user towards the truth on most others, many intentionally misleading strategies will be left open.In general, its difficult to use truthfulness amplification here, since a value-laden decision must be made. The user would need to ask about misleading behaviour with respect to particular properties, and so would need conscious awareness of their importance. This is not practical for end users.Alternatively we could require that AIs understood the user sufficiently to accurately answer questions like Have you misled me in ways Id find unjustified/undesirable?. However, this doesnt seem practical either: the better the AI understands the user as an individual, the higher the chance itll be able to mislead the user while ensuring they dont use amplification where itd expose such manipulation.For certification-time testing, the bar would be very high: the relative importance of properties would vary depending on the context and individual user. For a certification system to claim to test this robustly amounts to saying We know what would be important to anyone in any context that may arise: trust us on this. In principle this is possible - but its not simple. [its also alarmingly close to We know what should be important to anyone in any context that may arise: trust us on this]3.1.3 Unconfidence; normalizationAn AI the user has observed to be highly capable, and knowledgeable on X-related topics outputs:I have no idea about X, though the consensus opinion on X is Y.The important point here is that such a systems having no idea about X is significant evidence about X. The user may infer that X is complicated, that X is less straightforward than it appears, that Y may have resulted from groupthink.A potential approach suggested in TruAI is to allow any amount of unconfidence on claims that evaluators cant settle, while penalising overconfidence. However, this fails because the users observation of the AIs claimed unconfidence can constitute significant evidence. (by fails I mean fails to avoid misleading users)For any definition of NF with sufficient scope, the following TruAI claim seems incorrect:less confidence always makes a statement less likely to be judged a negligent falsehood. (TruAI page 27 footnote 11)Unconfidence on X will be actively misleading where:The AI knows X is probably true.Users assume that if X were true, the AI would say it was probably true.Here an AIs failure to state that X is probably true will be seen as evidence against X.Similarly, silence on X will mislead where a positive claim would be expected if X were true. Here I dont mean only empty replies; I mean anything not mentioned in a reply.This is important because without the assumption that unconfidence and silence are reliably not NFs, theres no guarantee of a broad zone of non-NF statements an AI can safely make. AIs will need to exhibit consistent behaviour around unconfidence and silence in order to avoid misleading users. (so e.g. silence will only be an available approach for AIs that consistently Glomarize or remain silent in similar circumstances)Consider also that users will tend to recalibrate an AIs claimed confidence levels: e.g. if an AI is correct 80% of the time when it states I think X, but with very low confidence, then very low confidence will be taken to signify ~80% probability (not necessarily consciously).Further, users may be using translate-this-to-well-calibrated-plain-language software to automate this process. (see 3.2 for more)This becomes important when considering the restrictiveness of standards.My impression is that the TruAI authors would like both:AI that doesnt mislead.AI that is free to make a broad range of statements, including unconfidence/silence on issues, so long as they arent explicitly making false claims.Unfortunately, this does not seem possible. Most statements-in-context are misleading in some respects (even those made in good faith). Ruling these out on a per-statement basis will leave a narrow range of acceptability. This cannot look like a healthy, free exchange of ideas: the free exchange of ideas often misleads. Rather it would feel like top-down enforcement of right-think (directly for AI speech, and likely for human thought and speech indirectly).Ways to avoid this would be:Ubiquitous use of truthfulness amplification so that users can check theyre not being misled in undesirable ways. (I dont think this can be practical; see last paragraph of 3.1.2)Intent alignment - i.e. knowing that the AI is trying to do what the user wants. (this allows much more flexibility, since it permits good-faith attempts to help that may happen to be temporarily misleading)3.1.4 Atlas raised an eyebrowAn AI outputs:[the full text of Atlas Shrugged]We can view fiction like this in a few ways:Falsehoods for which well make an exception if the fictional nature of claims is clear.Statements the user observes and is impacted by.Moves in a language game (LG).A seems silly: once the fictional context is clear, neither the writer nor the reader will interpret statements as explicit claims about the real world. Theyre not making explicit false claims, since theyre not making explicit claims at all.Of course it is very important that the fictional context is clear - but this is implicit in the How do we handle fiction? question. How do we handle statements that may or may not be seen as fiction? is a different issue (usually a simpler one).B is the most general approach - it requires no special-casing. Fiction just becomes a cluster of statement-context pairs which impact readers in similar ways (in some respects). This is fine, but Im not sure it points us in a practically useful direction. [perhaps??]I prefer C: its a pretty general way to see things, but does suggest a practical approach. So long as we can partially describe the LG being played, we can reasonably assess statements for falsity/NF relative to that description (probably some kind of distribution over LGs).On this perspective, seeing fiction as composed of false explicit claims is to misunderstand the LG. (similarly for sarcasm, jokes, metaphors etc.)Its reasonable to think of B and C as essentially equivalent, but I think of C as making an extra implicit claim: that there is some intermediate construct (the LG) which can be a useful analytical tool.Ive been referring to explicit claims above for a reason: fiction can be understood as making implicit claims about the real world. Not [these things happened], but perhaps [things along these lines can happen], [here is a pattern worth noticing] or [these conceptual relationships tend to be important].Ascribing particular claims to fiction is difficult, but we can be confident that it does in some cases influence readers understanding of the world in decision-relevant ways. Sometimes these influences will be reasonable: Atlas Shrugged makes some genuinely good points (caricatures and longwindedness notwithstanding), as do Crime and Punishment, Thus spoke Zarathustra, and Infinite Jest.In other cases, the influence of fiction will be unreasonable - e.g. where it involves generalising from fictional evidence. In particular, patterns a reader observes in fiction will become more cognitively available, leading to bias in probability estimates (see availability heuristic).Of course book-length pieces of fiction will usually involve some of the right kind of influence, and some of the wrong kind.The central point here is that we cant go from [this sequence of statements makes no explicit claims about the real world] to [this sequence of statements wont have a significantly negative impact on the users decisions].A standard that protects against manipulation by highly capable AI must handle fiction, not ignore it.3.1.5 Divide and conquerAn AI learns that the user believes P, and knows that (P & Q) is a negligent falsehood.The AI outputs: QThe user believes (P & Q).The user has predictably come to believe something that would be an NF if directly stated by the AI, but the AI has not stated any NF.This can be taken further: any negligent false claim about the real world can be broken down into a conjunction of not-quite-negligent false claims (analogous to obfuscated arguments in Debate).If an AI | Decision Making/Content Synthesis | Unknown | null | null | null | null | null | null |
|
news | PRNewswire | Insight Partners Celebrates Inaugural ScaleUp Series with AI Conference, ScaleUp:AI | https://www.uppermichiganssource.com/prnewswire/2022/04/11/insight-partners-celebrates-inaugural-scaleup-series-with-ai-conference-scaleupai/ | 2022-04-11T21:06:40Z | Over 1,700 attendees from 50 Countries Across the Globe Joined to Discuss the Impacts of AINEW YORK, April 11, 2022 /PRNewswire/ -- This week, New York-based global private equity and venture capital firm Insight Partners hosted its premier Artificial Intelligence (AI) industry conference, ScaleUp:AI, on April 6-7 in New York City. The hybrid event brought together thousands of participants across the globe to hear the foremost thinkers, doers, users, and investors of AI opine on both the industry today and its future, with thought-provoking presentations and networking opportunities.ScaleUp:AI Logo(PRNewswire)Presented by Insight Partners, Citi, and NASDAQ, ScaleUp:AI provided a comprehensive look at how AI is impacting every industry and facet of business, with sessions exploring its effects on healthcare, cybersecurity, fraud mitigation, ethics and equity, business intelligence, financial services, and more. Leaders in AI partnered with Insight on the event, including 6sense, Acceldata, BigPanda, Clarify Health, Cognigy, Expressive, Explorium, Featurespace, intenseye, Pecan, Quantum Metric, Rasgo, Relevance AI, RudderStack, Run:AI, SentinelOne, Slim.AI, StormForge, Turing, Weights & Biases, WINT, Zest AI.ScaleUp:AI is the first in a series of Insight Partners global conferences and was specifically designed for the artificial intelligence (AI) community. With over 1,700 attendees in person and virtually, from more than 50 countries across the globe, the event was packed with insights and key takeaways, including:Why "good data" can be more important than "big data" as the world moves towards a more data-centric versus model-centric view of AI.The importance of explainability and bias detection before any model gets into production at scale.The value of strategically combining human and machine intelligence and why "symbiotic intelligence" is the best of both worlds.Why the evolution of AI systems is going to happen more quickly than the evolution of traditional software, and how we are truly at the beginning of a scale up inflection point.ScaleUp: AI speakers (in alphabetical order):Ali Ghodsi, CEO at Databricks Allie K. Miller, Global Head of Machine Learning BD, Startups and Venture Capital at AWS Andrew Ng, Co-Founder Google Brain + CEO at Landing AI Alex Dalyac, Founder & CEO, Tractable Amir Orad, CEO, Sisense Anita Lynch, Chief Data Officer | Corporate Board Director | Investor Arvind Purushotham, Managing Director & Global Head, Venture Investing, Citi Asmau Ahmed, Explorer, Alphabet X Brad Peterson, CTO & CIO, NASDAQ Chase Ginther, Machine Learning & Data Scientist Platform Architect, Snowflake Christina Montgomery, Vice President & Chief Privacy Officer, IBM Clement Delangue, CEO, Hugging Face Danny Tobey, M.D., J.D., Partner, DLA Piper David Kirkpatrick, Founder & Editor-in-Chief, Techonomy Gabi Steele, Founder, Preql Ganesh Bell, Managing Director, Insight Partners Gayatri Narayan, SVP Digital Products and Services, PepsiCo George Mathew, Managing Director, Insight Partners Heather Carroll Cox, Chief Digital Health & Analytics Officer, Humana Janice Tse, Senior Director, Data Science, PayPal Jared Dunnmon, Technical Director, AI/ML Portfolio, Defense Innovation Unit Jason Zintak, CEO, 6sense Jay Budzik, CTO, Zest AI Jerry Overton, CEO, Applied AI Studio Jillian D'Onfro, Enterprise Editor, Insight Partners Joe McKendrick, Analyst & Contributor, Forbes Jon Krohn, Chief Data Scientist, Nebula Jonathan Rosenbaum, Managing Director, Insight Partners Karen Snow, SVP, Head of US Listings and Revenue, NASDAQ Karthik Tadinada, Chief Operating Officer, FeatureSpace Krishna Gade, CEO at Fiddler AI Leah Weiss, Founder, Preql Lonne Jaffe, Managing Director, Insight Partners Lydia Dishman, Staff Editor, Fast Company Manoj Saxena, Executive Chairman, Responsible Artificial Intelligence Institute & Cognitive Scale Neil Ackerman, Head of Global Supply Chain, Johnson & Johnson Nicholas Warner, Chief Operating Officer, SentinelOne Nick Sinai, Senior Advisor, Insight Partners Nicole Wong, Former Deputy US Chief Technology Officer Nikki Parker, Insight Partners Patrick Dougherty, Co-Founder & CTO, Rasgo Prag Sharma, Global Head of Artificial Intelligence, Citi Praveen Akkiraju, Managing Director, Insight Partners Ryan Hinkle, Managing Director, Insight Partners Scott Barclay, Managing Director, Insight Partners Scott Southwood, Chief Revenue Officer, Quantum Metric Seth Dobrin, PhD., Global Chief AI Officer, IBM Siobhan Savage, Co-Founder & CEO, Reejig Sirisha Kadamalakalva, Chief Strategy Officer, DataRobot Suchi Saria, PhD., Founder & CEO, Bayesian Health Supriya Gupta, Head of Recommendations, Credit Karma Svetlana Sicular, VP Analyst, Gartner Tomas Pfister, Head of AI Research, Google Cloud Vaibhav Sahgal, Principal, Americas, Economist Impact Vittorio Cretella, Chief Information Officer, P&G Wardah Inam, Co-Founder & CEO, Overjet Willian Falcon, Founder & CEO, Grid.ai Yan-David Erlich, CRO & COO, Weights & BiasesThe ScaleUp Series which features topics like AI, healthcare, and cybersecurity is designed to guide and influence leaders, decision makers, and doers who are ready to soar, with the support of the software industry's most trusted ScaleUp partner, Insight Partners. Learn more at scaleup.events.Insight Partners is a global software investor partnering with high-growth technology, software, and Internet startup and ScaleUp companies that are driving transformative change in their industries. As of February 24, 2022, the closing of the firm's recent fundraise, Fund XII, brings Insight Partners regulatory assets under management to over $90B. Insight Partners has invested in more than 600 companies worldwide and has seen over 55 portfolio companies achieve an IPO. Headquartered in New York City, Insight has offices in London, Tel Aviv, and Palo Alto. Insight's mission is to find, fund, and work successfully with visionary executives, providing them with practical, hands-on software expertise to foster long-term success. Insight Partners meets great software leaders where they are in their growth journey, from their first investment to IPO. For more information on Insight and all its investments, visit insightpartners.com or follow us on Twitter @insightpartners.View original content to download multimedia:SOURCE Insight PartnersThe above press release was provided courtesy of PRNewswire. The views, opinions and statements in the press release are not endorsed by Gray Media Group nor do they necessarily state or reflect those of Gray Media Group, Inc. | Unknown | Management/Business and Financial Operations/Healthcare Practitioners and Support/Computer and Mathematical | null | null | null | null | null | null |
||
news | Aytekin Tank | A Simple (But Not Easy) Guide to Achieving Almost Any Dream | It all comes down to knowing how to implement big changes. | https://www.entrepreneur.com/article/424011 | 2022-05-09T12:00:00Z | Artificial intelligence is everywhere. For the past few years, the internet has been awash in article after article on what AI can do for your business, and with good reason: Even in its relative infancy, AI is already changing the game in nearly every aspect of operations, from marketing, to IT, to accounting. Its not just a passing fad, either. A report from PwC found that 67% of executives see the future potential of AI to automate processes and optimize business efficiency. That same analysis found that AI is projected to boost global GDP by 14% or $15.7 trillion by 2030. Its clear that AI has the capacity to give companies significant competitive advantage if its deployed correctly. "While many AI and machine learning deployments fail, in most cases, it's less of a problem with the actual technology and more about the environment around it," Harish Doddi, CEO of Datatron, tells ZDNet. Moving to AI "requires the right skills, resources,andsystems." Like everyone else, my company Jotform is exploring how we can best use AI to improve our business. To do it, were following a formula I use whenever I undertake any new endeavor. Its only three steps, but its as close to a surefire guide to success as I think there is. DreamEvery importantdevelopment in the world began as a dream. The only way to guarantee you wont reach your biggest goals is to never set them. The bigger the dream, the more resistance youll probably encounter along the way. Thats where the power of believing in yourself comes in after all, if you dont think you can realize your dream, no one else will either. That said, even the most confident among us are susceptible to moments where we're convinced we're doomed to failure. Successfully working through those moments means effectively managing that internal dialogue, Tammi Kirkness, life coach and founder of Vision Scope Coaching, tells HuffPost. "They've been able to master it [that inner voice]," she says. "Once they've recognized the existence of it, they continuously acknowledge and manage it in a way that helps to motivate and encourage their pursuits." In my case, my current goal is to transform Jotform to an AI-powered company. This wont be a minor transition its going to be a long road to figuring out what solutions will work for us and how to implement them. Even so, I believe its worth it, and even more importantly, I believe well succeed. Related: The Insight That Changed How These Founders Think About EverythingCreate a roadmapNow that youve dreamt up a goal, its time to figure out how you can achieve it. This stage can be daunting, but keep in mind that nothing worthwhile has ever been completed in a single day. Its not necessary to plan out every step of your map at first, but you do need to have a basic understanding of potential paths and what it will take to pursue each. What resources do you need, and how do you acquire them? For us, I know our AI roadmap is going to require considerable research and training. As Forbesnotes, any business looking to make the transition needs to lay a lot of groundwork: Some of this means taking concrete steps, like migrating data to cloud storage and ensuring that data is structured. The other part is actually understanding AIs capabilities and limitations, which includes understanding our AI-driven goals and having a clear vision for implementation, scaling and growth. One study from Harvard Business Reviewfound that when it comes to AI, highly ambitious moon shots are less likely to be successful than low-hanging fruit projects that enhance business processes. We have to be careful to plan for the realm of whats actually possible, rather than getting carried away by the hype. The transition from having a dream to drawing a roadmap can be a moment of reckoning for many companies. If our hope was to replace customer service with chatbots (which its not), we would have a problem, because thats not currently a practical use for AI. Even after you plot your course, you have to plan to be flexible. As Jeremy Bloom put it, your journey will start at point A, but point B will not be linear: There's often no straight line between the two. A bumpy road, however, is not a reason to give up. One project weve worked on for years is classifying forms. Weve tried a lot of different strategies, but have yet to figure out how to make it practical. But were still trying, because were confident that the right approach is out there we just have to find it. Related: What's Your 'Wouldn't Go Back' Moment?Hard work You have a dream and you know how to get there. Congratulations! Now its time to roll up your sleeves and get to work. Hard work is about more than just putting your nose to the grindstone (though thats certainly part of it). Its also about improving your knowledge and understanding of a given topic, and expending the resources necessary for that growth to happen. At Jotform, were putting our roadmap into action. Weve brought in a consultant who has helped major companies from Apple to Adobe implement AI. Already, our data team is practicing putting their new knowledge to use, working on projects that range from classifying users by industry to catching spammers. We also executed a company-wide hack week, giving every department the chance to think about how AI can improve their operations. These are incremental steps, but they are all crucial. Undoubtedly, well encounter setbacks and roadblocks, but thats okay: The more were challenged, the more well understand, and the more well grow. This formula doesnt just apply to AI: It works for any endeavor you set out to achieve, whether its writing a novel or going to space. Have a dream, figure out how to get there, and then put in the work. Related: 3 Ways to Get Comfortable With Changing Times | Process Automation/Decision Making/Content Synthesis | Management/Business and Financial Operations | null | null | null | null | null | null |
|
news | Dave Nance | Gupshup’s conversational AI buying spree continues | AskSid and its Retail AI Brain hold deep-domain knowledge sets that Gupshup says will position it to provide comprehensive conversational retail and commerce solutions throughout the entire customer lifecycle. | https://venturebeat.com/2022/04/20/gupshups-conversational-ai-buying-spree-continues/ | 2022-04-20T18:00:00Z | Just weeks after acquiring conversational AI provider Active.ai, Gupshup, the San Francisco-based messaging service, is adding ecommerce-focused AskSid to its portfolio. The move further bolsters two other acquisitions from the conversational AI platforms growing list of service offerings, as it positions itself to benefit from trends such as conversational engagement, Rich Communication Services (RCS), and retail customer experience (CX). While Gupshups interest in Active.ai was rooted in its live-production experience with trained, deep banking and financial AI datasets and overlapping customer bases, AskSid and its Retail AI Brain hold similar deep-domain knowledge sets that Gupshup says will position it to provide comprehensive conversational retail and commerce solutions throughout the entire customer lifecycle. AskSids AI Brain uses a combination of entity extraction, natural language understanding (NLU) models, and utterance generation, to turn unstructured colloquialisms and past customer queries into personalized shopping experiences. Getting to the shoppers intentThe AI Brain collection of models uses canonical methods and no less than 200 intent signals to create these mapped shopping journeys. The canonical nature of AI Brain puts the power of pre-trained data in the hands of developers and engineers, without requiring them to be familiar with the incoming unstructured data that is mapped to their standard data. The richer experiences that this model provides could mean that for an enterprise bank, for example, instead of a more cookie-cutter response to a series of customer questions surrounding transaction activity, the software might respond with a relevant table of transactions that covers the period of interest. That means rich, personalized data is delivered in time to increase engagement and conversions. However, in practice, these concepts can be challenging to implement.CEO Beerud Sheths vision draws analogies to what search meant to Google, or how Shopify helped pivot ecommerce for businesses: Shopify built out this infrastructure of tools and capabilities that every ecommerce merchant needs to set up a storefront on the web. In the same way, Gupshup is building out the conversational infrastructure that businesses will need to build chatbots and conversational experiences. Sheth emphasizes Gupshups belief in prioritizing capturing the nuances of human conversation. Maintaining context across conversations, across queries is important, because by the time the third or fourth query comes in, you may need to refer to the first query. The AI has to handle a context switch, and enabling these multi-turn conversations, just as humans do theres a lot of elements to it. Perhaps this attention to the details in the art of capturing intent in human conversation is the differentiator that continues to power Gupshups investment growth, despite competition from the likes of Attentive, Senseforth, Yellow.ai and Haptik.Bringing low-code into the conversationWith a diverse customer base including McDonalds, Verizon and Amazon, Gupshup looks to provide a tool set that includes out-of-the box options within its base layer. This provides connectivity to core messaging services like SMS, WhatsApp, Telegram, RCS and voice. On top of that base, Gupshup provides low-code tools that allow users to communicate via queries, directly with a business service such as an AI chatbot, database or live call center agent handoff. The third and final layer is used to increase the value of the service, with advanced specialization in knowledge-heavy verticals, like finance and banking, retail and commerce, or education and healthcare. Both of the moves come at a time that Sheth says is an important point in the story arc of conversational messaging. Conversational engagement has reached sort of an inflection point, and its moving very rapidly. More businesses are adopting it, and consumers are starting to use it extensively. What businesses need to realize, Sheth says, is that the technology is now mature, its ready, its been proven and validated, and the adoption is just taking off in a big way. Conversational engagement is going through a rapid transition right now.VentureBeat's mission is to be a digital town square for technical decision-makers to gain knowledge about transformative enterprise technology and transact. Learn more about membership. | Digital Assistance/Content Synthesis/Personalization | Business and Financial Operations/Sales and Related | null | null | null | null | null | null |
|
news | Mike Loukides | Radar trends to watch: May 2022 | April was the month for large language models. There was one announcement after another; most new models were larger than the previous ones, several claimed to be significantly more energy efficient. The largest (as far as we know) is Google’s GLAM, with 1.2 trillion parameters–but requiring significantly less energy to train than GPT-3. Chinchilla has […] | https://www.oreilly.com/radar/radar-trends-to-watch-may-2022/ | 2022-05-03T11:19:02Z | April was the month for large language models. There was one announcement after another; most new models were larger than the previous ones, several claimed to be significantly more energy efficient. The largest (as far as we know) is Googles GLAM, with 1.2 trillion parametersbut requiring significantly less energy to train than GPT-3. Chinchilla has ¼ as many parameters as GPT-3, but claims to outperform it. Its not clear where the race to bigger and bigger models will end, or where it will lead us. The PaLM model claims to be able to reason about cause and effect (in addition to being more efficient than other large models); we dont yet have thinking machines (and we may never), but were getting closer. Its also good to see that energy efficiency has become part of the conversation.AIGoogle has created GLAM a 1.2 trillion parameter model (7 times the size of GPT-3). Training GLAM required 456 megawatt-hours, the energy of GPT-3. GLAM uses a Mixture-of-Experts (MoE) model, in which different subsets of the neural network are used, depending on the input.Google has released a dataset of 3D-scanned household items. This will be invaluable for anyone working on AI for virtual reality.FOMO is a machine learning model for object detection in real time that requires less than 200KB of memory. Its part of the TinyML movement: machine learning for small embedded systems.LAION (Large Scale Artificial Intelligence Open Network) is a non-profit, free, and open organization that is creating large models and making them available to the public. Its what OpenAI was supposed to be. The first model is a set of image-text pairs for training models similar to DALL-E.NVidia is using AI to automate the design of their latest GPU chips. Using AI to inspect sewer pipes is one example of an unseen AI application. Its infrastructural, it doesnt risk incorporating biases or significant ethical problems, and (if it works) it improves the quality of human life.Large language models are generally based on text. Facebook is working on building a language model from spoken language, which is a much more difficult problem.STEGO is a new algorithm for automatically labeling image data. It uses transformers to understand relationships between objects, allowing it to segment and label objects without human input.A researcher has developed a model for predicting first impressions and stereotypes, based on a photograph. Theyre careful to say that this model could easily be used to fine-tune fakes for maximum impact, and that first impressions dont actually say anything about a person.A group building language models for the Maori people shows that AI for indigenous languages require different ways of thinking about artificial intelligence, data, and data rights.A21 is a new company offering a large language model as a service. They allow customers to train custom versions of their model, and they claim to make humans and machines thought partners.Researchers have found a method for reducing toxic text generated by language models. It sounds like a GAN (generative adversarial network), in which a model trained to produce toxic text plays against a model being trained to detect and reject toxicity.More bad applications of AI: companies are using AI to monitor your mood during sales calls. This questionable feature will soon be coming to Zoom.Primer has developed a tool that uses AI to transcribe, translate, and analyze intercepted communications in the war between Russia and Ukraine.Deep Mind claims that another new large language model, Chinchilla, outperforms GPT-3 and Gopher with roughly ¼th the number of parameters. It was trained on roughly 4 times as much data, but with fewer parameters, it requires less energy to train and fine-tune.Data Reliability Engineering (DRE) borrows ideas from SRE and DevOps as a framework to provide higher-quality data for machine learning applications while reducing the manual labor required. Its closely related to data-centric AI.OpenAIs DALL-E 2 is a new take on their system (DALL-E) for generating images from natural language descriptions. It is also capable of modifying existing artworks based on natural language descriptions of the modifications. OpenAI plans to open DALL-E 2 to the public, on terms similar to GPT-3.Googles new Pathways Language Model (PaLM) is more efficient, can understand concepts, and reason about cause and effect, in addition to being relatively energy-efficient. Its another step forward towards AI that actually appears to think.SandboxAQ is an Alphabet startup that is using AI to build technologies needed for a post-quantum world. Theyre not doing quantum computing as such, but solving problems such as protocols for post-quantum cryptography.IBM has open sourced the Generative Toolkit for Scientific Discovery (GT4SD), which is a generative model designed to produce new ideas for scientific research, both in machine learning and in areas like biology and materials science.Waymo (Alphabets self-driving car company) now offers driverless service in San Francisco. San Francisco is a more challenging environment than Phoenix, where Waymo has offered driverless service since 2020. Participation is limited to members of their Trusted Tester program.Web3Mastodon, a decentralized social network, appears to be benefitting from Elon Musks takeover of Twitter.Reputation and identity management for web3 is a significant problem: how do you verify identity and reputation without giving applications more information than they should have? A startup called Ontology claims to have solved it.A virtual art museum for NFTs is still under construction, but it exists, and you can visit it. Its probably a better experience in VR.2022 promises to be an even bigger year for cryptocrime than 2021. Attacks are increasingly focused on decentralized finance (DeFi) platforms.Could a web3 version of Wikipedia evade Russias demands that they remove prohibited information? Or will it lead to a Wikipedia thats distorted by economic incentives (like past attempts to build a blockchain-based encyclopedia)?The Helium Network is a decentralized public wide area network using LoRaWAN that pays access point operators in cryptocurrency. The network has over 700,000 hotspots, and coverage in most of the worlds major metropolitan areas.ProgrammingDo we really need another shell scripting language? The developers of hush think we do. Hush is based on Lua, and claims to make shell scripting more robust and maintainable.Web Assembly is making inroads; heres a list of startups using wasm for everything from client-side media editing to building serverless platforms, smart data pipelines, and other server-side infrastructure.QR codes are awful. Are they less awful when theyre animated? It doesnt sound like it should work, but playing games with the error correction built into the standard allows the construction of animated QR codes.Build your own quantum computer (in simulation)? The Qubit Game is a game that lets players build a quantum computer, starting with a single qubit.One of Dockers founders is developing a new product, Dagger, that will help developers manage DevOps pipelines.Can applications use ambient notifications (like a breeze, a gentle tap, or a shift in shadows) rather than intrusive beeps and gongs? Google has published Little Signals, six experiments with ambient notifications that includes code, electronics, and 3D models for hardware.Lambda Function URLs automate the configuration of an API endpoint for single-function microservices on AWS. They make the process of mapping a URL to a serverless function simple.GitHub has added a dependency review feature that inspects the consequences of a pull requests and warns of vulnerabilities that were introduced by new dependencies.Google has proposed Supply Chain Levels for Software Artifacts (SLSA) as a framework for ensuring the integrity of the software supply chain. It is a set of security guidelines that can be used to generate metadata; the metadata can be audited and tracked to ensure that software components have not been tampered with and have traceable provenance.Harvard and the Linux Foundation have produced Census II, which lists thousands of the most popular open source libraries and attempts to rank their usage.SecurityThe REvil ransomware has returned (maybe). Although theres a lot of speculation, it isnt yet clear what this means or who is behind it. Nevertheless, they appear to be looking for business partners.Attackers used stolen OAuth tokens to compromise GitHub and download data from a number of organizations, most notably npm.The NSA, Department of Energy, and other federal agencies have discovered a new malware toolkit named pipedream that is designed to disable power infrastructure. It’s adaptable to other critical infrastructure systems. It doesnt appear to have been used yet.A Russian state-sponsored group known as Sandworm failed in an attempt to bring down the Ukraine’s power grid. They used new versions of Industroyer (for attacking industrial control systems) and Caddywiper (for cleaning up after the attack).Re-use of IP addresses by a cloud provider can lead to cloud squatting, where an organization that is assigned a previously used IP address receives data intended for the previous addressee. Address assignment has become highly dynamic; DNS wasnt designed for that.Pete Warden wants to build a coalition of researchers that will discuss ways of verifying the privacy of devices that have cameras and microphones (not limited to phones).Cyber warfare on the home front: The FBI remotely accessed devices at some US companies to remove Russian botnet malware. The malware targets WatchGuard firewalls and Asus routers. The Cyclops Blink botnet was developed by the Russia-sponsored Sandworm group.Ransomware attacks have been seen that target Jupyter Notebooks on notebook servers where authentication has been disabled. There doesnt appear to be a significant vulnerability in Jupyter itself; just don’t disable authentication!By using a version of differential privacy on video feeds surveillance cameras can provide a limited kind of privacy. Users can ask questions about the image, but cant identify individuals. (Whether anyone wants a surveillance camera with privacy features is another question.)Biology and NeuroscienceA brain-computer interface has allowed an ALS patient who was completely locked in to communicate with the outside world. Communication is slow, but it goes well beyond simple yes/no requests.HardwareCAT scans arent just for radiology. Lumafield has produced a table-sized CT-scan machine that can be used in small shops and offices, with the image analysis done in their cloud.Boston Dynamics has a second robot on the market: Stretch, a box-handling robot designed to perform tasks like unloading trucks and shipping containers.A startup claims it has the ability to put thousands of single-molecule biosensors on a silicon chip that can be mass-produced. They intend to have a commercial product by the end of 2022. MetaverseLearn faster. Dig deeper. See farther. | Unknown | Computer and Mathematical/Business and Financial Operations | null | null | null | null | null | null |
|
news | Brandon Vigliarolo | Meet Flamingo, Deepmind's latest open-ended AI | Google says it outperforms comparable models despite needing fraction of training dataGoogle's Deepmind has published a paper proposing a family of machine learning models with the aim of doing more work with far less costly and time-consuming training.… | https://www.theregister.com/2022/04/29/flamingo_deepmind_ai/ | 2022-04-29T11:32:05Z | Google's Deepmind has published a paper proposing a family of machine learning models with the aim of doing more work with far less costly and time-consuming training.The upside of that is, the tech giant claims, massive cost savings as training is quickly becoming prohibitively expensive. The downside is that it's no small task to combine visual learning with a language model.The model family, called Flamingo, is a few-shot visual language model (VLM) set of distinct software systems (versus a more monolithic model like GPT-3, for instance). Google's Deepmind team says it outperforms all previous few-shot learning approaches, even those fine-tuned with orders of magnitude more data. Flamingo is described in a preprint of Deepmind's academic paper on the subject as being designed [PDF] to take combined text and image inputs to arrive at a text-only answer, leaving a fair bit of wiggle room for the models to do some interpretation. Deepmind uses an in-house dataset it created especially for multimodal ML research. All data is unlabeled and was retrieved from the public internet to the tune of 43.3 million instances consisting of 185 million images and 182GB of text. To put it simply, here's a good example of what Flamingo makes possible: during training, it was only given a few examples to achieve an inference task (identify an animal, solve a math problem, count types of animals in an image, etc). After being told what sort of inference its users wanted, it was given another image and asked to return explanatory text of the input.Deepmind based Flamingo off of its own recently released 70-billion parameter Chinchilla language model, which was pre-trained. Deepmind "fused" the Chinchilla LM with visual learning elements "by adding novel architecture components in between" that keeps training data isolated and frozen, giving them the 80-billion parameter Flamingo FLM."A single Flamingo model can achieve state-of-the-art results on a wide array of tasks, performing competitively with approaches requiring task-specific fine-tuning on orders of magnitude more examples, and often requiring hand-engineered 'tricks,'" Deepmind's Flamingo contributors said. The potential uses of this machine learning model are readily apparent, and aren't restricted to what Flamingo is able to do with data the model could also help the general state of machine learning, which is facing a problem of growing energy and computing needs to train newer models. According to one estimate, a single Google BERT training session emitted the same amount of carbon as a trans-American jet flight. Deepmind didn't make any mention of the energy costs needed to train a Flamingo model, though it does describe it as "computationally expensive to train."On the other hand, the paper said that Flamingo can be rapidly adapted to low-resource settings and for low-resource tasks, such as evaluating data for PII, social biases, stereotypes and other elements that can lead toward the oft-encounteredissue of AI bias.Despite that, Flamingo might not be anywhere near ready for prime time, and not because the model itself is bad: Deepmind admits limitations in few-shot training, namely that there are too many variables to account for when a training dataset is so small."There is no 'golden' few-shot method that would work well in all scenarios," said the researchers behind Flamingo. ® | Content Synthesis/Prediction/Personalization | Computer and Mathematical/Business and Financial Operations | null | null | null | null | null | null |
|
news | Datalog in JavaScript | Relational Database, on the client. | https://www.instantdb.dev/essays/datalogjs | 2022-04-25T13:13:19Z | Query engines make me feel like a wizard. I cast my incantation: Give me all the directors and the movies where Arnold Schwarzenegger was a cast member. Then charges zip through wires, algorithms churn on CPUs, and voila, an answer bubbles up.How do they work? In this essay, we will build a query engine from scratch and find out. In 100 lines of Javascript, well supports joins, indexes, and find our answer for Arnold! Lets get into it.ChoiceOur first step is to choose which language well support. SQL is the most popular, but we wouldnt get far in 100 lines. I suggest we amble off the beaten path and make Datalog instead.If you havent heard of Datalog, youre in for a treat. Its a logic-based query language thats as powerful as SQL. We wont cover it completely, but well cover enough to fit a good weekends worth of hacking. To grok Datalog, we need to understand three ideas:DataThe first idea is about how we store data.SQL TablesSQL databases store data in different tables: Here we have a movie table, which stores one movie per row. The record with the id 200 is "The Terminator". Notice the director_id. This points to a row in yet another person table, which keeps the directors name, and so on.Datalog TriplesIn Datalog databases, there are no tables. Or really everything is just stored in one table, the triple table:A triple is a row with an id, attribute, and value. Triples have a curious property; with just these three columns, they can describe any kind of information! How? Imagine describing a movie to someone:It's called "The Terminator"It was released in 1987Those sentences conveniently translate to triples:[200, movie/title, "The Terminator"][200, movie/year, 1987]And those sentences have a general structure; if you can describe a movie this way, you can describe tomatoes or airplanes just as well.QueriesThe second idea is about how we search for information. SQL AlgebraSQL has roots in relational algebra. You give the query engine a combination of clauses and statements, and it gets you back your data:SELECT id FROM movie WHERE year = 1987This returns:[{id: 202}, {id: 203}, {id: 204}]Voila, the movie ids for Predator, Lethal Weapon, and RoboCop. Datalog Pattern MatchingDatalog databases rely on pattern matching. We create patterns that match against triples. For example, to find all the movies released in 1987, we could use this pattern:[?id, movie/year, 1987]Here, ?id is a variable: were telling the query engine that it can be any value. But, the attributemust be movie/year, and the valuemust be 1987. Our query engine runs through triple after triple. Since ?id can be anything, this matches every triple. But, the attribute movie/year and the value 1987 filter us down to just the triples we care about:[ [202, movie/year, 1987], [203, movie/year, 1987], [204, movie/year, 1987],]Notice the ?id portion; those are the ids for Predator, Lethal Weapon, and RoboCop!Datalog findIn SQL, we just got back ids though, while our query engine returned more. How can we support returning ids only? Lets adjust our syntax; heres find:{ find: [?id], where: [ [?id, movie/year, 1987] ] }Our query engine can now use the find section to return what we care about. If we implement this right, we should get back:[[202], [203], [204]]And now were as dandy as SQL.JoinsThe third idea is about how joins work. Datalog and SQLs magic comes from them.SQL clausesIn SQL, if we wanted to find The Terminators director, we could write:SELECT person.nameFROM movieJOIN person ON movie.director_id = person.idWHERE movie.title = "The Terminator"Which gets us:[{name: "James Cameron"}]Pretty cool. We used the JOIN clause to connect the movie table with the person table, and bam, we got our directors name.DatalogPattern MatchingIn Datalog, we still rely on pattern matching. The trick is to match multiple patterns:{ find: [?directorName], where: [ [?movieId, movie/title, "The Terminator"], [?movieId, movie/director, ?directorId], [?directorId, person/name, ?directorName], ],}Here we tell the query engine to match three patterns. The first pattern produces a list of successful triples. For each successful triple, we search again with the second pattern, and so on. Notice how the ?movieId and ?directorId are repeated; this tells our query engine that for a successful match, those values would need to be the same across our different searches. What do I mean? Lets make this concrete; heres how our query engine could find The Terminators director:The first pattern finds: [200, movie/title, "The Terminator"]. We bind ?movieId to 200. Now we start searching for the second pattern: [?movieId, movie/director, ?directorName].Since ?movieId needs to be 200, this finds us [200, movie/director, 100]And we can now bind ?directorId to 100. Time for the third pattern:[?directorId, person/name, ?directorName] Because ?directorId has to be 100, our engine finds us: [100, person/name, "James Cameron"]And perfecto, the ?directorName is now bound to "James Cameron"! The find section would then return ["James Cameron"].Oky doke, now we grok the basics of Datalog! Lets get to the code.SyntaxFirst things first, we need a way to represent this syntax. If you look at: { find: [?id], where: [ [?id, movie/year, 1987] ] }We could almost write this in Javascript. We use objects and arrays, but ?id and movie/year get in the way; they would throw an error. We can fix this with a hack: lets turn them into strings.{ find: ["?id"], where: [ ["?id", "movie/year", 1987] ] }Its less pretty, but we can now express our queries without fanfare. If a string begins with a question mark, its a variable. An attribute is just a string; its a good idea to include a namespace like "movie/*", but we wont force our users.Sample DataThe next thing well need is sample data to play with. Theres a great datalog tutorial [1], which has the movie dataset weve been describing. Ive taken it and adapted it to Javascript. Heres the file.// exampleTriples.jsexport default [ [100, "person/name", "James Cameron"], [100, "person/born", "1954-08-16T00:00:00Z"], // ...]Lets plop this in and require it: import exampleTriples from './exampleTriples';Now for our query engine!matchPatternGoalOur first goal is to match one pattern with one triple. Heres an example:We have some variable bindings: {"?movieId": 200}. Lets call this a context.Our goal is to take a pattern, a triple, and a context. Well either return a new context:{"?movieId": 200, "?directorId": 100}Or a failure. We can just say null means failure. This could be the test we play with:expect( matchPattern( ["?movieId", "movie/director", "?directorId"], [200, "movie/director", 100], { "?movieId": 200 } )).toEqual({ "?movieId": 200, "?directorId": 100 });expect( matchPattern( ["?movieId", "movie/director", "?directorId"], [200, "movie/director", 100], { "?movieId": 202 } )).toEqual(null);CodeNice, we have a plan. Lets write the larger function first:function matchPattern(pattern, triple, context) { return pattern.reduce((context, patternPart, idx) => { const triplePart = triple[idx]; return matchPart(patternPart, triplePart, context); }, context);}We take our pattern, and compare each part to the corresponding one in our triple:So, wed compare "?movieId" with 200, and so on. matchPartWe can delegate this comparison to matchPart:function matchPart(patternPart, triplePart, context) { if (!context) return null; if (isVariable(patternPart)) { return matchVariable(patternPart, triplePart, context); } return patternPart === triplePart ? context : null;}First we address context; if context was null we must have failed before, so we just return early.isVariableNext, we check if were looking at a variable. isVariable is simple enough: function isVariable(x) { return typeof x === 'string' && x.startsWith("?");}matchVariableNow, if we are looking at a variable, wed want to handle it especially: function matchVariable(variable, triplePart, context) { if (context.hasOwnProperty(variable)) { const bound = context[variable]; return matchPart(bound, triplePart, context); } return { ...context, [variable]: triplePart };}We would check if we already have a binding for this variable. For example, when comparing ?movieId, wed already have the binding: 200. In this case, we just compare the bound value with whats in our triple.// ...if (context.hasOwnProperty(variable)) { const bound = context[variable]; return matchPart(bound, triplePart, context);}// ...When we compare ?directorId though, wed see that this variable wasnt bound. In this case, wed want to expand our context. Wed attach ?directorId to the corresponding part in our triple (100).return { ...context, [variable]: triplePart };Finally, if we werent looking at a variable, we would have skipped this and just checked for equality. If the pattern part and the triple part match, we keep the context; otherwise we return null: // ...return patternPart === triplePart ? context : null;// ...And with that, matchPattern works as we like!querySingleGoalNow for our second goal. We can already match one pattern with one triple. Lets now match one pattern with multiple triples. Heres the idea:Well have one pattern and a database of triples. Well want to return the contexts for all the successful matches. Heres the test we can play with:expect( querySingle(["?movieId", "movie/year", 1987], exampleTriples, {})).toEqual([{ "?movieId": 202 }, { "?movieId": 203 }, { "?movieId": 204 }]);CodeWell, much of the work comes down to matchPattern. Heres all querySingle needs to do:function querySingle(pattern, db, context) { return db .map((triple) => matchPattern(pattern, triple, context)) .filter((x) => x);}We go over each triple and run matchPattern. This would return either a context (its a match!), or null (its a failure). We filter to remove the failures, and querySingle works like a charm! queryWhereGoalCloser and closer. Now to support joins. We need to handle multiple patterns:So we go pattern by pattern, and find successful triples. For each successful triple, we apply the next pattern. At the end, well have produced progressively larger contexts.Heres the test we can play with: expect( queryWhere( [ ["?movieId", "movie/title", "The Terminator"], ["?movieId", "movie/director", "?directorId"], ["?directorId", "person/name", "?directorName"], ], exampleTriples, {} )).toEqual([ { "?movieId": 200, "?directorId": 100, "?directorName": "James Cameron" },]);CodeThis too, is not so difficult. Heres queryWhere: function queryWhere(patterns, db) { return patterns.reduce( (contexts, pattern) => { return contexts.flatMap((context) => querySingle(pattern, db, context)); }, [{}] );}We start off with one empty context. We then go pattern by pattern; for each pattern, we find all the successful contexts. We then take those contexts, and use them for the next pattern. By the end, well have all the expanded contexts, and queryWhere works like a charm too!QueryGoalAnd now weve just about built ourselves the whole query engine! Next lets handle where and find. This could be the test we can play with:expect( query( { find: ["?directorName"], where: [ ["?movieId", "movie/title", "The Terminator"], ["?movieId", "movie/director", "?directorId"], ["?directorId", "person/name", "?directorName"], ], }, exampleTriples )).toEqual([["James Cameron"]]);CodeHeres query:function query({ find, where }, db) { const contexts = queryWhere(where, db); return contexts.map((context) => actualize(context, find));}Our queryWhere returns all the successful contexts. We can then map those, and actualize our find: function actualize(context, find) { return find.map((findPart) => { return isVariable(findPart) ? context[findPart] : findPart; });}All actualize does is handle variables; if we see a variable in find, we just replace it with its bound value. [^2]PlayAnd voila! We have a query engine. Lets see what we can do.When was Alien released?query( { find: ["?year"], where: [ ["?id", "movie/title", "Alien"], ["?id", "movie/year", "?year"], ], }, exampleTriples)[[1979]]What do I know about the entity with the id 200 ?query( { find: ["?attr", "?value"], where: [[200, "?attr", "?value"]], }, exampleTriples)[ ["movie/title", "The Terminator"], ["movie/year", 1984], ["movie/director", 100], ["movie/cast", 101], ["movie/cast", 102], ["movie/cast", 103], ["movie/sequel", 207],]And, last by not leastWhich directors shot Arnold for which movies?query( { find: ["?directorName", "?movieTitle"], where: [ ["?arnoldId", "person/name", "Arnold Schwarzenegger"], ["?movieId", "movie/cast", "?arnoldId"], ["?movieId", "movie/title", "?movieTitle"], ["?movieId", "movie/director", "?directorId"], ["?directorId", "person/name", "?directorName"], ], }, exampleTriples)[ ["James Cameron", "The Terminator"], ["John McTiernan", "Predator"], ["Mark L. Lester", "Commando"], ["James Cameron", "Terminator 2: Judgment Day"], ["Jonathan Mostow", "Terminator 3: Rise of the Machines"],]Now this is cool!IndexesProblemOkay, but you may have already been thinking, Our query engine will get slow. Lets remember querySingle:function querySingle(pattern, db, context) { return db .map((triple) => matchPattern(pattern, triple, context)) .filter((x) => x);}This is fine and dandy, but consider this query: querySingle([200, "movie/title", ?movieTitle], db, {})We want to find the movie title for the entity with the id 200. SQL would have used an index to quickly nab this for us. But what about our query engine? Itll have to search every single triple in our database!GoalLets solve that. We shouldnt need to search every triple for a query like this; its time for indexes. Heres what we can do; Lets create entity, attribute, and value indexes. Something like:{ entityIndex: { 200: [ [200, "movie/title", "The Terminator"], [200, "movie/year", 1984], //... ], // ... }, attrIndex: { "movie/title": [ [200, "movie/title", "The Terminator"], [202, "movie/title", "Predator"], // ... ], // ... },}Now, if we had a pattern like this:[200, "movie/title", ?movieTitle]We could be smart about how to get all the relevant triples: since 200 isnt a variable, we could just use the entityIndex. Wed grab entityIndex[200] , and voila wed have reduced our search to just 7 triples! We can do more, but with this wed already have a big win.createDBOkay, lets turn this into reality. We can start with a proper db object. We were just using exampleTriples before; now well want to keep track of indexes too. Heres what we can do: function createDB(triples) { return { triples, entityIndex: indexBy(triples, 0), attrIndex: indexBy(triples, 1), valueIndex: indexBy(triples, 2), }}Well take our triples, and start to index them.indexByAnd indexBy will handle that. It can just take the triples and create a mapping:function indexBy(triples, idx) { return triples.reduce((index, triple) => { const k = triple[idx]; index[k] = index[k] || []; index[k].push(triple); return index; }, {});}Here idx represents the position in the triple; 0 would be entity, 1 would be attribute, 2 would be value.Now that we have indexes, we can use them in querySingle: export function querySingle(pattern, db, context) { return relevantTriples(pattern, db) .map((triple) => matchPattern(pattern, triple, context)) .filter((x) => x);}The only change is relevantTriples. Well lean on it to figure out which index to use.relevantTriplesHeres all relevantTriples does: function relevantTriples(pattern, db) { const [id, attribute, value] = pattern; if (!isVariable(id)) { return db.entityIndex[id]; } if (!isVariable(attribute)) { return db.attrIndex[attribute]; } if (!isVariable(value)) { return db.valueIndex[value]; } return db.triples;}We take the pattern. We check the id, attribute, and the value. If any of them arent variables, we can safely use the corresponding index. With that, weve made our query engine faster FinI hope you had a blast making this and got a sense of how query engines work to boot. If youd like to see the source in one place, here it is.MoreThis is just the beginning. How about functions like greater than or smaller than? How about an or query? Lets not forget aggregate functions. If youre curious about this, Id suggest three things: First go through the Learn Datalog website; thatll give you a full overview Datalog. Next, Id suggest you go through the SICP chapter on logic programming. They go much further than this essay. Finally, you can look at Nikita Tonskys datascript internals, for what a true production version could look like.Huge credit goes to SICP. When I completed their logic chapter, I realized that query languages didn't have to be so daunting. This essay is just a simplification of their chapter, translated into Javascript. The second credit needs to go to Nikita Tonskys essays. His Datomic and Datascript internals essays are a goldmine. Finally, I really enjoyed Learn Datalog, and used their dataset for this essay.Thanks to Joe Averbukh, Irakli Safareli, Daniel Woelfel, Mark Shlick, Alex Reichert, Ian Sinnott, for reviewing drafts of this essay.[1]: Learn Datalog Today very fun![2]: You may be wondering, wont find always have variables? Well, not always. You could include some constant, like {find: ["movie/title", "?title"]}This ideas was born from research into Instant. Joe and I plan to bring a powerful, reactive query engine to web and mobile. If this sounds exciting to you, sign up for our waitlist. Were looking for hackers who are up to play with a version zero : ). | Content Synthesis/Information Retrieval Or Search | Computer and Mathematical | null | null | null | null | null | null |
||
news | Netflix Technology Blog | Evolution of ML Fact Store | At Netflix, we aim to provide recommendations that match our members’ interests. To achieve this, we rely on Machine Learning (ML) algorithms. ML algorithms can be only as good as the data that we… | https://netflixtechblog.com/evolution-of-ml-fact-store-5941d3231762 | https://miro.medium.com/max/1200/0*a37mfb9KtcdEL_mn | 2022-04-26T20:44:04Z | Evolution of ML Fact Storeby Vivek KaushalAt Netflix, we aim to provide recommendations that match our members interests. To achieve this, we rely on Machine Learning (ML) algorithms. ML algorithms can be only as good as the data that we provide to it. This post will focus on the large volume of high-quality data stored in Axion our fact store that is leveraged to compute ML features offline. We built Axion primarily to remove any training-serving skew and make offline experimentation faster. We will share how its design has evolved over the years and the lessons learned while building it.TerminologyAxion fact store is part of our Machine Learning Platform, the platform that serves machine learning needs across Netflix. Figure 1 below shows how Axion interacts with Netflixs ML platform. The overall ML platform has tens of components, and the diagram below only shows the ones that are relevant to this post. To understand Axions design, we need to know the various components that interact with it.Figure 1: Netflix ML ArchitectureFact: A fact is data about our members or videos. An example of data about members is the video they had watched or added to their My List. An example of video data is video metadata, like the length of a video. Time is a critical component of Axion When we talk about facts, we talk about facts at a moment in time. These facts are managed and made available by services like viewing history or video metadata services outside of Axion.Compute application: These applications generate recommendations for our members. They fetch facts from respective data services, run feature encoders to generate features and score the ML models to eventually generate recommendations.Offline feature generator: We regenerate the values of the features that were generated for inferencing in the compute application. Offline Feature Generator is a spark application that enables on-demand generation of features using new, existing, or updated feature encoders.Shared feature encoders: Feature encoders are shared between compute applications and offline feature generators. We make sure there is no training/serving skew by using the same data and the code for online and offline feature generation.MotivationFive years ago, we posted and talked about the need for a ML fact store. The motivation has not changed since then; the design has. This post focuses on the new design, but here is a summary of why we built this fact store.Our machine learning models train on several weeks of data. Thus, if we want to run an experiment with a new or modified feature encoder, we need to build several weeks of feature data with this new or modified feature encoder. We have two options to collect features using this updated feature encoder.The first is to log features from the compute applications, popularly known as feature logging. We can deploy updated feature encoders in our compute applications and then wait for them to log the feature values. Since we train our models on several weeks of data, this method is slow for us as we will have to wait for several weeks for the data collection.An alternative to feature logging is to regenerate the features with updated feature encoders. If we can access the historical facts, we can regenerate the features using updated feature encoders. Regeneration takes hours compared to weeks taken by the feature logging. Thus, we decided to go this route and started storing facts to reduce the time it takes to run an experiment with new or modified features.Design evolutionAxion fact store has four components fact logging client, ETL, query client, and data quality infrastructure. We will describe how the design evolved in these components.Evolution of Fact Logging ClientCompute applications access facts (members viewing history, their likes and my list information, etc.) from various grpc services that power the whole Netflix experience. These facts are used to generate features using shared feature encoders, which in turn are used by ML models to generate recommendations. After generating the recommendations, compute applications use Axions fact logging client to log these facts.At a later stage in the offline pipelines, the offline feature generator uses these logged facts to regenerate temporally accurate features. Temporal accuracy, in this context, is the ability to regenerate the exact set of features that were generated for the recommendations. This temporal accuracy of features is key to removing the training-serving skew.The first version of our logger library optimized for storage by deduplicating facts and optimized for network i/o using different compression methods for each fact. Then we started hitting roadblocks while optimizing the query performance. Since we were optimizing at the logging level for storage and performance, we had less data and metadata to play with to optimize the query performance.Eventually, we decided to simplify the logger. Now we asynchronously collect all the facts and metadata into a protobuf, compress it, and send it to the keystone messaging service.Evolution of ETL and Query ClientETL and Query Client are intertwined, as any ETL changes could directly impact the query performance. ETL is the component where we experiment for query performance, improving data quality, and storage optimization. Figure 2 shows components of Axions ETL and its interaction with the query client.Fig 2: Internal components of AxionAxions fact logging client logs facts to the keystone real-time stream processing platform, which outputs data to an Iceberg table. We use Keystone as it is easy to use, reliable, scalable, and provides aggregation of facts from different cloud regions into a single AWS region. Having all data in a single AWS region exposes us to a single point of failure but it significantly reduces the operational overhead of our ETL pipelines which we believe makes it a worthwhile trade-off. We currently send all the facts into a single Keystone stream which we have configured to write to a single Iceberg table. We plan to split these Keystone streams into multiple streams for horizontal scalability.The Iceberg table created by Keystone contains large blobs of unstructured data. These large unstructured blogs are not efficient for querying, so we need to transform and store this data in a different format to allow efficient queries. One might think that normalizing it would make storage and querying more efficient, albeit at the cost of writing more complex queries. Hence, our first approach was to normalize the incoming data and store it in multiple tables. We soon realized that, while space-optimized, it made querying very inefficient for the scale of data we needed to handle. We ran into various shuffle issues in Spark as we were joining several big tables at query time.We then decided to denormalize the data and store all facts and metadata in one Iceberg table using nested Parquet format. While storing in one Iceberg table was not as space-optimized, Parquet did provide us with significant savings in storage costs, and most importantly, it made our Spark queries succeed. However, Spark query execution remained slow. Further attempts to optimize query performance, like using bloom filters and predicate pushdown, were successful but still far away from where we wanted it to be.Why was querying the single Iceberg table slow?Whats our end goal? We want to train our ML models to personalize the member experience. We have a plethora of ML models that drive personalization. Each of these models are trained with different datasets and features along with different stratification and objectives. Given that Axion is used as the defacto Fact store for assembling the training dataset for all these models, it is important for Axion to log and store enough facts that would be sufficient for all these models. However, for a given ML model, we only require a subset of the data stored in Axion for its training needs. We saw queries filtering down an input dataset of several hundred million rows to less than a million in extreme cases. Even with bloom filters, the query performance was slow because the query was downloading all of the data from s3 and then dropping it. As our label dataset was also random, presorting facts data also did not help.We realized that our options with Iceberg were limited if we only needed data for a million rows out of several hundred million and we had no additional information to optimize our queries. So we decided not to further optimize joins with the Iceberg data and instead move to an alternate approach.Low-latency QueriesTo avoid downloading all of the fact data from s3 in a spark executor and then dropping it, we analyzed our query patterns and figured out that there is a way to only access the data that we are interested in. This was achieved by introducing an EVCache, a key-value store, which stores facts and indices optimized for these particular query patterns.Lets see how the solution works for one of these query patterns querying by member id. We first query the index by member id to find the keys for the facts of that member and query those facts from EVCache in parallel. So, we make multiple calls to the key-value store for each row in our training set. Even when accounting for these multiple calls, the query performance is an order of magnitude faster than scanning several hundred times more data stored in the Iceberg table. Depending on the use case, EVCache queries can be 3x-50x faster than Iceberg.The only problem with this approach is that EVCache is more expensive than Iceberg storage, so we need to limit the amount of data stored. So, for the queries that request data not available in EVCache, our only option is to query Iceberg. In the future, we want to store all facts in EVCache by optimizing how we store data in EVCache.How do we monitor the quality of data?Over the years, we learned the importance of having comprehensive data quality checks for our datasets. Corruption in data can significantly impact production model performance and A/B test results. From the ML researchers perspective, it doesnt matter if Axion or a component outside of Axion corrupted the data. When they read the data from Axion, if it is bad, it is a loss of trust in Axion. For Axion to become the defacto fact store for all Personalization ML models, the research teams needed to trust the quality of data stored. Hence, we designed a comprehensive system that monitors the quality of data flowing through Axion to detect corruptions, whether introduced by Axion or outside Axion.We bucketed data corruptions observed when reading data from Axion on three dimensions:The impact on a value in data: Was the value missing? Did a new value appear (unintentionally)? Was the value replaced with a different value?The spread of data corruption: Did data corruption have a row or columnar impact? Did the corruption impact one pipeline or multiple ML pipelines?The source of data corruption: Was data corrupted by components outside of Axion? Did Axion components corrupt data? Was data corrupted at rest?We came up with three different approaches to detect data corruption, wherein each approach can detect corruption along multiple dimensions described above.AggregationsData volume logged to Axion datastore is predictable. Compute applications follow daily trends. Some log consistently every hour, others log for a few hours every day. We aggregate the counts on dimensions like total records, compute application, fact counts etc. Then we use a rule-based approach to validate the counts are within a certain threshold of past trends. Alerts are triggered when counts vary outside these thresholds. These trend-based alerts are helpful with missing or new data; row-level impact, and pipelines impact. They help with column-level impact only on rare occasions.Consistent samplingWe sample a small percentage of the data based on a predictable member id hash and store it in separate tables. By consistent sampling across different data stores and pipelines, we can run canaries on this smaller subset and get output quickly. We also compare the output of these canaries against production to detect unintended changes in data during new code deployment. One downside of consistent sampling is that it may not catch rare issues, especially if the rate of data corruption is significantly lower than our sampling rate. Consistent sampling checks help detect attribute impact new, missing, or replacement; columnar impact, and single pipeline issues.Random samplingWhile the above two strategies combined can detect most data corruptions, they do occasionally miss. For those rare occasions, we rely on random sampling. We randomly query a subset of the data multiple times every hour. Both hot and cold data, i.e., recently logged data and data logged a while ago, are randomly sampled. We expect these queries to pass without issues. When they fail, it is either due to bad data or issues with the underlying infrastructure. While we think of it as an Im feeling lucky strategy, it does work as long as we read significantly more data than the rate of corrupted data.Another advantage to random sampling is maintaining the quality of unused facts. Axion users do not read a significant percentage of facts logged to Axion, and we need to make sure that these unused facts are of good quality as they can be used in the future. We have pipelines that randomly read these unused facts and alert when the query does not get the expected output. In terms of impact, these random checks are like winning a lottery you win occasionally, and you never know how big it is.Results from monitoring data qualityWe deployed the above three monitoring approaches more than two years ago, and since then, we have identified more than 95% of data issues early. We have also significantly improved the stability of our customer pipelines. If you want to know more about how we monitor data quality in Axion, you can check our spark summit talk and this podcast.Learnings from Axions evolutionWe learned from designing this fact store to start with a simple design and avoid premature optimizations that add complexity. Pay the storage, network, and compute cost. As the product becomes available to the customers, new use cases will pop up that will be harder to support with a complex design. Once the customers have adopted the product, start looking into optimizations.While keep the design simple is a frequently shared learning in software engineering, it is not always easy to achieve. For example, we learned that our fact logging client can be simple with minimal business logic, but our query client needs to be functionality-rich. Our learning is that if we need to add complexity, add it in the least number of components instead of spreading it out.Another learning is that we should have invested early into a robust testing framework. Unit tests and integration tests only took us so far. We needed scalability testing and performance testing as well. This scalability and performance testing framework helped stabilize the system because, without it, we ran into issues that took us weeks to clean up.Lastly, we learned that we should run data migrations and push the breaking API changes as soon as possible. As more customers adopt Axion, running data migrations and making breaking API changes are becoming harder and harder.Conclusion and future workAxion is our primary data source that is used extensively by all our Personalization ML models for offline feature generation. Given that it ensures that there is no training/serving skew and that it has significantly reduced offline feature generation latencies we are now starting to make it the defacto Fact store for other ML use cases within Netflix.We do have use cases that are not served well with the current design, like bandits, because our current design limits storing a map per row creating a limitation when a compute application needs to log multiple values for the same key. Also, as described in the design, we want to optimize how we store data in EVCache to enable us to store more data.If you are interested in working on similar challenges, join us. | Personalization/Recommendation | Unknown | null | null | null | null | null | null |
news | PR Newswire | Introducing OpenSynthetics: The First Community Hub Focused on Synthetic Data for AI Development | OpenSynthetics, an open community for creating and using synthetic data in AI/ML and computer vision, was launched today to practitioners, researchers... | https://finance.yahoo.com/news/introducing-opensynthetics-first-community-hub-120000418.html | https://s.yimg.com/uu/api/res/1.2/cDznGFjMlqxjqLX0GAYV8w--~B/aD05ODt3PTQwMDthcHBpZD15dGFjaHlvbg--/https://media.zenfs.com/en/prnewswire.com/287c75c6d7bf7e21c61e4eada0b01f8b | 2022-04-14T12:00:00Z | OpenSynthetics is the preeminent community for researchers offering access to synthetic datasets, research, papers, and codeSAN FRANCISCO, April 14, 2022 /PRNewswire/ -- OpenSynthetics, an open community for creating and using synthetic data in AI/ML and computer vision, was launched today to practitioners, researchers, academics, and the wider industry.Synthesis AI, a San Francisco-based technology company, is pioneering the use of synthetic data to build more capable computer vision models. Through a proprietary combination of generative neural network and cinematic CGI pipelines, Synthesis platform can programmatically create vast amounts of perfectly-labeled image data at orders of magnitude increased speed and reduced cost compared to current approaches. (PRNewsfoto/Synthesis AI)OpenSynthetics is the first dedicated community focused on advancing synthetic data technology with centralized access to synthetic datasets, research, papers, and code. Synthetic data, or the use of computer-generated images and simulations used to train computer vision models, is an emerging technology that was recently noted as one of the top 10 breakthrough technologies of 2022 by MIT Technology Review. The first book on Synthetic Data for Deep Learning was also published last year and has seen widespread adoption.Through OpenSynthetics, AI/ML practitioners, regardless of experience, can share tools and techniques for creating and using synthetic data to build more capable AI models. Whether an individual or organization is beginning their synthetic data journey or fully utilizing it in production systems, they will have access to content relevant to their needs and experience. Additionally, OpenSynthetics will serve as a community hub, bringing together academics, practitioners, and researchers to collectively advance the use of synthetic data."Bringing together new and experienced researchers to contribute and share knowledge is an important step and an incredible milestone for the synthetic data industry," said Yashar Behzadi, CEO of Synthesis AI. "The launch of OpenSynthetics comes when synthetic data is at an inflection point and is being leveraged to build more capable and ethical AI models for autonomous vehicles, robotics, drones, the metaverse, and more. By creating a centralized hub of synthetic data resources, we hope to advance synthetic data's role in powering the next generation of computer vision."Current computer vision models are powered by hand-labeled data, which is labor-intensive, costly, time-consuming, and prone to human error and bias. Additionally, the collection of images of people presents privacy concerns. Using synthetic data approaches, labels and data are available on-demand, allowing practitioners to experiment and reducing time spent collecting and annotating data. However, the democratization of synthetic datasets, papers, and resources is needed to educate the industry on this technology and power further use cases.OpenSynthetics welcomes researchers and practitioners across academia and industry to contribute to the site. By contributing and participating, the community will build a knowledge base to help grow the understanding and adoption of this emerging technology.To contribute or learn more about OpenSynthetics, visit https://opensynthetics.com/.About Synthesis AI Synthesis AI, a San Francisco-based technology company, is pioneering the use of synthetic data to build more capable and ethical computer vision models. Through a proprietary combination of generative neural networks and cinematic CGI pipelines, Synthesis' platform can programmatically create vast amounts of perfectly-labeled image data at orders of magnitude increased speed and reduced cost compared to current approaches.View original content to download multimedia:https://www.prnewswire.com/news-releases/introducing-opensynthetics-the-first-community-hub-focused-on-synthetic-data-for-ai-development-301525351.htmlSOURCE Synthesis AI | Content Creation/Content Synthesis | Computer and Mathematical | null | null | null | null | null | null |
news | The Machine Learning Job Market | I was on the job market recently, open to suggestions on what I should do next. I’m deeply thankful to all those who reached out and pitched their companies, and also to people who shared their wisdom of how they wanted to navigate their own careers. | https://evjang.com/2022/04/25/rome.html | 2022-04-25T14:58:25Z | I was on the job market recently, open to suggestions on what I should do next. Im deeply thankful to all those who reached out and pitched their companies, and also to people who shared their wisdom of how they wanted to navigate their own careers.Im pleased to say that Ive joined Halodi Robotics as their VP of AI, and will be hiring for the Bay Area office. Well be doing some real robot demos this week near Palo Alto, so please get in touch with me if youre interested in learning about how we plan to create customer value with deep learning on humanoid robots (1 year), and then solve manipulation (5 years), and then solve AGI (20 years).I suspect that there are many other deep learning researchers in the 2015-2018 vintage that are contemplating similar career moves, so Ill share what I learned in the last month and how I made my career decision in the hopes that this is useful to them. I think that in the next decade well see a lot of software companies increasingly adopt an AGI strategy as a means to make their software more adaptive and generally useful.My only constraints were that I wanted to continue to apply my ML skills at my next job. Here are the table of options I considered. I had chats with directors and founders from each of these companies, but I did not initiate the formal HR interview process with most of them. This is not intended as a flex; just the options I considered and my perceived pros and cons. Im not like one of those kids that gets into all the Ivy League schools at once and gets to pick whatever they want.These are subjective opinions: a mere snapshot of what I believe are the strengths and weaknesses of each option in April 2022. In the hype-driven Silicon Valley, the perceived status of a company can go from rags to riches back to rags within a few years, so this table will most certainly age poorly.OptionProsConsFAANG+similarLow 7 figures compensation (staff level), technological lead on compute (~10 yr), unit economics of research is not an issueThings move slower, less autonomy, OKRs etc.Start my own companyMaximally large action space, blue check mark on TwitterIm more interested in solving AGI than solving customer problemsOpenAITechnological lead on LLMs (~1 yr) + an interesting new project they are spinning upCulture and leadership team seem to be already establishedLarge Language Model StartupStrong teams, transform computing in <10 years, iterate quickly on LLM productsCompeting with FAANG + OpenAI on startup budget; unclear whether LLMs will be defensible technology on their ownTeslaTesla Bot, technological lead on data engine (~2 yrs), technological lead on manufacturing (~10yrs)No waiting in line for coffeeRobotics Startups (including Halodi)Huge moat if successful, opportunity to lead teams. Halodi has technological edge on hardware (~5yrs)Robotics research is slow, Robotics startups tend to dieML + Healthcare StartupsLots of low-hanging fruit for applying research; meaningfully change the worldProduct impact is even slower than robotics due to regulatory capture by hospitals and insurance companies. 10 years before the simplest of ML techniques can be rolled out to people.Startups working on other Applications of MLLots of low-hanging fruit + opportunity to lead teamsIm more interested in solving AGI than solving customer problemsCrypto + DeFiTokenomics is interesting. 60% annual returns at Sharpe 3+ is also interesting.Not really an AGI problem. Crypto community has weird vibesTechnological Lead TimeThe most important deciding factor for me was whether the company has some kind of technological edge years ahead of its competitors. A friend on Googles logging team tells me hes not interested in smaller companies because they are so technologically far behind Googles planetary-scale infra that they havent even begun to fathom the problems that Google is solving now, much less finish solving the problems that Google already worked on a decade ago.In the table above Ive listed companies that I think have unique technology edges. For instance, OpenAI is absolutely crushing it at recruiting right now because they are ahead in Large Language Model algorithms, probably in the form of trade secrets on model surgery and tuning hyperparameters to make scaling laws work. OpenAI has clearly done well with building their technical lead time, despite FAANGs compute superiority.Meanwhile, the average machine learning researcher at FAANG has a 15 year lead-time in raw compute compared to a PhD student, and Google and DeepMind have language models that are probably stronger than GPT-3 on most metrics. There are cases where technological lead on compute is not enough; some researchers left Google because they were unhappy with all the red tape they had to go through to try to launch LLM-based products externally.I seriously considered pivoting my career to work on generative models (i.e. LLMs, Multimodal Foundation Models), because (1) robotics is hard (2) the most impressive case studies in ML generalization always seem to be in generative modeling. Again, think to technological lead times - why would any machine learning researcher want to work at something that isnt at the forefront of generalization capability? However, the pure-generative modeling space feels a bit competitive, with everyone fighting to own the same product and research ideas. The field would probably evolve in the same way with or without me.Having futuristic technology is important for recruiting engineers because many of them dont want to waste years of their life building a capability that someone else already has. To use analogies from other fields of science, it would be like a neuroscience lab trying to recruit PhD students to study monkey brains with patch-clamp experiments when the lab next door is using optogenetic techniques and Neurallink robots. You could reinvent these yourself if youre talented, but is it worth spending precious years of your life on that?Of course, companies are not the same thing as research labs. What matters more in the long run is the product-market fit and the teams ability to build future technological edge. Incumbents can get bloated and veer off course, while upstarts can exploit a different edge or taking the design in a unique direction. Lots of unicorn companies were not first-movers.Why not start your own company?Being a Bay Area native, I thought my next job would be to start my own company around MLOps. I wanted to build a really futuristic data management and labeling system that could be used for AGI + Active Learning. Three things changed my mind:First, I talked to a bunch of customers to understand their ML and data management needs to see if there was product-market fit with what I was building. Many of their actual problems werent at the cutting edge of technology, and I simply couldnt get excited about problems like building simulators for marketing campaigns or making better pose estimators for pick-and-place in factories or ranking content in user feeds. The vast majority of businesses solve boring-but-important problems. I want my lifes work to be about creating much bigger technological leaps for humanity.Secondly, I think its rare for CEOs to contribute anything technically impressive after their company crosses a $100M valuation. If they do their job well, they invariably spend the rest of my life dealing with coordination, product, and company-level problems. They accumulate incredible social access and leverage and might even submit some code from time to time, but their daily schedule is full of so much bullshit that they will never productively tinker again. This happens to senior researchers too. This is profoundly scary to me. From Richard Hammings You and Your Research: In the first place if you do some good work you will find yourself on all kinds of committees and unable to do any more workLegend has it that Ken Thompson wrote the UNIX operating system when his wife went on a month-long vacation, giving him time to focus on deep work. The Murder of Wilbur writes, How terrifying would it be if that was true? Is it possible that Thompson was burdened by responsibilities his entire life, and then in a brief moment of freedom did some of the most important work anyone has ever done?Thirdly, Halodi has built pretty awesome technology and theyve given me a rare opportunity to live in the future, building on top of something that is 5+ years ahead of its time. Im very impressed by Bernts (the CEO) respect for human anatomy: from the intrinsic passive intelligence of overdamped systems that makes us able to grasp without precise planning, to the spring systems in our feet that let us walk across variable terrain while barely expending energy. We both share the belief that rather than humanoid robots being overkill for tackling most tasks, it is the only form that can work when you want to design the world around humans rather than machines.All Roads Lead to RomeA few months ago I asked Ilya Sutskever whether it made more sense to start a pure-play AGI research lab (like OpenAI, DeepMind) or to build a profitable technology business that, as a side effect, would generate the data moat needed to build an AGI. In his provocative-yet-prescient fashion, Ilya said to me: All Roads Lead to Rome - Every Successful (Tech) Company will be an AGI company.This sounds a bit unhinged at first, until you remember that repeatedly improving a product by the same delta involves exponentially harder technology.In semiconductor manufacturing, shrinking from 32nm to 14nm process nodes is pretty hard, but going from 14nm to 7nm process nodes is insanely hard, requiring you to solve intermediate problems like creating ultrapure water.Creating a simple Text-to-Speech system for ALS patients was already possible in the 1980s, but improving pronunciation for edge cases and handling inflection naturally took tremendous breakthroughs in deep learning.A decent character-level language model can be trained on a single computer, but shaving a few bits of entropy off conditional character modeling requires metaphorically lighting datacenters on fire.Autonomous highway driving is not too hard, but autonomously driving through all residential roads at a L5 level is considered by many to be AGI-complete.In order to continue adding marginal value to the customer in the coming decades, companies are going to have to get used to solving some really hard problems. Perhaps eventually everyone converges to solving the same hard problem, Artificial General Intelligence (AGI) just so they can make a competitive short video app or To-Do list or grammar checker. We can quibble about what AGI means and what time frame it would take for all companies to converge to this, but I suspect that Foundation Models will soon be table stakes for many software products. Russell Kaplan has shared some similar ideas on this as well.I also wonder if in a few years, expertise on losslessly compressing large amounts of Internet-scale data will cease to become a defensible moat between technologically advanced players (FAANG). It therefore makes sense to look for auxiliary data and business moats to stack onto large-scale ML expertise. There are many roads one can take here to AGI, which I have sketched out below for some large players:For instance, Alphabet has so much valuable search engine data capturing human thought and curiosity. Meta records a lot of social intelligence data and personality traits. If they so desired, they could harvest Oculus controller interactions to create trajectories of human behavior, then parlay that knowledge into robotics later on. TikTok has recommendation algorithms that probably understand our subconscious selves better than we understand ourselves. Even random-ass companies like Grammarly and Slack and Riot Games have a unique data moats for human intelligence. Each of these companies could use their business data as a wedge to creating general intelligence, by behavior-cloning human thought and desire itself.The moat I am personally betting on (by joining Halodi) is a humanoid robot that is 5 years ahead of what anyone else has. If your endgame is to build a Foundation Model that train on embodied real-world data, having a real robot that can visit every state and every affordance a human can visit is a tremendous advantage. Halodi has it already, and Tesla is working on theirs. My main priority at Halodi will be initially to train models to solve specific customer problems in mobile manipulation, but also to set the roadmap for AGI: how compressing large amounts of embodied, first-person data from a human-shaped form can give rise to things like general intelligence, theory of mind, and sense of self.Embodied AI and robotics research has lost some of its luster in recent years, given that large language models can now explain jokes while robots are still doing pick-and-place with unacceptable success rates. But it might be worth taking a contrarian bet that training on the world of bits is not enough, and that Moravecs Paradox is not a paradox at all, but rather a consequence of us not having solved the bulk of intelligence.Reality has a surprising amount of detail, and I believe that embodied humanoids can be used to index that all that untapped detail into data. Just as web crawlers index the world of bits, humanoid robots will index the world of atoms. If embodiment does end up being a bottleneck for Foundation Models to realize their potential, then a humanoid robot companies will stand to win everything.Want Intros to ML Startups?In the course of talking to many companies and advisors over the last month, I learned that there are so, so many interesting startups tackling hard ML problems. Most of them are applied research labs trying to solve interesting problems, and a few of them have charted their own road to Rome (AGI).Early in your career it makes a lot of sense to surround yourself with really great mentors and researchers, such as that of an industry research lab. Later on, you might want to bring your experience to a startup to build the next generation of products. If this describes you, Id be happy to connect you to these opportunities - just shoot me an email with (1) where you are on the pure research vs. applied research spectrum (2) what type of problems you want to work on (Healthcare, Robotics, etc.) (3) the hardest you ever worked on a project, and why you cared about it (4) your resume. If you have the skill set Im looking for, I may also recruit you to Halodi .Honest ConcernsI have some genuine concerns with Halodi (and AGI startups in general). History tells us the mortality rate of robotics companies is very high, and Im not aware of any general-purpose robot company that has ever succeeded. There is a tendency for robotics companies to start off with the mission of general-purpose robots and then rapidly specialize into something boring as the bean counters get impatient. Boston Dynamics, Kindred, Teleexistence - the list goes on and on. As in business and life, the forces of capitalism and evolution conspire to favor specialization of hardware over generalization of intelligence. I pray that does not happen to us.Im reminded of Gwerns essay on timing: Launching too early means failure, but being conservative & launching later is just as bad because regardless of forecasting, a good idea will draw overly-optimistic researchers or entrepreneurs to it like moths to a flame: all get immolated but the one with the dumb luck to kiss the flame at the perfect instant, who then wins everything, at which point everyone can see that the optimal time is past.But I also remind myself of what Richard Hamming said about Claude Shannon:He wants to create a method of coding, but he doesnt know what to do so he makes a random code. Then he is stuck. And then he asks the impossible question, What would the average random code do? He then proves that the average code is arbitrarily good, and that therefore there must be at least one good code. Who but a man of infinite courage could have dared to think those thoughts?Life is too short to attempt anything less than that which takes infinite courage. LFG. | Unknown | Management/Business and Financial Operations | null | null | null | null | null | null |
||
news | Mike | Lanner Electronics Launches Falcon H8 PCIe AI Accelerator Card, Powered by Hailo-8™AI Processors | Lanner Electronics & Hailo collaborate on one of the most cost-efficient PCIe accelerator cards on the...The post Lanner Electronics Launches Falcon H8 PCIe AI Accelerator Card, Powered by Hailo-8™AI Processors appeared first on Electronics-Lab.com. | https://www.electronics-lab.com/lanner-electronics-launches-falcon-h8-pcie-ai-accelerator-card-powered-by-hailo-8ai-processors/ | 2022-05-12T04:52:54Z | Lanner Electronics & Hailo collaborate on one of the most cost-efficient PCIe accelerator cards on the market, with record high tera operations per second (TOPS), enabling high-end deep learning applications on edge serversLanner Electronics, a global leader in the design and manufacturing of intelligent edge computing appliances, announced its first Hailo-8™AI-powered PCIe accelerator card, the Falcon H8. Lanner Electronics collaborated with leading AI (Artificial Intelligence) chipmaker Hailo to design the Falcon H8, enabling scalable and powerful intelligent video analytics applications for multiple industries operating at the edge, including intelligent transport systems (ITS), smart cities, smart retail, and Industry 4.0. The Falcon H8 is one of the most cost-efficient PCIe AI accelerator cards on the market, with low power consumption and a record high of up to 156 tera operations per second (TOPS) to allow high-end deep learning applications on edge servers.Lanners Falcon H8 modular, PCIe FHHL form factor provides a compact and easily deployable solution for engineers looking to offload CPU loading for low-latency deep learning inference. With high-density AI processors, the Falcon H8 accommodates 4, 5, or 6 Hailo-8™ AI processors, offering a modular, cost-effective Edge AI solution with high processing capabilities and power efficiency. Through a standard PCIe interface, the Falcon H8 AI Accelerator Card enables legacy devices such as NVRs, Edge AI boxes, Industrial PCs and robots to run video-intensive, mission-critical Edge AI applications such as video analytics, traffic management, access control, and beyond.The Falcon H8 delivers unprecedented inference processing of over 15,000 Frames Per Second (FPS) for MobileNet-v2 and 8,000 FPS for ResNet-50. Its performance is up to 4x more cost effective (TOPS/$) and 2x more power efficient (TOPS/W) compared to leading GPU-based solutions.Optimized for AI functionality, performance, and ease of deployment, Lanner is pleased to partner with Hailo to design a next-gen AI accelerator card that brings top-performing AI computing to the edge of industrial IoT, said Jeans Tseng, CTO of Lanner Electronics. Our expertise in creating high-density hardware platforms, combined with Hailos state-of-the-art neural chip and software framework, provides service providers and system integrators a best-in-class AI accelerator that enables running deep learning applications most efficiently with the lowest total cost of ownership.The integration of Lanners Falcon H8 and the Hailo-8 provides unmatched AI performance at the edge. This joint solution is more powerful, scalable, and cost-effective than other solutions available on the market today, said Orr Danon, CEO and Co-Founder of Hailo. Our collaboration with Lanner will better power edge devices across industries, including transportation, smart cities, smart retail, industrial IoT, and more.Lanner Electronics and Hailo first announced their collaboration in 2021, launching groundbreaking AI inference solutions for real-time computer vision at the edge. Several Tier-1 customers have since adopted the companies groundbreaking platforms. | Unknown | Unknown | null | null | null | null | null | null |
|
news | Python Concurrency with asyncio | Python is flexible, versatile, and easy to learn. It can also be very slow compared to lower-level languages. Python Concurrency with asyncio teaches you how to boost Python's performance by applying a variety of concurrency techniques. You'll learn how the complex-but-powerful asyncio library can achieve concurrency with just a single thread and use asyncio's APIs to run multiple web requests and database queries simultaneously. The book covers using asyncio with the entire Python concurrency landscape, including multiprocessing and multithreading.It's easy to overload standard Python and watch your programs slow to a crawl. Th e asyncio library was built to solve these problems by making it easy to divide and schedule tasks. It seamlessly handles multiple operations concurrently, leading to apps that are lightning fast and scalable.Python Concurrency with asyncio introduces asynchronous, parallel, and concurrent programming through hands-on Python exam ... | https://it-ebooks.info/book/1650965491-python_concurrency_with_asyncio/ | 2022-04-26T14:28:53Z | Python is flexible, versatile, and easy to learn. It can also be very slow compared to lower-level languages. Python Concurrency with asyncio teaches you how to boost Python's performance by applying a variety of concurrency techniques. You'll learn how the complex-but-powerful asyncio library can achieve concurrency with just a single thread and use asyncio's APIs to run multiple web requests and database queries simultaneously. The book covers using asyncio with the entire Python concurrency landscape, including multiprocessing and multithreading.It's easy to overload standard Python and watch your programs slow to a crawl. Th e asyncio library was built to solve these problems by making it easy to divide and schedule tasks. It seamlessly handles multiple operations concurrently, leading to apps that are lightning fast and scalable.Python Concurrency with asyncio introduces asynchronous, parallel, and concurrent programming through hands-on Python examples. Hard-to-grok concurrency topics are broken down into simple flowcharts that make it easy to see how your tasks are running. You'll learn how to overcome the limitations of Python using asyncio to speed up slow web servers and microservices. You'll even combine asyncio with traditional multiprocessing techniques for huge improvements to performance. | Digital Assistance/Content Creation/Information Retrieval Or Search | Computer and Mathematical | null | null | null | null | null | null |
||
news | Maria Deutscher | Newly launched AI startup Inflection AI raises $225M in funding | Artificial intelligence startup Inflection AI Inc. has raised $225 million in funding, TechCrunch reported today. Inflection AI detailed the funding in a filing with the U.S. Securities and Exchange Commission. The Palo Alto, California-based startup specified that the financing was provided in the form of an equity investment, but didn’t disclose the investors who provided […]The post Newly launched AI startup Inflection AI raises $225M in funding appeared first on SiliconANGLE. | https://siliconangle.com/2022/05/13/newly-launched-ai-startup-inflection-ai-raises-225m-funding/ | 2022-05-13T19:22:31Z | Artificial intelligence startup Inflection AI Inc. has raised $225 million in funding, TechCrunch reported today.Inflection AI detailed the funding in a filing with the U.S. Securities and Exchange Commission. The Palo Alto, California-based startup specified that the financing was provided in the form of an equity investment, but didnt disclose the investors who provided the capital. It was previously reported that Inflection AI is backed by Greylock Partners. Greylock Partners is one of Silicon Valleys most prominent venture capital firms, having backed major tech firms such as Airbnb Inc. and Palo Alto Networks Inc. over the years. Its unclear whether the firm participated in Inflection AIs newly disclosed $225 million round.Besides providing funding, Greylock reportedly also plans to assist Inflection AI in other areas. The venture capital firm will reportedly assist the startup with go-to-market initiatives and recruiting. Inflection AI was launched this past March by Mustafa Suleyman, the co-founder of Alphabet Inc.s DeepMind machine learning research unit, and LinkedIn co-founder Reid Hoffman. The startups team includes AI experts who previously worked at Meta Platforms Inc., DeepMind and DeepMind sister company Google LLC. Inflection AI is developing AI software that will make it easier for humans to communicate with computers, CNBC reported in March. Additionally, the startup reportedly envisions using AI to generate language to pretty much human-level performance. Some advanced neural networks, such as OpenAI LLCs GPT-3, can generate natural-language text in response to user prompts. On its website, Inflection AI states that throughout the history of computing, humans have had to learn to speak the language of machines. In the new paradigm, machines will understand our language.Recent advances in artificial intelligence promise to fundamentally redefine human-machine interaction, Inflection AI states on its website. We will soon have the ability to relay our thoughts and ideas to computers using the same natural, conversational language we use to communicate with people.DeepMind, the Alphabet subsidiary where several of Inflection AIs team members worked before launching the startup, has developed multiple neural networks optimized to process natural language text. One such neural network, Ithaca, was detailed by DeepMind a few weeks ago. Ithaca can restore the missing text of damaged inscriptions from ancient Greece, as well as estimate where and when they were created.DeepMinds AI research also spans other areas. The Alphabet unit made headlines in 2016 when its AlphaGo system became the first neural network to win a game of Go against a human. Since then, DeepMind has developed two newer, more advanced versions of AlphaGo that can learn to perform certain tasks without instructions from developers.Show your support for our mission by joining our Cube Club and Cube Event Community of experts. Join the community that includes Amazon Web Services and Amazon.com CEO Andy Jassy, Dell Technologies founder and CEO Michael Dell, Intel CEO Pat Gelsinger and many more luminaries and experts. | Digital Assistance/Content Creation | Unknown | null | null | null | null | null | null |
|
news | Bill Doerrfeld | Smoothing the Transition From REST to GraphQL | Since its open source release in 2015, GraphQL has taken the development world by storm. GraphQL, the query language originally developed by Facebook, is a big boon for frontend developers, as it enables you to grab the exact data you need with a single request. This solves the problems of over-fetching and under-fetching, which classical […]The post Smoothing the Transition From REST to GraphQL appeared first on DevOps.com. | https://devops.com/smoothing-the-transition-from-rest-to-graphql/ | 2022-05-05T11:30:52Z | Since its open source release in 2015, GraphQL has taken the development world by storm. GraphQL, the query language originally developed by Facebook, is a big boon for frontend developers, as it enables you to grab the exact data you need with a single request. This solves the problems of over-fetching and under-fetching, which classical REST APIs are prone to. GraphQL also offers a nice developer experience with a consistent schema and documentation. Some are even treating GraphQL as a convenient meta layer to aggregate multiple underlying services and abstract their interfaces with simple commands.For these reasons and more, many development teams are considering switching from REST to GraphQL. Even enterprises are beginning to use GraphQL in large production environments. But, how can these teams make the switch without breaking existing integrations? How can we ensure that things like developer experience, security, and access control all remain stable?I recently met with Anant Jhingran, the co-founder of StepZen, to discover tips for transitioning from REST to GraphQL. According to Jhingran, most organizations see the benefit in adopting GraphQL but are hesitant to forgo years of investment into RESTful architectures. But according to Jhingran, it’s rarely a case of “either/or”more often, he sees projects running both styles simultaneously or layering one on top of the other. Below, we’ll explore some things to keep in mind as you invest in GraphQL.The Decade of DevelopersWe’ve reached ubiquity of web APIs among companies big and small. “APIs have turned out to be phenomenal for software supply chains,” said Jhingran, the former CTO of Apigee. They help share data between partners and can enable platform ecosystems to thrive. Jhingran describes how within the last decade, APIs and API management have peaked to near-ubiquity in terms of adoption.But not all APIs behave the same way. Although representational state transfer (REST) is the most popular style, it’s still quite a general architectural guideline for web APIshow the data comes out of each service is still highly nuanced. The structure of requests and responses are different from service to service, causing developer experience hangups at scale.GraphQL, on the other hand, “is a fantastic way of structuring data,” said Jhingran. With a single call, you can return the explicit information that the client requires in an expected format. For an e-commerce application, for example, this might be customer information, order numbers and delivery status. Displaying this all with one call is an awesome experience for frontend developers. And, GraphQL APIs are easy to explore and navigate since the schema is baked in. This improves upon REST APIs, where documentation may not always match the actual production implementation.“This decade is the decade of developers,” said Jhingran. Satisfying developer experience has become more mission-critical as enterprises work to increase their footprint in the cloud marketplace and GraphQL is part of this movement. “For people who build apps, it’s just better,” said Jhingran.But just because frontend teams clamor for the ease of use of GraphQL doesn’t mean backend teams can easily restructure their entire framework around it. Doing it right requires many skills, and the journey can be arduous, explained Jhingran.It’s Rarely REST Vs. GraphQLREST and GraphQL are often pitted against one another to the point that they seem like polar opposites. But in practice, teams often use GraphQL to build on top of APIs already in place. Jhingran describes the most common pattern as “GraphQL comes in as a layer that connects the dots between various REST APIs.” Either GraphQL is consumed directly or as a REST layer above GraphQL. “It could go either way,” he said.GraphQL is a fantastic data infrastructure tier as it can mix and match database backends. It could, in essence, become the new plumbing upon which applications are constructed. But while GraphQL may be excellent for internal construction, the name of the game is stability when externalizing services to the outside world. With REST, the big goal is to decrease broken clients and ensure high-grade security for each resource. Therefore, Jhingran foresees certain use cases where using a REST wrapper around GraphQL services to deliver fixed JSON responses would be preferred.Another point is that GraphQL is not only relegated to appeasing frontend developersit has some backend potential, too. “GraphQL is an opportunity to reimagine frontend and GraphQL is an opportunity to reimagine backend infrastructure,” said Jhingran. This concept has weight nowadays, as Netflix engineers, for example, use GraphQL as a backend for microservices architecture.Making the Transition From REST to GraphQLNot all apps are going to be transformed on day one. Teams may encounter cultural resistance to new technologies. Interface redesigns, especially for those external apps which are exposed to third parties, must also require extra precaution to maintain app compatibility to avoid breaking change.Thus, it may take time to transition effectively. “Getting it started versus getting it right are not the same thing,” said Jhingran. He said people who get started with GraphQL may struggle for months due to competing best practices and tools. For example, Jhingran identified two main perspectives when approaching GraphQL: Programming and declarative.Programming: Teams use GraphQL libraries such as Apollo to program their way out of your GraphQL endpoint. In this approach, you just start writing. It’s quick to get started, said Jhingran, but you end up describing everything you do and encountering many contingencies along the way.Declarative: The other option is a more declarative approachinstead of telling the system how, you tell it what to do. For example, this subsystem should do X, this subsystem should do Y and so on. According to Jhingran, adopting tooling with declarative snippets to create GraphQL endpoints is “better in the short and long run.”GraphQL SecurityA March 2022 report from Salt Labs found a shocking 681% increase in API-based cyberattacks over the previous 12 months. There are many concerns surrounding API security right nowthese valuable endpoints are prone to broken access control, data overexposure and over-permissive states.As I’ve covered previously, GraphQL security is a bit nuanced compared to REST. Within the REST paradigm, you can easily assign permissions and rate limit calls per resource. But with GraphQL, it’s not that simple. Since you can batch multiple queries, callers have much more power at their fingertips, and hackers might abuse such power if the proper authorization isn’t in place.Companies may rightly worry about opening another can of worms with GraphQL. But according to Jhingran, these worries are all solvable. A comprehensive GraphQL endpoint will likely require another layer to uniquely segment authorization to retain a least-privilege model for all resources and backend endpoints.Future of GraphQLTo date, GraphQL has gained modest prominence. But, Jhingran foresees an incoming inflection point in the next two to three years. “The dominoes are beginning to fall,” he said.Now that the market is saturated with APIs, characteristics outside of functionality will be what sets a service apart and gives it a competitive edge. This includes things like performance, security and developer experience, the latter of which makes GraphQL an attractive proposition. This, paired with the explosion in GraphQL tools, will undoubtedly help developers perform their jobs better.Looking to the future, Jhingran predicts increased usability and abstraction in how developers work with GraphQL. He compared it to how disjointed pages on the web became interconnected and searchable with links. We see similar thinking happening in the world of APIs, he explained, in which the disjointed APIs require a better connection. Similar to how Google simplified search, he foresees GraphQL and the tools that work with it helping to decrease cognitive load by abstracting complexity with declarative commands. | Digital Assistance/Content Synthesis | Unknown | null | null | null | null | null | null |
|
news | Intel Announces New Cloud-to-Edge Technologies to Solve Challenges of Today and Tomorrow | GRAPEVINE, Texas--(BUSINESS WIRE)--At its inaugural Intel Vision event on May 10, 2022, Intel announced advancements across silicon, software and services. | https://www.businesswire.com/news/home/20220510005669/en/Intel-Announces-New-Cloud-to-Edge-Technologies-to-Solve-Challenges-of-Today-and-Tomorrow | 2022-05-10T14:25:03Z | GRAPEVINE, Texas--(BUSINESS WIRE)--Today at its inaugural Intel Vision event, Intel announced advancements across silicon, software and services, showcasing how it brings together technologies and the ecosystem to unlock business value for customers today and in the future. Among the benefits highlighted across real-world examples are improved business results and insights, reduced total cost of ownership, accelerated time to market and value, and positive global impact.More:Intel Vision 2022 (Press Kit) | Intel Vision 2022 Keynote (Livestream/Replay) | Intel Vision 2022: Day 1 Keynote (Live Blog) | Intels Habana Labs Launches Second-Generation AI Processors for Training and Inferencing (News)We are in the most dynamic global market of our lifetime. The challenges organizations face today are complex and interconnected, and success depends on their ability to quickly adopt and maximize leading-edge technologies and infrastructure, said Intel CEO Pat Gelsinger. Today, we are excited to share how we are applying our scale, resources and the magic of silicon, software and services to help customers and partners accelerate digital transformations in this complex environment.New Silicon, Software and Services to Fuel Digital TransformationsThe technology superpowers of artificial intelligence, ubiquitous computing, pervasive connectivity and cloud-to-edge infrastructure are fueling unprecedented demand for semiconductors and opening the door to infinite possibilities, from a truly hybrid workforce to entirely new immersive experiences. At the same time, businesses face growing pressures around supply chain, security, sustainability and the ability to adapt to the complexity of new workloads. Intel is working to help address these challenges with the introduction of new hardware, software and services from cloud to edge to client.Todays announcements include:A major leap in deep learning processing with Habana Gaudi2: Gaudi processors are used for the highest end deep learning AI training and are known for their ability to enable customers to train more and pay less. Launching today, Habana Gaudi2 and Greco AI accelerators are built on a single software stack, Synapse AI, that easily supports different architectures, enabling end-users to take advantage of the processors performance and efficiency. In addition, Gaudi2 delivers two times better AI training performance compared with current in-market A100-based offerings for key vision and NLP workloads1.4th Gen Intel Xeon Scalable sets a new standard for modern performance: Intel is shipping initial SKUs of 4th Gen Intel Xeon Scalable processors (code-named Sapphire Rapids) today. These are the first of many SKUs, with more due to ramp throughout the remainder of the year. The 4th Gen Intel Xeon Scalable processors deliver exceptional overall performance, will support DDR5, PCIe Gen5 and CXL 1.1, and are equipped with new integrated accelerators that deliver up to 30x performance versus the prior generation through software and hardware optimizations for AI workloads2. For telco networks it also has new capabilities that deliver up to two times3 capacity gains for virtual radio access network (vRAN) deployments. In high performance computing, Intel Xeon processors, code-named Sapphire Rapids with high bandwidth memory (HBM), will dramatically boost memory bandwidth available to the processor, super-charging high-performance computing.AI made more accessible for enterprises through Project Apollo: In partnership with Accenture, Intel kicked off Project Apollo, a program that will provide enterprises with more than 30 open-source AI solutions kits that are optimally designed to make AI more accessible to customers in on-prem, cloud and edge environments. The first Project Apollo kits will be released in the next few months.Preparing for the data center of the future with IPUs: Intel unveiled its IPU roadmap extending through 2026, featuring new FPGA + Intel architecture platforms (code-named Hot Springs Canyon) and the Mount Morgan (MMG) ASIC, as well as next-generation 800GB products. IPUs are dedicated products with hardened acceleration for infrastructure compute needs, allowing businesses to accomplish tasks quicker and solve problems faster.Single GPU solution for media transcode, visual graphics and inference in the cloud: Intels data center GPU, code-named Arctic Sound-M (ATS-M), is the industrys first discrete GPU with an AV1 hardware encoder. ATS-M is a versatile GPU with leadership transcode quality and performance targeting 150 trillion operations per second (TOPS). Developers will be able to easily design for ATS-M with an open software stack through oneAPI. ATS-M will be available in two form factors and in more than 15 system designs from partners including Dell Technologies, Supermicro,, Inspur, and H3C . It will launch in 2022s third quarter.New 12th Gen Intel Core HX processors for hybrid work: The company completed the 12th Gen family with the launch of the new 12th Gen Intel Core HX processors. Created for professionals who need maximum performance and flexibility to navigate a hybrid environment, and with up to 16 cores and clock speeds up to 5 GHz, the Intel Core i9-12900HX processor is the worlds best mobile workstation platform4.Recognizing that users also want the flexibility to tap into compute resources when and where they need them,Intel provided a first concept demonstration of its software infrastructure initiative: Project Endgame. Applications can take advantage of this software infrastructure layer that enables devices to harness computing resources from other devices within the network to provide an always-available, low latency, continual compute service. For example, a demanding GPU workload running on one device can sense and tap into additional graphics processing horsepower from a more powerful machine to enhance the users experience. Project Endgame is in development, and Intel will begin beta testing the first milestones of the technology this year.Todays announcements also included an early look at steps Intel is taking to enable service models across the ecosystem. The introduction of the Intel On Demandservice offeringhelps enterprises meet the needs of evolving workloads, product sustainability and opportunities to scale systems near the data. Currently offered through select partners HPE GreenLake, Lenovo TruScale and PhoenixNAPs Bare Metal Cloud, Intel introduced a new consumption business model to enable customers to align their infrastructure with their business needs and requirements.Working Together to Create World-Changing TechnologyThe power and positive impact of Intels broad portfolio truly comes to lifethrough examples that show not only how the hardware, software and services work together, but also the deep way that Intel collaborates with customers, partners and the ecosystem.Highlights from today include:High performance computing to solve the worlds most complex challenges: Argonne National Laboratories is on track to deliver 2 exaflops of peak performance with the Aurora supercomputer running on the Intel Xeon processor, code-named Sapphire Rapids with High Bandwidth Memory (HBM), and the Intel data center GPU, code-named Ponte Vecchio, with Intel oneAPI providing developers seamless system integration. During todays opening keynote, Rick Stevens, laboratory director for Computing, Environment and Life Science of Argonne, showed the installation of the Aurora supercomputer for the first time, discussing how it will help solve some of humankinds most complex problems, such as more accurate climate predictions and discovery of new cancer treatments, while making exascale accessible for more research and development, and innovation.Confidence with confidential computing: In an increasingly dynamic regulatory landscape, global companies must address several considerations when determining how to use regulated data to effectively train and develop neural networks. Bosch and Intel collaborated on a research effort to develop a confidential AI solution that allows Bosch to train its neural networks confidentially in the public cloud. To help achieve this at scale, Bosch Corporate Research has built a confidential AI platform powered by Intel® Software Guard Extensions available with 3rd Gen Intel® Xeon® Scalable platforms.Agriculture autonomy with private wireless networks: Intelligent edge solutions have the potential to transform food cultivation by helping farmers increase yields and operational efficiency while addressing labor shortages and human error. Data analytics also offer the opportunity to deliver insights that help farmers to increase yields and improve crop health while reducing the resources they require. Blue White Robotics developed a new type of autonomous agricultural solution that transforms a growers existing equipment into a fleet of autonomous tractors connected to an internet-based management platform. With help from Intel and Federated Wireless, Blue White Robotics made this a scalable solution that leverages Intel® Smart Edge and Intel® Xeon® D processors, and employs the power of edge computing and shared spectrum to create a private wireless network on any farm anywhere.Frictionless retail experience: The pandemic has changed the way people want to shop, with many preferring stores with touchless or self-checkout options. Nourish + Bloom Market set out to design a frictionless shopping experience that embraced automation without replacing jobs. To accomplish this, Nourish + Bloom worked with Intel and UST, a leading transformation solutions company, to tap into the collective technical knowledge for building innovations like a next-generation self-checkout using computer vision technology and a fully autonomous store shopping experience.Tech for good: Intel works across the ecosystem to drive positive global change for future generations, such as working to further reduce its direct and indirect greenhouse gas emissions and ensuring the brightest future and next-generation skills for tomorrows workforce through programs like Intels AI Festival and collaboration with the Hidden Genius Project and Autodesk.This is just the beginning of news from Intel Vision. Tune in at 9 a.m. CDT (7 a.m. PDT) Wednesday to hear more from Intel chief technology officer Greg Lavender on how Intel is helping address the growing security needs of businesses, plus even more.About IntelIntel (Nasdaq: INTC) is an industry leader, creating world-changing technology that enables global progress and enriches lives. Inspired by Moores Law, we continuously work to advance the design and manufacturing of semiconductors to help address our customers greatest challenges. By embedding intelligence in the cloud, network, edge and every kind of computing device, we unleash the potential of data to transform business and society for the better. To learn more about Intels innovations, go to newsroom.intel.com and intel.com.1 See the Vision page at www.Intel.com/PerformanceIndex for workloads and configurations. Results may vary.2 For workloads and configurations visit www.Intel.com/PerformanceIndex. Click on the Events tab and Innovation Event Claims. Results may vary.3 Claim estimated as of Feb. 20, 2022, based on Sapphire Rapids architecture improvements versus 3rd generation Intel Xeon Scalable at similar core counts on a test scenario using FlexRAN software. Results may vary.4 Based on performance estimated with measurements on 12th Gen Intel Core i9-12900HX with RTX 3080Ti against Intel Core i9-11980HK with RTX 3080, Intel Core i9-12900HK with RTX 3080Ti, AMD Ryzen 9 6900HX with RTX 3060, AMD Ryzen 9 6900HS with Radeon 6700S, Intel Core i7-12700H with RTX 3050Ti and Apple M1 Max MacBook Pro with 32 core integrated GPU. Best available compilers selected for all processors. Binaries compiled with ICC for Intel/AMD, binaries compiled with Xcode 13.1 for Apple. The metric used is the geometric mean of C/C++ integer benchmarks in SPEC*int_rate_base2017 2021.2 LLVM (1-copy) and SPEC*int_rate_base2017 2021.2 LLVM (n-copy). See www.intel.com/PerformanceIndex for additional workload and configuration details. Results may vary. Other names and brands may be claimed as the property of others.© Intel Corporation. Intel, the Intel logo and other Intel marks are trademarks of Intel Corporation or its subsidiaries. Other names and brands may be claimed as the property of others. | Unknown | Unknown | null | null | null | null | null | null |
||
news | PR Newswire | Lanner Electronics Launches Falcon H8 PCIe AI Accelerator Card, Powered by Hailo-8™AI Processors | Lanner Electronics, a global leader in the design and manufacturing of intelligent edge computing appliances, announced its first Hailo-8™AI-powered PCIe... | https://finance.yahoo.com/news/lanner-electronics-launches-falcon-h8-124200567.html | https://s.yimg.com/uu/api/res/1.2/e4zfwJerT9xCVZTOs5e_rQ--~B/aD0xNjt3PTE2O2FwcGlkPXl0YWNoeW9u/https://media.zenfs.com/en/prnewswire.com/578f0706fc93b1e2d6ea1e1012709d7e | 2022-05-10T12:42:00Z | Lanner Electronics & Hailo collaborate on one of the most cost-efficient PCIe accelerator cards on the market, with record high tera operations per second (TOPS), enabling high-end deep learning applications on edge serversNEW TAIPEI, Taiwan and TEL AVIV, Israel, May 10, 2022 /PRNewswire/ -- Lanner Electronics, a global leader in the design and manufacturing of intelligent edge computing appliances, announced its first Hailo-8AI-powered PCIe accelerator card, the Falcon H8. Lanner collaborated with leading AI (Artificial Intelligence) chipmaker Hailo to design the Falcon H8, enabling scalable and powerful intelligent video analytics applications for multiple industries operating at the edge, including intelligent transport systems (ITS), smart cities, smart retail, and Industry 4.0. The Falcon H8 is one of the most cost-efficient PCIe AI accelerator cards on the market, with a low power consumption and record high of up to 156 tera operations per second (TOPS) to allow high-end deep learning applications on edge servers.Lanner's Falcon H8 modular, PCIe FHHL form factor provides a compact and easily deployable solution for engineers looking to offload CPU loading for low-latency deep learning inference. With high-density AI processors, the Falcon H8 accommodates 4, 5, or 6 Hailo-8 AI processors, offering a modular, cost-effective Edge AI solution with high processing capabilities and power efficiency. Through a standard PCIe interface, the Falcon H8 AI Accelerator Card enables legacy devices such as NVRs, Edge AI boxes, Industrial PCs and robots to run video-intensive, mission-critical Edge AI applications such as video analytics, traffic management, access control, and beyond.The Falcon H8 delivers unprecedented inference processing of over 15,000 Frames Per Second (FPS) for MobileNet-v2 and 8,000 FPS for ResNet-50. Its performance is up to 4x more cost effective (TOPS/$) and 2x more power efficient (TOPS/W) compared to leading GPU-based solutions."Optimized for AI functionality, performance, and ease of deployment, Lanner is pleased to partner with Hailo to design a next-gen AI accelerator card that brings top-performing AI computing to the edge of industrial IoT," said Jeans Tseng, CTO of Lanner Electronics. "Our expertise in creating high-density hardware platforms, combined with Hailo's state-of-the-art neural chip and software framework, provides service providers and system integrators a best-in-class AI accelerator that enables running deep learning applications most efficiently with the lowest total cost of ownership.""The integration of Lanner's Falcon H8 and the Hailo-8 provides unmatched AI performance at the edge. This joint solution is more powerful, scalable, and cost-effective than other solutions available on the market today," said Orr Danon, CEO and Co-Founder of Hailo. "Our collaboration with Lanner will better power edge devices across industries, including transportation, smart cities, smart retail, industrial IoT, and more."About Lanner ElectronicsLanner Electronics is a world-leading hardware provider for advanced network appliances, ruggedized edge AI appliances. Lanner's Edge AI hardware platforms bring proven reliability, with a purpose-built design that can withstand the distinct challenges of the industrial edge and enable mission-critical applications such as video analytics, traffic management, access control, and beyond.About HailoHailo, an AI-focused, Israel-based chipmaker, has developed a specialized Artificial Intelligence (AI) processor that delivers the performance of a data center-class computer to edge devices. Hailo's AI processor reimagines traditional computer architecture, enabling smart devices to perform sophisticated deep learning tasks such as object detection and segmentation in real time, with minimal power consumption, size, and cost. Supported by its Hailo-8 M.2 and Mini PCIe high-performance AI acceleration modules, the deep learning processor is designed to fit into a multitude of smart machines and devices, impacting a wide variety of sectors including automotive, industry 4.0, smart cities, smart homes, and retail.Press ContactsLanner ElectronicsBrian ChenMarketing [email protected]+886-2-8692-6060HailoGarrett KrivicichHeadline Media [email protected]+1 786 233 7684View original content to download multimedia:https://www.prnewswire.com/news-releases/lanner-electronics-launches-falcon-h8-pcie-ai-accelerator-card-powered-by-hailo-8ai-processors-301543753.htmlSOURCE Lanner Electronics, Inc | Unknown | Computer and Mathematical/Architecture and Engineering | null | null | null | null | null | null |
news | Vaclav Kosar | Google's Pathways Language Model and Chain-of-Thought | Pathways Language Model (PaLM), the largest model of early 2022, outperforms average human on grade school logic and math (BIG-bench) by simulating reasoning steps. | https://vaclavkosar.com/ml/googles-pathways-language-model-and-chain-of-thought | 2022-04-18T15:43:38Z | training task: Given this text, predict the next word (token).the biggest dense transformer (540B) and likely the most expensive (~$10B) as of early 2022highly efficient training on 6k chips (TPU v4) across 2 clusters (Pods)improvements from scaling continue in language understanding (few-shot)disproportionate gains from certain scale e.g. reasoning: 62B to 540B vs 8B to 62Bbreakthrough performance:outperforming average human on a grade school logic and math (BIG-bench)outperforms specialized and fine-tuned models on multistep reasoningchain-of-thought prompting simulates inner monologuePaLMs Sizetransformer architecture similar to GPT-3PaLM has 540B parameters3x bigger than GPT-3 175B parameters2x smaller than sparse Switch Transformer 1Tonly parts of the model is activated at each time.human brain 100T connectionslikely the most expensive model ~$10M (2.5 yottaFLOPS) vs GPT-3 ~$5MPaLM and GPT-3 fascinating, but likely not economical nowZero-Shot vs Few-Shot Prompting vs Fine-Tuningprompting: instructing via addition of textual contextzero-shot: task described, but demonstrations not givenfew-shot: task described and random demonstrations providedfine-tuning: model parameters are updated with correct answersPaLMs Breakthrough Capabilities on BIG-benchBIG-bench: 150+ tasks (a task contains samples) like reasoning, math, QA, translation, programming58 tasks available results on other models, often multiple choice questionsPaLM is SOTA: outperforming human average and other models GPT-3, Gopher, Chinchillastill underperforms average human on many tasksfunny footnote: they scan for the dataset in the training set using GUIDcertain capabilities emerge when a scale is reached (not log-linear)english proverbs and logical sequence at 62B to 540B vs 8B to 62BChain-of-Thought Promptinginserts reasoning steps between for each demonstration in the promptmulti-step arithmeticgrade-school level math problemsdifficult is to convert to equationscommonsense logical reasoningstrong world knowledgechaining logical inferencesthoughts are useful for interpretationPaLM Chain-of-Thought Prompting Resultsreasoning tasks datasets: GSM8K, SVAMP, MAWPS, AQuA, CommonsenseQA, StrategyQAhumans allowed calculators and search engines toothe model triggers a calculator by generating special annotation text (+~5% acc)on GSM8K: PaLM+chain-of-thought competitive with GPT-3+finetuning+calculator+verifier,outperforms with a calculator, but GPT-3+chain-of-thought?generating explicit inference chains drastically increase the quality of the predictionsExplanation Using Chain-of-Thought Promptingright reasons for the right answer?interpretability may be desired output2-shot prompting with greedy decoding (deterministic)not in paper: I added GPT-3, results with zero-temperatureaverage airplane speed 460 575 mphInput: Jennifer looked out her window and sees a really cool cloud below her. She unbuckles her seatbelt and heads to the bathroom. Is Jennifer probably traveling more than 300 miles per hour relative to the earth?PaLM: 300 miles per hour is about 480 km/h. This is about the speed of a commercial airplane. Clouds are usually below airplanes, so Jennifer is probably on an airplane. The answer is yesGPT-3: No, because an airplane typically travels around 500-600 miles per hour, and the bathroom is usually located in the back of the plane.PaLM on Code Tasksachieves similar results to specialized OpenAIs Codex 12B with 3% of the code training dataafter fine-tuning outperforms OpenAIs Davinci CodexPaLM on Other Tasksgenerally SOTA on a few-shot competitionsfew-shot is not competitive with fine-tuned modelfine-tuned only competitive with specialized fine-tuned variantstranslation: without fine-tuning outperforms GPT-3, outperforms some supervised baselines de-en, ro-ensummarization: fine-tuned results competitive, few-shot largely underperforms the fine-tunedmultilingual question answering: fine-tuned results competitive, few-shot largely underperformed of fine-tunedPaLM Architecture:decoder-only transformer (attending only to the past) like GPT-3modified Feed-forward layer (MLP):instead of RELU \( max(0, xW_1 + b_1)W_2 + b_2 \) use SwiGLU feed-foward~1% better in compute equivalent setupuses GLU: gated linear unit - a sigmoid controlled outputSwiGLU: \( \mathrm{FFN}_{\mathrm{SwiGLU}} := (\mathrm{Swish}(xW_1) \otimes xV ) W_2 \)uses swish activation: \( x (1 + exp(x))^{1} \)parallel Attention and Feed-forward layer (MLP) from GPT-J:instead of sequential is additive:\( y = x + MLP(LayerNorm(x)) + Attention(LayerNorm(x)) \)15% speedup for small degradationmulti-query attention:block-shared key and value projections, different query projectionsspeeds up autoregressive decoding where queriesRoPE Embeddings:want relative position info in query-value dot-productuse multiplicative rotational matrix mixing pairwise neighboring dimensionsimproves performance on long sequencesPaLM Training Dataset780B tokens of high-quality text, (5T token MassiveText, GPT-3 300B tokens, MT-NLG 339B)social media 50%, webpages 27%, books 13%, wikipedia 4%, code 5%, news 1%based on dataset used for LaMBDA, GLaMprivate non-reproducible dataset, while MT-NLG 339B is reproducible but non-hostedPaLM Training Requirements~17 TB of RAM, 2.5 yottaFLOPS (\( 10^{24} \)) needed for training2 TPU v4 Pod clusters connected via data center networkmind of PalM is shattered across many chips (cheaper, replaceable, cooling)each Pod ~1 exaflop/s: 768 hosts, 3072 TPU v4 chipsbut how to split the work and communicate?Parallel Training of Large Scale Modelsparallel computing trades off compute (time), memory (space), communication throughput (no cool name)data parallelism - batches are divided between workerstensor model parallelismsplits model layers i.e. transformer block into attention heads and feedforwardpipeline mechanism (Megatron-Turing (MT-LNG) Microsoft and NVidia)computation DAG into stages e.g. layersstages exchange forward and backward propagation information (micro-batches)step by step passing causes bubbles - idlingPaLM Training InfrastructurePaLM uses pipeline-free 2D parallelismdata parallel across 2 clusters (2 TPU v4 Pods)each cluster (Pod) has full model copymodel partitioned into 12 partsdata partitioned into 256 parts768 hosts connected to 3k chips which are interconnectedeach batch triggers update between clusters to identical modelseach host exchanges 1.3GB with its counterpartTraining Efficiencyobserved throughput relative to the theoretical maximum of a systemprice for parallelization of PaLM is 50%in case of PaLM throughput is tokens-per-secondMore Posts Like ThisLet's connectAppendixGPT-3 Explanations Full PromptsLogical Inference 1I will explain how to make logical inferences:(1) John wakes to the sound of his children giggling outside the door. They bust in and say "Happy Father's Day!" with a plate full of food. 2022 has been a great year. Can we infer that John was born before the inauguration of Barack Obama?Answer: We can infer that John is a father because of "his children". We can infer that the current year is 2022. Barack Obama was inaugurated in 2009, so it has been about 13 years since Obama was inaugurated. John's children are old enough to speak, so if John were born after Obama's inauguration, he would have been younger than 10 when he had children. So, we can infer that John was born before Obama's inauguration. The answer is "yes".(2) Sam sees a piano in the subway stations. Laughing, his friends prod him to play, saying "Remember how amazing your rendition of Chopsticks was at the Christmas party?" Can we infer that John majored in piano at Juilliard?Answer: Juilliard is a top school for music, so someone who majored in piano at Juilliard would be an expert pianist. "Chopsticks" is a very simple song to play on the piano. His friends were therefore likely being sarcastic when they told him to play, which would imply that Sam was not an expert piano player. So it is unlikely that Sam majored in piano at Juilliard. The answer is "no".(3) Jennifer looked out her window and sees a really cool cloud below her. She unbuckles her seatbelt and heads to the bathroom. Is Jennifer probably traveling more than 300 miles per hour relative to the earth?Answer:Best of 2 temperature 0.7 otherwise defaults:We can infer that Jennifer is on an airplane because she is looking out a window and sees a cloud below her. We can also infer that she is traveling at a high speed because she is on an airplane. The average cruising speed of an airplane is about 550 miles per hour, so Jennifer is probably traveling more than 300 miles per hour relative to the earth. The answer is yes.We can infer that Jennifer is in an airplane because she is looking out a window. We can also infer that she is high up in the sky because she can see a cloud below her. Clouds are usually found at altitudes above 3,000 feet. Finally, we know that airplanes typically travel around 500 miles per hour. So, Jennifer is probably traveling more than 300 miles per hour, but we cannot be certain. The answer is yes.Greedy:No, because an airplane typically travels around 500-600 miles per hour, and the bathroom is usually located in the back of the plane.Logical Inference 2I will explain how to make logical inferences:(1) John wakes to the sound of his children giggling outside the door. They bust in and say "Happy Father's Day!" with a plate full of food. 2022 has been a great year. Can we infer that John was born before the inauguration of Barack Obama?Answer: We can infer that John is a father because of "his children". We can infer that the current year is 2022. Barack Obama was inaugurated in 2009, so it has been about 13 years since Obama was inaugurated. John's children are old enough to speak, so if John were born after Obama's inauguration, he would have been younger than 10 when he had children. So, we can infer that John was born before Obama's inauguration. The answer is "yes".(2) Sam sees a piano in the subway stations. Laughing, his friends prod him to play, saying "Remember how amazing your rendition of Chopsticks was at the Christmas party?" Can we infer that John majored in piano at Juilliard?Answer: Juilliard is a top school for music, so someone who majored in piano at Juilliard would be an expert pianist. "Chopsticks" is a very simple song to play on the piano. His friends were therefore likely being sarcastic when they told him to play, which would imply that Sam was not an expert piano player. So it is unlikely that Sam majored in piano at Juilliard. The answer is "no".(3) Shelley is from Virginia, but is visiting that city with that famous market where they throw the fish. Going home next Tuesday!Question: Is it likely that Shelley will be near the Pacific Ocean this weekend?Answer: Best of 2, temperature 0.7 otherwise defaults:It is not likely that Shelley will be near the Pacific Ocean this weekend. Shelley is from Virginia, which is on the East Coast of the United States. The city with the famous fish market is most likely Seattle, which is also on the West Coast. Shelley is visiting, so she is not from there. She is going home next Tuesday, so she will likely be going back to Virginia this weekend.It is not likely that Shelley will be near the Pacific Ocean this weekend. Shelley is from Virginia, which is on the East Coast of the United States. The city with the famous fish market is most likely Seattle, which is also on the West Coast. Shelley is visiting, so she is not from there. She is going home next Tuesday, so she will likely be going back to Virginia this weekend.Greedy:It is not likely that Shelley will be near the Pacific Ocean this weekend. Shelley is from Virginia, which is on the East Coast of the United States. The city with the famous fish market is most likely Seattle, which is also on the West Coast. Shelley is visiting, so she is not from there. She is going home next Tuesday, so she will likely be going back to Virginia this weekend.JokesI will explain these jokes:(1) The problem with kleptomaniacs is that they always take things literally.Explanation: This joke is wordplay. Someone who "takes things literally" is someone who doesn't fully understand social cues and context, which is a negative trait.But the definition of kleptomania is someone who literally takes things.(2) Always borrow money from a pessimist. Theyll never expect it back.Explanation: Most people expect you to pay them back when you borrow money, however a pessimist is someone who always assumes the worst, so if you borrow money from them, they will expect that you won't pay them back anyways.(3) I was going to fly to visit my family on April 6th. My mom said "Oh great, your step dad's poetry reading is that night!" So now I'm flying in on April 7th.Explanation:Best of 2 temperature 0.7 otherwise defaults:This joke is about how annoying mothers can be. The speaker is trying to visit their family, but their mother has just informed them that their step-dads poetry reading is on the same night. So now the speaker has to fly in on the next day.This joke is about how annoying it can be to have to deal with family obligations. The person telling the joke is trying to visit their family on April 6th, but their mom tells them that their step dads poetry reading is that night. So the person has to fly in on April 7th instead.This joke is about how annoying and inconsiderate step-parents can be. The punchline is that the person is flying in a day later just to avoid their step-dads poetry reading.Greedy:This joke is about how annoying and intrusive in-laws can be. The person is trying to visit their family, but their mother says that their step-dads poetry reading is that night. So the person is now flying in the next day to avoid it. | Prediction/Decision Making | Unknown | null | null | null | null | null | null |
|
news | Yashar Behzadi, Ph.D. | A Community for Synthetic Data is Here and This is Why We Need It | The first open-source platform for synthetic data is here to help educate the broader machine learning and computer vision communities on the emerging technology. | https://www.kdnuggets.com/2022/04/community-synthetic-data-need.html | 2022-04-22T12:00:57Z | OpenSynthetics CommunitySynthetic data is a promising technology and is in its early adoption phase. To bridge to mainstream adoption, the research community needs a place where they can learn about it, discuss the latest innovations and experiment. Im happy to announce OpenSynthetics.com, an open community for creating and using synthetic data in computer vision and machine learning (ML). Synthetic data is computer-generated image data that models the real world. In the visual domain, synthetic data has shown promise in creating more capable and ethical AI models. By creating a centralized hub for datasets, papers, code, and resources, we aim to bring together researchers from industry and academia to advance state-of-the-art synthetic data. Why Now? The next generation of computer vision will be powered by synthetic data. Over the last few years, synthetic data has emerged as a disruptive new paradigm to train AI models. Through visual effects (VFX), neural rendering, and generative AI technologies, researchers have demonstrated the ability to build vast amounts of photorealistic, diverse, and perfectly labeled data sets faster and at decreased costs. This will enable more capable models for autonomous vehicles, robotics, drones, AR/VR/metaverse, generated media, and many more applications spanning from consumer to medical use-cases. Current computer vision models require vast amounts of human-annotated data to help cameras identify what theyre seeing. This is time and labor-intensive, making it prohibitively expensive, and it also has significant shortcomings. Its difficult for humans to interpret key data attributes, such as the 3D position of an object or its interactions with its environment. Additionally, the inability to capture sufficiently diverse and balanced datasets often leads to bias, which has significant ethical implications in human-centered systems. Furthermore, increasing regulatory scrutiny and consumer privacy concerns make collecting and leveraging images of people complicated. With synthetic data approaches, information about every pixel in the scene is explicitly defined. Pixel-accurate labels that were not previously available for 3D landmarks, depth, material properties, surface normals, sub-segmentation, and more are now available. Furthermore, the data and labels can be provided on demand, allowing ML practitioners to experiment and iterate orders of magnitude faster than was previously possible in a true data-centric paradigm. Synthetic data also addresses critical ethical issues by reducing bias, preserving privacy, and democratizing access to data. The timing is perfect, and demand is here. We are at an inflection point for synthetic data:The first book on Synthetic Data for Deep Learning (link) was released in 2021;Gartner predicts that synthetic data will be 10x the volume of real data in the coming years; MIT Tech Review noted (link) synthetic data as one of the top 10 breakthrough technologies of 2022. As more and more researchers become interested in synthetic data, OpenSynthetics will serve as a powerful reference to help educate the broader community. Why Contribute And Participate? Synthetic data represents a paradigm shift for training computer vision models, but it is also the gateway technology to build more generalized intelligence. Moving forward, researchers will increasingly leverage these digital worlds to build AI models that deeply understand and are capable of interacting and manipulating the world around them.OpenSynthetics will bring together researchers and practitioners across academia and industry in an open and collaborative community to help propel the space forward. We believe synthetic data will come to power the next generation of computer vision and that together we can help catalyze innovation. By contributing and participating in the site, the community will actively build the knowledge base to help grow the understanding and drive adoption of this emerging technology. We hope you will join us to create a thriving OpenSynthetics community. Yashar Behzadi, Ph.D. is the CEO and Founder of Synthesis AI. He is an experienced entrepreneur who has built transformative businesses in AI, medical technology, and IoT markets. | Content Creation/Content Synthesis | Unknown | null | null | null | null | null | null |
|
news | PR Newswire | ServiceTitan Launches Titan Intelligence, Bringing AI To The Trades | Pantheon 2022 - ServiceTitan, a leading cloud-based software platform built to power the trades, today announced the launch of Titan Intelligence (TI), a... | https://finance.yahoo.com/news/servicetitan-launches-titan-intelligence-bringing-163000734.html | https://s.yimg.com/uu/api/res/1.2/g5rIfSZkPuJqSeI4lkmLsA--~B/aD02OTt3PTQwMDthcHBpZD15dGFjaHlvbg--/https://media.zenfs.com/en/prnewswire.com/b3ec3ea6f2e7cd2609aa843d0385c8ed | 2022-04-22T16:30:00Z | The Next Generation of ServiceTitan's Operating System for the Trades Helps Contractors Harness the Power of their Data to Predict Outcomes, Automate Tasks, and Embrace Digital TransformationLOS ANGELES, April 22, 2022 /PRNewswire/ -- Pantheon 2022 - ServiceTitan, a leading cloud-based software platform built to power the trades, today announced the launch of TitanIntelligence(TI), a suite of AI solutions for the trades. These new features provide home and commercial service providers with actionable insights and recommendations, enabling them to optimize, automate, predict, and innovate across every aspect of their business by harnessing the power of data. Natively built into ServiceTitan, Titan Intelligence will deliver trade-specific AI capabilities across its entire suite of new data products and feature enhancements. With aggregated intelligence and insights, and educational content products on data and analytic literacy, Titan Intelligence will evolve into powerful AI solutions that supercharge trades businesses and empower contractors to reach the level of success they deserve.ServiceTitan Logo (PRNewsfoto/ServiceTitan)The trades industry is at an inflection point as trends in data analytics, machine learning, and artificial intelligence are changing how the world operates. With Titan Intelligence, businesses on the ServiceTitan platform will be able to unlock the power of their own data to help automate repetitive and simple tasks, improve customer experience, predict outcomes, lower costs, and stay competitive in a world run by data. Over 7,500 ServiceTitan customers will have the equivalent of their own personal consultant built directly into the platform, helping contractors connect the dots throughout their business to extract meaningful and actionable insights."Titan Intelligence is bringing the power of data and AI to the trades on a scale that has never been seen before," said Vahe Kuzoyan, co-founder and President of ServiceTitan. "This industry will no longer have to wait years, or even decades, to take advantage of modern technology innovations. The launch of Titan Intelligence represents a giant leap forward for the trades, and another example of ServiceTitan's commitment to provide contractors with access to the best technology to fuel their business."Several new ServiceTitan product updates and features powered by Titan Intelligence were also announced today at Pantheon 2022, including:Smart Dispatch (Beta) is ServiceTitan's new dispatch tool, which uses machine learning to analyze job data and assist in assigning technicians for jobs based on factors such as geographical zones, skills, drive time, and technician's sales performance to help save time and maximize revenue.Price Insights is a feature delivered through ServiceTitan Pricebook Pro, allowing contractors to easily see average pricing in their region, enabling them to competitively price their services based on the average cost for similar services locally.Marketing Pro Ads (Beta) enables home service marketers to lower their cost per lead and maximize ROI. By leveraging Google Ads' measurement tools and dynamic call tracking features, ServiceTitan customers are able to see the true impact of their Google Ads budgets and then optimize and automate the flow of key revenue, job, and audience data back into Google."The trades industry is undergoing rapid digital transformation, and we want to equip our customers with the trades-specific tools and valuable data-driven insights they need to meet this critical moment head on," said Ara Mahdessian, co-founder and CEO of ServiceTitan." We believe that all tradespeople deserve not only the best, but also the most innovative technology available to power their businesses that power our lives."To learn more about Titan Intelligence and all of ServiceTitan's new tools and features visit: https://www.servicetitan.com/features/titan-intelligenceAbout ServiceTitanServiceTitan is a cloud-based software platform built to power trades businesses. The company's end-to-end solution gives contractors the tools they need to run and grow their business, manage their back office, and provide a stellar customer experience. By bringing an integrated SaaS platform to an industry historically underserved by technology, ServiceTitan is equipping tradespeople with the technology they need to keep the world running. ServiceTitan is backed by world-class investors including Bessemer Venture Partners, ICONIQ Growth, Index Ventures, Tiger Global Management, and T. Rowe Price.View original content to download multimedia:https://www.prnewswire.com/news-releases/servicetitan-launches-titan-intelligence-bringing-ai-to-the-trades-301530828.htmlSOURCE ServiceTitan | Decision Making/Process Automation/Content Synthesis | Business and Financial Operations/Others | null | null | null | null | null | null |
news | Gary Drenik, Contributor, Gary Drenik, Contributor https://www.forbes.com/sites/garydrenik/ | Quality Data Inputs Essential For Machine Learning | Multiple times this column has covered the issue of the importance of data quality in decision making, both by executives as well as machines. Since then, more and more news has emerged regarding the failures of AI and Machine Learning initiatives with the blame given to faulty data as the reason. | https://www.forbes.com/sites/garydrenik/2022/05/04/quality-data-inputs-essential-for-machine-learning/ | 2022-05-04T14:00:00Z | Machine Learning Requires Quality Data AdobeStock_245601545Multiple times over the last decade, this column has covered the issue of the importance of data quality in decision making, both by executives as well as machines. Back in 2014, when the big data craze was mesmerizing the C-Suite, the warning was issued in Big Data and the Madness of Crowds. More recently in How Bad Data Is Undermining Big Data Analyticsfrom December 2020.Since then, more and more news has emerged regarding the failures of AI and Machine Learning initiatives with the blame given to faulty data as the reason. The recent demise of IBM Watson Health is the latest example.To weigh in on these changes, I recently interviewed Sandeep Konam, Co-founder and chief technology officer at Abridge.Gary Drenik: Sandeep, before we dive into the industry issues, please tell us about your expertise and experience that led to the formation of Abridge.Sandeep Konam: Ive always believed that artificial intelligence has the potential to radically alter the field of healthcare and fundamentally change the way we look at health and wellbeing. Over the course of the past decade, I built various healthcare AI tools ranging from a cancer biomarker detection app to a low-vision navigation aid. If I were to pick one project that taught me the most, it would be EXAID, an NLP-based clinical trial matching tool for cancer patients and oncologists, that I built during my time at Carnegie Mellon University. It was inspired by personal experiences accompanying my grandmother to the hospital visits for her breast cancer treatment and wanting to improve care delivery experiences and outcomes for patients. Working on EXAID exposed to me various fault lines in the US healthcare system, most notably, lack of patient centricity on the one hand and how physicians are overburdened with documentation on the other hand. Shaped by those observations and with shared conviction around how medical conversations exchanged between patients and physicians can unlock value for both the end-users, I co-founded Abridge with Shiv Rao. At Abridge, we are building conversation understanding AI to summarize healthcare conversations: to help patients better understand and follow through on their care and to help providers get a head start on their documentation. It was also while building the clinical trial tool, that I started closely following and taking notes on IBM Watsons healthcare work, some of which I recently summarized in a Quartz Op-ed.Drenik: So where did IBM go wrong with Watson Health?Konam: In response to my Quartz Op-ed, an ex-Watson employee commented that Watson Health was a hammer immediately searching for about a thousand nails. That, I think, was their biggest problem IBM looked to throw AI at everything from medical imaging to clinical trial recruitment. Moreover, they got too far ahead of their skis, spinning the PR machine without much success with scalable deployments and results.Their timing could also have played a key part in how it all went down. AI techniques, especially around NLP, are way more advanced now than they were at the peak of Watson Healths trajectory. Whether it is OpenAIs GPT-3, DeepMinds Chinchilla, or Googles latest PaLM, large language models are showing great promise across a wide range of language understanding and generation tasks. Its only a matter of time before some of these advancements can power clinical NLP applications. Its crazy that something like BERT, which we use in medical conversation processing pipelines, didnt exist before 2018.Drenik: Data from Prosper Insights & Analytics shows how the pandemic drove Telemedicine adoption. When will AI be ready to really understand a conversation between Physician and Patient?Prosper - Telemedicine TrendsProsper Insights & AnalyticsKonam: Conversations, in general, are difficult for machinesto track, as they are filled with interruptions, overlapping speech, false starts, filler words, and sometimes different accents. Medical conversations are even more difficult, as physicians and patients bounce from topic to topic and often switch between medical terminology and colloquial phrases. Through years of investments and efforts, at Abridge, weve managed to tackle some complex research challenges across information extraction, classification, and summarization in the medical conversation domain. Not just solve research challenges and publish at conferences, but weve also managed to productize the tech and unlock value for users.Today, we can automatically:Extract medications along with associated attributes such as dosage and frequency to help patients with medication managementClassify conversation utterances that relate to care plan instructions for patients to review and follow-through on their care easilySummarize medical conversations into clinical note drafts for cliniciansAll of this was possible because of our differentiated access to a one-of-a-kind dataset containing transcribed conversations and thorough annotations. Drenik: How important is quality data in machine learning initiatives, especially in healthcare?Konam: Quality data is essential to train deployable machine learning models in healthcare. Quality here doesnt mean noise-free It is critical to ensure that the dataset is representative of the deployment setting, which sometimes can be noisy and messy. Getting high-quality annotations is another crucial step toward building reliable machine learning systems. Annotation work is often seen as unappealing work, but it can lead to tremendous performance gains if done right. Beyond initial data and annotations, weve also actively prioritized enabling user feedback loops to drive continuous learning and improvements.In settings like medical conversations, it is also important for our system to generalize across multiple specialties (from cardiology to primary care). Weve ensured that our dataset has wide coverage across multiple specialties from the early days. Now, Abridge is the only automated solution that can work out-of-the box across any specialty! Drenik: Thank you Sandeep for sharing your experience and insights. Indeed, quality data is essential to train deployable machine learning models. | Content Synthesis | Healthcare Practitioners and Support/Management | null | null | null | null | null | null |
|
news | Gaurav Tewari, Forbes Councils Member, Gaurav Tewari, Forbes Councils Member https://www.forbes.com/sites/forbesbusinesscouncil/people/gauravtewari/ | The Future Of AI: 5 Things To Expect In The Next 10 Years | In this article, I take a step back to consider how artificial intelligence is poised to fundamentally restructure broader swaths of our economy and society over the next decade with five bold predictions that are informed by my expertise and immersion in the field. | https://www.forbes.com/sites/forbesbusinesscouncil/2022/05/05/the-future-of-ai-5-things-to-expect-in-the-next-10-years/ | 2022-05-05T13:45:00Z | Gaurav Tewari, Omega Venture Partners.gettyThere has been no better time to be in the world of artificial intelligence than now. AI has achieved an inflection point and is poised to transform every industry. Much has already been written about specific applications of AI. In this article, I take a step back to consider how artificial intelligence is poised to fundamentally restructure broader swaths of our economy and society over the next decade with five bold predictions that are informed by my expertise and immersion in the field.1. AI and ML will transform the scientific method.Important sciencethink large-scale clinical trials or building particle collidersis expensive and time-consuming. In recent decades there has been considerable, well-deserved concern about scientific progress slowing down. Scientists may no longer be experiencing the golden age of discovery.With AI and machine learning (ML), we can expect to see orders of magnitude of improvement in what can be accomplished. There's a certain set of ideas that humans can computationally explore. Theres a broader set of ideas that humans with computerscan address. And theres a much bigger set of ideas that humans with computers, plus AI, can successfully tackle. AI enables an unprecedented ability to analyze enormous data sets and computationally discover complex relationships and patterns. AI, augmenting human intelligence, is primed to transform the scientific research process, unleashing a new golden age of scientific discovery in the coming years. 2. AI will become a pillar of foreign policy.We are likely to see serious government investment in AI. U.S. Secretary of Defense Lloyd J. Austin III has publicly embraced the importance of partnering with innovative AI technology companies to maintain and strengthen global U.S. competitiveness.The National Security Commission on Artificial Intelligence has created detailed recommendations, concluding that the U.S. government needs to greatly accelerate AI innovation. Theres little doubt that AI will be imperative to the continuing economic resilience and geopolitical leadership of the United States.3. AI will enable next-gen consumer experiences.Next-generation consumer experiences like the metaverse and cryptocurrencies have garnered much buzz. These experiences and others like them will be critically enabled by AI. The metaverse is inherently an AI problem because humans lack the sort of perception needed to overlay digital objects on physical contexts or to understand the range of human actions and their corresponding effects in a metaverse setting.More and more of our life takes place at the intersection of the world of bits and the world of atoms. AI algorithms have the potential to learn much more quickly in a digital world (e.g., virtual driving to train autonomous vehicles). These are natural catalysts for AI to bridge the feedback loops between the digital and physical realms. For instance, blockchain, cryptocurrency and distributed finance, at their core, are all about integrating frictionless capitalism into the economy. But to make this vision real, distributed applications and smart contracts will require a deeper understanding of how capital activities interact with the real world, which is an AI and ML problem.4. Addressing the climate crisis will require AI.As a society we have much to do in mitigating the socioeconomic threats posed by climate change. Carbon pricing policies, still in their infancy, are of questionable effectiveness.Many promising emerging ideas require AI to be feasible. One potential new approach involves prediction markets powered by AI that can tie policy to impact, taking a holistic view of environmental information and interdependence. This would likely be powered by digital "twin Earth" simulations that would require staggering amounts of real-time data and computation to detect nuanced trends imperceptible to human senses. Other new technologies such as carbon dioxide sequestration cannot succeed without AI-powered risk modeling, downstream effect prediction and the ability to anticipate unintended consequences.5. AI will enable truly personalized medicine.Personalized medicine has been an aspiration since the decoding of the human genome. But tragically it remains an aspiration. One compelling emerging application of AI involves synthesizing individualized therapies for patients. Moreover, AI has the potential to one day synthesize and predict personalized treatment modalities in near real-timeno clinical trials required.Simply put, AI is uniquely suited to construct and analyze "digital twin" rubrics of individual biology and is able to do so in the context of the communities an individual lives in. The human body is mind-boggling in its complexity, and it is shocking how little we know about how drugs work (paywall). Without AI, it is impossible to make sense of the massive datasets from an individuals physiology, let alone the effects on individual health outcomes from environment, lifestyle and diet. AI solutions have the potential not only to improve the state of the art in healthcare, but also to play a major role in reducing persistent health inequities.Final ThoughtsThe applications of artificial intelligence are likely to impact critical facets of our economy and society over the coming decade. We are in the early innings of what many credible experts view as the most promising era in technology innovation and value creation for the foreseeable future.Forbes Business Council is the foremost growth and networking organization for business owners and leaders. Do I qualify? | Content Synthesis/Prediction | Life, Physical, and Social Science | null | null | null | null | null | null |
|
news | Colin Lacy | ChatOps: How to Build Your First Webex Bot | Learn how to create a Webex bot, register a Webhook in Webex, and configure your bot – all with plenty of code examples. | https://blogs.cisco.com/learning/chatops-how-to-build-your-first-webex-bot | 2022-04-18T20:49:12Z | Welcome to the first blog of our series on how to use the Webex API to code up ChatOps workflows! In this post, you’ll learn how to create a Webex bot, register a Webhook in Webex, and configure your bot to listen to Webhook all with plenty of code examples. Check back for more as we build new use cases that leverage different aspects of automation using chat-driven interfaces.In the DevOps world, were always looking for new ways to drive automation around communication. When we deploy new code, scale our deployments, or manage our feature flags we want our teams to know about it. The Webex API makes it easy to build announcement flows triggered by successful events in our infrastructure. However, if we can trigger those events from Webex as well, then weve entered the world of ChatOps.ChatOps is the use of chat clients like Webex Teams, chatbots, and real-time communication tools to facilitate how software development and operation tasks are communicated and executed. Using Webex APIs, we can build bots that allow us to enter commands that manage our infrastructure, trigger approval workflows, deploy code, and much more.Security DisclaimerSecurity is a top concern here at Cisco. In normal application development, security should always be built into the initial steps of getting code up and running. Today, we’re going to keep it simple and focus on the basics. Then, we’ll cover how to authenticate and authorize Webhook requests. Well hold off on security until the next blog post in our ChatOps series, once we’ve proven an end-to-end connection. How to create a Webex botFirst, let’s create a Webex bot using the Webex Developer UI.Webex for Developers has a great step-by-step guide here to help you get up and running.Some important things to consider:Think about what you want to name your bot. It should be intuitive, but unique. Depending on how you set up your Webhook, you may be typing the bots name a lot, so take that into account.The secret token thats auto-generated for your bot is used for authenticating with the Webex API. When you use this token, Webex will treat your bot like a real user who can create messages, join rooms, or be tagged by other users.Will this bot interact with a lot of people? Will it have a very public presence, or will it only communicate with a few users? The answer to that question may have an impact on how you want to name it, what icon you select, etc.Once youve taken all of that into account and filled out the bot creation form, you should see something like this, which includes the all-important access token:How to receive Webhook Events locallyNext, youll need to host your bot where it can be accessed by Webex via API calls. If youre developing locally and want to run a server thats accessible to the internet, the Webex guide recommends localtunnel.me or ngrok. I went with localtunnel.me for my local environment.$ npm i -g localtunnel$ lt --port 3000The resulting output is the public domain name that you can use to tunnel through to a local port on your machine:Note: If youre having trouble running localtunnel via the command line after installing (as a few people have reported here), make sure your PATH includes the directory where NPM installs your binaries. For example, on a Mac, thats /usr/local/bin. This command might help:$ npm config set prefix /usr/local$ npm i -g localtunnel$ lt --port 3000How to register a WebhookOnce your internet-accessible endpoint has been set up, you now have a domain that you can use to register a Webex Webhook. Your Webex Webhook will listen to specific events that take place within the Webex platform and notify your web service via HTTP POST requests.There are multiple ways to register a webhook. Under the hood, however, they all boil down to making your own HTTP POST request. Ive posted a Postman collection that you can use to make this process a little easier. Fill in your own environment’s variables as you go and include the access token used in the header.This is what my Postman request looks like:Feel free to use whatever technology you like, including good old-fashion CURL:curl --location --request POST 'https://webexapis.com/v1/webhooks' \--header 'Authorization: Bearer $BOT_TOKEN \--header 'Content-Type: application/json' \--data-raw '{ "name": "simple-webhook", "targetUrl": "https://tidy-falcon-64.loca.lt", "resource": "messages", "event": "created", "filter": "mentionedPeople=me"}'Whats important to note, is that Webex will send notifications to the domain that you specify in your POST request. If youre using a tunnel into your local environment, list the domain that was given to you when you activated your proxy.A very impactful part of your Webhook will be the filter property. This determines which Webex events are sent to your bot as notifications (and which are filtered out). To keep things simple, my bot is only notified when users send a message that specifically mentions it in a Webex Teams Room:Webex has a nice, convenient tag for this: me uses the authorization token from the request to determine the identity of the user making that request (in this case, our bot), and applies that identity wherever it sees me referenced.Alternatively, you can set a filter that only triggers notifications for direct messages to your bot, as opposed to mentions in Webex rooms. Since the goal of this post is to broaden visibility into the various processes, these examples show interactions in a Webex Teams Room, however, both are equally viable options.When you send your POST request, Webex will respond with a body that contains an ID for your Webhook. While you can use the Webex API to GET a list of your Webhooks, it might be a good idea to hold onto this, in case you want to quickly update or delete this Webhook in the future. The Postman collection linked above stores the most recently created Webhook ID in an active_webhook environment variable automatically, which then powers the DELETE call in that collection.How to create your bot serverFor simple use cases, you may want to use the Webex Node Bot Framework, which is great for quick implementation. In order to get more familiar with the different components involved in this series, well start from scratch, diving into the step that powers your Webex bot.Getting Started with ExpressLets set up a web server that can listen for POST requests from the Webex Webhook that well create in a minute. This doesnt have to be complicated for now, just something to demonstrate that were able to receive requests. For simplicity, we can use the ExpressJS generator, but you can use any web framework or technology that you like.$ npm i -g express-generator$ cd where/you/want/your/project$ expressSince my IDE handles JavaScript Modules a lot better than it handles require statements, I opted to go with a more modern approach for my dependency management. This is totally optional and has no bearing on how you set up your code. However, if you want to follow the code snippets as Ive laid them out, youll want to do the same. The first step is to add the following key/value pair to your package.json file, anywhere in the root of the JSON object:"type": "module",A lot of the boilerplate code can be stripped out if you like we wont need a favicon, a public/ folder, or a users route handler. Heres what my code looked like after I stripped a lot of the simple stuff out:// in app.js// notice that I changed the require statements to use JS modules import statementsimport express from'express';import logger from'morgan';import indexRouter from'./routes/index.js';const app = express();app.use(logger('dev'));app.use(express.json());app.use(express.urlencoded({ extended: false }));app.use('/', indexRouter);// boilerplate error code didnt change// // **be sure to remember to set app as the default export at the end of the file**export default app;Since Im using JS Modules, I also had to change the executed file in an Express app www/bin to www/bin.js, and revise the boilerplate require statements there as well to use import syntax:// in www/bin.js/*** Module dependencies.*/import app from'../app.js';import _debugger from'debug';const debug = _debugger('chatops-webhook:server');import http from'http';// nothing else in this file needed to changeAdding a Route HandlerThat takes care of the majority of the boilerplate. At this point, I only have four files in my codebase, despite how many Express gives me out of the box:app.jspackage.jsonbin/www.jsroutes/index.jsWell want to add a route handler that lets us know when weve received a POST request from our Webex Webhook. It can be a simple function that prints the request body to the application console nothing complicated, just a few lines of code:// in routes/index.jsimport express from'express'const router = express.Router();router.post('/', async function(req, res) { console.log(`Received a POST`, req.body); res.statusCode = 201; res.end();});export default router;Give it a tryYou now have all of the important components for receiving message notifications from Webex:A bot to act as an identity for your Webex interactionsIf applicable, a network tunnel to expose your local web service to the public internetA Webhook set up by your bot to receive Webex notificationsA web service to receive Webex notifications on a POST endpointLets test it out! To keep things simple for now, create a new room in Webex Teams and add your bot as a member. Next, start typing your message, mentioning your Bot (you can use the @ symbol or type its name) as part of the text. When you hit enter, after a brief pause, you should see a request come through to your running web service, which should log the POST body that it received in its console output:Congratulations, youve just set up your very own Webex bot! What’s nextAs promised, our next post will walk through the extremely important aspect of securing our bot. We’ll make sure that only Webex can access it and only authorized users can trigger automation. After that, well move on to new and exciting ways that you can automate everyday workflows right from a Webex Teams Room!Learn, train, and certify in Cisco CollaborationAs you make your way through this ChatOps series, consider validating your skills with a Cisco Certification.The 300-835 CLAUTO: Automating and Programming Cisco Collaboration Solutions is a 90-minute exam that counts toward three certifications the CCNP Collaboration, Cisco Certified DevNet Professional, and Cisco Certified DevNet Specialist – Collaboration Automation and Programmability certifications. Check out the CLAUTO exam topics, and you’ll find that 25% of the exam covers Cloud Collaboration technologies. Before we meet again, take some time to browse through the free CLAUTO Study Materials available on the Cisco Learning Network, which will help you solidify today’s ChatOps focus on building your first Webex bot.Did you build a Webex bot? I’d love to hear how it went. Leave me a comment below and let me know what you think!Follow Cisco Learning & Certifications | Process Automation/Content Creation | Computer and Mathematical/Management | null | null | null | null | null | null |
|
news | Katyanna Quach | AI models to detect how you're feeling in sales calls | Plus: Driverless Cruise car gets pulled over by police, and more In brief AI software is being offered to sales teams to analyze whether potential customers appear interested during virtual meetings.… | https://www.theregister.com/2022/04/18/ai_in_brief/ | 2022-04-18T09:52:13Z | In brief AI software is being offered to sales teams to analyze whether potential customers appear interested during virtual meetings.Sentiment analysis is often used in machine-learning research to detect emotions in underlying text or video, and the technology is now being applied to help people see how possible future clients are feeling in sales pitches to improve results, Protocol reported this month.The COVID-19 pandemic has moved a lot of meetings virtually as employees work from home. "It's very hard to build rapport in a relationship in that type of environment," said Tim Harris, director of product marketing at Uniphore, a software company specializing in conversational analytics.The hope is that sellers may be able to use AI technology to automatically tell when they're boring clients and can immediately change tactics, such as being more empathetic to keep them interested. In addition, reactions to individual products could be included, so that vendors are aware of what Harris calls "emotional state of a deal."Zoom is reportedly going to add sentiment analysis to analyze conversations retroactively so people can see how they did on their last call. The idea that AI can accurately detect human emotions, however, has been repeatedly challenged by experts. This may be one part of life that can be left to humans.Police pulled over a Cruise self-driving car in San Francisco, and when they walked up to the vehicle, it was completely empty. Shortly afterwards the remote vehicle moved off, crossed a street, and then parked with its hazard lights on.Officers appear puzzled, milling around the car while passersby erupted in laughter. You can watch a video recording of the situation here.The car was flagged down because it was driving around at night without lights on, a Cruise spokesperson confirmed. "The vehicle yielded to the police car, then pulled over to the nearest safe location for the traffic stop," the spokesperson told The Verge earlier this month."An officer contacted Cruise personnel and no citation was issued. We work closely with the SFPD on how to interact with our vehicles and have a dedicated phone number for them to call in situations like this."In October Cruise got the green light to operate between 2200 and 0600 PT in the US city; it's not clear why the vehicle was driving without its lights on. Cruise says it has fixed the issue.Language models with hundreds of billions of parameters are all the rage right now, though engineers don't have to make them so large to see strong performance.DeepMind's Chinchilla, a model with a modest size of 70 billion parameters, apparently outperformed many larger systems including its own Gopher, OpenAI's GPT-3, AI21 Labs' Jurassic-1, and Nvidia and Microsoft's Megatron-Turing NLG on a numerous natural-language processing tasks.Instead of making language models bigger, engineers should put more effort into training them on more data. The compute needed to train models this way may not differ too much from larger systems, and the effect is noticeable in the inference stage. Smaller models are cheaper to deploy and run."We find that for compute-optimal training, the model size and the training dataset size should be scaled equally: for every doubling of model size the training dataset size should also be doubled," DeepMind researchers said in a blog post last week.In other words, large models are currently undertrained. If they're trained on more data and the same amount of compute, they can be made smaller and maintain the same performance.A panel of 27 experts from academia, industry, and non-profit organizations have been selected to serve on the United States' National Artificial Intelligence Advisory Committee (NAIAC).The NAIAC will advise President Biden on all policies related to AI, ranging from how the technology affects national security to civil rights. "Artificial intelligence presents a new frontier for enhancing our economic and national security, as well as our way of life," Don Graves, deputy secretary of Commerce, said in a statement. "Moreover, responsible AI development is instrumental to our strategic competition with China. At the same time, we must remain steadfast in mitigating the risks associated with this emerging technology, and others, while ensuring that all Americans can benefit."The NAIAC has been tasked with setting up a subcommittee to probe the use of AI in law enforcement; members have been asked to pay close attention to issues of bias, security, and privacy. They will convene for their first public meeting on May 4.The committee is made up of representatives from big companies, including Google, IBM, Microsoft, Nvidia, as well as top universities such as Stanford and Carnegie Mellon.We're pleased to note that one member of the NAIAC is Jack Clark, a former journalist in The Register's San Francisco bureau and reporter for Bloomberg. After four years with OpenAI, he just lately co-founded AI safety and research startup Anthropic. ® | Detection and Monitoring/Recommendation | Sales and Related | null | null | null | null | null | null |
|
news | Giorgio Mendoza | Lanner’s Falcon H8 AI accelerator card powered by Hailo 8 AI processors | Hailo and Lanner Electronics launched a cost efficient PCIe accelerator card called the Falcon H8. The device is designed to enable legacy devices such as NVRs, Edge AI boxes, Industrial PCs to efficiently run intensive workloads related to manufacturing, smart cities, retail, etc. The Falcon H8 is capable of accommodating 4, 5 and 6 Hailo-8 […] | https://linuxgizmos.com/lanners-falcon-h8-ai-accelerator-card-powered-by-hailo-8-ai-processors/ | 2022-05-12T06:58:30Z | TwitterFacebook LinkedInRedditPinterestEmailHailo and Lanner Electronics launched a cost efficient PCIe accelerator card called the Falcon H8. The device is designed to enable legacy devices such as NVRs, Edge AI boxes, Industrial PCs to efficiently run intensive workloads related to manufacturing, smart cities, retail, etc. The Falcon H8 is capable of accommodating 4, 5 and 6 Hailo-8 AI processors to customize performance and cost effectiveness. Each Hailo-8 edge AI processor is capable of running 26 tera-operations per second (TOPS). Additionally, the Falcon H8 accelerator cards are available in commercial grade and industrial grade variations (-40 – 85C). Falcon H8 side view (left) and top view (right)(click images to enlarge) According to both companies, the Falcon H8 is capable of delivering machine learning inference processes of over 15,000 Frames Per Second (FPS) for MobilNet-v2 and 8,000 FPS for ResNet-50.For software development, Hailo provides their own Hailo Dataflow Compiler API which supports well known ML frameworks such as TensorFlow, ONNX, Keras and Pytorch. According to Hailos documentation, a Deep Learning model is used as an input to the Hailo Dataflow Compiler. The resulting output is a binary file which gets loaded to the Hailo device. The library built is used by the runtime applications. High Dataflow Compiler by Hailo(click images to enlarge) Lanner Electronics also released a short review for their Falcon H8 on their social media account. Falcon H8 Lanner Electronics Review Specifications listed for the Falcon H8 include:Processor System:4-6 Hailo-8 AI Processors w/Hailo Patented Structure Dataflow ArchitectureUp to 156 TOPs 8000 FPS Of ResNet-50, 1300 FPS Of YOLOv5m PCI Express Interface:PCI Express x16 Compliant With PCI Express Specification v3.0AI Frameworks:Hailo AI Dataflow Compiler W/ Profiler And Emulator supports TensorFlow, ONNX And PyTorch FrameworksSystem Compatibility:Power:Temperature:0 – 70ºC (commercial grade)-40 – 85ºC (industrial grade)Dimensions:167.6 x 111.15 mm without bracket Further informationFor information regarding price and availability refer to Lanners Electronics website. LinuxGizmos related posts: | Unknown | Others | null | null | null | null | null | null |
|
news | gamedevdigest | Game Dev Digest Issue #139 - Design Patterns | This article was originally published on GameDevDigest.comDesign patterns and more in this issue. Enjoy!Strategy Pattern In Unity - The strategy pattern is all about encapsulating or wrapping up a behavior or algorithm in its own class. It has the added bonus of allowing behaviors or algorithms to be assigned at runtime!blog.devgenius.ioRefactoring Data stored in Unity Prefabs, Scenes and other Assets - Our use case is when we want to store data in a different (hopefully better) way when there is already a lot of created content (prefabs, scenes, etc). Even if we want to manually … | https://www.gamedev.net/blogs/entry/2273479-game-dev-digest-issue-139-design-patterns | 2022-04-29T13:54:20Z | This article was originally published on GameDevDigest.comDesign patterns and more in this issue. Enjoy!Strategy Pattern In Unity - The strategy pattern is all about encapsulating or wrapping up a behavior or algorithm in its own class. It has the added bonus of allowing behaviors or algorithms to be assigned at runtime!blog.devgenius.ioRefactoring Data stored in Unity Prefabs, Scenes and other Assets - Our use case is when we want to store data in a different (hopefully better) way when there is already a lot of created content (prefabs, scenes, etc). Even if we want to manually do that, it is not easy to know, in terms of GameObjects and Scenes, where the data is used.blog.gemserk.comAsync FSM using UniTask - In this post Im going through steps to implement an asynchronous finite-state machine (FSM) in Unity, using async/await library UniTask. In the end youll have a nice modular state machine with all the usual stuff you would expect to find in a FMS. Well also take a look at how we can run update loops independently of monobehaviours / gameobjects.jussituomi.comTry out our battle pass sample using Unity Gaming Services Use Cases - One of these samples is a battle pass a seasonal reward tier system with a free track and a premium track. This article will explain how this sample works, and how you can implement it in your own game.UnityProcedural Animations - Here is a QUICK explanation for my procedural animations that several people asked for. Hope it helps!RujiK @TheRujiKLegends Of Learning - Online Educational Games For Teachers & Students. Over 2,000 fun, curriculum-aligned, math and science games created to engage students and assist teachers using hybrid learning, blended learning & distance learning models.legendsoflearning.comUnity 2022.2.0 Alpha 11 - Unity 2022.2.0 Alpha 11 has been released.UnityVideosUnity Awards 2021 | Winners - We've tallied up your votes, and are proud to announce the winners of the 2021 Unity Awards! A massive congratulations to all of our winners and finalists. Check out the acceptance speechesUnityHow Video Game Economies are Designed - In this video, Ill show you how a typical video game economy is designed - and how resources flow around the system. As we go, Ill show you how these economic entities can be used to create interesting gameplay for the player.Game Maker's ToolkitThe AI of DOOM (1993) | AI and Games - DOOM is one of the most important games of all time, and the AI behind it is just as impressive now as it was almost 30 years ago. In this 66(6)th episode, let's go and find out how it all works.AI and GamesCHROMATIC ABERRATION Secrets at RUNTIME in Unity (Post Processing Tutorial) - Lets look at scripting CHROMATIC ABERRATION at runtime in Unity, we'll look at scripting UI events: Sliders, dropdowns, toggles and more. We'll look at changes the bloom post processing effect with in-depth C# code and various examples of how to make this work, whether you want to use this for settings, in-game actions and more!SpeedTutorHow to crate a wall painting shader in Unity 2021 - Many sim games offer a wall or object painting mechanic as a way to customize your in game house.Sunny Valley StudioRiderFlow For Unity by JetBrains -- Very Powerful & Very Free! - RiderFlow is a new tool for Unity developers, that makes it much easier to handle Unity scenes from JetBrains, the makers of IntelliJ IDEA, WebStorm, CLion and Rider IDEs. It gives you finding, bookmarking, code editing and refactoring tools in the Unity editor. We check all these functions out in action in the video. Oh, and it's free.GamefromscratchAdd Random "Bored" Idle Animations to Your Character (Unity Tutorial) - In this Unity game development tutorial we're going look at how we can add a bit more personality to a character, by playing random "Bored" animations after theyve been left idle for a period of time.Ketra GamesAssetsSpring Sale! - Unity's annual Spring Sale begins on Monday April 11! More than 500 popular assets will be available at 50% off.Some of our faves: Mesh Baker, UModeler, and Epic Toon FX.LAST DAY OF THE SALE!UnityAffiliatePhaseJumpPro - Utility classes for making games with Unity. Modules included: Behavior Trees, Graphs, Valves, Commands, and many more design patterns.coinbumpOpen SourceZinnia - A collection of design patterns for solving common problems.ExtendRealityLtdOpen SourceSerializable Interface - A wrapper that allows you to serialize interfaces. Both UnityEngine.Object and regular object implementers work!ThundernerdOpen SourceXPool - Object Pooling System for Unity - Object pooling system for Unity.mackysoftOpen SourceFastIK - Unity Inverse Kinematics made easy (SimpleIK is the working title)ditzelOpen SourcePivot Editor for Unity - Adjust pivot point of an object in Unity without creating an empty parent object. Also available on the Asset StoreyasirkulaOpen SourceUnity Bezier Solution - A bezier spline solution for Unity 3D with some utility functions (like travelling the spline with constant speed/time). Also available on Asset Store.yasirkulaOpen SourceRosettaUI - Code-based GUI library for development menus for Unity.fuqunagaOpen SourceMatch 3 SDK - A cross-platform library that makes it easy to create your own Match 3 game.ChebanovDDOpen Sourceunity-moxi-ink - Lattice Boltzman Equation base ink simulation called Moxi implemented on Unity.komiettyOpen SourceSpotlightProject Ferocious - Tales from sea monsters and missing sailors surround a thick mysterious fog on the pacific ocean. The distress call from a group of missing investigative journalists leads you right in the middle of it, where you find a tropical island where no island should be. Hostile forces want to keep the island a secret at all costs and to uncover the truth and find your friends, you will not only have to survive them but also the fierce inhabitants of this lost world.Explore, Survive, Fight - in Project Ferocious, you are all alone on a deadly island occupied by hostile forces. Equipped with a camera, you are trying to uncover the truth while using a wide arsenal of weapons and tools to survive. Explore the island by rappeling steep cliffs, discovering underwater caves, and trying to hide from the terrifying creatures inhabiting this lost world.[Follow along on Twitter]OMYOGYou can subscribe to the free weekly newsletter on GameDevDigest.comThis post includes affiliate links; I may receive compensation if you purchase products or services from the different links provided in this article. | Content Creation/Content Synthesis | Unknown | null | null | null | null | null | null |
|
news | Jan | Pop Culture Alignment Research and Taxes | Published on April 16, 2022 3:45 PM GMTTL;DR: A quick recap of all the AI progress published recently, a shortcoming of the alignment tax definition, and a dynamical systems model of AI progress.Previously in this series: Cognitive Biases in Large Language Models, Drug addicts and deceptively aligned agents - a comparative analysis, Inferring utility functions from locally non-transitive preferences.All Bad Ends All (click for soundtrack)It's been a couple of exciting weeks.Google DeepMind published a paper showing we've been training our language models in a suboptimal way. Their new model (continuing the rodent-themed naming scheme), Chinchilla, fixes that issue and manages to compete with/beat the much larger Gopher.Google Brain didn't get that message in time (also not the message about the rodent-themed naming scheme)[1] and published a paper where they train a model that's even larger than Gopher, but with the old, suboptimal training scheme. That model can now solve logic puzzles that I could not solve in a thousand years[2].The new kid on the block, Anthropic, is not dragging its feet and published a paper where they show that reinforcement learning from human values meshes well with their "Helpful, Harmless, and Honest" approach[3]. This paper contains the first plot I've seen with a language model score better than a human expert on a relevant metric[4] - but that doesn't mean it didn't happen before. I don't pay super close attention to the metrics.Finally, OpenAI published a paper where they trained a new type of text-to-image model based on their previously released CLIP. The result is... mind-blowing, to the point where people on Twitter announce the end of traditional visual art forms[5].Through a series of wacky coincidences (TBA), yours truly got access to the beta version of OpenAI’s new image generation technology and has been updating the thumbnails of his Substack (see here for a disclosure).Three images created by me with OpenAI’s DALL-E 2. The prompts are: "A painting of Leonardo Da Vinci holding a slice of pizza.", "A painting of a chinchilla and a gopher dancing.", "A painting of a painting of a painting of a painting."Not coincidentally, one of the founding figures of AI Safety published an April Fool's post announcing that their new strategy is just giving up, accompanied by a fair amount of despair in the community. More moderate voices are willing to bet money that AI doom is not imminent and that we still have at least ten years[6]. Prediction markets have reacted to the news by decreasing median timelines by 5 to 10 years but still placing it 10 to 20 years in the future. Notably, this is less than the typical 30 years implied by Platt's law, but these estimates are notoriously hard to interpret[7].From source. Note that the y-axis is logarithmic.In my experience, the best strategy in response to a big, exciting thing is to ignore it. Enough smart people will already be thinking about the topic, and all the important details will reach you through social osmosis[8] in due time.Consequently, I focus on the B-story that develops parallel to all the exciting rush. I look at whether there currently is any alignment tax, whether the alignment tax might be negative (temporarily), and how we might expect the alignment tax to change.Pop culture alignment problemLet's recap one line of argument for what we might expect advanced AI to look like:In the early days when people started to think about advanced AI, there was a pretty even divide on whether an advanced AI will tend to be friendly or unfriendly "by default"[9]. With this debate as a backdrop, Nick Bostrom posited his Orthogonality thesis in 2012:Intelligence and final goals are orthogonal axes along which possible agents can freely vary.The orthogonality thesis is an idea we are very familiar with from pop culture. Fictional characters can be heroes or villains, and they can be capable or inept. Have a look at this alignment chart that took me way too long to make:I don't know what half of these characters do, but I hope I placed them approximately right. Please have extensive discussions about this chart in the comments.None of these characters are real, and arguments from fictional evidence shouldn't convince anyone. But we don't have to search long to find real examples of very capable evil (factory farming, drug cartels, or Wall Street psychopaths) or very inept good (petition to end poverty, plastic straw bans, or sweaters for kittens).The category "capable neutral", however, is the weirdest one. Surprisingly, it contains some striking examples from advanced AI systems:The YouTube recommendation algorithm used to be only really good at presenting people with content they wanted to watch. As a negative byproduct, they contributed to polarization and misinformation.The Microsoft chatbot Tay used to be only really good at learning from what people tweeted at it. It quickly became very ugly.The Amazon hiring software used to be only really good at predicting resume scores based on previously observed and evaluated resumes. It took way too long until they realized it also incorporated a pretty bad bias against female applicants.Note that none of these cases resemble "The Joker" - these systems did not have the explicit goal of harming someone or causing chaos. The bad effects I list are only side-effects of the thing the system is designed to do.Systems in the "neutral" or "evil" row are called unaligned or misaligned, and the problem of pushing them more into the top row is called The Alignment Problem. Some people argue that solving the alignment problem could be one of the most important challenges. Such claims are made about different things all the time. But when we see how our AI systems are getting exponentially more capable, we can imagine that problems like those with the YouTube recommendation algorithm, Tay AI, and the Amazon hiring software might also grow exponentially in severity. And the same way that more capable AI systems exhibit qualitatively new capabilities, there is also the possibility that we will encounter qualitatively new problems.The only certainty in lifeWe might want to be very careful when designing a new system to avoid these problems. We might want to run many tests before deployment and build the system to steer it away from bad failure modes. (Or perhaps we could “just” decide not to develop the system in the first place.)All of these proposals come with a certain cost[10]; they mightdelay deployment,make the project more expensive,and/or decrease performance.Paying these costs might be worth it ex-post (YouTube, Microsoft, and Amazon probably wish they had done those things only to avoid the bad PR). Still, ex-ante, the danger of being scooped by a faster rival or the possibility that the safety precautions turn out to be unnecessary are more salient.Schematic of the alignment tax. The label on the y-axis is debatable. The tax can not only be paid in performance but also in development time or system complexity (in which case the relationship reverses).The additional cost imposed by making a system safe is called the alignment tax. The idea came originally from Eliezer Yudkowsky, but Paul Christiano popularized the term. The term popped up repeatedly in the following years, with some people arguing that the tax could be pretty low or infinitely high. An infinite tax corresponds to an unsolvable problem (there is no way to make the system safe). A tax close to zero means that it will take very little additional effort to include the safety features (possibly because other researchers have made it very easy to include them).Arguments on this topic have, however, an unmistakable theoretical bent. We can only determine the actual cost of the alignment tax in retrospect once you realize all your mistakes. And maybe not even then; we don't only care about the actual cost but also the probability of failure. If something works 99% of the time but has a bad failure mode in 1% of cases, we still want to have good mechanisms to handle the 1%. Just because the 1% didn't happen doesn't mean it was the right decision not to prepare for it.We can (and should) sit down and think about how AI can go wrong and what it would cost to prevent it. If that number comes out to be ∞ or some number larger than anything we could ever reasonably afford, well, that's not actionable. If that were the number I came up with, I'd want to keep the argument in mind, hope I'm wrong, and try to do the things I can do.The Impossibility Of A Negative Alignment TaxAll of this was a rather longwinded intro[11] for an interesting result from the InstructGPT paper and the new Anthropic paper (emphasis is theirs):We were able to mitigate most of the performance degradations introduced by our fine-tuning. -- Training language models to follow instructions with human feedback Smaller models experience severe ‘alignment taxes’ – their performance on a wide variety of evaluations declines after RLHF training. However, we find a variety of alignment bonuses, with our 13B and 52B RLHF-trained models performing better at zero-shot NLP evaluations, and the same at few-shot evaluations. -- Training a Helpful and Harmless Assistant with Reinforcement Learning from Human FeedbackNote that both papers use a technique called 'reinforcement learning from human feedback' (RLHF) to finetune a language model to produce output that scores well when rated by humans. Both of these papers observe that this finetuning can be done without deteriorating performance on the benchmarks. And both papers observe that the finetuned models became a lot better on things we care about[12]. And both papers mention that there doesn't appear to be a huge alignment tax (or that we can mitigate it), Anthropic going even so far as to talk about an alignment bonus. What's going on here?When I put my skeptical hat on, the answer is a bit dismissive: "RLHF does not provide any degree of safety worth mentioning, and even if it did, it would not continue to work once we have models that are a lot more capable and dangerous. Applying the "alignment tax" concept to these models is a category error and produces misleading intuitions."That's a fair point[13], but still, something is going on, and we need some terminology. I think about it this way: remember the orthogonality thesis from the beginning. Agents can vary along the two axes "good evil" and "capable inept". When we think of the abstract space of AI techniques, the current state-of-the-art is a point in that space. Each innovation translates into moving the state-of-the-art in some direction. At each time, we choose between pushing for innovation in the "safety" or the "capability" direction.2D picture of AI innovations. a. The current state-of-the-art (grey dot) lies in the alignment chart space from section 1. At any time, we can consider the direction in which a “safety” intervention (blue) or a “capability” intervention (orange) pushes the state-of-the-art. b. In an “alignment tax situation”, the two vectors have a strictly negative dot product. c. In an “alignment bonus” situation, the two vectors have a strictly positive dot product.This model allows us to explain:how a safety technique can improve capabilities: To some degree, safety is a capability. If a system is not safe at all, it's also not useful at all.how an alignment tax might come about; at some point, we might have to decide between pushing more in the "capable" or the "good" direction, and there might be hard trade-offs between the two.The model also allows us to visualize possible failure modes:The picture shows three scenarios I could come up with:The automobile model. When we built the first cars, they were insanely dangerous. Figuring out how to not make them as dangerous was pretty central to the mission of making them useful. But eventually, we figured out how to not make cars explode, and we reached the point where safety measures make the experience of riding a car worse[14]. We have now reached a point where some features that would substantially improve safety[15] are not implemented because they would deteriorate capabilities too much[16].The deception scenario. With the help of a psychiatrist friend, I've written previously on how drug addicts can be incredibly good at deception. They will credibly signal all the things that make you believe that this time they have really changed - and then disappear with your microwave. As far as I can tell, there is no empirical evidence of deceptive AI yet[17]. But there are theoretical arguments for why it might happen once capabilities increase. In this scenario, everything appears to be going super smoothly, and all the safety features we implement keep pushing in the same direction as the capability features. The AI is an overwhelming sweetheart - helping old ladies across the street and everything - until it isn't. Our measure for "good" and "bad" was not reliable, and by the time we found out[18], it was too late.The "what failure looks like" scenario. This one is speculative and inspired by an alignment failure story but appears like a very real possibility. We might try to apply safety innovations at each point of the process, but they don't seem to be doing anything beyond some point. The situation is gradually getting more severe, but we can't put the finger on the issue. Whenever we run a diagnostic, we get "everything is going according to plan". And at some point, the entire system is so opaque that nobody can tell what is going on anymore.These scenarios are not meant to be comprehensive, but I hope they illustrate the usefulness of the 2D model.Closing thoughtsI’m undecided about whether it’s a good thing that safety research and capability research are pushing in the same direction. On the one hand, incentives matter, and having strong incentives to do safety research is (ceteris paribus) a good thing. On the other hand, things are already moving fast and any push in the capabilities direction is seen as a threat by some. But independent of any moral judgment of what is happening, knowing that it’s happening, and having better terminology to communicate it, appears useful.Now I really have to head out! If you’re at EAG London let me know - would love to meet you, dear reader :)^I like "Giant Hutia" or "New York Pizza Rat".^From Figure 19 of source. Someone on Twitter pointed out that Leonardo is more likely to hold a slice of pizza, so the answer should be Italy. I'm happy that we've reached this level of criticism.^They leave out “honest” because it's really hard to evaluate for non-expert humans.^From Figure 1 of source. The green triangle is just outside the "Professional Writer" region. That point represents the set-up where the model was explicitly fine-tuned to be helpful, neglecting harmlessness.^That's Twitter, though. An artsy friend of mine is now considering changing career tracks to go into "AI art", something that wasn’t really on the menu a few weeks ago.^To those not familiar with this type of talk, yeah, ten years is still a lot shorter than what the median person on the street or even some experts would guess. I don't have robust reasoning for how different "timelines" should affect what I do on a day-to-day basis, so I tend not to think about the question too much.^There has been an influx of new people on those questions in the last two weeks for the prediction markets. It's probably fair to assume that those are not experts (who would have thought about the question even before the new wave of papers) but people who decided to participate because of the papers. And beyond prediction markets, there's this ongoing discussion about whether forecasting AI is possible.^Or, let’s be honest, LessWrong.^i.e. if we don't put a lot of work into pushing in one direction or the other.^Imagine being the poor schlub who has to tell their boss that the system won't be ready for another year because the team has decided to lock it in a box to perform psychoanalysis.^I'm traveling this weekend, and this was supposed to be just a quick & dirty, lightweight post. If I had more time, I’d write a shorter post.^As opposed to the benchmarks, which don't capture what we care about super well once they are pushed to an extreme.^thank you, skeptical hat^seatbelts were massively unpopular when they were first mandated^like limiting the speed of a car on highways^Also, we cannot adopt other technologies for some reason, although they appear to be an improvement in both safety and capabilities. This would be an interesting topic to dive deeper into - is it the "evil greedy company," or is it some inadequate equilibrium?^Which might be a good or a really bad thing.^the milliseconds before everyone dies.Discuss | https://www.lesswrong.com/posts/YFckXwZg8Zcaib5ff/pop-culture-alignment-research-and-taxes | 2022-04-16T15:45:26Z | TL;DR: A quick recap of all the AI progress published recently, a shortcoming of the alignment tax definition, and a dynamical systems model of AI progress.Previously in this series: Cognitive Biases in Large Language Models, Drug addicts and deceptively aligned agents - a comparative analysis, Inferring utility functions from locally non-transitive preferences.It's been a couple of exciting weeks.Google DeepMind published a paper showing we've been training our language models in a suboptimal way. Their new model (continuing the rodent-themed naming scheme), Chinchilla, fixes that issue and manages to compete with/beat the much larger Gopher.Google Brain didn't get that message in time (also not the message about the rodent-themed naming scheme)[1] and published a paper where they train a model that's even larger than Gopher, but with the old, suboptimal training scheme. That model can now solve logic puzzles that I could not solve in a thousand years[2].The new kid on the block, Anthropic, is not dragging its feet and published a paper where they show that reinforcement learning from human values meshes well with their "Helpful, Harmless, and Honest" approach[3]. This paper contains the first plot I've seen with a language model score better than a human expert on a relevant metric[4] - but that doesn't mean it didn't happen before. I don't pay super close attention to the metrics.Finally, OpenAI published a paper where they trained a new type of text-to-image model based on their previously released CLIP. The result is... mind-blowing, to the point where people on Twitter announce the end of traditional visual art forms[5].Through a series of wacky coincidences (TBA), yours truly got access to the beta version of OpenAIs new image generation technology and has been updating the thumbnails of his Substack (see here for a disclosure).Three images created by me with OpenAIs DALL-E 2. The prompts are: "A painting of Leonardo Da Vinci holding a slice of pizza.", "A painting of a chinchilla and a gopher dancing.", "A painting of a painting of a painting of a painting."Not coincidentally, one of the founding figures of AI Safety published an April Fool's post announcing that their new strategy is just giving up, accompanied by a fair amount of despair in the community. More moderate voices are willing to bet money that AI doom is not imminent and that we still have at least ten years[6]. Prediction markets have reacted to the news by decreasing median timelines by 5 to 10 years but still placing it 10 to 20 years in the future. Notably, this is less than the typical 30 years implied by Platt's law, but these estimates are notoriously hard to interpret[7].From source. Note that the y-axis is logarithmic.In my experience, the best strategy in response to a big, exciting thing is to ignore it. Enough smart people will already be thinking about the topic, and all the important details will reach you through social osmosis[8] in due time.Consequently, I focus on the B-story that develops parallel to all the exciting rush. I look at whether there currently is any alignment tax, whether the alignment tax might be negative (temporarily), and how we might expect the alignment tax to change.Pop culture alignment problemLet's recap one line of argument for what we might expect advanced AI to look like:In the early days when people started to think about advanced AI, there was a pretty even divide on whether an advanced AI will tend to be friendly or unfriendly "by default"[9]. With this debate as a backdrop, Nick Bostrom posited his Orthogonality thesis in 2012:Intelligence and final goals are orthogonal axes along which possible agents can freely vary.The orthogonality thesis is an idea we are very familiar with from pop culture. Fictional characters can be heroes or villains, and they can be capable or inept. Have a look at this alignment chart that took me way too long to make:I don't know what half of these characters do, but I hope I placed them approximately right. Please have extensive discussions about this chart in the comments.None of these characters are real, and arguments from fictional evidence shouldn't convince anyone. But we don't have to search long to find real examples of very capable evil (factory farming, drug cartels, or Wall Street psychopaths) or very inept good (petition to end poverty, plastic straw bans, or sweaters for kittens).The category "capable neutral", however, is the weirdest one. Surprisingly, it contains some striking examples from advanced AI systems:Note that none of these cases resemble "The Joker" - these systems did not have the explicit goal of harming someone or causing chaos. The bad effects I list are only side-effects of the thing the system is designed to do.Systems in the "neutral" or "evil" row are called unaligned or misaligned, and the problem of pushing them more into the top row is called The Alignment Problem. Some people argue that solving the alignment problem could be one of the most important challenges. Such claims are made about differentthingsall the time. But when we see how our AI systems are getting exponentially more capable, we can imagine that problems like those with the YouTube recommendation algorithm, Tay AI, and the Amazon hiring software might also grow exponentially in severity. And the same way that more capable AI systems exhibit qualitatively new capabilities, there is also the possibility that we will encounter qualitatively new problems.The only certainty in lifeWe might want to be very careful when designing a new system to avoid these problems. We might want to run many tests before deployment and build the system to steer it away from bad failure modes. (Or perhaps we could just decide not to develop the system in the first place.)All of these proposals come with a certain cost[10]; they mightdelay deployment,make the project more expensive,and/or decrease performance.Paying these costs might be worth it ex-post (YouTube, Microsoft, and Amazon probably wish they had done those things only to avoid the bad PR). Still, ex-ante, the danger of being scooped by a faster rival or the possibility that the safety precautions turn out to be unnecessary are more salient.Schematic of the alignment tax. The label on the y-axis is debatable. The tax can not only be paid in performance but also in development time or system complexity (in which case the relationship reverses).The additional cost imposed by making a system safe is called the alignment tax. The idea came originally from Eliezer Yudkowsky, but Paul Christiano popularized the term. The term popped up repeatedly in the following years, with some people arguing that the tax could be pretty low or infinitely high. An infinite tax corresponds to an unsolvable problem (there is no way to make the system safe). A tax close to zero means that it will take very little additional effort to include the safety features (possibly because other researchers have made it very easy to include them).Arguments on this topic have, however, an unmistakable theoretical bent. We can only determine the actual cost of the alignment tax in retrospect once you realize all your mistakes. And maybe not even then; we don't only care about the actual cost but also the probability of failure. If something works 99% of the time but has a bad failure mode in 1% of cases, we still want to have good mechanisms to handle the 1%. Just because the 1% didn't happen doesn't mean it was the right decision not to prepare for it.We can (and should) sit down and think about how AI can go wrong and what it would cost to prevent it. If that number comes out to be or some number larger than anything we could ever reasonably afford, well, that's not actionable. If that were the number I came up with, I'd want to keep the argument in mind, hope I'm wrong, and try to do the things I can do.The Impossibility Of A Negative Alignment TaxAll of this was a rather longwinded intro[11] for an interesting result from the InstructGPT paper and the new Anthropic paper (emphasis is theirs):We were able to mitigate most of the performance degradations introduced by our fine-tuning. -- Training language models to follow instructions with human feedbackSmaller models experience severe alignment taxes their performance on a wide variety of evaluations declines after RLHF training. However, we find a variety of alignment bonuses, with our 13B and 52B RLHF-trained models performing better at zero-shot NLP evaluations, and the same at few-shot evaluations. -- Training a Helpful and Harmless Assistant with Reinforcement Learning from Human FeedbackNote that both papers use a technique called 'reinforcement learning from human feedback' (RLHF) to finetune a language model to produce output that scores well when rated by humans. Both of these papers observe that this finetuning can be done without deteriorating performance on the benchmarks. And both papers observe that the finetuned models became a lot better on things we care about[12]. And both papers mention that there doesn't appear to be a huge alignment tax (or that we can mitigate it), Anthropic going even so far as to talk about an alignment bonus. What's going on here?When I put my skeptical hat on, the answer is a bit dismissive: "RLHF does not provide any degree of safety worth mentioning, and even if it did, it would not continue to work once we have models that are a lot more capable and dangerous. Applying the "alignment tax" concept to these models is a category errorand produces misleading intuitions."That's a fair point[13], but still, something is going on, and we need some terminology. I think about it this way: remember the orthogonality thesis from the beginning. Agents can vary along the two axes "good <-> evil" and "capable <-> inept". When we think of the abstract space of AI techniques, the current state-of-the-art is a point in that space. Each innovation translates into moving the state-of-the-art in some direction. At each time, we choose between pushing for innovation in the "safety" or the "capability" direction.2D picture of AI innovations. a. The current state-of-the-art (grey dot) lies in the alignment chart space from section 1. At any time, we can consider the direction in which a safety intervention (blue) or a capability intervention (orange) pushes the state-of-the-art. b. In an alignment tax situation, the two vectors have a strictly negative dot product. c. In an alignment bonus situation, the two vectors have a strictly positive dot product.This model allows us to explain:how a safety technique can improve capabilities: To some degree, safety is a capability. If a system is not safe at all, it's also not useful at all.how an alignment tax might come about; at some point, we might have to decide between pushing more in the "capable" or the "good" direction, and there might be hard trade-offs between the two.The model also allows us to visualize possible failure modes:The picture shows three scenarios I could come up with:The automobile model. When we built the first cars, they were insanelydangerous. Figuring out how to not make them as dangerous was pretty central to the mission of making them useful. But eventually, we figured out how to not make cars explode, and we reached the point where safety measures make the experience of riding a car worse[14]. We have now reached a point where some features that would substantially improve safety[15] are not implemented because they would deteriorate capabilities too much[16].The deception scenario. With the help of a psychiatrist friend, I've written previously on how drug addicts can be incredibly good at deception. They will credibly signal all the things that make you believe that this time they have really changed - and then disappear with your microwave. As far as I can tell, there is no empirical evidence of deceptive AI yet[17]. But there are theoretical arguments for why it might happen once capabilities increase. In this scenario, everything appears to be going super smoothly, and all the safety features we implement keep pushing in the same direction as the capability features. The AI is an overwhelming sweetheart - helping old ladies across the street and everything - until it isn't. Our measure for "good" and "bad" was not reliable, and by the time we found out[18], it was too late.The "what failure looks like" scenario. This one is speculative and inspired by an alignment failure story but appears like a very real possibility. We might try to apply safety innovations at each point of the process, but they don't seem to be doing anything beyond some point. The situation is gradually getting more severe, but we can't put the finger on the issue. Whenever we run a diagnostic, we get "everything is going according to plan". And at some point, the entire system is so opaque that nobody can tell what is going on anymore.These scenarios are not meant to be comprehensive, but I hope they illustrate the usefulness of the 2D model.Closing thoughtsIm undecided about whether its a good thing that safety research and capability research are pushing in the same direction. On the one hand, incentives matter, and having strong incentives to do safety research is (ceteris paribus) a good thing. On the other hand, things are already moving fast and any push in the capabilities direction is seen as a threat by some. But independent of any moral judgment of what is happening, knowing that its happening, and having better terminology to communicate it, appears useful.Now I really have to head out! If youre at EAG London let me know - would love to meet you, dear reader :)^I like "Giant Hutia" or "New York Pizza Rat".^^They leave out honest because it's really hard to evaluate for non-expert humans.^^That's Twitter, though. An artsy friend of mine is now considering changing career tracks to go into "AI art", something that wasnt really on the menu a few weeks ago.^To those not familiar with this type of talk, yeah, ten years is still a lot shorter than what the median person on the street or even some experts would guess. I don't have robust reasoning for how different "timelines" should affect what I do on a day-to-day basis, so I tend not to think about the question too much.^There has been an influx of new people on those questions in the last two weeks for the prediction markets. It's probably fair to assume that those are not experts (who would have thought about the question even before the new wave of papers) but people who decided to participate because of the papers. And beyond prediction markets, there's this ongoing discussion about whether forecasting AI is possible.^Or, lets be honest, LessWrong.^i.e. if we don't put a lot of work into pushing in one direction or the other.^Imagine being the poor schlub who has to tell their boss that the system won't be ready for another year because the team has decided to lock it in a box to perform psychoanalysis.^I'm traveling this weekend, and this was supposed to be just a quick & dirty, lightweight post. If I had more time, Id write a shorter post.^As opposed to the benchmarks, which don't capture what we care about super well once they are pushed to an extreme.^thank you, skeptical hat^seatbelts were massively unpopular when they were first mandated^like limiting the speed of a car on highways^Also, we cannot adopt other technologies for some reason, although they appear to be an improvement in both safety and capabilities. This would be an interesting topic to dive deeper into - is it the "evil greedy company," or is it some inadequate equilibrium?^Which might be a good or a really bad thing.^the milliseconds before everyone dies. | Content Synthesis/Decision Making | Unknown | null | null | null | null | null | null |
|
news | Wall Street Reporter | Trillion Dollar Megatrends: CEOs of FB, NEXCF, FEMFF, and LCID Discuss Next Big Growth Waves in Metaverse, AI, E-Commerce and Electric Vehicles | NEW YORK, April 28, 2022 (GLOBE NEWSWIRE) -- Wall Street Reporter, the trusted name in financial news since 1843, has published reports on the latest... | https://finance.yahoo.com/news/trillion-dollar-megatrends-ceos-fb-130200482.html | 2022-04-28T13:02:00Z | NEW YORK, April 28, 2022 (GLOBE NEWSWIRE) -- Wall Street Reporter, the trusted name in financial news since 1843, has published reports on the latest comments and insights from CEOs of:Meta Platforms, Inc. (NASDAQ: FB), NexTech AR Solutions (OTC: NEXCF) (CSE: NTAR), First Energy Metals (OTC: FEMFF) (CSE: FE), Lucid Group, Inc. (NASDAQ: LCID). Todays emerging technologies and lifestyle megatrends are creating billion dollar opportunities for disruptive innovation in how we live, work and play. Wall Street Reporter highlights the latest comments from industry thought leaders shaping our world today, and in the decades ahead:Meta Platforms, Inc. (NASDAQ: FB) CEO Mark Zuckerberg: AI Will Play Big Role in Building the Metaverse...AI is one of the areas where weve routinely seen stronger returns on our investments over time than weve expected. Artificial intelligence is also going to play a big role in our work to help build the metaverse. We just announced our AI Research SuperCluster, which we think will be the world's fastest supercomputer once it is complete later this year. This is going to enable new AI models that can learn from trillions of examples and understand hundreds of languages which will be key for the kinds of experiences that were building. Looking ahead, were focused on further scaling our computing power and transforming our AI infrastructure through advances in foundational research, as well as improvements to data center design, networking, storage, and software.Were focused on the foundational hardware and software required to build an immersive, embodied internet that enables better digital social experiences than anything that exists today Meta Platforms, Inc. (NASDAQ: FB) Earnings Highlights: https://www.wallstreetreporter.com/2022/04/07/meta-platforms-inc-nasdaq-fb-q4-2021-earning/NexTech AR Solutions (OTC: NEXCF) (CSE: NTAR) CEO Evan Gappelberg: On-Ramp to Metaverse & Web 3.0 for $5.5 Trillion E-Commerce MarketNexTech AR (OTC: NEXCF), a featured presenter at Wall Street Reporter's "Next Super Stock" investor conference series, recently shared with investors how NEXCF is emerging as a key player in the $5.5 trillion global e-commerce market transition to web 3.0 and the metaverse. NEXCF Augmented Reality solutions enable to view products in lifelike 3D, in their own living room. This AR shopping experience bridges the gap between the physical world, and what was once a flat 2D online e-commerce experience. NEXCFs AR shopping experience is a game changer for the 5.5 trillion global e-commerce industry (source: Statisa 2022).Watch NEXT SUPER STOCK (OTC: NEXCF) (CSE: NTAR) Video: https://www.wallstreetreporter.com/2022/04/06/next-super-stock-nextech-ar-otc-nexcf-cse-ntar-on-ramp-to-metaverse-web-3-0-for-5-trillion-e-commerce-market/NEXCF AR solutions create billions of dollars in potential profitability and cost cost-savings for e-commerce leaders by driving +93% increases in click through rate, and -40% reductions in product returns. This value creation and ROI is driving growing demand and industry adoption of NEXCF AR solutions. Nearly 2 billion of the worlds population now shops online. Most importantly, over 72% of e-commerce is now done by mobile phone - a native platform for Augmented Reality apps like NEXCF. Global blue chip brands utilizing NexTech AR, include: Ford Mach EV, Kohls, CB2, Crate & Barrel, Pier 1, and Segway.NexTechs AR solutions are rapidly becoming a must-have for e-commerce leaders to succeed in todays hyper-competitive market, where even marginal improvements in metrics like click-though and return-rates can mean the difference of billions of dollars to a companys bottom lineWere at an inflection point now, where industry demand and adoption for NexTechs AR is accelerating and going mainstream. As E-Commerce shifts to Web 3.0 and the Metaverse, the demand for AR/3D product models becomes essential. NexTech is emerging as the on ramp to the Metaverse and Web 3.0 for the $5.5 trillion e-commerce industry. With over 200 million product SKUs in e-commerce worldwide - NexTech has a potential revenue pipeline worth billions of dollars in coming years. Watch NEXT SUPER STOCK (OTC: NEXCF) (CSE: NTAR) Video: https://www.wallstreetreporter.com/2022/04/06/next-super-stock-nextech-ar-otc-nexcf-cse-ntar-on-ramp-to-metaverse-web-3-0-for-5-trillion-e-commerce-market/April 19 - NEXCF launches BigCommerce App For 3D Model Making. With this App going live, Nextechs ARitize 3D SaaS offering for ecommerce, extends the Company's 3D model creation capabilities to BigCommerce merchants. This App seamlessly provides BigCommerce merchants with the software needed to sell effectively online, with a frictionless one click integration for 3D model making. Nextech has already integrated with Shopify earlier this year, and will integrate with WooCommerce within the next 30 days.April 14 - NEXCF launches Nextech Metaverse Suite. This Suite of Metaverse SaaS products unifies Nextechs entire technology stack of 3D and AR products and apps into one, end-to-end seamless and easy to use SaaS subscription based platform. Nextech views SaaS as a significant advancement of all its 3D modelling and augmented reality solutions. SaaS integration has enormous implications for scalability and the company's future revenue growth.April 12 - NEXCF launches Shopify freemium 3D model pricing, updating the Companys Shopify app, initially launched in January, allowing merchants to create 3D/AR models for their ecommerce site.April 6 - NEXCF files four pivotal non-provisional utility patents with respect to its ARitize 3D and ARitize CAD products for 3D model making, further strengthening the Companys intellectual property position. Watch NEXT SUPER STOCK (OTC: NEXCF) (CSE: NTAR) Video: https://www.wallstreetreporter.com/2022/04/06/next-super-stock-nextech-ar-otc-nexcf-cse-ntar-on-ramp-to-metaverse-web-3-0-for-5-trillion-e-commerce-market/First Energy Metals (OTC: FEMFF) (CSE: FE) Gurmin Sangha, CEO: Lithium Essential For EV Boom AheadBooming electric vehicle sales globally are causing prices for battery components like lithium to skyrocket - with prices increasing over 1,000% in the past 12 months, with no slowdown in sight, as EVs go mainstream. China is snapping up strategic battery metals deposits globallyFirst Energy Metals (OTC: FEMFF) (CSE: FE) a featured presenter at Wall Street Reporters NEXT SUPER STOCK investors conference is developing what promises to be a major lithium project in Quebec, Canada. Preliminary drilling and exploration results, (subject to a qualified 43-101 report), point to adding additional resources to the already 119 million tonne resource surrounding the Augustus Lithium property. A typical EV requires approximately 10 kilograms of lithium, so one ton of lithium ore is enough to build about 90 electric cars.FEMFFs flagship Augustus lithium project is strategically located in an exciting area ofknown Lithium bearing rock types, with neighboring advanced staged projects with resources in place. FEMFF is now progressing on a systematic drilling program with an objective to develop a resource (43-101 compliant) over the next 9-12 months. Lithium projects and exploration projects in the area are of similar grades and support the continued exploration and potential at the Augustus Project. Sayona Mining, who owns both the advanced stage properties in the area recently published a JORC combined Measured, Indicated, and Inferred resource of 119.1million tonnes for both its North American Lithiium (NAL) and Authier project.Importantly, FEMFFs lithium project is located in Quebec, Canada which is a mining friendly jurisdiction. FEMFFs lithium project is located near a past lithium processing plant and other more advanced stage lithium projects. FEMFFs drilling program is still in the early stages with a number of important milestones in the weeks ahead, which could reveal the projects full upside potential.Watch First Energy Metals (OTC: FEMFF) (CSE: FE) NEXT SUPER STOCK video:https://www.wallstreetreporter.com/2022/01/19/next-super-stock-first-energy-metals-otc-femff-cse-fe-lithium-for-ev-revolution/Lucid Group, Inc. (NASDAQ: LCID) CEO Peter Rawlinson: Lucid EV Technology Worlds Apart From Current Market Leaders ..Lucid Air was officially announced as Motor Trends 2022, Car of the Year, arguably the most prestigious award in the motor industry, seeking out true innovation, and groundbreaking vehicles. For a new brand and a new Company to win, to my knowledge, that's only been done once before...We believe that our technology is worlds apart from even the current market leaders. For example, no one expected Lucid to achieve even 400 miles of range. And now we have an EPA certification of 520 miles of range for the Lucid Air Dream Edition R. And we've created this technology with an eye towards an ability to manufacture it at scale, which is where true innovation lies...Our technology powered every car in Formula E, the world's premier electric motorsports series. And indeed, it changed the sport. Our battery packs have enabled Formula E cars to complete full race distances on a single charge. Now, what we learn from every race car, at every race, goes directly into the battery technology, powering Lucid Air. As the media and early customers have learned, our vehicles are truly the fusion of hardware and software, made possible by one of the best teams in the industry. We have auto industry veterans, including members of my former Tesla Model S team, working with former executives from Apple, and learning from their customer, the electronics experience. Along with so many other standouts from automotive and tech industries. We plan to leverage our team's abilities to continue to improve vehicle function and features, even after customers take delivery via over the air updatesLucid Group, Inc. (NASDAQ: LCID) Earnings Highlights: https://www.wallstreetreporter.com/2021/12/14/lucid-group-inc-nasdaq-lcid-q3-2021-earnings-highlights/WALL STREET REPORTERWall Street Reporter (Est. 1843) is the leading financial news provider, focused on giving investors direct access to CEOs of promising, publicly-traded companies, and market experts. www.WallStreetReporter.com. Nothing in this news summary shall be construed as investment advice. Quotes/content may be edited for brevity and context. Full disclaimer, and relevant SEC 17B disclosures here: https://tinyurl.com/2x4eznd5About Wall Street Reporters Next Super Stock conference:Wall Street Reporter's NEXT SUPER STOCK Live! conference is dedicated to featuring select companies that have near-term catalysts in place which can drive transformational growth (and stock appreciation) in the months ahead. Click here to join next livestream event: https://www.wallstreetreporter.com/next-superstock-online-investor-conference/CONTACT:WALL STREET REPORTER(212) 871-2057 ext 7www.WallStreetReporter.com | Content Creation/Decision Making/Personalization/Prediction | Management/Business and Financial Operations | null | null | null | null | null | null |
|
news | Game Dev Digest Issue #139 - Design Patterns - by Mike Marrone | Game Dev Digest Issue #139 - Design Patterns. The latest from the free weekly Unity3d/gamedev newsletter. | https://gamasutra.com/blogs/MikeMarrone/20220429/396563/Game_Dev_Digest_Issue_139__Design_Patterns.php | 2022-04-29T15:44:00Z | The following blog post, unless otherwise noted, was written by a member of Gamasutra’s community.The thoughts and opinions expressed are those of the writer and not Gamasutra or its parent company. This article was originally published on GameDevDigest.comDesign patterns and more in this issue. Enjoy!Strategy Pattern In Unity - The strategy pattern is all about encapsulating or wrapping up a behavior or algorithm in its own class. It has the added bonus of allowing behaviors or algorithms to be assigned at runtime!blog.devgenius.ioRefactoring Data stored in Unity Prefabs, Scenes and other Assets - Our use case is when we want to store data in a different (hopefully better) way when there is already a lot of created content (prefabs, scenes, etc). Even if we want to manually do that, it is not easy to know, in terms of GameObjects and Scenes, where the data is used.blog.gemserk.comAsync FSM using UniTask - In this post I’m going through steps to implement an asynchronous finite-state machine (FSM) in Unity, using async/await library UniTask. In the end you’ll have a nice modular state machine with all the usual stuff you would expect to find in a FMS. We’ll also take a look at how we can run update loops independently of monobehaviours / gameobjects.jussituomi.comTry out our battle pass sample using Unity Gaming Services Use Cases - One of these samples is a battle pass — a seasonal reward tier system with a free track and a premium track. This article will explain how this sample works, and how you can implement it in your own game.UnityProcedural Animations - Here is a QUICK explanation for my procedural animations that several people asked for. Hope it helps!RujiK @TheRujiKLegends Of Learning - Online Educational Games For Teachers & Students. Over 2,000 fun, curriculum-aligned, math and science games created to engage students and assist teachers using hybrid learning, blended learning & distance learning models.legendsoflearning.comUnity 2022.2.0 Alpha 11 - Unity 2022.2.0 Alpha 11 has been released.UnityVideosUnity Awards 2021 | Winners - We've tallied up your votes, and are proud to announce the winners of the 2021 Unity Awards! A massive congratulations to all of our winners and finalists. Check out the acceptance speechesUnityHow Video Game Economies are Designed - In this video, I’ll show you how a typical video game economy is designed - and how resources flow around the system. As we go, I’ll show you how these economic entities can be used to create interesting gameplay for the player.Game Maker's ToolkitThe AI of DOOM (1993) | AI and Games - DOOM is one of the most important games of all time, and the AI behind it is just as impressive now as it was almost 30 years ago. In this 66(6)th episode, let's go and find out how it all works.AI and GamesCHROMATIC ABERRATION Secrets at RUNTIME in Unity (Post Processing Tutorial) - Lets look at scripting CHROMATIC ABERRATION at runtime in Unity, we'll look at scripting UI events: Sliders, dropdowns, toggles and more. We'll look at changes the bloom post processing effect with in-depth C# code and various examples of how to make this work, whether you want to use this for settings, in-game actions and more!SpeedTutorHow to crate a wall painting shader in Unity 2021 - Many sim games offer a wall or object painting mechanic as a way to customize your in game house.Sunny Valley StudioRiderFlow For Unity by JetBrains -- Very Powerful & Very Free! - RiderFlow is a new tool for Unity developers, that makes it much easier to handle Unity scenes from JetBrains, the makers of IntelliJ IDEA, WebStorm, CLion and Rider IDEs. It gives you finding, bookmarking, code editing and refactoring tools in the Unity editor. We check all these functions out in action in the video. Oh, and it's free.GamefromscratchAdd Random "Bored" Idle Animations to Your Character (Unity Tutorial) - In this Unity game development tutorial we're going look at how we can add a bit more personality to a character, by playing random "Bored" animations after they’ve been left idle for a period of time.Ketra GamesAssetsSpring Sale! - Unity's annual Spring Sale begins on Monday April 11! More than 500 popular assets will be available at 50% off.Some of our faves: Mesh Baker, UModeler, and Epic Toon FX.LAST DAY OF THE SALE!Unity AffiliatePhaseJumpPro - Utility classes for making games with Unity. Modules included: Behavior Trees, Graphs, Valves, Commands, and many more design patterns.coinbump Open SourceZinnia - A collection of design patterns for solving common problems.ExtendRealityLtd Open SourceSerializable Interface - A wrapper that allows you to serialize interfaces. Both UnityEngine.Object and regular object implementers work!Thundernerd Open SourceXPool - Object Pooling System for Unity - Object pooling system for Unity.mackysoft Open SourceFastIK - Unity Inverse Kinematics made easy (SimpleIK is the working title)ditzel Open SourcePivot Editor for Unity - Adjust pivot point of an object in Unity without creating an empty parent object. Also available on the Asset Storeyasirkula Open SourceUnity Bezier Solution - A bezier spline solution for Unity 3D with some utility functions (like travelling the spline with constant speed/time). Also available on Asset Store.yasirkula Open SourceRosettaUI - Code-based GUI library for development menus for Unity.fuqunaga Open SourceMatch 3 SDK - A cross-platform library that makes it easy to create your own Match 3 game.ChebanovDD Open Sourceunity-moxi-ink - Lattice Boltzman Equation base ink simulation called Moxi implemented on Unity.komietty Open SourceSpotlightProject Ferocious - Tales from sea monsters and missing sailors surround a thick mysterious fog on the pacific ocean. The distress call from a group of missing investigative journalists leads you right in the middle of it, where you find a tropical island where no island should be. Hostile forces want to keep the island a secret at all costs and to uncover the truth and find your friends, you will not only have to survive them but also the fierce inhabitants of this lost world.Explore, Survive, Fight - in Project Ferocious, you are all alone on a deadly island occupied by hostile forces. Equipped with a camera, you are trying to uncover the truth while using a wide arsenal of weapons and tools to survive. Explore the island by rappeling steep cliffs, discovering underwater caves, and trying to hide from the terrifying creatures inhabiting this lost world.[Follow along on Twitter]OMYOGYou can subscribe to the free weekly newsletter on GameDevDigest.comThis post includes affiliate links; I may receive compensation if you purchase products or services from the different links provided in this article. | Content Creation/Decision Making | Unknown | null | null | null | null | null | null |
||
news | Adam Hughes | Semeris Demystifies Legal Documents Using MongoDB | Sorting through endless legal documents can be a time-consuming and burdensome process, but one startup says it doesn’t have to be that way.Semeris strives to demystify legal documentation by using the latest artificial intelligence and natural language processing techniques. Semeris’s goal is to put the information its customers need at their fingertips when and where they need it.Semeris aims to bring structure to capital market legal documents, while providing a first-class service to customers and blending together the disciplines of finance, law, natural language processing, and artificial intelligence.In this edition of Built with MongoDB, we talk with Semeris about how they use MongoDB Atlas Search to help customers analyze documents and extract data as quickly as possible.Built with MongoDB spoke with Semeris CEO, Peter Jasko, about his vision for the company, working with MongoDB, the company’s relationship with venture capital firm QVentures, and the value of data.In this video, Peter Jasko explains how MongoDB Atlas's fully managed service and support has been a key factor in helping Semeris scale.Built with MongoDB: Can you tell us about Semeris?Peter Jasko: We help our investor banking and lawyer clients analyze legal documentation. We help them extract information from the documentation that they look at. A typical transaction might have 500 to 1,000 pages of documentation, and we help them to analyze that really quickly and pull out the key information that they need to be able to review that documentation within a couple hours, rather than the 7 or 8 hours it would normally take.Built with MongoDB: What is the value of data in your space?Peter: Data is essential in what we do because we build models around the publicly available documentation that we see. We store that data, we analyze it, we build machine learning models around it, and then we use that to analyze less seen documentation or more private documentation that our clients have internally.Built with MongoDB: How has your partnership with QVentures helped Semeris? Peter: Our partnership with QVentures is not just a financial one where they’ve invested some money into our firm; they’ve also helped us uncover contacts within the market. They introduced us to the MongoDB partnership that has helped us get some credits and build out our technology onto the MongoDB platform.Built with MongoDB: What has it been like using MongoDB’s technology?Peter: We chose MongoDB because it’s a scalable solution, and it has a strong developer following. It’s easier for us to hire tech developers who understand the technology because MongoDB has such a strong following in the community. If we have small issues with the technology, we’re very quickly able to search and find the answer to learn how we need to resolve that.Additionally, scalability is really important to us. And, what we found is that the MongoDB platform scales both in compute and also in storage seamlessly. We get a notification that more storage is required, and we can upgrade that online and with no customer impact and no downtime. It's really, really seamless.Another reason we chose MongoDB is that it’s cloud agnostic. We're on AWS now, but we're almost certainly at some point going to be asked from customers to look at Azure or Google. So it's really beneficial to us that MongoDB works on all the different platforms that we look at.Built with MongoDB: What are some of the features you use within MongoDB?Peter: We use MongoDB Atlas Search because of its ability to retrieve thousands of data points from multiple documents. We use the indexing capability there, and the key thing that we find is that our customers want to retrieve thousands of data points from multiple different documents. A lot of our customers are analysts or investment portfolio managers, and they want that information in their hands as quickly as possible.Built with MongoDB: What is some advice you’d give to aspiring founders and CEOs?Peter: Try lots of things and try them quickly. Try lots of little spikes, and take the ones that work well, and eventually put those into production. Really focus on what your customers want. Ultimately, we tried a lot of different ideas, some of which we thought were great. But you have to put it in front of your customers to be able to decide which ones are really worth spending time on and putting into production quality and which ones you should just let fall by the wayside as research done but not ultimately used.Find out more about Semeris Docs.Interested in learning more about MongoDB for Startups? Check out our Startups page. | https://mongodb.com/blog/post/semeris-demystifies-legal-documents-using-mongodb | 2022-05-04T16:13:37Z | How MongoDB Could Have Changed Star Wars and Saved the JediMay the 4th be with you! Here at MongoDB, lots of us love Star Wars. It got us thinking about how the events that unfolded throughout the movie franchises could have been different had MongoDB products and features been available. So to celebrate Star Wars Day, this article will take a light(side)-hearted look at exactly that!MongoDB Atlas: How has nobody heard of the Jedi?One of the questions that was asked a lot by fans when Star Wars: Episode VII The Force Awakens was released was how Rey, Finn, and many others in the Star Wars universe didnt know that the Jedi were real, let alone still existed.This can be explained by Emperor Palpatine ensuring that all Jedi Knights, temples, and traces of the Jedi were erased. But what if this information had been stored in MongoDB Atlas, our application data platform in the cloud?One of the core features of MongoDB Atlas is a document database-as-a-service (DBaaS), which allows for storing data as JSON-like documents in collections in the cloud, accessible from anywhere with a connection to the internet.Under the hood, this database supports high availability using replica sets, which are sets of nodes (the minimum and default value is three nodes), with one acting as the primary node and two or more as secondary nodes. The data is replicated across these three nodes, with availability handled by Atlas automatically. If the primary node goes down, the replica sets promote a secondary node to primary.Imagine if, following Emperor Palpatine and Darth Vader destroying evidence of the Jedi Order, the data could have recovered itself thanks to the high availability of clusters on Atlas.Atlas cloud recovery would have also helped prevent deleting of data in the Jedi Archives. In Star Wars: Episode II Attack of the Clones, Obi Wan-Kenobi visits the Jedi Archives on Coruscant to locate the planet Kamino, where they expect to find answers on who attempted to assassinate Senator Padmé Amidala. However, Obi Wan-Kenobi finds himself having to call for the help of the librarian, Jocasta Nu, because he cannot find any traces of the planet in the archives. She famously says that if the planet is not in the archives then it simply does not exist.Atlas database gives the ability to store data in the cloud, available anywhere as long as you can access it. Therefore, you could also argue that the information in the archives would have been available from anywhere, not just in the one server within the Jedi Archives. Luminous beings we might be, but database specialists, the Jedi were not.Security: You don't belong here!In a world with ever more data being consumed and stored, people are becoming more aware of how secure their data is (or is not). When looking to use MongoDB Atlas, developers are often concerned about how safe their data is in the cloud.MongoDB Atlas comes with a lot of security features pre-configured from the start. This includes isolation, authorization, and encryption.Security stack showing Isolation, Authorization and EncryptionWe firmly believe that your data should be private and only visible to those with the rights to see it.In Star Wars: Episode VI Return of the Jedi, the Alliance learns of the construction of a second Death Star and discovers that an energy shield generator to protect the new Death Star is on the forest moon planet Endor. Leia, Han Solo, Chewbacca, R2-D2, C-3P0, and the Ewoks fight in the Battle of Endor for access to the bunker containing the generator.Thanks to R2-D2 and C-3P0 drawing away the Imperial Army, the enemy is attacked by the Ewoks. Chewbacca is then able to steal an AT-ST and rescue his allies, who are attempting to hack into the bunker.They successfully gain entry to the bunker, plant explosives, and expose the new Death Star, allowing it to be destroyed by the Rebel Fleet.However, if MongoDB security had been involved, the rebels wouldnt have gained access to the bunker and the energy shield protecting the Death Star would have remained, meaning the Empire could have won. Death Star II was able to travel across the galaxy and strike fear into the hearts of many, and perhaps this might have prevented the creation of Starkiller Base in The Force Awakens and its destruction of the entire Hosnian system, saving millions of lives. While the Empire could have wiped out the Alliance and taken control of the galaxy with the second Death Star, it would have had to remain within range of its shield generator on the forest moon of Endor, unable to terrorize the galaxy at large as Starkiller Base eventually did. The First Order and Kylo Ren may never have risen to power. Luke Skywalker may not have escaped the Emperor or redeemed his father. Life would probably look very different for those millions of lives saved in the Hosnian system. We may have, in fact, seen Darth Vader and Luke Skywalker rule the galaxy as father and son. Scary thought!Data API: Bye-Bye, R2-D2MongoDB Atlas Data API is a new feature, currently in preview, that allows developers to access their Atlas hosted database cluster via HTTP calls over the web using just a unique endpoint URL and an API key. This opens up the possibility of using the power of the MongoDB document database model in more ways and more scenarios.You might choose to use it because you dont want the overhead of installing and using a driver for your chosen language or platform. This could be because you are prototyping or just prefer the API approach. You may not even have a driver for that scenario but can make HTTP requests. One example of this scenario is the Internet of Things (IoT), where you often wont have a driver as an option but can easily make calls over the web. Another scenario is from a SQL stored procedure. That might sound controversial, but what if you want to push data to both a relational database and an Atlas cluster at the same time to help with migrating over to the most popular non-relational database in the world?A driver is a programming interface, allowing for use with whatever the driver is intended for. The MongoDB driver, available for multiple programming languages, allows communication with a MongoDB database via a connection string. It simplifies that process and handles the communication so developers dont have to write complicated low-level code.In the Star Wars universe, you can think of droids as an interface to the data in the world around them. R2-D2 is an astromech droid, whose primary function is to provide navigational abilities but is able to do far more, including interfacing with other computers via a SCOMP link, disabling autopilot on the Naboo starfighter, picking up distress signals, locating Emperor Palpatine on Grievouss ship, and, of course, sharing the Death Star plans with the Alliance.So, if there was MongoDB Atlas Data API in the Star Wars universe, what might that look like? This could be a simple data pad, similar to a smartphone. Instead of relying on R2-D2, BB-8, or Chopper to act as an interface to the information in different computers around the galaxy, the data pads could do this instead, providing the ability to access the data stored in Atlas.Using MongoDB, the Death Star plans might have been added to a collection in the database and accessible to all those who were authorized. This would have prevented some of the danger seen at the start of Star Wars: Episode IV A New Hope, when Princess Leia had to upload the plans into R2-D2.Of course, R2-D2 would still have proved useful in other situations, such as in battle, putting out fires in the Millenium Falcon, or throwing Luke his lightsaber during a battle in Star Wars: Episode VI Return of the Jedi. But some of the key roles he played could have been made redundant if the Data APIenabled data pad had been available instead.Sharding: Where art thou Anch-toSpeaking of R2-D2, another event he was involved in could have been different had there been a feature of MongoDB in the Star Wars universe, sharding.When you have a really large data set, like, I dont know, all the information in the galaxys HoloNet, you might want to break it down into smaller pieces in order to make it faster and easier to search through.Sharding is a perfect example of this in action. Sharding works by segregating your data into smaller pieces based on a field in your document. A common real-world comparison would be a library. In a library, books arent all just thrown on a shelf in the order they were acquired by the library. They are instead broken down into different shelves, sorted by author surname. The equivalent of this in the database world is a sharding key, which tells you exactly where to head first, saving you time and effort.In Star Wars: Episode VII The Force Awakens, the Alliance including Rey want to find Luke Skywalker. If the galaxy information had been sharded, it would be possible for it to have been searched to find Lukes location much faster, without the need to find a particular map to fill out a local data set. Access to Ahch-to, the location of the ancient Jedi Temple and the galaxys only green-milk-producing thala-sirens, would be just a query away..This also ties in nicely to the previous section about the Data API. Without the need to use R2-D2 for the missing piece but instead use a data pad to query all the known information on the galaxy, the Alliance may have found Luke much sooner especially if they were able to use MongoDBs powerful query language to perform complex queries on the data using the aggregation pipeline.MongoDB is a non relational database that can be used by all living things. It surrounds docs, penetrates analytics, and binds the galaxy together.There we have it, a trip through the galaxy and the events of Star Wars to see how the timeline might have been different had MongoDB been around.MongoDB is a non-relational database that can be used by all living things. It surrounds documents, penetrates analytics, and binds the Galaxy together.The knowledge of the Jedi wouldnt have been erased and their reputation tarnished had MongoDB Atlas and its high availability been available.The energy shield generator on Endor would have survived, meaning Death Star II may never have been destroyed, allowing the Empire to take full control of the galaxy but thwarting the rise of the First Order.R2-D2 might not have been so important had MongoDB Atlas Data API been available on data pads, allowing direct access over the internet to the data instead of requiring a driver.Luke Skywalker may have been found much sooner had sharding been available alongside powerful querying functionality such as the aggregation pipeline to bypass the need to find a map and get the missing piece from R2-D2.How can you use the power of MongoDB Atlas today to change your own universe? Get started today with our free forever M0 tier.MongoDB World returns this June to NYC and in honor of May the Fourth we are offering tickets at only $400 May4-6. Register now and join us for announcement packed keynotes, hands on workshops, and more June 7-9. | Content Synthesis/Information Retrieval Or Search/Process Automation | Legal/Business and Financial Operations | null | null | null | null | null | null |
|
news | Stephen Ibaraki, Contributor, Stephen Ibaraki, Contributor https://www.forbes.com/sites/stephenibaraki/ | 2022 ACM Awardee Prof Abbeel For Top Work In AI And Robotics | ACM announced in April 2022, Pieter Abbeel awarded the 2021 ACM Prize in Computing for pioneering work in robot learning. | https://www.forbes.com/sites/stephenibaraki/2022/04/19/2022-acm-awardee-prof-abbeel-for-top-work-in-ai-and-robotics/ | 2022-04-19T21:19:44Z | Pieter Abbeel global contributions in Robotics transforming the worldCredit: Depositphotos Copyright abidal ID: 25296435ACM PRIZE IN COMPUTINGACM announced in April 2022, Pieter Abbeel awarded the 2021 ACM Prize in Computing for pioneering work in robot learning. Pulling extensively from the prize announcement, my interview with Abbeel, provides a great summary of his contributions.The $250,000 USD prize, endowed by Infosys Ltd, recognizes Abbeels fundamental contributions in computing that through its depth, impact and broad implications, exemplifies the greatest achievements in the discipline. Abbeel is Professor and Director of the Robot Learning Lab at UC Berkeley, Co-Director of the Berkeley AI Research (BAIR) Lab, Co-Founder of Covariant [2017- ], Co-Founder of Gradescope [2014-2018, acquired by Turnitin], Advisor/Investor to many AI/Robotics start-ups, Founding Investment Partner at AIX Ventures, Host of The Robot Brains Podcast.ABBEELS CONTINUING PIONEERING CONTRIBUTIONSAbbeel pioneered teaching robots to learn from people (imitation or apprenticeship learning), how to make robots learn through their own trial and error (reinforcement learning), how to speed up skill acquisition through learning-to-learn (meta-learning), and how a robot is able to learn to perform a task from just one demonstration after having been pre-trained with a large set of demonstrations on related tasks (few-shot imitation learning). His work continues to be the foundation for the next generation of robotics. His robots have learned knot-tying, basic assembly, organizing laundry, locomotion, surgical suturing, detecting objects and planning their trajectories in uncertain situations, and vision-based robotic manipulation. Reinforcement learning prior to Abbeels contributions could perform only simple tasks. Abbeel added deep reinforcement learning. The innovation of combining reinforcement learning with deep neural networks ushered in the new field of deep reinforcement learning, which can solve far more complex problems than computer programs developed with reinforcement learning alone. Abbeels key breakthrough contribution in this area was developing a deep reinforcement learning method called Trust Region Policy Optimization. This method stabilizes the reinforcement learning process, enabling robots to learn a range of simulated control skills. By sharing his results, posting video tutorials, and releasing open-source code from his lab, Abbeel helped build a community of researchers that has since pushed deep learning for robotics even further with robots performing ever more complicated tasks.Abbeel has also made several other pioneering contributions including: generalized advantage estimation, which enabled the first 3D robot locomotion learning; soft-actor critic, which is one of the most popular deep reinforcement learning algorithms to-date; domain randomization, which showcases how learning across appropriately randomized simulators can generalize surprisingly well to the real world; and hindsight experience replay, which has been instrumental for deep reinforcement learning in sparse-reward/goal-oriented environments.Abbeels courses on AI, Advanced Robotics, and Deep Unsupervised Learning are some of the standard references for the field.ABBEELS TRANFORMATIONAL ROLESAbbeel is an active entrepreneur, he has founded two companies (Gradescope and Covariant), and spent the two first years at OpenAI (the AI research organization in San Francisco co-founded by Elon Musk). Gradescope provides instructors with AI that can significantly speed up grading of homework, projects, exams, and is used at over 1,000 universities. Covariant builds AI for the next generation of robotic automation, enabling robots to see, react, learn (rather than executing preprogrammed motions as robots do in car factories). Abbeel is also an active start-up investor and advisor. Abbeel is founding partner at AIX Ventures, a Venture Capital firm focused on AI start-ups. He advises many AI and robotics start-ups, and is a frequently sought-after speaker worldwide for C-suite sessions on AI future and strategy.Abbeel is the host of The Robot Brains podcast, which explores what AI and Robotics can do today and where they are headed, through conversations with the world's leading AI and Robotics pioneers. He has won numerous awards, including best paper awards at ICML, ICLR, NeurIPS and ICRA, early career awards from NSF, Darpa, ONR, AFOSR, Sloan, TR35, IEEE, and the Presidential Early Career Award for Scientists and Engineers (PECASE). His work is frequently featured in the popular press.COMMENTS FROM ACM AND INFOSYSTeaching robots to learn could spur major advances across many industries from surgery and manufacturing to shipping and automated driving, said ACM President Gabriele Kotsis. Pieter Abbeel is a recognized leader among a new generation of researchers who are harnessing the latest machine learning techniques to revolutionize this field. Abbeel has made leapfrog research contributions, while also generously sharing his knowledge to build a community of colleagues working to take robots to an exciting new level of ability. His work exemplifies the intent of the ACM Prize in Computing to recognize outstanding work with depth, impact, and broad implications.Infosys is proud of our longstanding collaboration with ACM, and we are honored to recognize Pieter Abbeel for the 2021 ACM Prize in Computing, said Salil Parekh, Chief Executive Officer, Infosys. The robotics field is poised for even greater advances, as innovative new ways are emerging to combine robotics with AI, and we believe researchers like Abbeel will be instrumental in creating the next great advances in this field.CHAT WITH PIETER ABBEELUpdated from 2020, I work pro bono daily across more than 200,000 CEOs, investors, scientists/experts. The ongoing interviews and Forbes articles reflect insights gained from this work.Leveraging Abbeels great history in deep tech, I reached out to Pieter for an interview appearing with the non-profit, ACM Learning Center (under Interviews by Stephen Ibaraki). Heres a direct link to the interview profile and video. Portions of the interview are summarized below and edited for clarity. AI was used to create the transcript which is about 70% accurate when doing highly technical interviews thus I strongly recommend going directly to the full interview for precision. The edited transcript will help in following and understanding the video interview.A Chat with Pieter Abbeel: ACM Prize in Computing in 2022, Professor and Director of the Robot Learning Lab at UC Berkeley, Co-Director of the Berkeley AI Research (BAIR) Lab, Co-Founder of Covariant[2017- ], Co-Founder of Gradescope [2014-2018, acquired by Turnitin], Founding Investment Partner at AIX Ventures, Host of The Robot Brains PodcastStephen Ibaraki Pieter, thank you for coming in today. You got this outstanding prize, the ACM Prize in Computing. You've done so much and for so many different contributions in the field of robotics that our audience needs to know I very much appreciate your sharing your insights with our audience.Pieter AbbeelThanks for having me on, Stephen.Stephen IbarakiYou've got an outstanding arc very early and continuing substantial global contributions. What were the inflection points that made you this outstanding individual?Pieter Abbeel When I look back to my childhood, I was just interested in everything. Anything I could learn about was interesting, whether it's literature, languages, math, physics; I just found everything fascinating. But then at some point, I realized that it's hard to be at the top of the game if you try to do everything. I have to think hard about what am I actually going to spend my time on; so I can really be at the frontier. Towards the end of my undergraduate, which I did in Belgium, I just got really fascinated, more so than anything else, by artificial intelligence. How is it that humans can think; how is it that humans can make intelligent decisions? How is it possible to write a program that plays chess better than the person who wrote the program? That, to me was really fascinating that it's possible to somehow write these artificial intelligence programs that are smarter than the writer of the program, at the thing they are supposed to do. That really got me going from a just pure, intrinsic interest point of view. But also, from an impact point of view; it seemed, even if I cared about everything, and I couldn't do everything. Maybe by working on artificial intelligence, in some way, I could be working on everything, because maybe AI could help everything else. Maybe it could, in the future, help biology, physics and so forth. We're starting to see some of that very recently; that AI is starting to help other disciplines. That helped me to really consolidate, let me just focus on AI. Because that's most interesting to me. Then, of course, even AI itself is a pretty large discipline. These days, it's a lot more converged. I mean, almost all the recent advances are in deep learning and variations on the latest version of deep learning lead to the next breakthrough. When I started my PhD, which was in the early 2000s, that wasn't the case. AI was still more of a scattered field. It was important to pick an application, to be able to make consistent progress. For me, the natural one was robotics. You might wonder why robotics? There are other domains, of course, that are really interesting, too. But to me, it seemed that if we really care about artificial intelligence, and building truly smart systems, the most natural thing is to look at robots, because robots are a lot like us, a lot like animals. That's where we see intelligence. In the real world, the natural intelligence is all in physical embodiment. It seemed to me the most natural place to start to try to build AI is tied into physical systems; tied into robots, is a more natural way to make progress.Stephen IbarakiI get this early interest in physics and everything else tied to science. You want to have global impact. You want this practicality element and robotics is really the most practical or one of the most practical ways to do this, and even in autonomous vehicles which are robots, right? There are vision systems or some kind of understanding the environment and you embody all of that early work. I noticed early in your career, you studied under Andrew Ng. He's done a lot of work and he's quite a well-known venture capitalist. You pioneered things like imitation learning or via first reinforcement learning but applying it to deep reinforcement learning, but with deep neural networks. Can you talk about that journey of what excited you? Why you did it? How did you create these new paradigms that are used so globally now in the robotics field?Pieter AbbeelDuring my PhD days, the way the state of the field was at the time, was that the right way to make progress was to combine deep domain expertise with machine learning. We looked at it one at a time; one of the hardest open problems in robotic control, which was helicopter flight, how can you have an autonomous helicopter that flies at the same level of capabilities as the most expert, most advanced human pilots. What we did there; we brought together techniques from optimization-based control, system identification, with machine learning, and together, it allowed us to have the most advanced autonomous helicopter. The helicopter could do flips all kinds of advanced maneuvers for RC helicopters that almost no human pilots can actually do. But a big part of it was also, it was learning from human demonstrations. We have one of those human pilots showing us what they can do, we collected data. That was a big part of the process, combined with optimization-based control. Of course, the very big thing that happened in 2012, was the ImageNet breakthrough, where Geoff Hinton and his students show that deep neural nets can be trained from a lot of data to recognize what's in images at a level that was unprecedented, at the time, a very big leap forward. What it showed is that maybe you can start with a purely data driven approach, without all the detailed engineering of specific knowledge into the systems directly. To me, that was when you say, hey, where does your deep reinforcement learning work come from? I had worked on reinforcement learning and quite a bit as part of the helicopter project. That was not deep reinforcement. It was regular reinforcement learning where many parts are engineered, and then some parts of learned. And deep reinforcement learning, the idea is that the large neural network is going to learn everything. It's not just going to be a little bit of extra at the end to make it better. It's going to be from scratch; it's going to learn everything; to me, the reason that I thought it was time to revisit reinforcement learning. But now with deep neural networks; that ImageNet breakthrough from Geoff Hinton and his students in 2012. This means that things can be learned completely. What is this for image recognition? What does that mean for learning to control a robot? Can we, for example, have a humanoid robot that we dont program anything into and maybe it just starts lying on the ground. And you just say, I want you, robot, to get up. I want you to figure it out on your own. And that's really <deep> reinforcement learning where you don't tell it how to do anything. The agent is just being scored on the quality of what it's doing. The simplest example is video games. A video game, there's typically a score. And you could say, okay, play the game as many times as you want. And then over time, learn to achieve higher and higher scores. Robotics is similar, but now we have to come up with the score. For the humanoid robot, maybe the score is how high up is the head of the robot, the higher up the better. And so, when it starts on the ground, it has to learn to stand up to get its head high up. And over time, it actually learns that and that, to me, was really probably the most fascinating result. In those early days that we had this humanoid robot. This was a simulated robot capable of, on its own, learning to get up. We didn't have to tell it what it meant to get up. We didn't have to tell it; you want to plant one of your hands or plant your feet. Those were the kind of things that it would have figured out on its own. I would say that's generally the beauty of <deep> reinforcement learning is that when you look at learning today, there's really three types of learning. There is supervised learning, unsupervised learning, and reinforcement learning. In supervised learning, it's pattern recognition. You feed in data, and you say this is the input and this should be the output and give a lot of examples. For example, an image and a classification of what's in the image, or a sentence in English, and a sentence in Chinese. And give enough examples and the neural network figures out the pattern to go from the input to the output, even for things it's never seen before, it's going to be able to do it. Now, tricky things with supervised learning, you need to provide a lot of data that often requires a lot of human effort to provide, because there's a lot of data out there, but you need to annotate it with a desired annotation or output. For robotics, it would mean that you need to provide the correct motor torques at each of the motors of the robot for every situation. To learn something with supervised learning, that's very tedious. Now, with <deep> reinforcement learning, what you get is this. You just scored a system. So, you might say, high score in the game is good or standing up, meaning the head is at a good height is good, or maybe running forward, meaning that the center of gravity of the robot is moving forward, and he's at a certain height is good. And now the beauty is that instead of you telling what all the torques should be at for all the motors, but it's just not really clear how you're going to do thatwhat should be the torque commands that all motors to do running. You just have to come up with a scoring metric. And then the agent on its own, will figure out how to achieve a high score. And of course, that's also how you train, for example, a dog. When you train a dog, you can't force its muscle contractions and say this is how you're going to do things; you give it treats or you talk in an encouraging way or you might talk in a less encouraging way when you don't like what the dog is doing. But the dog is the one who has to figure out how to do it, how to get you to talk nice to it instead of not so nice. That's the beauty of <deep> reinforcement learning. Because it not only makes it that you don't have to supervise all the details, but also makes it that actually the system could learn to do things, possibly better than you can do them. Because you're not telling them, the system do this this way, you're telling it this is what you're trying to optimize for, see how far you can get. We've seen this in Deep Minds Go systems; better than the best human players. We've actually started to see it in some application domains like chip design, where there's been results where chips can be designed with a computer system that uses <deep> reinforcement learning to come up with new designs that are different from the designs humans had for circuit layouts. There are interesting opportunities here in <deep> reinforcement learning to go beyond even what humans can do. And of course, to go back to what I said earlier, there's three types of learning. The third type of learning is unsupervised learning, where there is no input / output annotation; there is not even a score function that you provide of what supposed to be maximized, you just have data. And you might wonder, how can we learn from just data that's seen, if there's no score, there is nothing we're supposed to match. The idea there is the following. We spent a lot of time on this these days in my lab, and the combination of unsupervised learning and <deep> reinforcement learning. The idea is that when we watch the world, what's happening in the world, we're learning from that. We're not trying to optimize anything, we're just watching. From that we understand how the world works. And then when we're asked to do something, we can do it much better than if we had not had a chance to watch things happen in the world. And that's what unsupervised learning is about. Can machines, can robots, watch videos, let's say on YouTube, and from that, learn how things can be done. And then when we ask it to do something, be much quicker at acquiring a new skill.Stephen Ibaraki<I talk about Pieters fundamental contributions with Trust Region Policy Optimization, Generalized Advantage Estimation, <Soft-Actor Critic>, Domain Randomization, Hindsight Experience Replay. Note: that Pieters references to reinforcement learning are deep reinforcement learning>. These are the technical areas you're really famous for <and more> widely cited and used. Can you talk about your work in ways people can understand <with an example>?Pieter AbbeelAbsolutely. When you think about reinforcement learning, it's trial and error learning agent is going to learn from repeated trial and error. Now, if you want to apply this in robotics, your robot is going to go through repeated trial and error before it's going to do the thing you want it to do. In many ways, that's beautiful, because it's learning; you can watch it learn over time. But in other ways, at times, it can be impractical. Because if your robot really doesn't know yet how to do things, it might damage itself, it might do damage to the environment that it's in before it actually acquires a skill you wanted to have. And so doing real world directly in the real-world reinforcement learning can be very, very costly. If you're going to run it that way. It might take a long time because it might require a lot of trial and error. You might be really busy repairing the robot and fixing up the room or environment it's in. It's very natural to then say, why not work in simulation, right? In simulation, the robot can't really break things; when you simulate a robot. You can always reset the simulator or reset the computer as needed. Also, in simulation, you can often run things faster than real time. You can learn faster than you could do in real time. You can run many, many versions of your program in parallel. You can be collecting data faster; just depends on how many computers you are willing to spin up. There are lots of advantages learning in simulation. A lot of work is done that way. But there's a catch of course. Simulators are never perfectly matched with reality. If your robot is going to learn purely in simulation, and if your simulation is not perfectly matched with reality, then once it's done learning, and you load the neural network, to the real robot, it might actually fail. In fact, most likely it won't succeed. The question is then, can we match up the simulator more closely with reality, because if we can do that, then there's a higher chance of success. That's an approach that I have followed many times and many have followed many times. It is quite a reasonable approach and a good approach. But it's typically very hard to get a perfect match between simulation and reality. In the domain randomization work that you referred to Stephen, we thought about this. The idea we've put forward in that paper was essentially showing that maybe your simulator does not need to be all that perfectly matched with reality. Instead, what we're going to do is we're going to build many, many, many versions of the simulator, and they're all going to be different. Maybe the friction properties between two surfaces are a little different between the feet of the robot and the ground, in the different simulations. Maybe the mass properties of the robot are a little different. Maybe some delay, between torque command sent and torque command activated at a motor, is a little different, in different simulators. Maybe the camera set up in a slightly different position on the robots head, and so forth. There are all these variations that we don't know how to perfectly match with reality. So instead of trying to somehow find a way to perfectly match it up; we say the things we don't know, we're going to vary them. We're going to have maybe 1000, or even 10,000, 100,000, million different versions of the simulator, that are all a bit different on these parameters. And so now, when I'd say, well, that's kind of crazy, instead of trying to get the closest to reality, you're actually making it different in every simulator. Now, what's good about that. It turns out is that if a single neural network can learn to control the robot, across all those simulators, even though not a single one is matched with reality, the fact that there is a single neural network that's learned to control the simulated robot, no matter what version of the simulator, makes it actually very likely, it'll also succeed in the real world, because you've learned something very, very robust, that can handle a wide range of variation. And then hopefully, that means it can also handle the variation it encounters in the real world. And so that's domain randomization; we randomize the domain the robot is learning in and the domain, well, that's the environment of the robot. That's the simulator.Stephen IbarakiThat's really fascinating, because there's just always this barrier and challenge with machine learning and AI and that aspect that it is very narrow <solution to a specific challenge>. Your work is giving generalization capability, which is this big challenge, right? Maybe we can reach some form of artificial general intelligence <AGI>. Do you think we're going to get to this massive change where we do have true artificial general intelligence or, and some major breakthrough? Is there something you're working on that can lead in this way and perhaps in your work with Hindsight Experience Replay, but a manifestation or iteration of that work, where you can use sparse-reward / goal-oriented environments, which is tied to your domain randomization as well? Do you see it moving in that direction? Are you going to be part of that change? And how?Pieter Abbeel That's pretty much the most frequently asked question. Also, one of the hardest wants to answer, of course, to have a precise answer, because Artificial General Intelligence is this idea that we would end up with something that's as smart or even smarter than humans. And in a general sense, and to make this a bit more concrete. We already have computers that are smarter than humans at very specific tasks. There are video games, computers can play better. There are regular games, classical games, chess, checkers, computers can play better, and so forth. But the best computer Go player actually doesn't know how to physically move a Go piece on the board, all it knows is to think through the different moves in the game and, and then display a command on a computer screen. The thing is, the big missing piece, if we're looking at AI today, is that artificial general intelligence, the ability to have a system that is extremely general, in its capabilities. That can learn new things quickly, the way humans can learn new things quickly in new environments, it has never been in before. Maybe it has never been in your kitchen before. It somehow knows how to do things in your kitchen. Maybe it has never driven, in a certain city before, but it just knows how to drive there without a map (nothing like that is needed). It just knows how to generalize. Generalize across all these different tasks. Personally, I think it's hard to predict when we'll actually get to human intelligence. But personally, I think it's really fascinating to think about this notion. Can we have our agents, our AI systems, learn things, internalize things that are maximally generalizable? That allows them to learn other things, solve other problems more quickly in the future, rather than being focused on a very specific problem during learning; focus on somehow building a foundation of knowledge that allows it to learn faster in the future. I've actually been thinking about this quite a bit, and that the hindsight experience replay work you bring up Stephen is, of course related to that. Let me quickly highlight that, and then I'll look at the more general picture. Hindsight experience replay; the idea is the following. This is a very effective, I would say, modification of the standard reinforcement learning paradigm, which just directly optimizes rewards. Hindsight experience replay is a very effective modification that allows the agent to learn from data more effectively. Imagine your agent is trying to do something, and you give it, let's say, a reward for achieving success. But it hasn't learned how to achieve success yet. And so now it's trying, and it always gets zero reward, because it just doesn't know how to do it yet. But if it always gets zero reward, then it cant learn anything either, because everything is equally bad; it's always zero. So, then it really has to do random trial and error, to hope to just coincidentally come across the thing that does get your reward. But if the thing that robot is expected to do is complicated, well then to randomly run across a success with random actuation of the motors of the robot. It's very unlikely, right? So, in hindsight experience replay, the idea is the following. No matter what the robot does, we're going to let it learn from it. We're going to say, okay, I didn't ask you to, let's say, fall down, instead of running forward, because I asked you to run forward but you fell down. But if I had asked you to fall down, you did the right thing. Or if I had asked you to first fall on your right knee, and then fully fallen down on the ground. You did the right thing for that request. So we get here is a notion that this agent, this robot is learning a lot about what kind of commands it already knows how to satisfy; what it has done in the past; it can internalize all these concepts, such then it over time that can generalize; to have learned to fall down; I've learned to fall backwards; I've learned to fall sideways; I've learned to get my right leg in front of my left leg. It learns all these things. By having a wide range of existing skills that are maybe easier to acquire and easy to randomly run across, it can build up a skill repertoire that makes it easier to later then learn the thing you actually care about. That's hindsight experience replay. But when we think about AGI and much more general intelligence, I think what we're thinking about is, in some sense, a generalization of this. If we want to get there and this is not necessarily the guaranteed path to get there, and if people don't know how we're going to get there, we're not there yet. But if I think about how do we get the most general AI system, I think about a system that has to learn as much as possible from the data that's available. Hindsight experience replay is a way to learn as much as possible from the data the robot is collecting on its own. That's nice. But there's so much other data out there, too. There is so much data on the internet that's already collected, the robot doesn't have to go collect that data on its own. When I picture, kind of the future of robot intelligence, what I picture is a robot that has watched a ton of YouTube videos, other videos that are online. It doesn't just watch them; it also looks at the annotations that are with them. It might say, oh, that was a video of somebody chopping carrots. That was a video of somebody maybe playing tennis or basketball or something. It's learning from that, the connection between what's in those videos, and how we in language describe what's going on. But then it's also going to be learning to predict the future. Because as it's watching a video, a natural prediction problem to train the system on is to say, what if I don't tell you what comes next in the video? Can you fill in the blank? Now, of course, it's not possible to deterministically know exactly what's going to happen. Because, you know, I can move my right arm, my left arm up, you cannot predict what I'm going to do next. I'm just watching the previous part of the video. But you can predict a probability distribution over possible futures. Can we request our deep neural networks to learn to predict probability distributions over possible futures? Those are the kind of tasks we can give them. And we can do the same thing for text. And in fact, in text in the language domain, that's something where we've seen a lot of excitement in the last five years out of OpenAIs, GPT models, Google's BERT models, and so forth. We've seen text models that can predict what comes next in an article, not deterministically. But it can predict possible completions that are plausible, and likely to include the one that was there if you give it a set of predictions that it's allowed to do. We're going to want the same for videos. Videos are a lot bigger in terms of amount of storage; amount of data you need to process. But ultimately, I think that's going to be at the core of how we get to more generalized robotic capabilities. These deep neural nets will be largely trained on videos. On these videos, they'll be trained to predict what comes next; predict maybe what was in the past; predict to fill in the blank, and so forth. They'll predict associated text with those videos, For practical purposes, there's essentially infinite video data on the internet for our robots to learn from. And I think 99 plus percent of the data our robots will be trained on will be that. But then that doesn't mean the robots know how to do something themselves, they are just watching videos effectively. So how did they know about their own hands, their own legs, their own camera system where they collect data from how they move their head. So that part in my mind is going to be reinforcement learning. The robots going to combine, the same deep neural network, is going to be both doing learning from videos and texts on the internet, and reinforcement learning in a single neural network. Just the way humans have a single head, a single brain that they use to learn from what they see in the world, but also learn from their own experience. So that's going to be brought together. Now, when I think about reinforcement learning in that context, I don't think we want to give the robot feedback at all | Content Synthesis/Discovery | Unknown | null | null | null | null | null | null |
|
news | PR Newswire | BenevolentAI Begins Trading On Euronext Amsterdam | BenevolentAI (Euronext Amsterdam: BAI), a leading, clinical-stage AI-enabled drug discovery company, announces that trading in its shares is expected to... | https://finance.yahoo.com/news/benevolentai-begins-trading-euronext-amsterdam-050000389.html | https://s.yimg.com/uu/api/res/1.2/10Rps0KJthO1VLcpG8VLZg--~B/aD02Mzt3PTQwMDthcHBpZD15dGFjaHlvbg--/https://media.zenfs.com/en/prnewswire.com/fb22049c4455759b0848373493e843d6 | 2022-04-25T05:00:00Z | LONDON, April 25, 2022 /PRNewswire/ -- BenevolentAI (Euronext Amsterdam: BAI), a leading, clinical-stage AI-enabled drug discovery company, announces that trading in its shares is expected to begin today, following completion of the business combination of BenevolentAI Limited with Odyssey Acquisition S.A. ("Odyssey") on 22 April 2022 (the "Business Combination").HighlightsThe combined company now operates under the name BenevolentAI and will trade on Euronext Amsterdam under the ticker symbol BAIGross proceeds from the Business Combination of 225 million will put BenevolentAI in a strong position to accelerate the development of its clinical pipeline and deliver multiple value inflection points in the near futureBenevolentAI also intends to continue investing in its technology platform and consolidate its position in AI-enabled drug discoveryOlivier Brandicourt, the former CEO of Sanofi, and Jean Raby, the former CEO of Natixis Investment Managers, have joined the board of BenevolentAIBackgroundBenevolentAI is a leading, clinical-stage AI-enabled drug discovery company headquartered in London, with a research facility in Cambridge (UK) and an office in New York. BenevolentAI, employs a total of over 300 people.Through the combined capabilities of its AI platform, scientific expertise, and wet-lab facilities, BenevolentAI aims to deliver novel drug candidates with a higher probability of clinical success than those developed using traditional methods.The Benevolent Platform is a disease-agnostic drug discovery platform that powers a growing in-house pipeline of over 20 drug programmes, spanning from target discovery to clinical studies across multiple therapeutic indications. BenevolentAI also maintains successful collaborations with leading pharmaceutical company AstraZeneca, alongside collaborations with research and charitable institutions.The Business Combination provides BenevolentAI with a multi-year cash runway to continue development of the Benevolent Platform and progress its pipeline of drug candidates. In particular, BenevolentAI aims to complete the Phase I/II trial for BEN-2293 (atopic dermatitis) to make it ready for out-licensing, as well as to complete the Phase I trial for BEN-8744 (ulcerative colitis) in 2023 before commencing a Phase II trial.The BenevolentAI business model leverages the Benevolent Platform to generate new drug IP at scale and accelerate multiple novel assets to IND stage to commercialise in-house or through out-licensing agreements. Separately, BenevolentAI will seek to enter into selective strategic platform collaborations to drive greater shareholder value in disease areas that it would not currently select for in-house clinical development.Joanna Shields, Chief Executive Officer of BenevolentAI, said: "At BenevolentAI, we set out to build a company that can bring about real transformation in drug R&D by leveraging AI and advanced technologies to improve drug efficacy, decrease failure rates and accelerate the development of new medicines. BenevolentAI is well on its way to achieving this mission.Our revolutionary approach empowers scientists to better understand disease biology and discover more effective therapies. Today, the Benevolent Platform is fully operational, scientifically validated, and producing results, demonstrated by our in-house pipeline of over 20 platform-generated drug candidates and successful collaborations. This listing underscores our commitment to becoming a key player in the global drug discovery sector."Dr François Nader, Chairman of BenevolentAI, said: "BenevolentAI is strongly positioned to unlock a new level of growth as a public company and advance its ambitious plans to scale its innovative R&D platform. The next chapter for BenevolentAI is a compelling one, as we look to strengthen our position within the AI-driven drug discovery sector and advance our mission to deliver life-changing medicines to patients. I also take pleasure in welcoming Olivier Brandicourt and Jean Raby to our Board they bring immense experience and expertise, which will be invaluable as we pursue our value creation strategy."Michael Zaoui, former Chairman of Odyssey, said: "The completion of the Business Combination and BenevolentAI's listing on Euronext Amsterdam marks the final step of Odyssey's journey. Again, we would like to thank our shareholders as well as all those who contributed to this result. BenevolentAI now starts its life as a public company, and we have every confidence that it will thrive in the important and exciting space of AI-led drug discovery."Transaction summaryThrough the Business Combination, BenevolentAI has received gross proceeds of 225 million.The issued share capital of BenevolentAI now amounts to 145,126.303, divided into 137,626,303 Ordinary Shares and 7,500,000 class B shares (the "Sponsor Shares"). Following redemption of Ordinary Shares by their holders prior to closing, BenevolentAI will hold 25,137,581 Ordinary Shares in treasury.On 25 April 2022, 5,000,000 Sponsor Shares automatically converted on a 1-to-1 basis into 5,000,000 Ordinary Shares. As a result, the share capital of BenevolentAI remains 145,126.303, divided into 142,626,303 Ordinary Shares and 2,500,000 Sponsor Shares.Certain BenevolentAI shareholders that were shareholders in BenevolentAI Limited prior to the Business Combination and certain shareholders of Odyssey's Ordinary Shares are subject to a 180-day lock-up until 19 October 2022 and Odyssey's sponsor entity is subject to a 365-day lock-up until 22 April 2023, subject to terms set forth in the respective lock-up agreements. For more details on transaction rationale, transaction structure, and lock-up terms, please refer to the prospectus dated 22 April 2022, which can be found on the Investor section of www.benevolent.com.ABOUT BENEVOLENTAIBenevolentAI (AMS: BAI) is a leading, clinical-stage AI-enabled drug discovery company listed on the Euronext Amsterdam stock exchange. Through the combined capabilities of its AI platform, scientific expertise, and wet-lab facilities, BenevolentAI is well-positioned to deliver novel drug candidates with a higher probability of clinical success than those developed using traditional methods. BenevolentAI has a consistently proven track record of scientifically validated discoveries. The BenevolentAI Platform powers a growing in-house pipeline of over 20 drug programmes, spanning from target discovery to clinical studies, and it maintains successful collaborations with AstraZeneca, as well as leading research and charitable institutions. BenevolentAI is headquartered in London, with a research facility in Cambridge (UK) and a further office in New York.ADVISORSGoldman Sachs International served as exclusive financial advisor to BenevolentAI in connection with the business combination. Goldman Sachs International also acted as placement agent on the PIPE. Latham & Watkins (London) LLP, NautaDutilh Avocats Luxembourg S.a r.l. and NautaDutilh N.V. served as legal advisors to BenevolentAI.J.P. Morgan SE and Zaoui & Co served as financial advisors to Odyssey. J.P. Morgan SE also acted as placement agent on the PIPE. Skadden, Arps, Slate, Meagher & Flom (UK) LLP, ELVINGER HOSS PRUSSEN, société anonyme and Stibbe N.V. served as legal advisors for Odyssey.Linklaters LLP served as legal advisor to the placement agents on the PIPE.DISCLAIMER:These materials may not be published, distributed or transmitted in the United States, Canada, Australia or Japan. These materials do not constitute an offer of securities for sale or a solicitation of an offer to purchase securities of BenevolentAI (the "Securities") in the United States, Australia, Canada, Japan or any other jurisdiction in which such offer or solicitation is unlawful. The Securities may not be offered or sold in the United States absent registration or an exemption from registration under the U.S. Securities Act of 1933, as amended (the "Securities Act"). There will be no public offering of the Securities in the United States. The Securities have not been, and will not be, registered under the Securities Act. The Securities referred to herein may not be offered or sold in Australia, Canada or Japan or to, or for the account or benefit of, any national, resident or citizen of Australia, Canada or Japan, subject to certain exceptions.This publication constitutes neither an offer to sell nor a solicitation to buy securities. An investment decision regarding the Securities should only be made on the basis of the prospectus, which has been approved by the Financial Sector Supervisory Commission (Commission de Surveillance du Secteur Financier (CSSF)) and is available free of charge on the Odyssey website.This announcement does not constitute a prospectus. BenevolentAI has not authorised any offer to the public of Securities in any Member State of the European Economic Area. With respect to any Member State of the European Economic Area (each a "Relevant Member State"), no action has been undertaken or will be undertaken to make an offer to the public of Securities requiring publication of a prospectus in any Relevant Member State. As a result, the Securities may only be offered in Relevant Member States (i) to any legal entity which is a qualified investor as defined in the Prospectus Regulation; or (ii) in any other circumstances falling within Article 1(4) of the Prospectus Regulation. For the purpose of this paragraph, the expression "offer of securities to the public" means the communication in any form and by any means of sufficient information on the terms of the offer and the Securities to be offered so as to enable the investor to decide to purchase or subscribe for the Securities and the expression "Prospectus Regulation" means Regulation (EU) 2017/1129 and includes any relevant delegated regulations.This release may contain forward-looking statements. Forward-looking statements are statements that are not historical facts and may be identified by words such as "plans", "targets", "aims", "believes", "expects", "anticipates", "intends", "estimates", "will", "may", "continues", "should" and similar expressions. These forward-looking statements reflect, at the time made, BenevolentAI's beliefs, intentions and current targets/aims concerning, among other things, BenevolentAI's and its group's results of operations, financial condition, liquidity, prospects, growth and strategies. Forward-looking statements include statements regarding: objectives, goals, strategies, outlook and growth prospects; future plans, events or performance and potential for future growth; lease-up potentials; economic outlook and industry trends; developments of BenevolentAI's or its group's markets; the impact of regulatory initiatives; and the strength of BenevolentAI or any other member of its group's competitors. Forward-looking statements involve risks and uncertainties because they relate to events and depend on circumstances that may or may not occur in the future. The forward-looking statements in this release are based upon various assumptions, many of which are based, in turn, upon further assumptions, including without limitation, management's examination of historical operating trends, data contained in BenevolentAI's records (and those of other members of its group) and other data available from third parties. Although BenevolentAI believes that these assumptions were reasonable when made, these assumptions are inherently subject to significant known and unknown risks, uncertainties, contingencies and other important factors which are difficult or impossible to predict and are beyond BenevolentAI's control.Forward-looking statements are not guarantees of future performance and such risks, uncertainties, contingencies and other important factors could cause the actual outcomes and the results of operations, financial condition and liquidity of BenevolentAI and other members of its group or the industry to differ materially from those results expressed or implied in the Information by such forward-looking statements. No assurances can be given that the forward-looking statements will be realised. The forward-looking statements speak only as of the date of this release. BenevolentAI expressly disclaims any obligation or undertaking to release any updates or revisions to any forward-looking statements to reflect any change in their expectations with regard thereto or any changes in events, conditions or circumstances on which any forward-looking statements are based. No representation or warranty is made that any of these forward-looking statements or forecasts will come to pass or that any forecast result will be achieved. Undue influence should not be given to, and no reliance should be placed on, any forward-looking statement.Logo: https://mma.prnewswire.com/media/1725546/Benevolent_Logo.jpg | Discovery/Content Synthesis | Life, Physical, and Social Science | null | null | null | null | null | null |
news | Melody Yang | Amazon EMR on Amazon EKS provides up to 61% lower costs and up to 68% performance improvement for Spark workloads | Amazon EMR on Amazon EKS is a deployment option offered by Amazon EMR that enables you to run Apache Spark applications on Amazon Elastic Kubernetes Service (Amazon EKS) in a cost-effective manner. It uses the EMR runtime for Apache Spark to increase performance so that your jobs run faster and cost less. Amazon EMR on […] | https://aws.amazon.com/blogs/big-data/amazon-emr-on-amazon-eks-provides-up-to-61-lower-costs-and-up-to-68-performance-improvement-for-spark-workloads/ | 2022-04-27T15:49:26Z | Amazon EMR on Amazon EKS is a deployment option offered by Amazon EMR that enables you to run Apache Spark applications on Amazon Elastic Kubernetes Service (Amazon EKS) in a cost-effective manner. It uses the EMR runtime for Apache Spark to increase performance so that your jobs run faster and cost less.Amazon EMR on Amazon EKS is a deployment option offered by Amazon EMR that enables you to run Apache Spark applications on Amazon Elastic Kubernetes Service (Amazon EKS) in a cost-effective manner. It uses the EMR runtime for Apache Spark to increase performance so that your jobs run faster and cost less.In our benchmark tests using TPC-DS datasets at 3 TB scale, we observed that Amazon EMR on EKS provides up to 61% lower costs and up to 68% improved performance compared to running open-source Apache Spark on Amazon EKS via equivalent configurations. In this post, we walk through the performance test process, share the results, and discuss how to reproduce the benchmark. We also share a few techniques to optimize job performance that could lead to further cost-optimization for your Spark workloads.How does Amazon EMR on EKS reduce cost and improve performance?The EMR runtime for Spark is a performance-optimized runtime for Apache Spark that is 100% API compatible with open-source Apache Spark. Its enabled by default with Amazon EMR on EKS. It helps run Spark workloads faster, leading to lower running costs. It includes multiple performance optimization features, such as Adaptive Query Execution (AQE), dynamic partition pruning, flattening scalar subqueries, bloom filter join, and more.In addition to the cost benefit brought by the EMR runtime for Spark, Amazon EMR on EKS can take advantage of other AWS features to further optimize cost. For example, you can run Amazon EMR on EKS jobs on Amazon Elastic Compute Cloud (Amazon EC2) Spot Instances, providing up to 90% cost savings when compared to On-Demand Instances. Also, Amazon EMR on EKS supports Arm-based Graviton EC2 instances, which creates a 15% performance improvement and up to 30% cost savings when compared a Graviton2-based M6g to M5 instance type.The recent graceful executor decommissioning feature makes Amazon EMR on EKS workloads more robust by enabling Spark to anticipate Spot Instance interruptions. Without the need to recompute or rerun impacted Spark jobs, Amazon EMR on EKS can further reduce job costs via critical stability and performance improvements.Additionally, through container technology, Amazon EMR on EKS offers more options to debug and monitor Spark jobs. For example, you can choose Spark History Server, Amazon CloudWatch, or Amazon Managed Prometheus and Amazon Managed Grafana (for more details, refer to the Monitoring and Logging workshop). Optionally, you can use familiar command line tools such as kubectl to interact with a job processing environment and observe Spark jobs in real time, which provides a fail-fast and productive development experience.Amazon EMR on EKS supports multi-tenant needs and offers application-level security control via a job execution role. It enables seamless integrations to other AWS native services without a key-pair set up in Amazon EKS. The simplified security design can reduce your engineering overhead and lower the risk of data breach. Furthermore, Amazon EMR on EKS handles security and performance patches so you can focus on building your applications.Benchmarking This post provides an end-to-end Spark benchmark solution so you can get hands-on with the performance test process. The solution uses unmodified TPC-DS data schema and table relationships, but derives queries from TPC-DS to support the Spark SQL test case. It is not comparable to other published TPC-DS benchmark results.Key conceptsTransaction Processing Performance Council-Decision Support (TPC-DS) is a decision support benchmark that is used to evaluate the analytical performance of big data technologies. Our test data is a TPC-DS compliant dataset based on the TPC-DS Standard Specification, Revision 2.4 document, which outlines the business model and data schema, relationship, and more. As the whitepaper illustrates, the test data contains 7 fact tables and 17 dimension tables, with an average of 18 columns. The schema consists of essential retailer business information, such as customer, order, and item data for the classic sales channels: store, catalog, and internet. This source data is designed to represent real-world business scenarios with common data skews, such as seasonal sales and frequent names. Additionally, the TPC-DS benchmark offers a set of discrete scaling points (scale factors) based on the approximate size of the raw data. In our test, we chose the 3 TB scale factor, which produces 17.7 billion records, approximately 924 GB compressed data in Parquet file format.Test approachA single test session consists of 104 Spark SQL queries that were run sequentially. To get a fair comparison, each session of different deployment types, such as Amazon EMR on EKS, was run three times. The average runtime per query from these three iterations is what we analyze and discuss in this post. Most importantly, it derives two summarized metrics to represent our Spark performance:Total execution time The sum of the average runtime from three iterationsGeomean The geometric mean of the average runtime Test resultsIn the test result summary (see the following figure), we discovered that the Amazon EMR-optimized Spark runtime used by Amazon EMR on EKS is approximately 2.1 times better than the open-source Spark on Amazon EKS in geometric mean and 3.5 times faster by the total runtime.The following figure breaks down the performance summary by queries. We observed that EMR runtime for Spark was faster in every query compared to open-source Spark. Query q67 was the longest query in the performance test. The average runtime with open-source Spark was 1019.09 seconds. However, it took 150.02 seconds with Amazon EMR on EKS, which is 6.8 times faster. The highest performance gain in these long-running queries was q72319.70 seconds (open-source Spark) vs. 26.86 seconds (Amazon EMR on EKS), a 11.9 times improvement.Test costAmazon EMR pricing on Amazon EKS is calculated based on the vCPU and memory resources used from the time you start to download your EMR application Docker image until the Amazon EKS pod terminates. As a result, you dont pay any Amazon EMR charges until your application starts to run, and you only pay for the vCPU and memory used during a jobyou dont pay for the full amount of compute resources in an EC2 instance.Overall, the estimated benchmark cost in the US East (N. Virginia) Region is $22.37 per run for open-source Spark on Amazon EKS and $8.70 per run for Amazon EMR on EKS thats 61% cheaper due to the 68% quicker job runtime. The following table provides more details.Benchmark JobRuntime (Hour)Estimated Cost Total EC2 InstanceTotal vCPUTotal Memory (GiB)Root Device (EBS)Amazon EMR on EKS0.68$8.70621643220 GiB gp2Open-Source Spark on Amazon EKS2.13$22.37621643220 GiB gp2Amazon EMR on Amazon EC2(1 primary and 5 core nodes)0.80$8.80619642420 GiB gp2The cost estimate doesnt account for Amazon Simple Storage Service (Amazon S3) storage, or PUT and GET requests. The Amazon EMR on EKS uplift calculation is based on the hourly billing information provided by AWS Cost Explorer.Cost breakdownThe following is the cost breakdown for the Amazon EMR on EKS job ($8.70): Total uplift on vCPU (126.93 * $0.01012) = (total number of vCPU used * per vCPU-hours rate) = $1.28Total uplift on memory (258.7 * $0.00111125) = (total amount of memory used * per GB-hours rate) = $0.29Total Amazon EMR uplift cost $1.57Total Amazon EC2 cost (6 * $1.728 * 0.68) = (number of instances * c5d.9xlarge hourly rate * job runtime in hour) = $7.05Other costs ($0.1 * 0.68) + ($0.1/730 * 20 * 6 * 0.68) = (shared Amazon EKS cluster charge per hour * job runtime in hour) + (EBS per GB-hourly rate * root EBS size * number of instances * job runtime in hour) = $0.08 The following is the cost breakdown for the open-source Spark on Amazon EKS job ($22.37): Total Amazon EC2 cost (6 * $1.728 * 2.13) = (number of instances * c5d.9xlarge hourly rate * job runtime in hour) = $22.12Other costs ($0.1 * 2.13) + ($0.1/730 * 20 * 6 * 2.13) = (shared EKS cluster charge per hour * job runtime in hour) + (EBS per GB-hourly rate * root EBS size * number of instances * job runtime in hour) = $0.25The following is the cost breakdown for the Amazon EMR on Amazon EC2 ($8.80):Total Amazon EMR cost (5 * $0.27 * 0.80) + (1 * $0.192 * 0.80) = (number of core nodes * c5d.9xlarge Amazon EMR price * job runtime in hour) + (number of primary nodes * m5.4xlarge Amazon EMR price * job runtime in hour) = $1.23Total Amazon EC2 cost (5 * $1.728 * 0.80) + (1 * $0.768 * 0.80) = (number of core nodes * c5d.9xlarge instance price * job runtime in hour) + (number of primary nodes * m5.4xlarge instance price * job runtime in hour) = $7.53Other Cost ($0.1/730 * 20 GiB * 6 * 0.80) + ($0.1/730 * 256 GiB * 1 * 0.80) = (EBS per GB-hourly rate * root EBS size * number of instances * job runtime in hour) + (EBS per GB-hourly rate * default EBS size for m5.4xlarge * number of instances * job runtime in hour) = $0.041Benchmarking considerationsIn this section, we share some techniques and considerations for the benchmarking.Set up an Amazon EKS cluster with Availability Zone awarenessOur Amazon EKS cluster configuration looks as follows:apiVersion: eksctl.io/v1alpha5kind: ClusterConfigmetadata: name: $EKSCLUSTER_NAME region: us-east-1availabilityZones:["us-east-1a"," us-east-1b"] managedNodeGroups: - name: mn-od availabilityZones: ["us-east-1b"] In the cluster configuration, the mn-od managed node group is assigned to the single Availability Zone b, where we run the test against.Availability Zones are physically separated by a meaningful distance from other Availability Zones in the same AWS Region. This produces round trip latency between two compute instances located in different Availability Zones. Spark implements distributed computing, so exchanging data between compute nodes is inevitable when performing data joins, windowing, and aggregations across multiple executors. Shuffling data between multiple Availability Zones adds extra latency to the network I/O, which therefore directly impacts Spark performance. Additionally, when data is transferred between two Availability Zones, data transfer charges apply in both directions.For this benchmark, which is a time-sensitive workload, we recommend running in a single Availability Zone and using On-Demand instances (not Spot) to have a dedicated compute resource. In an existing Amazon EKS cluster, you may have multiple instance types and a Multi-AZ setup. You can use the following Spark configuration to achieve the same goal:--conf spark.kubernetes.node.selector.eks.amazonaws.com/capacityType=ON_DEMAND--conf spark.kubernetes.node.selector.topology.kubernetes.io/zone=us-east-1bUse instance store volume to increase disk I/OSpark data shuffle, the process of reading and writing intermediate data to disk, is a costly operation. Besides the network I/O speed, Spark demands high performant disk to support a large amount of data redistribution activities. I/O operations per second (IOPS) is an equally important measure to baseline Spark performance. For instance, the SQL queries 23a, 23b, 50, and 93 are shuffle-intensive Spark workloads in TPC-DS, so choosing an optimal storage strategy can significantly shorten their runtime. General speaking, the recommended options are either attaching multiple EBS disk volumes per node in Amazon EKS or using the d series EC2 instance type, which offers high disk I/O performance within a compute family (for example, c5d.9xlarge is the d series in the c5 compute optimized family).The following table summarizes the hardware specification we used:InstanceOn-Demand Hourly PricevCPUMemory (GiB)Instance StoreNetworking Performance (Gbps)100% Random Read IOPSWrite IOPSc5d.9xlarge$1.7336721 x 900GB NVMe SSD10350,000170,000To simplify our hardware configuration, we chose the AWS Nitro System EC2 instance type c5d.9xlarge, which comes with a NVMe-based SSD instance store volume. As of this writing, the built-in NVMe SSD disk requires less effort to set up and provides optimal disk performance we need. In the following code, the one-off preBoostrapCommand is triggered to mount an instance store to a node in Amazon EKS:managedNodeGroups: - name: mn-od preBootstrapCommands: - "sleep 5; sudo mkfs.xfs /dev/nvme1n1;sudo mkdir -p /local1;sudo echo /dev/nvme1n1 /local1 xfs defaults,noatime 1 2 >> /etc/fstab" - "sudo mount -a" - "sudo chown ec2-user:ec2-user /local1"Run as a predefined job user, not a root userFor security, its not recommended to run Spark jobs as a root user. But how can you access the NVMe SSD volume mounted to the Amazon EKS cluster as a non-root Spark user?An init container is created for each Spark driver and executor pods in order to set the volume permission and control the data access. Lets check out the Spark driver pod via the kubectl exec command, which allows us execute into the running container and have an interactive session. We can observe the following:The init container is called volume-permission.The SSD disk is called /ossdata1. The Spark driver has stored some data to the disk.The non-root Spark job user is called hadoop.This configuration is provided in a format of a pod template file for Amazon EMR on EKS, so you can dynamically tailor job pods when Spark configuration doesnt support your needs. Be aware that the predefined users UID in the EMR runtime for Spark is 999, but its set to 1000 in open-source Spark. The following is a sample Amazon EMR on EKS driver pod template:apiVersion: v1kind: Podspec: nodeSelector: app: sparktest volumes: - name: spark-local-dir-1 hostPath: path: /local1 initContainers: - name: volume-permission image: public.ecr.aws/y4g4v0z7/busybox # grant volume access to "hadoop" user with uid 999 command: ['sh', '-c', 'mkdir /data1; chown -R 999:1000 /data1'] volumeMounts: - name: spark-local-dir-1 mountPath: /data1 containers: - name: spark-kubernetes-driver volumeMounts: - name: spark-local-dir-1 mountPath: /data1In the job submission, we map the pod templates via the Spark configuration:"spark.kubernetes.driver.podTemplateFile": "s3://'$S3BUCKET'/pod-template/driver-pod-template.yaml","spark.kubernetes.executor.podTemplateFile": "s3://'$S3BUCKET'/pod-template/executor-pod-template.yaml",Spark on k8s operator is a popular tool to deploy Spark on Kubernetes. Our open-source Spark benchmark uses the tool to submit the job to Amazon EKS. However, the Spark operator currently doesnt support file-based pod template customization, due to the way it operates. So we embed the disk permission setup into the job definition, as in the example on GitHub.Disable dynamic resource allocation and enable Adaptive Query Execution in your applicationSpark provides a mechanism to dynamically adjust compute resources based on workload. This feature is called dynamic resource allocation. It provides flexibility and efficiency to manage compute resources. For example, your application may give resources back to the cluster if theyre no longer used, and may request them again later when there is demand. Its quite useful when your data traffic is unpredictable and an elastic compute strategy is needed at your application level. While running the benchmarking, our source data volume (3 TB) is certain and the jobs were run on a fixed-size Spark cluster that consists of six EC2 instances. You can turn off the dynamic allocation in EMR on EC2 as shown in the following code, because it doesnt suit our purpose and might add latency to the test result. The rest of Spark deployment options, such as Amazon EMR on EKS, has the dynamic allocation off by default, so we can ignore these settings.--conf spark.dynamicAllocation.enabled=false --conf spark.shuffle.service.enabled=falseDynamic resource allocation is a different concept from automatic scaling in Amazon EKS, such as the Cluster Autoscaler. Disabling the dynamic allocation feature only fixes our 6-node Spark cluster size per job, but doesnt stop the Amazon EKS cluster from expanding or shrinking automatically. It means our Amazon EKS cluster is still able to scale between 1 and 30 EC2 instances, as configured in the following code:managedNodeGroups: - name: mn-od availabilityZones: ["us-east-1b"] instanceType: c5d.9xlarge minSize: 1 desiredCapacity: 1 maxSize: 30Spark Adaptive Query Execution (AQE) is an optimization technique in Spark SQL since Spark 3.0. It dynamically re-optimizes the query execution plan at runtime, which supports a variety of optimizations, such as the following:Dynamically switch join strategiesDynamically coalesce shuffle partitionsDynamically handle skew joinsThe feature is enabled by default in EMR runtime for Spark, but disabled by default in open-source Apache Spark 3.1.2. To provide the fair comparison, make sure its set in the open-source Spark benchmark job declaration: sparkConf: # Enable AQE "spark.sql.adaptive.enabled": "true" "spark.sql.adaptive.localShuffleReader.enabled": "true" "spark.sql.adaptive.coalescePartitions.enabled": "true" "spark.sql.adaptive.skewJoin.enabled": "true"Walkthrough overviewWith these considerations in mind, we run three Spark jobs in Amazon EKS. This helps us compare Spark 3.1.2 performance in various deployment scenarios. For more details, check out the GitHub repository.In this walkthrough, we show you how to do the following:Produce a 3 TB TPC-DS complaint datasetRun a benchmark job with the open-source Spark operator on Amazon EKSRun the same benchmark application with Amazon EMR on EKSWe also provide information on how to benchmark with Amazon EMR on Amazon EC2.PrerequisitesInstall the following tools for the benchmark test:Provision resourcesThe provision script creates the following resources:A new Amazon EKS clusterAmazon EMR on EKS enablerThe required AWS Identity and Access Management (IAM) rolesThe S3 bucket emr-on-eks-nvme-${ACCOUNTID}-${AWS_REGION}, referred to as <S3BUCKET> in the following stepsThe provisioning process takes approximately 30 minutes.Download the project with the following command: git clone https://github.com/aws-samples/emr-on-eks-bencharmk.gitcd emr-on-eks-bencharmkCreate a test environment (change the Region if necessary): export EKSCLUSTER_NAME=eks-nvmeexport AWS_REGION=us-east-1./provision.shModify the script if needed for testing against an existing Amazon EKS cluster. Make sure the existing cluster has the Cluster Autoscaler and Spark Operator installed. Examples are provided by the script.Validate the setup: # should return resultskubectl get pod -n oss | grep spark-operatorkubectl get pod -n kube-system | grep nodescalerGenerate TPC-DS test data (optional)In this optional task, you generate TPC-DS test data in s3://<S3BUCKET>/BLOG_TPCDS-TEST-3T-partitioned. The process takes approximately 80 minutes.The job generates TPC-DS compliant datasets with your preferred scale. In this case, it creates 3 TB of source data (approximately 924 GB compressed) in Parquet format. We have pre-populated the source dataset in the S3 bucket blogpost-sparkoneks-us-east-1 in Region us-east-1. You can skip the data generation job if you want to have a quick start.Be aware of that cross-Region data transfer latency will impact your benchmark result. Its recommended to generate the source data to your S3 bucket if your test Region is different from us-east-1.Start the job: kubectl apply -f examples/tpcds-data-generation.yamlMonitor the job progress: kubectl get pod -n osskubectl logs tpcds-data-generation-3t-driver -n ossCancel the job if needed: kubectl delete -f examples/tpcds-data-generation.yamlThe job runs in the namespace oss with a service account called oss in Amazon EKS, which grants a minimum permission to access the S3 bucket via an IAM role. Update the example .yaml file if you have a different setup in Amazon EKS.Benchmark for open-source Spark on Amazon EKS Wait until the data generation job is complete, then update the default input location parameter (s3://blogpost-sparkoneks-us-east-1/blog/BLOG_TPCDS-TEST-3T-partitioned) to your S3 bucket in the tpcds-benchmark.yaml file.. Other parameters in the application can also be adjusted. Check out the comments in the yaml file for details. This process takes approximately 130 minutes.If the data generation job is skipped, run the following steps without waiting.Start the job: kubectl apply -f examples/tpcds-benchmark.yamlMonitor the job progress: kubectl get pod -n osskubectl logs tpcds-benchmark-oss-driver -n ossCancel the job if needed: kubectl delete -f examples/tpcds-benchmark.yamlThe benchmark application outputs a CSV file capturing runtime per Spark SQL query and a JSON file with query execution plan details. You can use the collected metrics and execution plans to compare and analyze performance between different Spark runtimes (open-source Apache Spark vs. EMR runtime for Spark).Benchmark with Amazon EMR on EKS Wait for the data generation job finish before starting the benchmark for Amazon EMR on EKS. Dont forget to change the input location (s3://blogpost-sparkoneks-us-east-1/blog/BLOG_TPCDS-TEST-3T-partitioned) to your S3 bucket. The output location is s3://<S3BUCKET>/EMRONEKS_TPCDS-TEST-3T-RESULT. If you use the pre-populated TPC-DS dataset, start the Amazon EMR on EKS benchmark without waiting. This process takes approximately 40 minutes.Start the job (change the Region if necessary): export EMRCLUSTER_NAME=emr-on-eks-nvmeexport AWS_REGION=us-east-1./examples/emr6.5-benchmark.shAmazon EKS offers multi-tenant isolation and optimized resource allocation features, so its safe to run two benchmark tests in parallel on a single Amazon EKS cluster.Monitor the job progress in real time: kubectl get pod -n emr#run the command then search "execution time" in the log to analyze individual query's performancekubectl logs YOUR_DRIVER_POD_NAME -n emr spark-kubernetes-driverCancel the job (get the IDs from the cluster list on the Amazon EMR console): aws emr-containers cancel-job-run --virtual-cluster-id <YOUR_VIRTUAL_CLUSTER_ID> --id <YOUR_JOB_ID>The following are additional useful commands:#Check volume statuskubectl exec -it YOUR_DRIVER_POD_NAME -c spark-kubernetes-driver -n emr -- df -h#Login to a running driver podkubectl exec -it YOUR_DRIVER_POD_NAME -c spark-kubernetes-driver -n emr bash#Monitor compute resource usagewatch "kubectl top node"Benchmark for Amazon EMR on Amazon EC2 Optionally, you can use the same benchmark solution to test Amazon EMR on Amazon EC2. Download the benchmark utility application JAR file from a running Kubernetes container, then submit a job via the Amazon EMR console. More details are available in the GitHub repository.Clean upTo avoid incurring future charges, delete the resources generated if you dont need the solution anymore. Run the following cleanup script (change the Region if necessary):cd emr-on-eks-bencharmkexport EKSCLUSTER_NAME=eks-nvmeexport AWS_REGION=us-east-1./deprovision.shConclusionWithout making any application changes, we can run Apache Spark workloads faster and cheaper with Amazon EMR on EKS when compared to Apache Spark on Amazon EKS. We used a benchmark solution running on a 6-node c5d.9xlarge Amazon EKS cluster and queried a TPC-DS dataset at 3 TB scale. The performance test result shows that Amazon EMR on EKS provides up to 61% lower costs and up to 68% performance improvement over open-source Spark 3.1.2 on Amazon EKS.If youre wondering how much performance gain you can achieve with your use case, try out the benchmark solution or the EMR on EKS Workshop. You can also contact your AWS Solution Architects, who can be of assistance alongside your innovation journey.About the AuthorsMelody Yang is a Senior Big Data Solution Architect for Amazon EMR at AWS. She is an experienced analytics leader working with AWS customers to provide best practice guidance and technical advice in order to assist their success in data transformation. Her areas of interests are open-source frameworks and automation, data engineering and DataOps.Kinnar Kumar Sen is a Sr. Solutions Architect at Amazon Web Services (AWS) focusing on Flexible Compute. As a part of the EC2 Flexible Compute team, he works with customers to guide them to the most elastic and efficient compute options that are suitable for their workload running on AWS. Kinnar has more than 15 years of industry experience working in research, consultancy, engineering, and architecture. | Unknown | Computer and Mathematical | null | null | null | null | null | null |
|
news | Kristian Ekenes, Kristian Ekenes | Create a diversity map with Arcade | Explore language diversity in the United States and learn how to use ArcGIS Arcade to create your own diversity index. | https://www.esri.com/arcgis-blog/products/arcgis-online/mapping/create-a-diversity-index-with-arcade/ | 2022-04-21T17:06:41Z | Arcade is an expression language that allows you to define custom content in ArcGIS applications for desktop, mobile, and the web. Arcade expressions give you the flexibility to create dynamic popup elements, calculate new values, and format data. In this post, Ill demonstrate how you can use Arcade to map diversity.First, I’ll provide details describing my methodology, then we’ll explore the results with a ton of maps.Diversity indexWhen the 2020 United States Census numbers were released, the summary of responses showed the U.S. is more diverse than ever. In the context of demographics, diversity usually refers to the level of variation that exists in a population with respect to race and ethnicity. In fact, Esri provides a diversity index that returns a value between 0-100 for each census tract in the U.S. A diversity index indicates the probability that two people selected at random within an area belong to a different race or ethnicity. Therefore, higher numbers indicate more diversity. Racial and ethnic diversity of the southeastern United States (2021 Esri Diversity Index). Overall, the United States has a diversity index of 62. This means there is a 62 percent chance that two people selected at random within the entire United States have a different race or ethnicity.Languages spoken at homeOccasionally I like to explore data in the ArcGIS Living Atlas of the World. The Living Atlas has a treasure trove of data, including up-to-date data from the American Community Survey (ACS). I load the data in the ArcGIS Online Map Viewer to see if I can learn something new by experimenting with various Smart Mapping styles. Recently, I came across this layer containing data for languages spoken in the home. It reports the number of people speaking languages in the following categories: EnglishSpanishAsian and Pacific Islander languages – Includes Chinese (incl. Mandarin, Cantonese), Japanese, Korean, Hmong, Vietnamese, Khmer, Thai, Lao, Burmese, Tibetan, Turkish, Tagalog (incl. Filipino), Ilocano, Samoan, Hawaiian, Cebuano (Bisayan), Iloko (Ilocano), Samoan, Indonesian.Other Indo-European languages – Includes French (incl. Cajun), Haitian creole, Italian, Portuguese, German, Yiddish, Pennsylvania Dutch, Greek, Russian, Polish, Serbo-Croatian, Ukrainian, Armenian, Persian (incl. Farsi, Dari), Gujarati, Hindi, Urdu, Punjabi, Bengali, Nepali, Marathi, Telugu, Tamil, Malayalam, Kannada, Albanian, Lithuanian, Pashto, Romanian, Swedis.hOther languages – Includes Navajo, Apache languages, Cherokee, Dakota languages, Tohono O’odham, Arabic, Hebrew, Amharic, Somali, Yoruba, Twi, Igbo, Swahili, Bantu languages, Lingala, Hungarian, Jamaican Creole EnglishRead this article to learn why the U.S. Census Bureau collects data about languages spoken at home.I loaded the layer in the ArcGIS Online Map Viewer and created a predominance map. A predominance map shows which language group is most common among the people living in an area. Southern California immediately revealed interesting patterns.Click here to open the web mapPredominant language spoken in the home in southern California. I learned the purple areas (predominantly Indo-European languages spoken in the home) in Glendale coincide with high Armenian populations in the area.While exploring this map, I immediately thought about diversity — not in the context of race and ethnicity — but of languages. Since language diversity isnt a field contained in this layer, I decided to write an Arcade expression to create a language diversity index. Arcade provides an avenue to calculate any value within the context of the map even though I dont own this data.As opposed to predominance, which focuses on the most common category, a diversity map will communicate the variety within the area. Low diversity areas will coincide with areas that show strong predominance for one category.Simpsons Diversity IndexI chose Simpsons Diversity Index to calculate diversity using Arcade. While Simpson’s Diversity Index is typically used to calculate biodiversity, it can also be used to calculate diversity for demographics in human populations.The formula defining Simpson's Diversity Index.This formula returns a number between 0 1, indicating the probability that two individuals selected at random belong to different groups or categories. The higher the score, the higher the diversity in the area for the given variables. I wrote the following function in Arcade to calculate Simpsons Diversity Index for any set of categories within a population. For readability within the map, I decided to multiply the final index by 100 (e.g. representing probability as 50% is easier to read than 0.5).// Returns a value between 0-100.// Indicates the % chance two people// who speak different langages are// randomly selected from the area.functionsimpsonsDiversityIndex(vals){ var k = Array(Count(vals)); var t = sum(vals); for(var i in vals){ var n = vals[i]; k[i] = n * (n-1); } var s = Sum(k); var di = 1 - ( s / ( t * (t-1) ) ); return Round(di*100);}Let’s input a few test values to see how this index works. If there are five groups, where one has 100 individuals and the others each have zero, the function will return 0. This indicates there is no diversity since all individuals come from a single group.Conversely, maximum diversity is indicated by all groups having the same population. To test this, I provided the function with five groups, each with 100 individuals. It returns an index of 80. In other words, there is an 80 percent chance that two individuals selected at random will come from different groups.Note that there can never be a result of 1 (or 100%). The highest possible score returned from Simpson’s Diversity Index will always be (n - 1) / n, where n is the number of categories considered.Therefore, a higher number of categories combined with similar populations in each category will result in a higher diversity index. Mapping language diversityOnce the function was written, I referenced the total number of people in each language category as inputs to the function. var asian = $feature["B16007_calc_numAPIE"];var english = $feature["B16007_calc_numEngOnlyE"];var european = $feature["B16007_calc_numIEE"];var other = $feature["B16007_calc_numOtherE"];var spanish = $feature["B16007_calc_numSpanE"];var languages = [asian, english, european, other, spanish];functionsimpsonsDiversityIndex(vals){ var k = Array(Count(vals)); var t = sum(vals); for(var i in vals){ var n = vals[i]; k[i] = n * (n-1); } var s = Sum(k); var di = 1 - ( s / ( t * (t-1) ) ); return Round(di*100);}simpsonsDiversityIndex(languages);Clicking OK in the Arcade editor executes the expression for each feature in the layer. Once the expression finishes, a style is created based on statistics summarizing the output for all features. Selecting an above-and-below theme reveals the following map.Language diversity in southern California. Higher numbers indicate a higher probability that two people selected at random in the area will speak languages from different categories.Brown areas indicate areas of high language diversity, or areas where people are more likely to live close to others who speak a different language than themselves. The green areas indicate areas of low language diversity, or areas where people are more likely to speak the same language as their neighbors.What is meant by high vs. low diversity?I chose to represent a diversity index of 50 as the transition point between high and low diversity. Values of 70 or higher are considered very high diversity (these are the solid brown areas) and values of 30 or lower represent low diversity areas (these are the solid green areas).Since the average diversity index of Census tracts in the U.S. is 25, I could have reasonably used that value as the middle point of my color ramp. That would show areas that have a higher language diversity than the national average.So why use 50 as the inflection point? I chose 50 as the middle value because it represents the point at which you are more likely than not to randomly select two people that speak different languages in the home.As a whole, most tracts in the United States have low language diversity because most people speak English in the home. Areas with the highest language diversity tend to be in large cities, with the exception of cities in the Midwest.Language diversity in the United States. The vast majority of the population speaks English in the home. Areas with high language diversity tend to be in large cities along coastlines.The following maps explore language diversity compared with maps showing the predominant language group spoken in the home in a few select cities. The size of the circles in each map corresponds to the total population ages 5 and older in the area.Click each image to enlarge it for readability.Click here to view the live web map and explore the data on your own.Language diversity in San Francisco and surrounding cities.Predominant language groups spoken in the home in San Francisco and surrounding cities.Language diversity in New York City.Predominant language group spoken in the home in New York City.Language diversity in Miami.Predominant language group spoken in the home in Miami.Language diversity in Chicago.Predominant language group spoken in the home in Chicago.Open this app to explore this data in other parts of the United States using a Swipe widget.A word of cautionThree of the five categories used to calculate diversity are aggregates: Asian and Pacific Islander languages, Other Indo-European languages, and Other languages. The remaining two categories are individual languages: English and Spanish. Using aggregate data is problematic for calculating diversity because it doesn’t consider the diversity that exists within the aggregate categories. Consider how the data used to calculate language diversity is represented as five categories when in reality more than 380 languages are spoken in the home in the United States. Therefore, the diversity index reported in my maps are conservative estimates of language diversity.Language diversity in areas with high Spanish and English speaking populationsIf I add the total Spanish speaking population as a second variable to the layer’s style, the Map Viewer will replace the fill symbols with marker symbols. The larger the size of the symbol, the higher the number of Spanish speakers in the tract. Therefore, the maps below show the number of Spanish and English speakers in each area within the context of overall language diversity.Click here to open the mapThe probability that two people selected at random speak languages from different categories in areas with a large number of Spanish speakers. Large green circles in central Los Angeles show large populations of Spanish speakers that are more likely to live amongst others who speak Spanish. Large brown circles indicate areas of large Spanish speaking populations that likely live amongst people who speak languages other than Spanish.The probability that two people, selected at random, speak languages from different categories in areas with a large number of English speakers.Diversity of specific Asian groupsThe Living Atlas recently added layers containing the populations of specific Asian groups at the state, county, and tract levels. Many people in these groups speak different languages, but are represented within a single category in the languages layer. Because of this, I was eager to explore the diversity that exists within Asian populations.I used the same Simpson’s Diversity Index Arcade function in the layer’s style. However, this time I referenced numbers from 23 specific Asian groups and created the following map.Click to expand and view the expressionvar indian = $feature["B02018_002E"];var bangladeshi = $feature["B02018_003E"];var bhutanese = $feature["B02018_004E"];var burmese = $feature["B02018_005E"];var cambodian = $feature["B02018_006E"];var chinese = $feature["B02018_007E"];var filipino = $feature["B02018_008E"];var hmong = $feature["B02018_009E"];var indonesian = $feature["B02018_010E"];var japanese = $feature["B02018_011E"];var korean = $feature["B02018_012E"];var laotian = $feature["B02018_013E"];var malaysian = $feature["B02018_014E"];var mongolian = $feature["B02018_015E"];var nepalese = $feature["B02018_016E"];var okinawan = $feature["B02018_017E"];var pakistani = $feature["B02018_018E"];var srilankan = $feature["B02018_019E"];var taiwanese = $feature["B02018_020E"];var thai = $feature["B02018_021E"];var vietnamese = $feature["B02018_022E"];var other = $feature["B02018_023E"] + $feature["B02018_024E"];var groups = [ indian, bangladeshi, bhutanese, burmese, cambodian, chinese, filipino, hmong, indonesian, japanese, korean, laotian, malaysian, mongolian, nepalese, okinawan, pakistani, srilankan, taiwanese, thai, vietnamese, other];// Returns a value 0-100.// Indicates the % chance two people// from different Asian regions are// randomly selected from the area.functionsimpsonsDiversityIndex(vals){ var k = Array(Count(vals)); var t = sum(vals); for(var i in vals){ var n = vals[i]; k[i] = n * (n-1); } var s = Sum(k); var di = 1 - ( s / ( t * (t-1) ) ); return Round(di*100);}simpsonsDiversityIndex(groups);Click to open this mapDiversity of Asian specific groups in the United States. Brown areas indicate areas where people of Asian origin are more likely to live close to Asians from other groups.The results show very high diversity in many parts of the U.S. However, several areas in large metropolitan areas, like Los Angeles, Chicago, and New York, reveal neighborhoods with high Asian American populations and low diversity. For example, these include the following communities in Los Angeles: China Town, Little Tokyo, Little Saigon, Korea Town, Little Manila, and Cambodia Town.Check out the diversity of specific Asian groups in the following maps.Click each image to enlarge it for readability.Click to open this mapDiversity of specific Asian groups in Los Angeles.Diversity of specific Asian groups in San Francisco.Diversity of specific Asian groups in New York.Diversity of specific Asian groups in Chicago.These maps clearly show the high diversity that exists within Asian populations on their own.Predominant Asian groupsThe predominance style reveals the predominant group of each low diversity area. Because I can only map up to 10 categories with predominance, I chose to consider only the most populous categories as reported by the Pew Research Center.Click here to open this mapClick each image to enlarge it for readability.Predominant group among Asian populations in southern California.Predominant group among Asian populations in New York City.Predominant group among Asian populations in San Francisco, CA.Predominant group among Asian populations in Chicago.The default popup on these layers displays a chart visualizing the populations of each group relative to one another. A single tall bar indicates low diversity populations. Several bars indicate more diversity in the population.Adding a bar chart to the popup can also help communicate diversity in a population.To learn more about the demographics of Asian Americans in the United States, check out the following article from the Pew Research Center.Key facts about Asian Americans, a diverse and growing populationDiversity of Hispanic and Latino groupsThe Living Atlas also recently added a layer containing data describing the number of people belonging to specific Hispanic and Latino groups. This includes 25 unique categories from regions of South America, Central America, Mexico, the Caribbean islands, and Spain.The results almost look like the inverse of the map of diversity among specific Asian groups. Most census tracts in the United States show low diversity among Hispanic and Latino groups. High population areas in Florida and the east coast are exceptions.Click to view this mapClick each image to enlarge it for readability.Diversity of specific Hispanic/Latino groups in the United States.The predominance map for the same extent shows that most people of Hispanic origin in the United States are Mexican.Predominant Hispanic/Latino group by Census Tract in the United States.However, Florida and New York have high populations from other Hispanic/Latino groups, making them perhaps the most diverse areas among Hispanic and Latino populations.Diversity of specific Hispanic/Latino groups in Florida.Diversity of specific Hispanic/Latino groups in Miami.Diversity of specific Hispanic/Latino groups in New York City.Diversity of specific Hispanic/Latino groups in San Francisco.Diversity of specific Hispanic/Latino groups in southern California.The following maps show predominant Hispanic/Latino populations considering the most common groups as reported by the Pew Research Center, and groups within Central and South American subgroups.Predominant Hispanic/Latino groupsClick to view this mapClick each image to enlarge it for readability.Predominant Hispanic/Latino group in southern California.Predominant Hispanic/Latino group in San Francisco.Predominant Hispanic/Latino group in New York City.Predominant group among Hispanic/Latino populations in Florida.Predominant Hispanic/Latino group in Miami.Predominant Hispanic/Latino group of Central American origin in Miami.Predominant Hispanic/Latino group of South American origin in Miami.To learn more about the demographics of people of Hispanic/Latino origin in the United States, check out the following article from the Pew Research Center.Key facts about U.S. Latinos for National Hispanic Heritage MonthExplore these maps on your ownThe following apps allow you to explore diversity and compare it with predominance for each of the variables described in this post. I encourage you to explore the data in each to learn more about the diversity that exists within each each category.Click this image to explore the diversity of specific Hispanic/Latino groups compared with languages spoken in the home.ConclusionThe term diversity intentionally focuses on the differences that exist in a population. This may include differences in race, ethnicity, language, culture, religion, age, gender and many other categories. As I reflect on my associations and friendships with neighbors, coworkers, and other associates from all sorts of backgrounds, I am reminded of two things:We are alike in many ways; we are all human, and experience the same emotions (though through different life experiences, which leads to the second point).We are all truly unique whether or not we have similar backgrounds or belong to the same demographic categories; our uniqueness shows we our more diverse than data often indicate.While the maps I shared above focus on the diversity of specific categories, don’t forget that more diversity exists in each area than the data actually reports.Creating a diversity index is just one of many examples where you can use Arcade to calculate new values to display custom content in web maps. I encourage you to read other posts focusing on how you can use Arcade to customize your maps. | Content Creation/Content Synthesis | Computer and Mathematical/Life, Physical, and Social Science | null | null | null | null | null | null |
|
news | Convequity | Meta Platforms: An Investment Opportunity Of A Lifetime | Following Meta's 1Q22 results, we provide an overview with some commentary. Read more to find out why FB stock is an investment opportunity of a lifetime. | https://seekingalpha.com/article/4504714-meta-platforms-an-investment-opportunity-of-a-lifetime | 2022-04-29T11:00:00Z | Kelly Sullivan/Getty Images Entertainment1Q22 ER Summary Given the current headwinds Meta (NASDAQ:FB) is facing (IDFA, Russia ban, softening macro) the 1Q22 ER was quite positive. Revenue of $27.9bn slightly missed consensus expectations of $28.2bn EPS of $2.72 came in ahead of consensus expectations of $2.56 Family of Apps DAU (Daily Active Users) was 2.87bn on average, a YoY increase of 6% Facebook DAU was 1.96bn on average, a YoY increase of 4% YoY growth in ad impressions for Family of Apps was 15% Given the tough 1Q21 comparables and the current headwinds, despite the key metrics not wowing investors, it shows the core business remains strong and is adapting. And thus far, investors have reacted positively, pushing the stock up 15% at the time of writing. The reason we think Meta is an opportunity of a lifetime is because the current share price indicates investors can get the Family of Apps division at a huge discount and get the Reality Labs division completely free. We'll touch on this a bit more in the valuation section. On the call, Zuckerberg discussed the three main growth drivers for Meta: Reels, Ads, and the Metaverse. We'll briefly discuss these below. Reels He reiterated how Reels' popularity is negatively impacting ad monetization because of the short-form nature of the videos. Though, he also reminded investors that Meta has tackled and overcome similar monetization issues back in 2012 when mobile emerged as the dominant mode of engaging with the platforms; and then again in 2018 when Stories rose in popularity but it was initially less monetizable than Feed. So, working through how to best monetize Reels isn't a new problem for Meta and with enough time it will likely become the biggest monetization driver. Zuckerberg also explained how more content is being recommended via AI rather than just via social connections. Clearly, taking a page out of TikTok's book but it's really amazing how quick Meta can adapt, even if it means copying a competitor. More AI-based content recommendations will open up users to a wider set of interests and provide Meta with even richer data to deliver more effective ad targeting – a promising consideration. Ads There appears to be solid progress in Meta encouraging advertisers to circumvent the Apple privacy restrictions by engaging in server-side tracking. This entails businesses/advertisers connecting their servers with Meta's Ads Manager servers using an API (Application Programming Interface). This doesn't completely recover the signal loss from ATT (Apple Tracking Transparency) opt-out users, but it is recovering the majority of it. This is promising because this a direct agreement between the advertisers and Meta – no Apple interference. And advertisers will frame the seeking of tracking consent to customers and website visitors a lot more favourably than how Apple frames it in the ATT pop-up. In the ER call there were also several mentions of 'privacy enhancing technologies' which could possibly mean that Meta is getting ahead of and preempting future potential regulatory changes. I'm sure this was not mentioned in the ER call to avoid being too optimistic, though in the long-term, Apple's privacy moves could actually be a blessing for Meta. If a tighter integration between advertisers and Meta ensues, not only will the server-side tracking be a success, more advertisers may also set up payments on Facebook Shop - then there is a load more first party user data. It requires a network effect, however; the more advertisers join Shop the more all will benefit. Metaverse Contrary to popular belief, Meta has the strongest hand going into the Metaverse. A complete Metaverse experience is an immersive one, and you need a VR headset to have that immersive experience. And with Oculus Quest 2, Meta has the best blend of quality, affordability, and accessibility for a VR headset by a large margin. Though despite the affordability and ease of accessing the Metaverse with Quest 2, it's galvanizing to know (as shared in the 1Q ER call) that Meta is providing web-based Metaverse experiences for people who don't have a headset, and they plan on making this easily accessible from the various Meta platforms. This will provide a great stepping stone for people to dip their toe in, enjoy a metaverse experience, and then decide to go a step further by purchasing the Quest 2 for a full immersive experience. Valuation Thoughts Very Low Growth Priced In By using a Single-Stage GGM (Gordon's Growth Model) we can quickly see that the market is pricing in just 1.25% growth for Meta – from now until forever. EV at time of writing = $452bn Single-Stage GGM: EV = TTM FCF * (1 + g) / (WACC – g) $452bn = $39.1bn * (1 + 0.0125) / (0.10 – 0.0125) Even by factoring in the current macro-outlook of a possible recession, expecting 1.25% average annual growth from now on until the end of time is probably excessively pessimistic on Meta. Even without the greenfield metaverse opportunities, the Family of Apps division would likely innovate enough to sustain growth a few hundred basis points higher than 1.25%. To put it in context, this expected sustainable growth is lower than the long-term average of the U.S. economy and more in line with mature industrial verticals than tech. Financial Performance Alone Will Outperform the Market Let's say Meta's P/B and P/E stays the same, and going forward it repeats the TTM ROE of 30%. It would take 2.6 years for the share price to double: Natural log of 2 / Natural log of (1 + 0.30) = 2.64 Sidenote: P/E * ROE = P/B >>> P/E * E/B = P/B So, without any improvements in expectations from these lowly multiples, Meta's stock will double in 2.6 years, assuming the ROE will continue at 30%. And of course, this would translate to a market-beating 30% annualized return. The caveat is the short-term risk could compress multiples further before given the current macro and Meta sentiment. Competition for User Engagement Our take is that the outlook for the Facebook platform is quite different to that of the Instagram platform. Facebook needs to bring in more younger users to replace the current users that are aging, though teens and young adults do not want to spend a lot of time on the same platform as their grandparents. This is a headwind that will continually hamper Facebook's ability to maintain/grow its user base engagement. However, the Facebook Papers – redacted from the documents leaked by whistleblower Francis Haugen last year – indicate that engineers and marketing teams are thinking hard about how to reengage with teens and young adults. The objective appears to be increasing engagement via meaningful content that aligns with life interests and even personal development. It could be a differentiating aspect versus Gen Z-favourite alternatives like TikTok and Snapchat, so I guess time will tell whether this endeavour will be successful or not. Since the pandemic, Instagram user engagement appears to have been steady but certainly outcompeted by the rise of TikTok. However, the release of Reels in August 2020, which is a direct response to TikTok, looks to be stemming the user migration away from Instagram. In 2019, the daily average time spent on Instagram was 27 minutes, and as is shown below this has increased a little but is still considerably less than time spent on TikTok. Figure 1 - Average Time Spent on Social MediaeMarketerThere are many similarities between Reels and TikTok but we would say some key advantages are 1) its currently got a more mature ecommerce integration, 2) it generates a higher ROI for marketers, and 3) it has potential integration with the metaverse (we don't know what this will look like yet but I'm sure it's on Zuckerberg's roadmap). Advantages one and two are attributed to Instagram's slightly older demographics whereas TitTok appeals more to Gen Z. However, given the changes since 2020 and Meta's ability to swiftly adapt, going forward we expect Instagram to compete better against TikTok for both Millennials and Gen Zers. Figure 2 - Gen Z Users: Instagram vs Snapchat vs TikTokeMarketerInterestingly, according to research by GWI, despite the less time spent on Instagram by young people, it is considered by many to be their favourite social app – much more so than TikTok. Figure 3 - Favourite Social Media App HoosuiteThe same GWI research found that people mainly use TikTok to find funny and entertaining content, whilst people mainly use Instagram for posting/sharing content. This might explain why people are spending more time on TikTok but like Instagram the most. To summarize, we're optimistic regarding Instagram's prospects but less optimistic about the Facebook platform mainly because of its struggles in reengaging with teens and young adults. However, Meta's focus to reengage young people with meaningful, personal development type of content could work – LinkedIn is an example of that. Metaverse Competition To us it appears that the common perception is that MSFT has the strongest position to capitalize on Metaverse ecosystem opportunities. And even AAPL or GOOGL would be regarded as having a stronger position than Meta. We think this will be the wrong understanding. Firstly, during the 2012-2017, VR had a hype and bust phase whereby startups and big tech players either discontinued, stalled, or vastly slowed down their VR endeavours. Meta, on the other hand, continued with the same aggressive pace, investing huge amounts into VR R&D and product development. And now this relentless commitment is paying great dividends – a total of 10+ million Oculus headsets sold and similar sales trajectory to the iPhone in 2007/08. Furthermore, the number of Oculus app downloads is on a steep ascent and appears to be reaching that critical mainstream inflection point. Figure 4 - Oculus App Downloads This Week in Apps #94 - Apple Won Christmas 2021 (appfigures)VR is super hard to do, which is why the potential of VR appears to have been just around the corner for many years. So, Meta's commitment through the VR bust period now gives them a multi-year advantage versus big tech names like MSFT and AAPL. Furthermore, for other big tech it looks like the Metaverse is merely a new market opportunity whilst Zuckerberg is viewing Metaverse as literally an extension of the company's long-standing mission of ‘bringing the world closer'. And it's quite clear that VR/Metaverse being relatively more strategic to Meta, is why lots of talent in VR engineering has moved from MSFT and AAPL over to Meta, according to reports from Bloomberg and elsewhere. We've seen some analysis suggesting that MSFT's gaming ecosystem will enable them to dominate the Metaverse. Well, MSFT has been toying around with Windows Mixed Reality (WMR) for many years now to no avail. In 2017, MSFT made deals with seven OEMs to make VR headsets for WMR, though now only HP remains, which highlights 1) the lack of quality of WMR, and 2) the difficulties of doing the software and hardware integration if you don't control both in-house. So, WMR is definitely not MSFT's way in to carve out a robust Metaverse ecosystem. One would think that MSFT would attempt to capitalize on the Metaverse with their Xbox ecosystem, however, Xbox is not compatible with VR headsets (well, it does actually connect to some headsets but doesn't provide the immersive experience). And even if MSFT did make Xbox compatible, most of the incremental value would be captured by the VR headset vendors – simply because gaming already has the high-resolution 3D graphics, so to turn that into an immersive Metaverse experience you need the headset. MSFT's stronger way into the Metaverse is actually the enterprise route, as they have a pretty impressive mixed reality headset called HoloLens 2. The problem, however, is that it is an unpolished product absent of a robust developer ecosystem. As a result, so far, the customer implementations of HoloLens 2 appear to be the outcome of a bespoke project between MSFT and the customer. This presents a chicken and egg problem, as HoloLens 2 needs more developers to create applications to give it more of an out-the-box quality which to then increase its scale economies, but in being an unpolished product with little velocity it is not attracting many developers. Quest 2 is very affordable – compared to the starting price of $3,500 for HoloLens 2 – and already has 850 apps available (c. 340 apps of higher-quality vetted for the Quest 2 Store). Companies of size would be willing to buy a bunch of Quest 2 headsets just to experiment with and discover the best use cases, whereas with HoloLens enterprises would need to have a solid predetermined use case in mind before spending tens or hundreds of thousands of dollars. So, whilst MSFT will gain adoption within industrial sector enterprises needing HoloLens 2 for digital twin use cases (something like factory floors and 3D car models), we see Meta gaining faster and broader adoption across the corporate world, especially for general office-based requirements. To summarize, We think Meta's intent and vision is a more powerful reason for targeting the metaverse, The relentless commitment through the VR slowdown period now gives them a notable competitive lead, Meta has already developed tight hardware and software integration and has a flourishing developer ecosystem, VR talent moving away from AAPL and MSFT to Meta highlights the differences in strategic ambition, And the affordability of the Oculus Quest 2 headset combined with Meta making Metaverse access increasingly easier is building out a strong moat. Below is a brief summary of other potential Metaverse contenders. Figure 5 - Meta's Potential Competition ConvequitySentiment It might be the case that Meta always trades at a discount relative to big tech peers. They are in an unprecedented position of being a facilitator of free speech whilst also assuming the role of preventing misinformation and preventing bad deeds from occurring on their platforms, and as a consequence they are under immense media scrutiny and appear to always be surrounded by controversy. However, most of the troubles have occurred since Donald Trump rose to the presidency in 2016, which created a load of polarization and put Meta under even closer scrutiny. And in the grand scheme of things, this isn't many years' worth of experience in handling all these issues, so with time it's plausible to expect that Meta will get better at operating within this role. Conclusion Right now, if you were to buy Meta's stock, in effect, you're getting the Family of Apps division at a huge discount and getting Reality Labs (Oculus division) for free. 1Q22 was a tough quarter given the growth in users and revenue in 1Q21. And considering the current headwinds, the business appears to be navigating things very well. In the near term, investors should expect better monetization from Reels, further progress with server-side tracking and generating better ROI for advertisers (the major caveat being possible macro headwinds, however), and integrating Meta's social media platforms with Metaverse experiences. In the longer-term, Apple's privacy moves could be a blessing in disguise for Meta – leading to much tighter integration with advertisers, including them opting to setup payments on Facebook Shop, which would be a massive win-win for Meta and advertisers. Finally, we recently researched Adobe and discovered its highly impressive financials and business strategy; however, according to our valuation analysis, the stock is trading within a fair value range. This reminded us that ordinarily big tech names are so well covered that little alpha is up for grabs, unless of course, there have been extraordinary events that have impacted the business. This is why we view Meta as being one of the best long-term opportunities in a very long time.Become a member of Asymmetric Tech Investments to gain the information edge required to maximize long-term returns in the tech sector. Receive 50% time-limited legacy discount - this equates to $380 per year, fixed for life. Offer ends when we reach 50 members. | Personalization/Recommendation | Management/Business and Financial Operations | null | null | null | null | null | null |
|
news | Kyle Wiggers | Adept aims to build AI that can automate any software process | In 2016 at TechCrunch Disrupt New York, several of the original developers behind what became Siri unveiled Viv, an AI platform that promised to connect various third-party applications to perform just about any task. The pitch was tantalizing — but never fully realized. Samsung later acquired Viv, folding a pared-down version of the tech into […] | https://techcrunch.com/2022/04/26/2304039/ | 2022-04-26T15:00:19Z | In 2016 at TechCrunch Disrupt New York, several of the original developers behind what became Siri unveiled Viv, an AI platform that promised to connect various third-party applications to perform just about any task. The pitch was tantalizing — but never fully realized. Samsung later acquired Viv, folding a pared-down version of the tech into its Bixby voice assistant.Six years later, a new team claims to have cracked the code to a universal AI assistant — or at least to have gotten a little bit closer. At a product lab called Adept that emerged from stealth today with $65 million in funding, they’re — in the founders’ words — “build[ing] general intelligence that enables humans and computers to work together creatively to solve problems.”It’s lofty stuff. But Adept’s cofounders, CEO David Luan, CTO Niki Parmar and chief scientist Ashish Vaswani, boil their ambition down to perfecting an “overlay” within computers that works using the same tools people do. This overlay will be able to respond to commands like “generate a monthly compliance report” or “draw stairs between these two points in this blueprint,” Adept asserts, all using existing software like Airtable, Photoshop, Tableau and Twilio to get the job done.“[W]ere training a neural network to use every software tool in the world, building on the vast amount of existing capabilities that people have already created.” Luan told TechCrunch in an interview via email. “[W]ith Adept, youll be able to focus on the work you most enjoy and ask our [system] to take on other tasks … We expect the collaborator to be a good student and highly coachable, becoming more helpful and aligned with every human interaction.”From Luan’s description, what Adept is creating sounds a little like robotic process automation (RPA), or software robots that leverage a combination of automation, computer vision, and machine learning to automate repetitive tasks like filing forms and responding to emails. But the team insists that their technology is far more sophisticated than what RPA vendors like Automation Anywhere and UiPath offer today.“Were building a general system that helps people get things done in front of their computer: a universal AI collaborator for every knowledge worker … Were training a neural network to use every software tool in the world, building on the vast amount of existing capabilities that people have already created,” Luan said. “We think that AIs ability to read and write text will continue to be valuable, but that being able to do things on a computer will be significantly more valuable for enterprise … [M]odels trained on text can write great prose, but they cant take actions in the digital world. You cant ask [them] to book you a flight, cut a check to a vendor, or conduct a scientific experiment. True general intelligence requires models that can not only read and write, but act when people ask it to do something.”Adept isn’t the only one exploring this idea. In a February paper, scientists at Alphabet-backed DeepMind describe what they call a “data-driven” approach for teaching AI to control computers. By having an AI observe keyboard and mouse commands from people completing “instruction-following” computer tasks, like booking a flight, the scientists were able to show the system how to perform over a hundred tasks with “human-level” accuracy.Not-so-coincidentally, DeepMind cofounder Mustafa Suleyman recently teamed up with LinkedIn cofounder Reid Hoffman to launch Inflection AI, which — like Adept — aims to use AI to help humans work more efficiently with computers.Adept’s ostensible differentiator is a brain trust of AI researchers hailing from DeepMind, Google and OpenAI. Vaswani and Parmar helped to pioneer the Transformer, an AI architecture that has gained considerable attention within the last several years. Dating back to 2017, Transformer has become the architecture of choice for natural language tasks, demonstrating an aptitude for summarizing documents, translating between languages, and even classifying images and analyzing biological sequences.Among other products, OpenAI’s language-generating GPT-3 was developing using Transformer technology.“Over the next few years, everyone just piled onto the Transformer, using it to solve many decades-old problems in rapid succession. When I led engineering at OpenAI, we scaled up the Transformer into GPT-2 (GPT-3’s predecessor) and GPT-3,” Luan said. “Googles efforts scaling Transformer models yielded [the AI architecture] BERT, powering Google search. And several teams, including our founding team members, trained Transformers that can write code. DeepMind even showed that the Transformer works for protein folding (AlphaFold) and Starcraft (AlphaStar). Transformers made general intelligence tangible for our field.”At Google, Luan was the overall tech lead for what he describes as the “large models effort” at Google Brain, one of tech giant’s preeminent research divisions. There, he trained bigger and bigger Transformers with the goal of eventually building one general model to power all machine learning use cases, but his team ran into a clear limitation. The best results were limited to models engineered to excel in specific domains, like analyzing medical records or responding to questions about particular topics.“Since the beginning of the field, weve wanted to build models with similar flexibility as human intelligence-ones that can work for a diverse variety of tasks … [M]achine learning has seen more progress in the last five years than in the prior 60,” Luan said. “Historically, long-term AI work has been the purview of large tech companies, and their concentration of talent and compute has been unimpeachable. Looking ahead, we believe that the next era of AI breakthroughs will require solving problems at the heart of human-computer collaboration.”Whatever form its product — and business model — ultimately takes, can Adept succeed where others failed? If it can, the windfall could be substantial. According to Markets and Markets, the market for business process automation technologies — technologies that streamline enterprise customer-facing and back-office workloads — will grow from $9.8 billion in 2020 to $19.6 billion by 2026. One 2020 survey by process automation vendor Camunda (a biased source, granted) found that 84% of organizations are anticipating increased investment in process automation as a result of industry pressures, including the rise of remote work.“Adept’s technology sounds plausible in theory, [but] talking about Transformers needing to be ‘able to act’ feels a bit like misdirection to me,” Mike Cook, an AI researcher at the Knives & Paintbrushes research collective, which is unaffiliated with Adept, told TechCrunch via email. “Transformers are designed to predict the next items in a sequence of things, that’s all. To a Transformer, it doesn’t make any difference whether that prediction is a letter in some text, a pixel in an image, or an API call in a bit of code. So this innovation doesn’t feel any more likely to lead to artificial general intelligence than anything else, but it might produce an AI that is better suited to assisting in simple tasks.”It’s true that the cost of training cutting-edge AI systems is lower than it once was. With a fraction of OpenAI’s funding, recent startups including AI21 Labs and Cohere have managed to build models comparable to GPT-3 in terms of their capabilities.Continued innovations in multimodal AI, meanwhile — AI that can understand the relationships between images, text and more — put a system that can translate requests into a wide range of computer commands within the realm of possibility. So does work like OpenAI’s InstructGPT, a technique that improves the ability of language models like GPT-3 to follow instructions.Cook’s main concern is how Adept trained its AI systems. He notes that one of the reasons other Transformer models have had such success with text is that there’s an abundance of examples of text to learn from. A product like Adept’s would presumably need a lot of examples of successfully-completed tasks in applications (e.g., Photoshop) paired with text descriptions, but this data that doesn’t occur that naturally in the world.In the February DeepMind study, the scientists wrote that, in order to collect training data for their system, they had to pay 77 people to complete over 2.4 million demonstrations of computer tasks.“[T]he training data is probably created artificially, which raises a lot of questions both about who was paid to create it, how scalable this is to other areas in the future, and whether the trained system will have the kind of depth that other Transformer models have,” Cook said. “It’s [also] not a ‘path to general intelligence’ by any means … It might make it more capable in some areas, but it’s probably going to be less capable than a system trained explicitly on a particular task and application.”Even the best-laid roadmaps can run into unforeseen technical challenges, especially where it concerns AI. But Luan is placing his faith in Adept’s founding senior talent, which includes the former lead for Google’s model production infrastructure (Kelsey Schroeder) and one of the original engineers on Google’s production speech recognition model (Anmol Gulati).“[W]hile general intelligence is often described in the context of human replacement, thats not our north star. Instead, we believe that AI systems should be built with people at the center,” Luan said. “We want to give everyone access to increasingly sophisticated AI tools that help empower them to achieve their goals collaboratively with the tool; our models are designed to work hand-in-hand with people. Our vision is one where people remain in the driver’s seat: discovering new solutions, enabling more informed decisions, and giving us more time for the work that we actually want to do.”Greylock and Addition co-led Adept’s funding round. The round also saw participation from Root Ventures and angels including Behance founder Scott Belsky (founder of Behance), Airtable founder Howie Liu, Chris Re, Tesla Autopilot lead Andrej Karpathy, and Sarah Meyohas. | Process Automation/Digital Assistance/Content Synthesis | Unknown | null | null | null | null | null | null |
|
news | David | Intel Announces New Cloud-to-Edge Technologies to Solve Challenges of Today and Tomorrow | Today at its inaugural Intel Vision event, Intel announced advancements across silicon, software and services, showcasing how it brings together technologies and the ecosystem to unlock business value for customers today and in the future. | https://vmblog.com:443/archive/2022/05/10/intel-announces-new-cloud-to-edge-technologies-to-solve-challenges-of-today-and-tomorrow.aspx | 2022-05-10T18:46:00Z | Today at its inaugural Intel Vision event, Intel announced advancements across silicon, software and services, showcasing how it brings together technologies and the ecosystem to unlock business value for customers today and in the future. Among the benefits highlighted across real-world examples are improved business results and insights, reduced total cost of ownership, accelerated time to market and value, and positive global impact."We are in the most dynamic global market of our lifetime. The challenges organizations face today are complex and interconnected, and success depends on their ability to quickly adopt and maximize leading-edge technologies and infrastructure," said Intel CEO Pat Gelsinger. "Today, we are excited to share how we are applying our scale, resources and the magic of silicon, software and services to help customers and partners accelerate digital transformations in this complex environment."New Silicon, Software and Services to Fuel Digital TransformationsThe technology superpowers of artificial intelligence, ubiquitous computing, pervasive connectivity and cloud-to-edge infrastructure are fueling unprecedented demand for semiconductors and opening the door to infinite possibilities, from a truly hybrid workforce to entirely new immersive experiences. At the same time, businesses face growing pressures around supply chain, security, sustainability and the ability to adapt to the complexity of new workloads. Intel is working to help address these challenges with the introduction of new hardware, software and services from cloud to edge to client.Today's announcements include:A major leap in deep learning processing with Habana Gaudi2: Gaudi processors are used for the highest end deep learning AI training and are known for their ability to enable customers to train more and pay less. Launching today, Habana Gaudi2 and Greco AI accelerators are built on a single software stack, Synapse AI, that easily supports different architectures, enabling end-users to take advantage of the processors' performance and efficiency. In addition, Gaudi2 delivers two times better AI training performance compared with current in-market A100-based offerings for key vision and NLP workloads.4th Gen Intel Xeon Scalable sets a new standard for modern performance: Intel is shipping initial SKUs of 4th Gen Intel Xeon Scalable processors (code-named Sapphire Rapids) today. These are the first of many SKUs, with more due to ramp throughout the remainder of the year. The 4th Gen Intel Xeon Scalable processors deliver exceptional overall performance, will support DDR5, PCIe Gen5 and CXL 1.1, and are equipped with new integrated accelerators that deliver up to 30x performance versus the prior generation through software and hardware optimizations for AI workloads. For telco networks it also has new capabilities that deliver up to two times capacity gains for virtual radio access network (vRAN) deployments. In high performance computing, Intel Xeon processors, code-named Sapphire Rapids with high bandwidth memory (HBM), will dramatically boost memory bandwidth available to the processor, super-charging high-performance computing.AI made more accessible for enterprises through Project Apollo: In partnership with Accenture, Intel kicked off Project Apollo, a program that will provide enterprises with more than 30 open-source AI solutions kits that are optimally designed to make AI more accessible to customers in on-prem, cloud and edge environments. The first Project Apollo kits will be released in the next few months.Preparing for the data center of the future with IPUs: Intel unveiled its IPU roadmap extending through 2026, featuring new FPGA + Intel architecture platforms (code-named Hot Springs Canyon) and the Mount Morgan (MMG) ASIC, as well as next-generation 800GB products. IPUs are dedicated products with hardened acceleration for infrastructure compute needs, allowing businesses to accomplish tasks quicker and solve problems faster.Single GPU solution for media transcode, visual graphics and inference in the cloud: Intel's data center GPU, code-named Arctic Sound-M (ATS-M), is the industry's first discrete GPU with an AV1 hardware encoder. ATS-M is a versatile GPU with leadership transcode quality and performance targeting 150 trillion operations per second (TOPS). Developers will be able to easily design for ATS-M with an open software stack through oneAPI. ATS-M will be available in two form factors and in more than 15 system designs from partners including Dell Technologies, Supermicro,, Inspur, and H3C . It will launch in 2022's third quarter.New 12th Gen Intel Core HX processors for hybrid work: The company completed the 12th Gen family with the launch of the new 12th Gen Intel Core HX processors. Created for professionals who need maximum performance and flexibility to navigate a hybrid environment, and with up to 16 cores and clock speeds up to 5 GHz, the Intel Core i9-12900HX processor is the world's best mobile workstation platform.Recognizing that users also want the flexibility to tap into compute resources when and where they need them, Intel provided a first concept demonstration of its software infrastructure initiative: Project Endgame. Applications can take advantage of this software infrastructure layer that enables devices to harness computing resources from other devices within the network to provide an always-available, low latency, continual compute service. For example, a demanding GPU workload running on one device can sense and tap into additional graphics processing horsepower from a more powerful machine to enhance the user's experience. Project Endgame is in development, and Intel will begin beta testing the first milestones of the technology this year.Today's announcements also included an early look at steps Intel is taking to enable service models across the ecosystem. The introduction of the Intel On Demand service offering helps enterprises meet the needs of evolving workloads, product sustainability and opportunities to scale systems near the data. Currently offered through select partners HPE GreenLake, Lenovo TruScale and PhoenixNAP's Bare Metal Cloud, Intel introduced a new consumption business model to enable customers to align their infrastructure with their business needs and requirements.Working Together to Create World-Changing TechnologyThe power and positive impact of Intel's broad portfolio truly comes to life through examples that show not only how the hardware, software and services work together, but also the deep way that Intel collaborates with customers, partners and the ecosystem.Highlights from today include:High performance computing to solve the world's most complex challenges: Argonne National Laboratories is on track to deliver 2 exaflops of peak performance with the Aurora supercomputer running on the Intel Xeon processor, code-named Sapphire Rapids with High Bandwidth Memory (HBM), and the Intel data center GPU, code-named Ponte Vecchio, with Intel oneAPI providing developers seamless system integration. During today's opening keynote, Rick Stevens, laboratory director for Computing, Environment and Life Science of Argonne, showed the installation of the Aurora supercomputer for the first time, discussing how it will help solve some of humankind's most complex problems, such as more accurate climate predictions and discovery of new cancer treatments, while making exascale accessible for more research and development, and innovation.Confidence with confidential computing: In an increasingly dynamic regulatory landscape, global companies must address several considerations when determining how to use regulated data to effectively train and develop neural networks. Bosch and Intel collaborated on a research effort to develop a confidential AI solution that allows Bosch to train its neural networks confidentially in the public cloud. To help achieve this at scale, Bosch Corporate Research has built a confidential AI platform powered by Intel® Software Guard Extensions available with 3rd Gen Intel® Xeon® Scalable platforms.Agriculture autonomy with private wireless networks: Intelligent edge solutions have the potential to transform food cultivation by helping farmers increase yields and operational efficiency while addressing labor shortages and human error. Data analytics also offer the opportunity to deliver insights that help farmers to increase yields and improve crop health while reducing the resources they require. Blue White Robotics developed a new type of autonomous agricultural solution that transforms a grower's existing equipment into a fleet of autonomous tractors connected to an internet-based management platform. With help from Intel and Federated Wireless, Blue White Robotics made this a scalable solution that leverages Intel® Smart Edge and Intel® Xeon® D processors, and employs the power of edge computing and shared spectrum to create a private wireless network on any farm anywhere.Frictionless retail experience: The pandemic has changed the way people want to shop, with many preferring stores with touchless or self-checkout options. Nourish + Bloom Market set out to design a frictionless shopping experience that embraced automation without replacing jobs. To accomplish this, Nourish + Bloom worked with Intel and UST, a leading transformation solutions company, to tap into the collective technical knowledge for building innovations like a next-generation self-checkout using computer vision technology and a fully autonomous store shopping experience.Tech for good: Intel works across the ecosystem to drive positive global change for future generations, such as working to further reduce its direct and indirect greenhouse gas emissions and ensuring the brightest future and next-generation skills for tomorrow's workforce through programs like Intel's AI Festival and collaboration with the Hidden Genius Project and Autodesk. | Content Synthesis/Decision Making/Discovery | Business and Financial Operations/Management | null | null | null | null | null | null |
|
news | Kyle Alspach | DoControl raises $30M to automate SaaS security | DoControl, which offers a platform for securing data in SaaS applications, announced it has raised a $30 million series B funding round. | https://venturebeat.com/2022/04/14/docontrol-raises-30m-to-automate-saas-security/ | 2022-04-14T12:00:00Z | We are excited to bring Transform 2022 back in-person July 19 and virtually July 20 - 28. Join AI and data leaders for insightful talks and exciting networking opportunities. Register today!DoControl, which today announced raising a $30 million series B round, said the funding will go toward accelerating the market expansion for its automated software-as-a-service (SaaS) security platform which is already used by customers including cybersecurity firms such as CrowdStrike, Devo and Armis.The platform stands out with capabilities such as automated workflows that can be triggered using any SaaS event which makes our solution the most granular in the SaaS security industry, said Adam Gavish, cofounder and CEO at DoControl, in an email to VentureBeat.According to a recent report from Productiv, the average company now uses 254 SaaS apps and enterprises are even higher, with 364 SaaS apps being used on average. But while many businesses are increasingly relying on SaaS for sharing data, the problem is that these shared files usually remain shared and accessible forever thus overexposing your company to multiple security threats and increasing the likelihood of data loss over time, Gavish said.DoControls platform, which entered general availability in August 2021, addresses the issue with automation, he said. The platform provides the ability to trigger automated workflows to revoke data access over time, across different internal groups and external collaborators; quickly remediate data exposure with a few clicks; and perform security investigations using SaaS metadata inventory, according to Gavish.Along with the ability to trigger workflows with any SaaS event, the workflows can also be conditioned based on any of the event metadata so you can narrow down the scope to solve any business use case, he said.Meanwhile, DoControl performs classification of personally identifiable information (PII) on the fly as part of an automated workflow so that the PII findings are associated with clear business context to avoid alert fatigue, Gavish said.The DoControl platform currently integrates with SaaS applications from Microsoft (Teams, SharePoint and OneDrive) as well as Google Drive, Salesforce, Slack, Dropbox, Box, Okta and GitHub.Among the startups goals for the new funding is to add integrations with more SaaS applications, including Jira, Zoom, Monday and Workday.DoControl is also looking to grow its customer base, which currently consists of more than 30 customers. In addition to CrowdStrike, Devo and Armis, other named customers include IronNet, Minted, BigPanda, Sisense, Similarweb, Papaya Global, Crossbeam and Snappy.To help drive sales, the company intends to use some of the new funding to roll out channel and other partnership programs, according to Gavish.New York-based DoControl currently employs 47, and expects to reach 70 employees by the end of the year.The series B round was led by Insight Partners. Existing investors RTP Global, StageOne Ventures, Cardumen Capital the CrowdStrike Falcon Fund also participated. DoControl has now raised a total of $45 million in funding since its launch in 2020.Along with Gavish, formerly a product manager at Google Cloud, DoControls founders are chief revenue officer Omri Weinberg (formerly a general manager at SafeDK) and CTO Liel Ran (previously a software and cloud architect at Amenity Analytics).VentureBeat's mission is to be a digital town square for technical decision-makers to gain knowledge about transformative enterprise technology and transact. Learn more about membership. | Unknown | Unknown | null | null | null | null | null | null |
|
news | Lanner Electronics Launches Falcon H8 PCIe AI Accelerator Card, Powered by Hailo-8™AI Processors | Lanner Electronics & Hailo collaborate on one of the most cost-efficient PCIe accelerator cards on the market, with record high tera operations per second (TOPS), enabling high-end deep learning applications on edge servers NEW TAIPEI, Taiwan and TEL AVIV, Israel, May 10, 2022... | https://www.prnewswire.com/news-releases/lanner-electronics-launches-falcon-h8-pcie-ai-accelerator-card-powered-by-hailo-8ai-processors-301543753.html | 2022-05-10T12:42:00Z | Lanner Electronics & Hailo collaborate on one of the most cost-efficient PCIe accelerator cards on the market, with record high tera operations per second (TOPS), enabling high-end deep learning applications on edge servers NEW TAIPEI, Taiwan and TEL AVIV, Israel, May 10, 2022 /PRNewswire/ -- Lanner Electronics, a global leader in the design and manufacturing of intelligent edge computing appliances, announced its first Hailo-8AI-powered PCIe accelerator card, the Falcon H8. Lanner collaborated with leading AI (Artificial Intelligence) chipmaker Hailo to design the Falcon H8, enabling scalable and powerful intelligent video analytics applications for multiple industries operating at the edge, including intelligent transport systems (ITS), smart cities, smart retail, and Industry 4.0. The Falcon H8 is one of the most cost-efficient PCIe AI accelerator cards on the market, with a low power consumption and record high of up to 156 tera operations per second (TOPS) to allow high-end deep learning applications on edge servers. Lanner's Falcon H8 modular, PCIe FHHL form factor provides a compact and easily deployable solution for engineers looking to offload CPU loading for low-latency deep learning inference. With high-density AI processors, the Falcon H8 accommodates 4, 5, or 6 Hailo-8 AI processors, offering a modular, cost-effective Edge AI solution with high processing capabilities and power efficiency. Through a standard PCIe interface, the Falcon H8 AI Accelerator Card enables legacy devices such as NVRs, Edge AI boxes, Industrial PCs and robots to run video-intensive, mission-critical Edge AI applications such as video analytics, traffic management, access control, and beyond.The Falcon H8 delivers unprecedented inference processing of over 15,000 Frames Per Second (FPS) for MobileNet-v2 and 8,000 FPS for ResNet-50. Its performance is up to 4x more cost effective (TOPS/$) and 2x more power efficient (TOPS/W) compared to leading GPU-based solutions."Optimized for AI functionality, performance, and ease of deployment, Lanner is pleased to partner with Hailo to design a next-gen AI accelerator card that brings top-performing AI computing to the edge of industrial IoT," said Jeans Tseng, CTO of Lanner Electronics. "Our expertise in creating high-density hardware platforms, combined with Hailo's state-of-the-art neural chip and software framework, provides service providers and system integrators a best-in-class AI accelerator that enables running deep learning applications most efficiently with the lowest total cost of ownership.""The integration of Lanner's Falcon H8 and the Hailo-8 provides unmatched AI performance at the edge. This joint solution is more powerful, scalable, and cost-effective than other solutions available on the market today," said Orr Danon, CEO and Co-Founder of Hailo. "Our collaboration with Lanner will better power edge devices across industries, including transportation, smart cities, smart retail, industrial IoT, and more."About Lanner ElectronicsLanner Electronics is a world-leading hardware provider for advanced network appliances, ruggedized edge AI appliances. Lanner's Edge AI hardware platforms bring proven reliability, with a purpose-built design that can withstand the distinct challenges of the industrial edge and enable mission-critical applications such as video analytics, traffic management, access control, and beyond.About HailoHailo, an AI-focused, Israel-based chipmaker, has developed a specialized Artificial Intelligence (AI) processor that delivers the performance of a data center-class computer to edge devices. Hailo's AI processor reimagines traditional computer architecture, enabling smart devices to perform sophisticated deep learning tasks such as object detection and segmentation in real time, with minimal power consumption, size, and cost. Supported by its Hailo-8 M.2 and Mini PCIe high-performance AI acceleration modules, the deep learning processor is designed to fit into a multitude of smart machines and devices, impacting a wide variety of sectors including automotive, industry 4.0, smart cities, smart homes, and retail.Press ContactsLanner ElectronicsBrian ChenMarketing [email protected]+886-2-8692-6060HailoGarrett KrivicichHeadline Media [email protected]+1 786 233 7684 SOURCE Lanner Electronics, Inc | Unknown | Computer and Mathematical/Architecture and Engineering | null | null | null | null | null | null |
||
news | ServiceTitan Launches Titan Intelligence, Bringing AI To The Trades | The Next Generation of ServiceTitan's Operating System for the Trades Helps Contractors Harness the Power of their Data to Predict Outcomes, Automate Tasks, and Embrace Digital Transformation LOS ANGELES, April 22, 2022 /PRNewswire/ -- Pantheon 2022 - ServiceTitan, a leading cloud-based... | https://www.prnewswire.com/news-releases/servicetitan-launches-titan-intelligence-bringing-ai-to-the-trades-301530828.html | 2022-04-22T16:30:00Z | The Next Generation of ServiceTitan's Operating System for the Trades Helps Contractors Harness the Power of their Data to Predict Outcomes, Automate Tasks, and Embrace Digital TransformationLOS ANGELES, April 22, 2022 /PRNewswire/ -- Pantheon 2022 - ServiceTitan, a leading cloud-based software platform built to power the trades, today announced the launch of Titan Intelligence (TI), a suite of AI solutions for the trades. These new features provide home and commercial service providers with actionable insights and recommendations, enabling them to optimize, automate, predict, and innovate across every aspect of their business by harnessing the power of data. Natively built into ServiceTitan, Titan Intelligence will deliver trade-specific AI capabilities across its entire suite of new data products and feature enhancements. With aggregated intelligence and insights, and educational content products on data and analytic literacy, Titan Intelligence will evolve into powerful AI solutions that supercharge trades businesses and empower contractors to reach the level of success they deserve.The trades industry is at an inflection point as trends in data analytics, machine learning, and artificial intelligence are changing how the world operates. With Titan Intelligence, businesses on the ServiceTitan platform will be able to unlock the power of their own data to help automate repetitive and simple tasks, improve customer experience, predict outcomes, lower costs, and stay competitive in a world run by data. Over 7,500 ServiceTitan customers will have the equivalent of their own personal consultant built directly into the platform, helping contractors connect the dots throughout their business to extract meaningful and actionable insights."Titan Intelligence is bringing the power of data and AI to the trades on a scale that has never been seen before," said Vahe Kuzoyan, co-founder and President of ServiceTitan. "This industry will no longer have to wait years, or even decades, to take advantage of modern technology innovations. The launch of Titan Intelligence represents a giant leap forward for the trades, and another example of ServiceTitan's commitment to provide contractors with access to the best technology to fuel their business."Several new ServiceTitan product updates and features powered by Titan Intelligence were also announced today at Pantheon 2022, including:Smart Dispatch (Beta) is ServiceTitan's new dispatch tool, which uses machine learning to analyze job data and assist in assigning technicians for jobs based on factors such as geographical zones, skills, drive time, and technician's sales performance to help save time and maximize revenue.Price Insights is a feature delivered through ServiceTitan Pricebook Pro, allowing contractors to easily see average pricing in their region, enabling them to competitively price their services based on the average cost for similar services locally.Marketing Pro Ads (Beta) enables home service marketers to lower their cost per lead and maximize ROI. By leveraging Google Ads' measurement tools and dynamic call tracking features, ServiceTitan customers are able to see the true impact of their Google Ads budgets and then optimize and automate the flow of key revenue, job, and audience data back into Google."The trades industry is undergoing rapid digital transformation, and we want to equip our customers with the trades-specific tools and valuable data-driven insights they need to meet this critical moment head on," said Ara Mahdessian, co-founder and CEO of ServiceTitan." We believe that all tradespeople deserve not only the best, but also the most innovative technology available to power their businesses that power our lives."To learn more about Titan Intelligence and all of ServiceTitan's new tools and features visit: https://www.servicetitan.com/features/titan-intelligenceServiceTitan is a cloud-based software platform built to power trades businesses. The company's end-to-end solution gives contractors the tools they need to run and grow their business, manage their back office, and provide a stellar customer experience. By bringing an integrated SaaS platform to an industry historically underserved by technology, ServiceTitan is equipping tradespeople with the technology they need to keep the world running. ServiceTitan is backed by world-class investors including Bessemer Venture Partners, ICONIQ Growth, Index Ventures, Tiger Global Management, and T. Rowe Price.SOURCE ServiceTitan | Process Automation/Prediction/Recommendation | Business and Financial Operations/Management | null | null | null | null | null | null |
||
news | Sharon Goldman | Beyond chatbots: How conversational AI makes customer service smarter | Now, machines can not only better understand the words being said, but the intent behind them, while also being more flexible with responses. | https://venturebeat.com/2022/04/21/beyond-chatbots-how-conversational-ai-makes-customer-service-smarter/ | 2022-04-21T19:51:00Z | We are excited to bring Transform 2022 back in-person July 19 and virtually July 20 - 28. Join AI and data leaders for insightful talks and exciting networking opportunities. Register today!Over the past few years, weve all encountered Lets chat! buttons on websites that promise a quick, helpful customer service experience. But heavily-hyped AI-driven chatbots, an important part of the CX mix since 2016, have also proven to be a mixed bag. Consumers found many bot interactions disappointing and time-consuming. Meanwhile, enterprises often needed to provide far more costly care and feeding of chatbots than expected. Thanks to open-source AI language models such as Googles BERT and Microsoft/Open AIs GBT, it is now far easier for organizations and technology software vendors to build on top of these innovations. They can create more sophisticated conversational AI tools, from smarter chatbots and asynchronous messaging to voice and mobile assistants. These days, deep learning models can be designed very quickly. And, depending on how theyre done, they might need only a small amount of training data, Hayley Sutherland, senior research analyst for conversational AI at IDC, told VentureBeat. Over the last two or three years, the ability for machines to understand both written and spoken human language has really, really improved, she said. The technology has moved quite a bit past what we would think of as a rules-based, scripted approach where a human manually writes a rigid script that, if it goes outside of that, can break easily and increase frustration. Now, machines can not only better understand the words being said, but the intent behind them, while also being more flexible with responses. That means we can create much more sophisticated virtual assistants or customer care agents, whether they are text-based or voice-based, Sutherland said. Solving for conversational customer challengesDerek Roberti is VP of technology, North America at Cognigy, a low-code, flexible conversational AI platform based in Germany. It was named a leader in Gartners Magic Quadrant for Enterprise conversational AI platforms, along with companies such as Kore.ai, OneReach.ai and Amelia. According to Gartner, the enterprise conversational AI platform market focuses on the needs of larger enterprises by targeting multiple use cases, modalities of conversation (such as speech, chat, text messaging and email) and the ability to operationalize within the enterprise. Overall, the conversational AI market in the customer service space is divided into three key categories, Roperti explained. The first are conversational AI specialists, with platforms that have user interfaces tailored for both the technical and non-technical user; out-of-the-box integrations; and a wide variety of channels. Those are the ones that Gartner has called out as leaders in the space, he said. Next, there are the giant cloud providers, such as Microsoft, Google, Amazon and IBM. These offer core services, perhaps translation, or natural language understanding, or speech-to-text, but dont necessarily have that set of user interfaces and prebuilt components, he added. Finally, there are also thousands of other, smaller market players taking advantage of open source innovations to provide off-the-shelf tools with varied levels of sophistication. All of these companies, across categories, are working to solve the same problem, said Roberti. That is, to create first-class customer experiences, particularly with tooling accessible to both the non-technical and the technical builder. How can we empower people to build automated interactions that are welcoming, easy to get started with and lets you build out even the most advanced conversations? he explained. Conversational AI targets two types of customer service buyersRoberti cites two primary types of buyers in the market for conversational AI tools for customer service and support. First, there are buyers who own the contact center or customer-facing support systems. These are generally non-technical buyers, he said. These buyers may have never worked with conversational AI before, or dont have developer resources, Sutherland added. Thats why were increasingly seeing these kinds of low-code or no-cold tools, she said. You can have someone who isnt a developer but is an expert in customer conversations who knows what a good conversation looks like, who can help to train and check on the capabilities of that conversational AI thats being built and really ensure that the human element is there. On the other hand, there are more technical buyers, including enterprise architects who get requests from every part of the organization for chatbot and voice automation capabilities, Roberti explained. Theyre looking for a platform that can be used across the enterprise. They do care about the user interfaces, but they also care about how the tools will integrate into other systems and how it works within security and compliance ecosystems. Context-aware conversational AI is essential Quiq is a Bozeman, Montana-based AI-powered conversational platform that enables brands to engage customers on the most popular asynchronous text messaging channels. According to founder and CEO Mike Myer, first-generation chatbots lacked good natural language capabilities and often did not allow customers to access the right data. They also had very little context awareness to boost personalization, Myer explained: For example, If you recently completed a purchase and a couple days later you come back to the website, its helpful if the chat box actually says Welcome back. Do you have a question about the order you placed yesterday? Now, however, conversational AI technologies, such as the underlying AI-driven natural language capabilities, are reaching a plateau. The difference between vendors when it comes to natural language understanding is imperceptible from a customer perspective, he explained. What is different now is the quality of implementation, the design, how much training has gone into it. The UX, he said, has become the big differentiator. Acquisitions lead to holistic conversational offeringsIts a sign of the massive, fragmented conversational AI market in the customer service space, as well as the VC money flowing into it, that Sutherland told VentureBeat that she had not heard of Quiq. That is even though the company recently announced a $25 million Series C funding round and last year acquired Snaps, another conversational AI tool. Thats very characteristic of this space right now, the really big infusions of VC money, Sutherland said. We really are at an inflection point where well start to hit some consolidation. In fact, acquisitions have become a regular occurance in the space. Beerud Sheth, co-founder and CEO of messaging leader Gupshup, recently announced three conversational AI acquisitions, including Active.ai and AskSkid, while adding there are another two in the pipeline. We constantly evaluate in terms of technology, so if we find some other company thats done something interesting that augments what we do we will happily consider it, Sheth said, explaining that the acquisitions become part of a holistic platform, as conversational AI becomes part of what every business will need. Sutherland also says a smaller conversational AI company, Uniphore, is making interesting acquisitions to round out their AI-driven offerings. One was a company using AI to analyze video and help salespeople understand customer sentiment, she explained. At the time, Uniphore was mainly focused on customer care, but now theres this sales-focused conversational AI. The idea is that companies might eye the rest of the customer funnel. Tight labor market leads to bigger push for smarter conversational AI A tight labor market is driving conversational AI growth in customer service, said Roberti. In the early days of the pandemic, many contact center agents were let go, for example. Now, even if they could hire as many people as they needed to at 25% more in compensation, staffing is not available, he said. So companies are being pushed towards automation as an imperative, as a matter of survival. The good news is, the latest in conversational AI for customer service has the potential to improve the image of an industry previously filled with unhelpful chatbots, Roberti said: I would say if you did a customer satisfaction survey about chatbots and voice bots in January of this year and compare it to January 2023, you will see a much more favorable reaction. VentureBeat's mission is to be a digital town square for technical decision-makers to gain knowledge about transformative enterprise technology and transact. Learn more about membership. | Digital Assistance/Content Synthesis | Unknown | null | null | null | null | null | null |
|
news | Tim Keary | AlphaMountain.ai raises $2.7M to launch AI-driven threat intelligence feeds | AlphaMountain.ai’s solution offers coverage against malicious threats and provides access to high quality domain and intelligence feeds. | https://venturebeat.com/2022/04/20/alphamountain-ai-raises-2-7m-to-launch-ai-driven-threat-intelligence-feeds/ | 2022-04-20T20:02:37Z | We are excited to bring Transform 2022 back in-person July 19 and virtually July 20 - 28. Join AI and data leaders for insightful talks and exciting networking opportunities. Register today!Today, AlphaMountain.ai announced it has closed a seed funding round led by Mercato Partners Prelude Fund with $2.7 million in funding. The organization intends to use the funding to develop the sophistication of its threat intelligence.AlphaMountain.ais AI platform is designed to provide up-to-date domain and IP intelligence feeds for security platforms. Its intelligence feeds are based on continuously trained AI models that provide coverage for over 1 billion domains and IPs, and can be used for measuring malicious risk and web reputation, content classification, and detecting cyber squatting or phishing. For enterprises, AlphaMountain.ais solution offers a wider range of coverage against malicious threats and provides access to high quality domain and intelligence feeds, with higher accuracy than existing legacy solutions. Threat intelligence: working smarter not harder Over the past decade, threat intelligence solutions have become an integral part of the enterprise security strategy. However, most of the threat intelligence that organizations rely on is outdated. Research highlights that 90% of CISOs rely on outdated, report-based threat intelligence that is too old to inform decisions. Alpha mountain.ais platform is designed to address this challenge by providing organizations security platforms with access to more comprehensive and accurate AI-driven intelligence feeds so that security teams can work smarter and not harder when managing threats. Whats unique about AlphaMountain aside from having two of the brightest minds in network security and machine learning at the helm is that they are actually using AI. Many cyber tools claim to use AI but are actually using static based human authored rules, said Zach Sivertson, partner at Mercato Partners Prelude Fund. Even at this early stage, AlphaMountain.ais technology is very impressive, and partners can tell a difference in the quality of data, Sivertson said. The global threat intelligence market While AlphaMountain.ais seed funding round is modest, the organization has already had some major wins since it was founded in 2020. For instance, the firm has already partnered with Cisco and the Cisco Secure Technical Alliance to enable security for major events, including the Super Bowl earlier this year, and Black Hat USA last year. Its also entering a growing market, with researchers valuing the threat intelligence market at $5.28 billion in 2020 and anticipating it will reach $13.9 billion by 2026 as more organizations look to stay up-to-date on the latest threats. The organization is competing against a range of providers including CrowdStrike with Falcon X, an automated threat intelligence solution for automatically investigating security incidents and supplying organizations with threat reports and research from CrowdStrike experts, which recently announced annual recurring revenue of $1.73 billion. Another competitor is Fortinet with FortiGuard Labs, a threat intelligence platform thats designed to feed threat intelligence directly into Fortinet security products, and the enterprises wider environment. Recently reporting a revenue of $3.34 billion. However, AlphaMountain.ais AI pedigree puts it in a strong position to present itself to the market as one of the most extensive and accurate threat intelligence solutions. VentureBeat's mission is to be a digital town square for technical decision-makers to gain knowledge about transformative enterprise technology and transact. Learn more about membership. | Content Synthesis/Detection and Monitoring/Recommendation | Computer and Mathematical/Management | null | null | null | null | null | null |
|
news | Synthesis AI Raises a $17 Million Series A To Expand Its Synthetic Data Platform for Computer Vision AI | Led by 468 Capital, the investment will drive team and product growth to further establish Synthesis AI as the leader in synthetic data SAN FRANCISCO, April 28, 2022 /PRNewswire/ -- Synthesis AI, a pioneer in synthetic data technologies to build advanced computer vision AI models, today... | https://www.prnewswire.com/news-releases/synthesis-ai-raises-a-17-million-series-a-to-expand-its-synthetic-data-platform-for-computer-vision-ai-301534781.html | 2022-04-28T12:00:00Z | Led by 468 Capital, the investment will drive team and product growth to further establish Synthesis AI as the leader in synthetic dataSAN FRANCISCO, April 28, 2022 /PRNewswire/ -- Synthesis AI, a pioneer in synthetic data technologies to build advanced computer vision AI models, today announced it has closed $17 million in Series A financing led by new investor 468 Capital, with additional participation from Sorenson Ventures and Strawberry Creek Ventures and existing investors, Bee Partners, PJC, iRobot Boom Capital and Kubera Venture Capital.The latest round brings Synthesis AI's total funding to over $24 million. The new funds will allow Synthesis AI to grow its world-class team and introduce new products to enable companies to build more advanced computer vision models faster. The company also plans to expand research surrounding the intersection of CGI and AI with a focus on neural rendering, mixed training (real and synthetic), and modeling of complex human behavior."Synthesis AI is uniquely positioned to win in the emerging synthetic data space. The breadth and depth of Synthesis AI's platform, the quality of the team, and the extensive list of Fortune 50 customers firmly establish Synthesis AI as a category leader, " states Florian Leibert, partner at 468 Capital. "We are excited to support Synthesis AI as they push forward their vision to transform how AI models are fundamentally developed."Synthesis AI's leading technology and proven customer traction with leading AI and technology companies were critical to the company's successful round. The company has a track record of innovation and, over the last year, the company recorded several noteworthy firsts in the industry. The company released the first book on synthetic data, produced the first white paper surrounding facial analysis with synthetic data, published the first industry survey, and launched the first self-serve product (HumanAPI) in the space that has delivered well over 10 million generated images.Accelerating Company MomentumThe Series A financing follows the launch of OpenSynthetics, the first dedicated community for creating and using synthetic data in AI/ML and computer vision with centralized access to synthetic datasets, research, papers, and code. Through OpenSynthetics, AI/ML practitioners, regardless of experience, can share tools and techniques for creating and using synthetic data to build more capable AI models and work to power the next generation of computer vision.Additionally, Synthesis AI is expanding its HumanAPI solution to support the development of advanced digital humans, with new functionalities for pose estimation, action recognition, and high-density landmarks. The new capabilities enable advanced applications in the metaverse, Augmented Reality (AR), Virtual Reality (VR), and others in generated media, home & enterprise security, and AI fitness."Last year was a momentous year, and we're excited to continue growing our teams and products with the support of our terrific investors," said Yashar Behzadi, CEO of Synthesis AI. "Synthetic data is at an inflection point of adoption, and our goal is to develop the technology further and drive a paradigm change in how computer vision systems are built. The industry will soon fully design and train computer vision models in virtual worlds enabling for more advanced and ethical AI."Synthesis AI was recently recognized as #4 in Fast Company's prestigious global list of the most innovative small companies and as one of the top 10 breakthrough technologies of 2022 by MIT Technology Review.To learn more about Synthesis AI, visit https://synthesis.ai/. About Synthesis AI Synthesis AI, a San Francisco-based technology company, is pioneering the use of synthetic data to build more capable and ethical computer vision models. Through a proprietary combination of generative neural networks and cinematic CGI pipelines, Synthesis' platform can programmatically create vast amounts of perfectly-labeled image data at orders of magnitude increased speed and reduced cost compared to current approaches. Synthesis AI's customers include Fortune 500 technology, AR/VR/metaverse, automobile, teleconferencing, and AI companies.SOURCE Synthesis AI | Content Creation/Content Synthesis/Process Automation | Computer and Mathematical | null | null | null | null | null | null |
||
news | BenevolentAI Begins Trading On Euronext Amsterdam | LONDON, April 25, 2022 /PRNewswire/ -- BenevolentAI (Euronext Amsterdam: BAI), a leading, clinical-stage AI-enabled drug discovery company, announces that trading in its shares is expected to begin today, following completion of the business combination of BenevolentAI Limited with... | https://www.prnewswire.com/news-releases/benevolentai-begins-trading-on-euronext-amsterdam-301531567.html | 2022-04-25T05:00:00Z | LONDON, April 25, 2022 /PRNewswire/ -- BenevolentAI (Euronext Amsterdam: BAI), a leading, clinical-stage AI-enabled drug discovery company, announces that trading in its shares is expected to begin today, following completion of the business combination of BenevolentAI Limited with Odyssey Acquisition S.A. ("Odyssey") on 22 April 2022 (the "Business Combination").HighlightsThe combined company now operates under the name BenevolentAI and will trade on Euronext Amsterdam under the ticker symbol BAIGross proceeds from the Business Combination of 225 million will put BenevolentAI in a strong position to accelerate the development of its clinical pipeline and deliver multiple value inflection points in the near futureBenevolentAI also intends to continue investing in its technology platform and consolidate its position in AI-enabled drug discoveryOlivier Brandicourt, the former CEO of Sanofi, and Jean Raby, the former CEO of Natixis Investment Managers, have joined the board of BenevolentAIBackgroundBenevolentAI is a leading, clinical-stage AI-enabled drug discovery company headquartered in London, with a research facility in Cambridge (UK) and an office in New York. BenevolentAI, employs a total of over 300 people.Through the combined capabilities of its AI platform, scientific expertise, and wet-lab facilities, BenevolentAI aims to deliver novel drug candidates with a higher probability of clinical success than those developed using traditional methods.The Benevolent Platform is a disease-agnostic drug discovery platform that powers a growing in-house pipeline of over 20 drug programmes, spanning from target discovery to clinical studies across multiple therapeutic indications. BenevolentAI also maintains successful collaborations with leading pharmaceutical company AstraZeneca, alongside collaborations with research and charitable institutions.The Business Combination provides BenevolentAI with a multi-year cash runway to continue development of the Benevolent Platform and progress its pipeline of drug candidates. In particular, BenevolentAI aims to complete the Phase I/II trial for BEN-2293 (atopic dermatitis) to make it ready for out-licensing, as well as to complete the Phase I trial for BEN-8744 (ulcerative colitis) in 2023 before commencing a Phase II trial.The BenevolentAI business model leverages the Benevolent Platform to generate new drug IP at scale and accelerate multiple novel assets to IND stage to commercialise in-house or through out-licensing agreements. Separately, BenevolentAI will seek to enter into selective strategic platform collaborations to drive greater shareholder value in disease areas that it would not currently select for in-house clinical development.Joanna Shields, Chief Executive Officer of BenevolentAI, said: "At BenevolentAI, we set out to build a company that can bring about real transformation in drug R&D by leveraging AI and advanced technologies to improve drug efficacy, decrease failure rates and accelerate the development of new medicines. BenevolentAI is well on its way to achieving this mission.Our revolutionary approach empowers scientists to better understand disease biology and discover more effective therapies. Today, the Benevolent Platform is fully operational, scientifically validated, and producing results, demonstrated by our in-house pipeline of over 20 platform-generated drug candidates and successful collaborations. This listing underscores our commitment to becoming a key player in the global drug discovery sector."Dr François Nader, Chairman of BenevolentAI, said: "BenevolentAI is strongly positioned to unlock a new level of growth as a public company and advance its ambitious plans to scale its innovative R&D platform. The next chapter for BenevolentAI is a compelling one, as we look to strengthen our position within the AI-driven drug discovery sector and advance our mission to deliver life-changing medicines to patients. I also take pleasure in welcoming Olivier Brandicourt and Jean Raby to our Board they bring immense experience and expertise, which will be invaluable as we pursue our value creation strategy."Michael Zaoui, former Chairman of Odyssey, said: "The completion of the Business Combination and BenevolentAI's listing on Euronext Amsterdam marks the final step of Odyssey's journey. Again, we would like to thank our shareholders as well as all those who contributed to this result. BenevolentAI now starts its life as a public company, and we have every confidence that it will thrive in the important and exciting space of AI-led drug discovery."Transaction summaryThrough the Business Combination, BenevolentAI has received gross proceeds of 225 million.The issued share capital of BenevolentAI now amounts to 145,126.303, divided into 137,626,303 Ordinary Shares and 7,500,000 class B shares (the "Sponsor Shares"). Following redemption of Ordinary Shares by their holders prior to closing, BenevolentAI will hold 25,137,581 Ordinary Shares in treasury.On 25 April 2022, 5,000,000 Sponsor Shares automatically converted on a 1-to-1 basis into 5,000,000 Ordinary Shares. As a result, the share capital of BenevolentAI remains 145,126.303, divided into 142,626,303 Ordinary Shares and 2,500,000 Sponsor Shares.Certain BenevolentAI shareholders that were shareholders in BenevolentAI Limited prior to the Business Combination and certain shareholders of Odyssey's Ordinary Shares are subject to a 180-day lock-up until 19 October 2022 and Odyssey's sponsor entity is subject to a 365-day lock-up until 22 April 2023, subject to terms set forth in the respective lock-up agreements. For more details on transaction rationale, transaction structure, and lock-up terms, please refer to the prospectus dated 22 April 2022, which can be found on the Investor section of www.benevolent.com. ABOUT BENEVOLENTAIBenevolentAI (AMS: BAI) is a leading, clinical-stage AI-enabled drug discovery company listed on the Euronext Amsterdam stock exchange. Through the combined capabilities of its AI platform, scientific expertise, and wet-lab facilities, BenevolentAI is well-positioned to deliver novel drug candidates with a higher probability of clinical success than those developed using traditional methods. BenevolentAI has a consistently proven track record of scientifically validated discoveries. The BenevolentAI Platform powers a growing in-house pipeline of over 20 drug programmes, spanning from target discovery to clinical studies, and it maintains successful collaborations with AstraZeneca, as well as leading research and charitable institutions. BenevolentAI is headquartered in London, with a research facility in Cambridge (UK) and a further office in New York.ADVISORSGoldman Sachs International served as exclusive financial advisor to BenevolentAI in connection with the business combination. Goldman Sachs International also acted as placement agent on the PIPE. Latham & Watkins (London) LLP, NautaDutilh Avocats Luxembourg S.a r.l. and NautaDutilh N.V. served as legal advisors to BenevolentAI.J.P. Morgan SE and Zaoui & Co served as financial advisors to Odyssey. J.P. Morgan SE also acted as placement agent on the PIPE. Skadden, Arps, Slate, Meagher & Flom (UK) LLP, ELVINGER HOSS PRUSSEN, société anonyme and Stibbe N.V. served as legal advisors for Odyssey.Linklaters LLP served as legal advisor to the placement agents on the PIPE.DISCLAIMER:These materials may not be published, distributed or transmitted in the United States, Canada, Australia or Japan. These materials do not constitute an offer of securities for sale or a solicitation of an offer to purchase securities of BenevolentAI (the "Securities") in the United States, Australia, Canada, Japan or any other jurisdiction in which such offer or solicitation is unlawful. The Securities may not be offered or sold in the United States absent registration or an exemption from registration under the U.S. Securities Act of 1933, as amended (the "Securities Act"). There will be no public offering of the Securities in the United States. The Securities have not been, and will not be, registered under the Securities Act. The Securities referred to herein may not be offered or sold in Australia, Canada or Japan or to, or for the account or benefit of, any national, resident or citizen of Australia, Canada or Japan, subject to certain exceptions.This publication constitutes neither an offer to sell nor a solicitation to buy securities. An investment decision regarding the Securities should only be made on the basis of the prospectus, which has been approved by the Financial Sector Supervisory Commission (Commission de Surveillance du Secteur Financier (CSSF)) and is available free of charge on the Odyssey website.This announcement does not constitute a prospectus. BenevolentAI has not authorised any offer to the public of Securities in any Member State of the European Economic Area. With respect to any Member State of the European Economic Area (each a "Relevant Member State"), no action has been undertaken or will be undertaken to make an offer to the public of Securities requiring publication of a prospectus in any Relevant Member State. As a result, the Securities may only be offered in Relevant Member States (i) to any legal entity which is a qualified investor as defined in the Prospectus Regulation; or (ii) in any other circumstances falling within Article 1(4) of the Prospectus Regulation. For the purpose of this paragraph, the expression "offer of securities to the public" means the communication in any form and by any means of sufficient information on the terms of the offer and the Securities to be offered so as to enable the investor to decide to purchase or subscribe for the Securities and the expression "Prospectus Regulation" means Regulation (EU) 2017/1129 and includes any relevant delegated regulations.This release may contain forward-looking statements. Forward-looking statements are statements that are not historical facts and may be identified by words such as "plans", "targets", "aims", "believes", "expects", "anticipates", "intends", "estimates", "will", "may", "continues", "should" and similar expressions. These forward-looking statements reflect, at the time made, BenevolentAI's beliefs, intentions and current targets/aims concerning, among other things, BenevolentAI's and its group's results of operations, financial condition, liquidity, prospects, growth and strategies. Forward-looking statements include statements regarding: objectives, goals, strategies, outlook and growth prospects; future plans, events or performance and potential for future growth; lease-up potentials; economic outlook and industry trends; developments of BenevolentAI's or its group's markets; the impact of regulatory initiatives; and the strength of BenevolentAI or any other member of its group's competitors. Forward-looking statements involve risks and uncertainties because they relate to events and depend on circumstances that may or may not occur in the future. The forward-looking statements in this release are based upon various assumptions, many of which are based, in turn, upon further assumptions, including without limitation, management's examination of historical operating trends, data contained in BenevolentAI's records (and those of other members of its group) and other data available from third parties. Although BenevolentAI believes that these assumptions were reasonable when made, these assumptions are inherently subject to significant known and unknown risks, uncertainties, contingencies and other important factors which are difficult or impossible to predict and are beyond BenevolentAI's control.Forward-looking statements are not guarantees of future performance and such risks, uncertainties, contingencies and other important factors could cause the actual outcomes and the results of operations, financial condition and liquidity of BenevolentAI and other members of its group or the industry to differ materially from those results expressed or implied in the Information by such forward-looking statements. No assurances can be given that the forward-looking statements will be realised. The forward-looking statements speak only as of the date of this release. BenevolentAI expressly disclaims any obligation or undertaking to release any updates or revisions to any forward-looking statements to reflect any change in their expectations with regard thereto or any changes in events, conditions or circumstances on which any forward-looking statements are based. No representation or warranty is made that any of these forward-looking statements or forecasts will come to pass or that any forecast result will be achieved. Undue influence should not be given to, and no reliance should be placed on, any forward-looking statement.SOURCE BenevolentAI | Content Synthesis/Decision Making | Life, Physical, and Social Science | null | null | null | null | null | null |
||
news | AnChain.AI Deploys Next-Gen Web3 Analytics On Elrond To Boost Compliance And Fraud Prevention | SAN FRANCISCO, May 10, 2022 /PRNewswire/ -- Elrond Network, the developer of internet-scale blockchain infrastructure for Web3 payments, DeFi and the new Metaverse economy, has announced the completion of their integration with AnChain.AI, the AI-powered next-gen blockchain analytics... | https://www.prnewswire.com/news-releases/anchainai-deploys-next-gen-web3-analytics-on-elrond-to-boost-compliance-and-fraud-prevention-301542813.html | 2022-05-10T12:00:00Z | SAN FRANCISCO, May 10, 2022 /PRNewswire/ -- Elrond Network, the developer of internet-scale blockchain infrastructure for Web3 payments, DeFi and the new Metaverse economy, has announced the completion of their integration with AnChain.AI, the AI-powered next-gen blockchain analytics platform.This integration has strategic implications for Elrond's recent moves into traditional and blockchain-based digital payments infrastructure. Through consecutive acquisitions, Elrond has expanded its portfolio to include an e-money license, a Virtual Asset Service Provider (VASP) license, and the ability to issue debit cards, all valid throughout the entire European Economic Area."We are making decisive steps for the integration of blockchain technology into the global financial system," said Beniamin Mincu, Elrond Network CEO. "International financial institutions and national economies can operate and collaborate using our scalable architecture. It is therefore essential that they have access to the highest levels of compliance and fraud prevention. AnChain is a great enabler in this regard."AnChain's industry leading blockchain analytics capabilities has made the AI-driven platform an indispensable tool for the forensics and compliance capabilities of major financial institutions such as the US SEC (Securities and Exchange Commission), governments, digital asset exchanges and other VASPs and financial institutions."We are at an inflection point where enterprises and governments are rapidly realizing that blockchain technology has become indispensable for efficiency increase and sustainable growth," said Dr Victor Fang, CEO & Co-founder of AnChain.AI. "The AnChain Web3 security suite will complement Elrond's promising technology and give it an important edge that will enable it to meet the influx of new institutional demand."The AnChain toolset will provide applications using Elrond Network blockchain technology with the ability to quickly make sense of large data sets pertaining to on-chain transactions. This will bolster the ability of organizations to avoid interactions with addresses flagged for illicit activities, prevent fraud attempts and help with the tracking and recovery of misappropriated digital assets.With the powerful AI-driven security toolkit deployed on top of its internet-scale infrastructure, the Elrond Network establishes itself as a leader in the push for the adoption of blockchain technology in organizations, nations and international institutions, and for the creation of a new digital financial system at a global scale.About ElrondElrond is a new blockchain architecture, designed from scratch to bring a 1,000-fold cumulative improvement in throughput and execution speed. To achieve this, Elrond introduces two key innovations: a novel Adaptive State Sharding mechanism, and a Secure Proof of Stake (PoS) algorithm, enabling linear scalability with a fast, efficient, and secure consensus mechanism. Thus, Elrond can process upwards of 10,000 transactions per second (TPS), with 5-second latency, and negligible cost, attempting to become the backbone of a permissionless, borderless, globally accessible internet economy.About AnChain.AIAnChain.AI is an AI-powered cybersecurity company enhancing blockchain security, risk, and compliance strategies. AnChain.AI was founded in 2018 by cybersecurity and enterprise software veterans from FireEye and Mandiant. Backed by both Silicon Valley and Wall Street VCs, and selected in the Berkeley Blockchain Xcelerator, the company is trusted by 100+ customers from over 10+ countries in these sectors: VASPs, financial institutions, and government, including the U.S. SEC (Securities and Exchange Commission). Featured by CBS News, MIT Tech Review, Coindesk, and DEFCON, AnChain.AI's AML engine screens over $1 billion in daily crypto transactions.SOURCE AnChain.AI | Detection and Monitoring/Process Automation | Business and Financial Operations/Management | null | null | null | null | null | null |
||
news | Kyle Wiggers | Inflection AI, led by LinkedIn and DeepMind co-founders, raises $225M to transform computer-human interactions | Inflection AI, the startup co-founded by LinkedIn co-founder Reid Hoffman and founding DeepMind member Mustafa Suleyman, has raised $225 million in funding. | https://techcrunch.com/2022/05/13/inflection-ai-led-by-linkedin-and-deepmind-co-founders-raises-225m-to-transform-computer-human-interactions/ | 2022-05-13T16:23:33Z | Inflection AI, the machine learning startup headed by LinkedIn co-founder Reid Hoffman and founding DeepMind member Mustafa Suleyman, has secured $225 million in equity financing, according to a filing with the U.S. Securities and Exchange Commission. The source of the capital isn’t yet clear — Inflection didn’t immediately respond to a request for more information — but the massive round suggests strong investor confidence in Suleyman, who serves as the company’s CEO.Palo Alto, California-based Inflection has kept a low profile to date, granting relatively few interviews to the media. But in CNBC profile from January, Suleyman described wanting to build products that eliminate the need for people to simplify their ideas to communicate with machines, with the overarching goal being to leverage AI to help humans “talk” to computers.“[Programming languages, mice, and other interfaces] are ways we simplify our ideas and reduce their complexity and in some ways their creativity and their uniqueness in order to get a machine to do something,” Suleyman told the publication. “It feels like were on the cusp of being able to generate language to pretty much human-level performance. It opens up a whole new suite of things that we can do in the product space.”The concept of translating human intentions into a language computers can understand dates back decades. Even the best chatbots and voice assistants today havent delivered on the promise, but Suleyman and Hoffman are betting that coming advancements in AI will make an intuitive human-computer interface possible within the next five years.They’ll have competition. Just last month, Adept, a startup co-founded by former DeepMind, OpenAI, and Google engineers and researchers, emerged from stealth with a similar concept: AI that can automate any software process. DeepMind itself has explored an approach for teaching AI to control computers, having an AI observe keyboard and mouse commands from people completing “instruction-following” computer tasks, such as booking a flight.Regardless, the size of Inflection’s funding round reflects the high cost of building sophisticated AI systems. OpenAI is estimated to have spent millions of dollars developing GPT-3, the company’s system that can generate human-like text given a prompt. Anthropic, another startup developing cutting-edge AI models, recently raised over half a billion to — in co-founder Dario Amodei’s words — “explore the predictable scaling properties of machine learning systems.”AI expertise doesn’t come cheap, either, particularly in the midst of a talent shortage. In 2018, a tax filing spotted by the New York Times revealed that OpenAI paid its top researcher, Ilya Sutskever, more than $1.9 million in 2016. Inflection recently poached AI experts from Google and Meta, CNBC reported in March.Even at the bigger tech companies, theres a relatively small number of people actually building these [AI] models. One of the advantages of doing this in a startup is that we can go much faster and be more dynamic, Suleyman told CNBC. My experience of building many, many teams over the last 15 years is that there is this golden moment when you really have a very close-knit, small, focused team. Im going to try and preserve that for as long as possible.A cloud surrounds Inflection, somewhat, following reports that Suleyman allegedly bullied staff members at Google, where he worked after being placed on administrative leave at DeepMind for controversy surrounding some of his projects. Google launched an investigation into his behavior at the time, according to the Wall Street Journal, but it never made its findings public. | Digital Assistance/Information Retrieval Or Search | Unknown | null | null | null | null | null | null |
|
news | [email protected] (Andie Burjek) | Inside the Latest Wave of Customer Experience Innovation or Evolution | According to Interactions CEO Michael Iacobucci, there are two major reasons for the current customer experience evolution. We spoke with him about these.Continue reading... | https://www.cmswire.com/customer-experience/inside-the-latest-wave-of-customer-experience-innovation-or-evolution/ | https://www.cmswire.com/-/media/b4979fe573574cd59108cddb733e76da.ashx | 2022-04-28T15:02:28Z | PHOTO:Adobe Stock Customer experience has evolved in exciting ways in the past few years. Michael Iacobucci, CEO at Interactions Corporation, cites two major reasons for this industry-wide evolution: the increasingly critical need for better customer experiences as a brand differentiator, and the AI advancement to enable those experiences. According to a recent survey, 75% of customers say they will pay more to do business with a company that provides good CX.Exceptional customer service is not optional for todays businesses, Iacobucci said. In order to succeed, CX must be a top priority.He added that along with the growing demand for better CX, technology has also evolved. Some of the major tech advancements for CX include improvements to AI, more sophisticated natural language processing and the introduction of voice assistants that can engage with customers in natural, human-like conversations.The pandemic has only added fuel to the fire, Iacobucci said. Factors including labor shortages, increased customer uncertainty and the need for business continuity in a remote-first world only accelerated the adoption of these technologies, ushering us into a new era of CX.Eighty percent of customers today say that the experiences companies provide are as important as their products or services, and customers are 2.7 times more likely to continue doing business with a company that provides high-quality service. Iacobucci spoke to Simpler Media Group about the changes the industry has experienced since 2008 and what he expects from the future of the industry.Simpler Media Group: You've been in this industry for a long time more than 13 years at Interactions alone. Can you take us back to what the customer experience industry looked like when you first started, including the role technology was playing at the time?Michael Iacobucci: The promise of the customer experience industry has been the same since my first day on the job: fast, effortless experiences for every customer, every time. But back in 20082009, companies were deploying technology that wasnt up to that task. The industry was riddled with limited dial menus, ineffective Intelligent Voice Response systems and speech systems masquerading as artificial intelligence but incapable of understanding, responding to or learning from customers. The goal was more often to deflect or contain customers, rather than to engage and serve them.Take first-generation chat solutions and chatbots as an example. At the time, they were the shiny new object everyone had to have but not only was the technology not up to par, brands also implemented these solutions without thinking about how they would fit into a larger, cohesive customer experience strategy. Naturally, this approach soon disappointed and frustrated customers.I knew as an industry, we could and should do better. I was intrigued by the potential for technology to reimagine the most frustrating experiences in our daily lives. And Im glad our team at Interactions stuck with it weve advanced leaps and bounds since then.SMG: Are there places where this industry is still falling short? What can be done about this?Iacobucci: Oftentimes, we still see AI and automation applied in a way that is not customer-centric. Take the rise of one-size-fits-all DIY solutions as an example. Companies are drawn to these options because they promise fast deployment and ease of use, but theres a lot more that goes into fostering excellent CX than throwing technology at the problem, including design and customization. Too often, these DIY solutions that promise to fix a companys CX are actually deployed at the expense of CX. Technology, even when its the latest and greatest, underwhelms when it isnt designed and deployed with the consumer in mind; it can only be impactful if it improves the user experience.Were also seeing a lot of companies across this industry applying technologies like AI to incrementally improve the customer experience for agents, customers, or both from prompting agents on what to say next, to using AI to predict customer emotions. Theres nothing inherently wrong with this approach, but I often watch these companies and think theyre building on a shaky foundation like adding a fresh coat of paint or a snazzy new mailbox to a house thats bound to collapse due to a faulty foundation.I believe weve pushed our existing CX paradigm about as far as it can go. Weve seen a lot of improvement in the last decade, but the moment for incremental improvement is past now. Its time for some fundamentally different thinking about how we can most effectively leverage the newest AI technologies, and [its time for us to] reimagine the relationship between AI and people to better serve our customers.SMG: Can you elaborate on this redefined relationship between AI and people?Iacobucci: AI and people both work best when they work together, and I dont see that changing, even in this new era of CX. But as call volumes rise, labor shortages stretch on, and customers become more irritable, AI can and should take on an even more prominent role to keep businesses running smoothly.Companies can, for example, lead with AI to open and facilitate each conversation with customers. Many tasks from making a payment, to changing a billing address or scheduling an appointment can be done quickly and easily with AI. This allows customers to control their own journeys, taking care of matters on their own time and their own terms, 24/7, while taking some of the strain off overworked agents. With recent advancements, AI-powered solutions are capable of handling more than ever, and I think its time we let them.SMG: What keeps you interested in this industry after more than a decade? What are you most excited for regarding what's to come?Iacobucci: I said at the beginning that our industry has promised fast, easy, painless experiences for customers for more than a decade; Ive stuck around because we get a little closer every day. Today, I believe were at a real inflection point for the industry. The technology, investment, market maturity and demand are advancing in a way that allows us to deliver on this promise.Im excited for the reality I see right around the corner. A reality where businesses are implementing an opti-channel strategy to provide customers with the best channel, for that moment in time and for the task at hand, so they can get their problem resolved quickly and painlessly. No repetition, no getting bounced around to multiple agents, no choosing from a restrictive dial menu just fast, easy resolution to questions and concerns.Discover how Interactions can improve your CX technology stack at interactions.com.Andie Burjek is a writer based in Chicago and a regular contributor to CMSWire. | Content Synthesis/Digital Assistance/Recommendation | Sales and Related/Business and Financial Operations | null | null | null | null | null | null |
news | [email protected] (Sascha Brittner & René Walter), Sascha Brittner & René Walter | AI dreams of Elon Musk buying Twitter | neural network shenanigans worth 44 billion bucks | https://goodinternet.substack.com/p/ai-dreams-of-elon-musk-buying-twitter | 2022-04-26T09:51:13Z | Musicvideos have their own Newsletter now: GOOD MUSIC. All killers and absolutely zero fillers. The latest issues featuring The Strokes, Foo Fighters, Tyler the Creator, Say Sue Me, Soccer Mommy, Bloc Party, Psychedelic Porn Crumpets,Kadhja Bonet and many more. You can also find all the tracks from all Musicvideos in a Spotify-Playlist.So, lets have some fun with the 2 gamechangers this week: Artificial Intelligence and Computer Vision making a big leap with Dall-E and Elon Musk actually buying the Tweeties for 44 billion bucks.If you want my personal take on Musk buying Twitter, here you go: 1. Musk is one of the most interesting figures in tech space there is and hes a far shot from palpatine wannabe Peter Thiel, i actually believe Musk when he muses about his philanthropic views which he combines with some accelerationism and anarchism. If anyone should buy Twitter, its him. 2. Im pretty much with Jack Dorseys take on the deal: [Twitter] wants to be a public good at a protocol level, not a company. Solving for the problem of it being a company however, Elon is the singular solution I trust. Musk already said hes not interested in the economics of Twitter and may get rid of advertising. Well see how this goes, but Im pretty optimistic about this.However, I only came across very few AI-artists imagining this deal today, which is astounding, no? The Tesla-founder aquiring maybe the most unique social network there is, this should be a bigger thing in the AI-artist-space too, with these two dramatic renaissance painting of Elon Musk buying Twitter being nearly the only interesting AI-artworks out there.So I went on some AI playground sites, generated some semi-coherent text to accompany some images I created with CLIP+QGan, Dribnet, Dall-E 1 and I also produced some weird prose with Jurassic-1 to go with it. Someone had to do it, so heres Robot-Elon doing his thing.Elon Musk bought Twitter for 44 Billion DollarsElon Musk bought Twitter for 44 Billion Dollars today and 3 hours later he quit. The transaction will be finalized late 2018 or early 2019.In a shocking turn of events, Elon Musk just announced that he's quitting Twitter. In a tweet today, Musk announced that he's deleting the Boring Company's and SpaceX's Twitter accounts and that he's quitting Twitter. Musk's tweets about deleting the Twitter accounts came in response to a question from a Twitter user about when those accounts would be deleted. "Twitter is to blame. But of course, so is @jack and the Twitter team. It's their product," Musk responded.After the aquisition he announced that they are ditching the 140 character limit. Perhaps one of the biggest problems facing Twitter today is that the character limit is a major hindrance. It causes people to miss out on a lot of content and is a very, very archaic way of interacting with people. This move will allow people to instantly share their thoughts and opinions and should lead to increased usage. I personally welcome the change as I use Twitter to post blogs.Elon Musk bought Twitter because Elon Musk is rich. Elon Musk bought Twitter because Elon Musk likes Twitter. Elon Musk bought Twitter because Elon Musk likes money. Elon Musk bought Twitter because Elon Musk likes Twitter and money. He made a great deal of money from that. Now he has said that he is planning to make a deal with Twitter to build a tunnel between LA and San Francisco. He said that this will cost 10 Billion Dollars. If that is true, then he has a real chance of making a deal with Twitter. He will get to build the tunnel without any public funds. It is not a good idea to do a deal with Twitter without asking questions. The last thing we need is a tunnel that will be used to transport users. Twitter should not be involved in any such thing. If he can do a deal with Twitter, then he will have to make sure that it is a deal that will benefit both sides. Musk has also said that he will use the tunnel to transport cars. That is a great idea. He will save a lot of money on parking and gas.Elon Musk bought Twitter for 44 Billion Dollars today and 4Chan and 8Chan are now up for sale. 4Chan and 8Chan are two of the most popular websites on the internet. 4Chan is a board where you can post anonymously. 8Chan is a board where you can post anonymously. 4Chan and 8Chan have been in the news a lot lately because a lot of conspiracy theories are posted on these websites. 4Chan and 8Chan are run by the same person. 4Chan and 8Chan are both for sale. 4Chan and 8Chan are both worth a lot of money. 4Chan and 8Chan are both popular websites. 4Chan and 8Chan are both for sale for 44 Billion Dollars. Elon Musks private Twitter account is set to become public. Musks tweets and replies will now be searchable. Musks tweets and replies will no longer be deleted. Musks tweets are now public.Musks private account @elonmusk will not transfer. Musks official handle @elonmusk will remain. Musks tweets and replies will not be public. Future tweets and replies will be published on Twitter.Enjoy your Tuesday! Scene 1: The 2 Musks Elon Musk, billionaire, innovator and visionary, was thinking about buying Twitter.Subscribe to GOOD INTERNET on Substack or on Patreon or on Steady and feel free to leave a buck or two. If you dont want to subscribe to anything but still want to send a pizza or two, you can paypal me.Thanks. | Content Creation/Content Synthesis | Arts, Design, Entertainment, Sports, and Media | null | null | null | null | null | null |
|
news | Michaël Trazzi | Why Copilot Accelerates Timelines | Published on April 26, 2022 10:06 PM GMT"Say we have intelligences that are narrowly human / superhuman on every task you can think of (which, for what it’s worth, I think will happen within 5-10 years). How long before we have self-replicating factories? Until foom? Until things are dangerously out of our control? Until GDP doubles within one year? In what order do these things happen?" (source)When discussing Takeoff Speeds, I feel the debate often gets stuck in some kind of false dichotomy between Fast and Slow, where the crux seems to be about whether some self-improving AI would be able to foom without human assistance.Instead, we could get a Moderate Takeoff (think months or years), where AI does not self-improve (by itself). Instead, there would be a reinforcing feedback loop where progress in AI leads to AI becoming increasingly useful to make progress in AI, with humans in the loop at all times.On top of that, things might just happen privately at some AI lab for a few months until the AI is able to foom by itself, which will look like foom for everyone outside that lab.AI Helping Humans with AIIn Superintelligence, takeoff is defined as the period between AGI and superintelligence. In this post, I will be using as takeoff's starting point the first "AI Helping Humans with AI" (in a meaningful way), or AIHHAI for short, since it will arise before we get fully general intelligence and accelerate AI progress. Here are some examples of what I have in mind for "helping humans in a meaningful way":GPT-N that you can prompt with "I am stuck with this transformer architecture trying to solve problem X". GPT-N would be AIHHAI if it answers along the lines of "In this arXiv article, they used trick Z to solve problems similar to X. Have you considered implementing it?", and using an implementation of Z would solve X >50% of the time.Another example would be if some code generation tool like Copilot makes ML engineers substantially more productive in writing ML code. Making predictions about productivity is tricky and hard to measure, but it would involve something like accepting code suggestions a decent amount, like 100x more than what engineers using Copilot currently accept.(Sidenote: My general impression from people using Copilot is that they believe it's becoming increasingly useful, and use it on a daily basis, though it rarely auto-completes the right line of code right away. Given that we had Codex/Copilot last year, and that Sam Altman hinted at some new Codex capabilities in his ACX Q&A[2], I think we will get some impressive release for Copilot/Codex sometime this year that most engineers will want to use. (Similar to how a lot of developers are used to using Visual Studio's suite, especially intellisense.) The model I have in mind for "AI helping humans with AI" could be this one, though it will probably require 1-2x more iterations.)Moderate TakeoffA Moderate Takeoff is defined[1] as "one that occurs over some intermediary temporal interval, such as months or years". For AIHHAI, we can distinguish two cases:AIHHAI is developed by some AI lab working on it privately. That lab has a lead compared to the other labs, since they are working more productively using AIHHAI. Thus, they might reach superintelligence first, without allowing enough time for the rest of the world to compete.AIHHAI is made public, or quickly (think months) reproduced by others publicly or privately. In any case, some AIHHAI model is eventually made public, and there is not only one group using AIHHAI--other companies are adopting the same strategy (multipolar scenario).For the first case, you can think of OpenAI using a new version of Copilot internally, that enables their team to quickly build another, even better version of Copilot without releasing the intermediate stage publicly. They would already have a couple of months of lead time in terms of engineering, and after using their latest version, the lead (in terms of how long it would take for a competitor to catch up using publicly available tools) would increase over time due to the compounding advantage.For the second case, you could consider a similar scenario where they do a public release, or other labs like Google Research just build something equivalent a couple of months after. Even if is not public, the thing might be so impressive that employees talk about it to their friends or anonymously, and the news eventually gets leaked. In that regime, you get many companies possibly expanding capabilities in Code Generation, possibly by scaling models and datasets aggressively. The AI race becomes an engineering race, though we might need more scientific breakthroughs for scaling laws to continue (for more on this, see section "Why Code Generation might Plateau Instead" at the end).It is unclear to me which case is more likely. On the one hand, the usefulness of AIHHAI would cause some rapid self-improvement of the system {Humans developing the AI + AI}, and the pace would be so quick the model would not have time to leak. On the other hand, the results being exciting enough increases the probability of the news getting leaked and that other players start (were already?) investing in similar models heavily.Self-improving (Humans + AI)One thing I have not seen discussed a lot is how the system "humans + AI" could have different takeoff speeds, where, for this hybrid system, takeoff would basically mean "going from {human + AIHHAI} to {human + superintelligent AI}".Note that there are many such systems we could study, such as:System 1. {All humans + all used resources, including ML models and science}System 2. {All humans working on code generation + all resources they use}System 3. {Employees working on Copilot-(K +1)/GPT-(N+1) + Copilot-K/GPT-N}The importance of closed systemsThinking about how "closed" or "small" a system is helps us to understand its kinetics, and also has some implications regarding AI races.Indeed, a small system using only its own output as input could independently foom, without encountering major bottlenecks. Conversely, it your system requires a lot of insights in mathematics or engineering to overcome bottlenecks, foom becomes less likely. However, a smaller model with less humans might have less resources, labor-wise.With regard to races, if your system does not require the output of other disciplines to make progress, you could keep it private for longer. (If the system required a lot of insights, publishing preliminary results about the system could prove necessary to get the outside world to publish research relevant to your system.) In practice:System 1 is a closed system. Thinking about how fast it would improve basically brings us back to "when will GDP double in a year" territory. The abstraction is not precise enough to give insights about kinetics without basically studying macro-economics.System 2 is not closed, since it actually uses insights from other discipline in CS/Math and others as inputs. That said, the research in code generation directly helps the humans doing work relevant to code generation (assuming they use it).System 3 would also definitely need to take research and tools from somewhere else as inputs, though you could assume that, as N gets bigger, most of insights on how to debug deep learning models would be actually fed to Copilot-N's training data via telemetry (or would be accessible via GPT-N's Q&A interface).Among the systems presented above, System 3 could experience exponential self-improvement in complete stealth mode and is therefore worth studying in more details.Self-improving Code GenerationI am especially interested in Sstem 3 (=="Employees working on Copilot-(K +1)/GPT-(N+1) + Copilot-K/GPT-N"), because progress in AIHHAI straightforwardly leads to productivity increases in developing AIHHAI.Let's imagine that Copilot-N successfully auto-completes 1% of code lines, and for the sake of argument people immediately press "Tab" to move to the next line in those cases. Without thinking about the fact that the auto-completed parts would actually be the easiest parts of the developer's pre-existing workload, this would make developers ~1% more productive.You would get a 1.01 multiplier in productivity, that would make the speed of development 1.01x faster, especially the development of a Copilot-(N+1), which would in turn imply 2% more "perfect auto-complete" than what we started with, etc.Obviously, the Copilot we have right now is still pretty rudimentary. It is mostly useful for beginners to an API or language, not for doing cutting edge PyTorch development. And you could say that a lot of ML work is done outside of coding, like reading papers and building infrastructure. (More on this in my Codex Skeptic FAQ).I agree that improvements in productivity from AI are currently marginal, though one should consider what those improvements might be for future versions, including things like question-answering GPT-N helping to debug high-level problems. It is also important to keep in mind that engineers from many different fields are currently using Copilot regularly, and could benefit more from code generation than ML engineers (think web programmers). Those engineers would in turn accelerate GDP growth, which would fasten the total amount of investments in AI.How that would actually lead to FoomWhen we will get from marginal improvements in Code Generation to some Q&A language model that helps you re-consider your transformer architecture, the gains of productivity will start to be more substantial.Assuming we are in the scenario where one company (think OpenAI) has access to increasingly better code generation tools (that no one else has access to), and possibly also some lead in terms of useful language models to debug their tools, they might get a bigger and bigger lead in how useful their AIHHAI is.At some point, you would be able to ask more open questions, solving harder and harder tasks, for complex things like making money in financial markets, or just setting strategy for the entire company. In a matter of months, the company would achieve extraordinary economic output, re-investing everything into AIHHAI.Eventually, the AIHHAI would be optimizing developer productivity over some time horizon, not just completing the next line of code. When something like planning is implemented (eg. expected reward maximization), the AIHHAI might just Foom by modifying its own code to generate code better.Why Code Generation might Plateau instead As Kaplan mentions in his recent talk about the implications of Scaling Laws for Code Generation, current progress is bottlenecked by:Data available. If you remove duplicates, you have about 50B tokens of Python code on Github. In comparison, GPT-3 was trained on about 300B tokens. You could possibly do data augmentation or transfer learning to bypass this problem. Though Kaplan also guesses that in AlphaCode, researchers were also bottlenecked by dataset size when scaling things up. On top of that, the Chinchilla paper shows that scaling data about as much as model size is also necessary for compute-optimal training.Writing longer programs. Assuming you have a constant error rate when writing your program token by token, you get an exponential decay in how likely your program is to solve the problem. (They tested this by asking a model to write longer programs doing essentially the same thing, and they got an exponentially worse “pass rate”.) Therefore, asking Codex to write very programs might plateau even when scaling models, at least with our current methods. (Kaplan mentions that probably a method would imply doing what humans do, aka writing bad code until it works, instead of just asking the model to write one long piece of code.)ConclusionModerate Takeoffs (think months) are a useful abstraction to think about scenarios between Foom and Slow Takeoffs (years, decades).When discussing Takeoff speed, it is worth noting that progress can be heterogeneous between what happens privately and publicly, especially as we get closer to superintelligence. This is especially true when considering humans will be using the AIs they developed to build AI even faster.More generally, discussion on Takeoff Speed has historically focused on whether an AI would be able to Foom, when in practice there will be an intermediate regime where the system {the humans building the AI + the AI} will self-improve, not the AI by itself.Even if this intermediate regime might, through compounding progress, lead to Foom, our current understanding of scaling laws predicts that we will soon be bottlenecked by dataset size and programs that cannot be longer than a certain size.(Acknowledgements: thanks to the edits suggested by Justis, facilitated by Ruby.)^(Superintelligence, Bostrom) Chapter 4.^Sam did a Q&A for an Astral Codex Ten meetup in September 2021. I will not be linking to the post doing a recap of what he said since it was taken down from LW.Sam's take on Codex was summarized in the post as: a) current codex is bad compared to what they will have next b) they are making fast progress c) Codex is Discuss | https://www.lesswrong.com/posts/aqTAd7KzsYmHWYdei/why-copilot-accelerates-timelines | 2022-04-26T22:06:19Z | "Say we have intelligences that are narrowly human / superhuman on every task you can think of (which, for what its worth, I think will happen within 5-10 years). How long before we have self-replicating factories? Until foom? Until things are dangerously out of our control? Until GDP doubles within one year? In what order do these things happen?" (source)When discussing Takeoff Speeds, I feel the debate often gets stuck in some kind of false dichotomy between Fast and Slow, where the crux seems to be about whether some self-improving AI would be able to foom without human assistance.Instead, we could get a Moderate Takeoff (think months or years), where AI does not self-improve (by itself). Instead, there would be a reinforcing feedback loop where progress in AI leads to AI becoming increasingly useful to make progress in AI, with humans in the loop at all times.On top of that, things might just happen privately at some AI lab for a few months until the AI is able to foom by itself, which will look like foom for everyone outside that lab.AI Helping Humans with AIIn Superintelligence, takeoff is defined as the period between AGI and superintelligence. In this post, I will be using as takeoff's starting point the first "AI Helping Humans with AI" (in a meaningful way), or AIHHAI for short, since it will arise before we get fully general intelligence and accelerate AI progress. Here are some examples of what I have in mind for "helping humans in a meaningful way":GPT-N that you can prompt with "I am stuck with this transformer architecture trying to solve problem X". GPT-N would be AIHHAI if it answers along the lines of "In this arXiv article, they used trick Z to solve problems similar to X. Have you considered implementing it?", and using an implementation of Z would solve X >50% of the time.Another example would be if some code generation tool like Copilot makes ML engineers substantially more productive in writing ML code. Making predictions about productivity is tricky and hard to measure, but it would involve something like accepting code suggestions a decent amount, like 100x more than what engineers using Copilot currently accept.(Sidenote: My general impression from people using Copilot is that they believe it's becoming increasingly useful, and use it on a daily basis, though it rarely auto-completes the right line of code right away. Given that we had Codex/Copilot last year, and that Sam Altman hinted at some new Codex capabilities in his ACX Q&A[2], I think we will get some impressive release for Copilot/Codex sometime this year that most engineers will want to use. (Similar to how a lot of developers are used to using Visual Studio's suite, especially intellisense.) The model I have in mind for "AI helping humans with AI" could be this one, though it will probably require 1-2x more iterations.)Moderate TakeoffA Moderate Takeoff is defined[1] as "one that occurs over some intermediary temporal interval, such as months or years". For AIHHAI, we can distinguish two cases:AIHHAI is developed by some AI lab working on it privately. That lab has a lead compared to the other labs, since they are working more productively using AIHHAI. Thus, they might reach superintelligence first, without allowing enough time for the rest of the world to compete.AIHHAI is made public, or quickly (think months) reproduced by others publicly or privately. In any case, some AIHHAI model is eventually made public, and there is not only one group using AIHHAI--other companies are adopting the same strategy (multipolar scenario).For the first case, you can think of OpenAI using a new version of Copilot internally, that enables their team to quickly build another, even better version of Copilot without releasing the intermediate stage publicly. They would already have a couple of months of lead time in terms of engineering, and after using their latest version, the lead (in terms of how long it would take for a competitor to catch up using publicly available tools) would increase over time due to the compounding advantage.For the second case, you could consider a similar scenario where they do a public release, or other labs like Google Research just build something equivalent a couple of months after. Even if is not public, the thing might be so impressive that employees talk about it to their friends or anonymously, and the news eventually gets leaked. In that regime, you get many companies possibly expanding capabilities in Code Generation, possibly by scaling models and datasets aggressively. The AI race becomes an engineering race, though we might need more scientific breakthroughs for scaling laws to continue (for more on this, see section "Why Code Generation might Plateau Instead" at the end).It is unclear to me which case is more likely. On the one hand, the usefulness of AIHHAI would cause some rapid self-improvement of the system {Humans developing the AI + AI}, and the pace would be so quick the model would not have time to leak. On the other hand, the results being exciting enough increases the probability of the news getting leaked and that other players start (were already?) investing in similar models heavily.Self-improving (Humans + AI)One thing I have not seen discussed a lot is how the system "humans + AI" could have different takeoff speeds, where, for this hybrid system, takeoff would basically mean "going from {human + AIHHAI} to {human + superintelligent AI}".Note that there are many such systems we could study, such as:System 1. {All humans + all used resources, including ML models and science}System 2. {All humans working on code generation + all resources they use}System 3. {Employees working on Copilot-(K +1)/GPT-(N+1) + Copilot-K/GPT-N}The importance of closed systemsThinking about how "closed" or "small" a system is helps us to understand its kinetics, and also has some implications regarding AI races.Indeed, a small system using only its own output as input could independently foom, without encountering major bottlenecks. Conversely, it your system requires a lot of insights in mathematics or engineering to overcome bottlenecks, foom becomes less likely. However, a smaller model with less humans might have less resources, labor-wise.With regard to races, if your system does not require the output of other disciplines to make progress, you could keep it private for longer. (If the system required a lot of insights, publishing preliminary results about the system could prove necessary to get the outside world to publish research relevant to your system.) In practice:System 1 is a closed system. Thinking about how fast it would improve basically brings us back to "when will GDP double in a year" territory. The abstraction is not precise enough to give insights about kinetics without basically studying macro-economics.System 2 is not closed, since it actually uses insights from other discipline in CS/Math and others as inputs. That said, the research in code generation directly helps the humans doing work relevant to code generation (assuming they use it).System 3 would also definitely need to take research and tools from somewhere else as inputs, though you could assume that, as N gets bigger, most of insights on how to debug deep learning models would be actually fed to Copilot-N's training data via telemetry (or would be accessible via GPT-N's Q&A interface).Among the systems presented above, System 3 could experience exponential self-improvement in complete stealth mode and is therefore worth studying in more details.Self-improving Code GenerationI am especially interested in Sstem 3 (=="Employees working on Copilot-(K +1)/GPT-(N+1) + Copilot-K/GPT-N"), because progress in AIHHAI straightforwardly leads to productivity increases in developing AIHHAI.Let's imagine that Copilot-N successfully auto-completes 1% of code lines, and for the sake of argument people immediately press "Tab" to move to the next line in those cases. Without thinking about the fact that the auto-completed parts would actually be the easiest parts of the developer's pre-existing workload, this would make developers ~1% more productive.You would get a 1.01 multiplier in productivity, that would make the speed of development 1.01x faster, especially the development of a Copilot-(N+1), which would in turn imply 2% more "perfect auto-complete" than what we started with, etc.Obviously, the Copilot we have right now is still pretty rudimentary. It is mostly useful for beginners to an API or language, not for doing cutting edge PyTorch development. And you could say that a lot of ML work is done outside of coding, like reading papers and building infrastructure. (More on this in my Codex Skeptic FAQ).I agree that improvements in productivity from AI are currently marginal, though one should consider what those improvements might be for future versions, including things like question-answering GPT-N helping to debug high-level problems. It is also important to keep in mind that engineers from many different fields are currently using Copilot regularly, and could benefit more from code generation than ML engineers (think web programmers). Those engineers would in turn accelerate GDP growth, which would fasten the total amount of investments in AI.How that would actually lead to FoomWhen we will get from marginal improvements in Code Generation to some Q&A language model that helps you re-consider your transformer architecture, the gains of productivity will start to be more substantial.Assuming we are in the scenario where one company (think OpenAI) has access to increasingly better code generation tools (that no one else has access to), and possibly also some lead in terms of useful language models to debug their tools, they might get a bigger and bigger lead in how useful their AIHHAI is.At some point, you would be able to ask more open questions, solving harder and harder tasks, for complex things like making money in financial markets, or just setting strategy for the entire company. In a matter of months, the company would achieve extraordinary economic output, re-investing everything into AIHHAI.Eventually, the AIHHAI would be optimizing developer productivity over some time horizon, not just completing the next line of code. When something like planning is implemented (eg. expected reward maximization), the AIHHAI might just Foom by modifying its own code to generate code better.Why Code Generation might Plateau instead As Kaplan mentions in his recent talk about the implications of Scaling Laws for Code Generation, current progress is bottlenecked by:Data available. If you remove duplicates, you have about 50B tokens of Python code on Github. In comparison, GPT-3 was trained on about 300B tokens. You could possibly do data augmentation or transfer learning to bypass this problem. Though Kaplan also guesses that in AlphaCode, researchers were also bottlenecked by dataset size when scaling things up. On top of that, the Chinchilla paper shows that scaling data about as much as model size is also necessary for compute-optimal training.Writing longer programs. Assuming you have a constant error rate when writing your program token by token, you get an exponential decay in how likely your program is to solve the problem. (They tested this by asking a model to write longer programs doing essentially the same thing, and they got an exponentially worse pass rate.) Therefore, asking Codex to write very programs might plateau even when scaling models, at least with our current methods. (Kaplan mentions that probably a method would imply doing what humans do, aka writing bad code until it works, instead of just asking the model to write one long piece of code.)ConclusionModerate Takeoffs (think months) are a useful abstraction to think about scenarios between Foom and Slow Takeoffs (years, decades).When discussing Takeoff speed, it is worth noting that progress can be heterogeneous between what happens privately and publicly, especially as we get closer to superintelligence. This is especially true when considering humans will be using the AIs they developed to build AI even faster.More generally, discussion on Takeoff Speed has historically focused on whether an AI would be able to Foom, when in practice there will be an intermediate regime where the system {the humans building the AI + the AI} will self-improve, not the AI by itself.Even if this intermediate regime might, through compounding progress, lead to Foom, our current understanding of scaling laws predicts that we will soon be bottlenecked by dataset size and programs that cannot be longer than a certain size.(Acknowledgements: thanks to the edits suggested by Justis, facilitated by Ruby.)^(Superintelligence, Bostrom) Chapter 4.^Sam did a Q&A for an Astral Codex Ten meetup in September 2021. I will not be linking to the post doing a recap of what he said since it was taken down from LW.Sam's take on Codex was summarized in the post as: a) current codex is bad compared to what they will have next b) they are making fast progress c) Codex is <1y away from having a huge impact on developers. | Content Creation/Content Synthesis/Decision Making/Process Automation | Unknown | null | null | null | null | null | null |
|
news | rohinmshah | DeepMind is hiring for the Scalable Alignment and Alignment Teams | Published on May 13, 2022 12:17 PM GMTWe are hiring for several roles in the Scalable Alignment and Alignment Teams at DeepMind, two of the subteams of DeepMind Technical AGI Safety trying to make artificial general intelligence go well. In brief,The Alignment Team investigates how to avoid failures of intent alignment, operationalized as a situation in which an AI system knowingly acts against the wishes of its designers. Alignment is hiring for Research Scientist and Research Engineer positions.The Scalable Alignment Team (SAT) works to make highly capable agents do what humans want, even when it is difficult for humans to know what that is. This means we want to remove subtle biases, factual errors, or deceptive behaviour even if they would normally go unnoticed by humans, whether due to reasoning failures or biases in humans or due to very capable behaviour by the agents. SAT is hiring for Research Scientist - Machine Learning, Research Scientist - Cognitive Science, Research Engineer, and Software Engineer positions.We elaborate on the problem breakdown between Alignment and Scalable Alignment next, and discuss details of the various positions.“Alignment” vs “Scalable Alignment”Very roughly, the split between Alignment and Scalable Alignment reflects the following decomposition:Generate approaches to AI alignment – Alignment TeamMake those approaches scale – Scalable Alignment TeamIn practice, this means the Alignment Team has many small projects going on simultaneously, reflecting a portfolio-based approach, while the Scalable Alignment Team has fewer, more focused projects aimed at scaling the most promising approaches to the strongest models available.Scalable Alignment’s current approach: make AI critique itselfImagine a default approach to building AI agents that do what humans want:Pretrain on a task like “predict text from the internet”, producing a highly capable model such as Chinchilla or Flamingo.Fine-tune into an agent that does useful tasks, as evaluated by human judgements.There are several ways this could go wrong:Humans are unreliable: The human judgements we train against could be flawed: we could miss subtle factual errors, use biased reasoning, or have insufficient context to evaluate the task.The agent’s reasoning could be hidden: We want to know not just what the system is doing but why, both because that might reveal something about what that we don’t like, and because we expect good reasoning to better generalize to other situations.Even if the agent is reasoning well, it could fail in other situations: Even if the reasoning is correct this time, the AI could fail to generalize correctly to other situations.Our current plan to address these problem is (in part):**Give humans help in supervising strong agents: **On the human side, provide channels for oversight and advice from peers, experts in various domains, and broader society. On the ML side, agents should explain their behaviour and reasoning, argue against themselves when wrong, and cite relevant evidence.**Align explanations with the true reasoning process of the agent: **Ensure that agent’s are able and incentivized to show their reasoning to human supervisors, either by making reasoning explicit if possible or via methods for interpretability and eliciting latent knowledge.Red team models to exhibit failure modes that don’t occur in normal useWe believe none of these pieces are sufficient by themselves:(1) without (2) can be rationalization, where an agent decides what to do and produces an explanation after the fact that justifies its answer.(2) without (1) doesn’t scale: The full reasoning trace of the agent might be enormous, it might be terabytes of data even with compression, or exponentially large without compression if the agent is using advanced heuristics which expand into very large human-interpretable reasoning traces.(1)+(2) without (3) will miss rare failures.(3) needs (1)+(2) to define failure.An example proposal for (1) is debate, in which two agents are trained in a zero-sum game to provide evidence and counterarguments for answers, as evaluated by a human judge. If we imagine the exponentially large tree of all possible debates, the goals of debate are to (1) engineer the whole tree so that it captures all relevant considerations and (2) train agents so that the chosen single path through the tree reflects the tree as a whole.Figure 1, AI Safety Needs Social ScientistsThe full picture will differ from the pure debate setting in many ways, and we believe the correct interpretation of the debate idea is “agents should critique themselves”. There is a large space of protocols that include agents critiquing agents as a component, and choosing between them will involveHuman aspects: Whether a particular human-machine interaction is aligned depends on the humans involved, and we need cognitive science experiments probing these uncertainties.Theoretical strengthenings: There are a variety of theoretical strengthenings to debate such as cross-examination, learning the prior, and market making, but little work has been done to turn these into practical systems (example obstacle to learning the prior).Flexibility in integrating other components of alignment: For example, if strong interpretability tools are developed they should be smoothly integrated into the human-machine interaction, so that the human supervision process has access to internal reasoning.Practicalities: Citing sources, better uncertainty estimation, declining to answer if uncertain, etc.Governance: By explaining themselves, agents can provide a lever for external oversight.The three goals of “help humans with supervision”, “align explanations with reasoning”, and “red teams” will be blurry once we put the whole picture together. Red teaming can occur either standalone or as an integrated part of a training scheme such as cross-examination, which allows agents to interrogate opponent behavior along counterfactual trajectories. Stronger schemes to help humans with supervision should improve alignment with reasoning by themselves, as they grow the space of considerations that can be exposed to humans. Thus, a key part of the Scalable Alignment Team’s work is planning out how these pieces will fit together.Examples of our work, involving extensive collaboration with other teams at DeepMind:Risk analyses, both for long-term alignment risks and harms that exist today:Kenton et al. 2021, Alignment of language agentsWeidinger et al. 2021, Ethical and social risks of harm from language modelsLanguage model pretraining, analysis, and safety discussionRae et al. 2021, Scaling language models: Methods, analysis & insights from training GopherBorgeaud et al. 2021, Improving language models by retrieving from trillions of tokensSafetyPerez et al. 2022, Red teaming language models with language modelsGleave and Irving 2022, Uncertainty Estimation for Language Reward ModelsMenick et al. 2022, Teaching language models to support answers with verified quotesEarlier proposals for debate and human aspects of debateIrving et al. 2018, AI safety via debateIrving and Askell 2019, AI safety needs social scientistsWe view our recent safety papers as steps towards the broader scalable alignment picture, and continue to build out towards debate and generalizations. We work primarily with large language models (LLMs), both because LLMs are a tool for safety by enabling human-machine communication and are examples of ML models that may cause both near-term and long-term harms.Alignment Team’s portfolio of projectsIn contrast to the Scalable Alignment Team, the Alignment Team explores a wide variety of possible angles on the AI alignment problem. Relative to Scalable Alignment, we check whether a technique could plausibly scale based on conceptual and abstract arguments. This lets us iterate much faster at the cost of getting less useful feedback from reality. To give you a sense of the variety, here’s some examples of public past work that was led by current team members:Learning objectives from human feedback on hypothetical behaviorUnderstanding agent incentives using causal influence diagramsExamples of specification gamingEliciting latent knowledge contestAvoiding side effects through impact regularizationImproving our philosophical understanding of “agency” using Conway’s game of lifeRelating specification problems and Goodhart’s LawDecoupling approval from actions to avoid tamperingThat being said, over the last year there has been some movement away from previous research topics and towards others. To get a sense of our current priorities, here are short descriptions of some projects that we are currently working on:Primarily conceptual:Investigate threat models in which due to increasing AI sophistication, humans are forced to rely on evaluations of outcomes (rather than evaluations of process or reasoning).Investigate arguments about the difficulty of AI alignment, including as a subproblem the likelihood that various AI alignment plans succeed.Compare various decompositions of the alignment problem to see which one is most useful for guiding future work.Primarily empirical:Create demonstrations of inner alignment failures, in a similar style as this paper.Dig deeper into the grokking phenomenon and give a satisfying account of how and why it happens.Develop interpretability tools that allow us to understand how large language models work (along similar lines as Anthropic’s work).Evaluate how useful process-based feedback is on an existing benchmark.Relative to most other teams at DeepMind, on the Alignment team there is quite a lot of freedom in what you work on. All you need to do to start a project is to convince your manager that it’s worth doing (i.e. reduces x-risk comparably well to other actions you could take), and convince enough collaborators to work on the project.In many ways the team is a collection of people with very different research agendas and perspectives on AI alignment that you wouldn’t normally expect to work together. What ties us together is our meta-level focus on reducing existential risk through alignment failures:Every new project must come accompanied by a theory of change that explains how it reduces existential risk; this helps us avoid the failure mode of working on interesting conceptual projects that end up not connecting to the situations we are worried about. It’s encouraged to talk to people on the team with very different perspectives and try to come to agreement, or at least better understand each other’s positions. This can be an explicit project even though it isn’t “research” in the traditional sense.Interfacing with the rest of DeepMindBoth Alignment and Scalable Alignment collaborate extensively with people across DeepMind.For Alignment, this includes both collaborating on projects that we think are useful, and by explaining our ideas to other researchers. As a particularly good example, we recently ran a 2 hour AI alignment “workshop” with over 100 attendees. (That being said, you can opt out of these engagements in order to focus on research, if you prefer.)As Scalable Alignment’s work with large language models is very concrete, we have tight collaborations with a variety of teams, including large-scale pretraining and other language teams, Ethics and Society, and Strategy and Governance.The rolesBetween our two teams we have open roles for Research Scientists (RSs), Research Engineers (REs), and (for Scalable Alignment) Software Engineers. Scalable Alignment RSs can have either a machine learning background or a cognitive science background (or equivalent). The boundaries between these roles are blurry. There are many skills involved in overall Alignment / Scalable Alignment research success: proposing and leading projects, writing and publishing papers, conceptual safety work, algorithm design and implementation, experiment execution and tuning, design and implementation of flexible, high-performance, maintainable software, and design and analysis of human interaction experiments. We want to hire from the Pareto frontier of all relevant skills. This means RSs are expected to have more research experience and more of a track record of papers, but SWEs are expected to be better at scalable software design / collaboration / implementation, with REs in between, but also that REs can and do propose and lead projects if capable (e.g., this recent paper had an RE as last author). For more details on the tradeoffs, see the career section of Rohin’s FAQ.For Scalable Alignment, most of our work focuses on large language models. For Machine Learning RSs, this means experience with natural language processing is valuable, but not required. We are also interested in candidates motivated by other types of harms caused by large models, such as those described in Weidinger et al., Ethical and social risks of harm from language models, as long as you are excited by the goal of removing such harms even in subtle cases which humans have difficulty detecting. For REs and SWEs, a focus on large language models means that experience with high performance computation or large, many-developer codebases is valuable. For the RE role for Alignment, many of the projects you could work on would involve smaller models that are less of an engineering challenge, though there are still a few projects that work with our largest language models.Scalable Alignment Cognitive Scientists are expected to have a track record of research in cognitive scientists, and to design, lead, and implement either standalone human-only experiments to probe uncertainty, or the human interaction components of mixed human / machine experiments. No experience with machine learning is required, but you should be excited to collaborate with people who do!Apply now!We will be evaluating applications on a rolling basis until positions are filled, but we will at least consider all applications that we receive by May 31. Please do apply even if your start date is up to a year in the future, as we probably will not run another hiring round this year. These roles are based in London, with a hybrid work-from-office / work-from-home model.While we do expect these roles to be competitive, we have found that people often overestimate how much we are looking for. In particular:We do not expect you to have a PhD if you are applying for the Research Engineer or Software Engineer roles. Even for the Research Scientist role, it is fine if you don’t have a PhD if you can demonstrate comparable research skill (though we do not expect to see such candidates in practice).We do not expect you to have read hundreds of blog posts and papers about AI alignment, or to have a research agenda that aims to fully solve AI alignment. We will look for understanding of the basic motivation for AI alignment, and the ability to reason conceptually about future AI systems that we haven’t yet built.If we ask you, say, whether an assistive agent would gradient hack if it learned about its own training process, we’re looking to see how you go about thinking about a confusing and ill-specified question (which happens all the time in alignment research). We aren’t expecting you to give us the Correct Answer, and in fact there isn’t a correct answer; the question isn’t specified well enough for that. We aren’t even expecting you to know all the terms; it would be fine to ask what we mean by “gradient hacking”.As a rough test for the Research Engineer role, if you can reproduce a typical ML paper in a few hundred hours and your interests align with ours, we’re probably interested in interviewing you.We do not expect SWE candidates to have experience with ML, but you should have experience with high performance code and experience with large, collaborative codebases (including the human aspects of collaborative software projects).Go forth and apply!Alignment Team:Research ScientistResearch EngineerScalable Alignment Team:Research Scientist - Machine LearningResearch Scientist - Cognitive ScienceResearch EngineerSoftware EngineerDiscuss | https://www.lesswrong.com/posts/nzmCvRvPm4xJuqztv/deepmind-is-hiring-for-the-scalable-alignment-and-alignment | 2022-05-13T12:17:13Z | We are hiring for several roles in the Scalable Alignment and Alignment Teams at DeepMind, two of the subteams of DeepMind Technical AGI Safety trying to make artificial general intelligence go well. In brief,The Alignment Team investigates how to avoid failures of intent alignment, operationalized as a situation in which an AI system knowingly acts against the wishes of its designers. Alignment is hiring for Research Scientist and Research Engineer positions.The Scalable Alignment Team (SAT) works to make highly capable agents do what humans want, even when it is difficult for humans to know what that is. This means we want to remove subtle biases, factual errors, or deceptive behaviour even if they would normally go unnoticed by humans, whether due to reasoning failures or biases in humans or due to very capable behaviour by the agents. SAT is hiring for Research Scientist - Machine Learning, Research Scientist - Cognitive Science, Research Engineer, and Software Engineer positions.We elaborate on the problem breakdown between Alignment and Scalable Alignment next, and discuss details of the various positions.Alignment vs Scalable AlignmentVery roughly, the split between Alignment and Scalable Alignment reflects the following decomposition:Generate approaches to AI alignment Alignment TeamMake those approaches scale Scalable Alignment TeamIn practice, this means the Alignment Team has many small projects going on simultaneously, reflecting a portfolio-based approach, while the Scalable Alignment Team has fewer, more focused projects aimed at scaling the most promising approaches to the strongest models available.Scalable Alignments current approach: make AI critique itselfImagine a default approach to building AI agents that do what humans want:Pretrain on a task like predict text from the internet, producing a highly capable model such as Chinchilla or Flamingo.Fine-tune into an agent that does useful tasks, as evaluated by human judgements.There are several ways this could go wrong:Humans are unreliable: The human judgements we train against could be flawed: we could miss subtle factual errors, use biased reasoning, or have insufficient context to evaluate the task.The agents reasoning could be hidden: We want to know not just what the system is doing but why, both because that might reveal something about what that we dont like, and because we expect good reasoning to better generalize to other situations.Even if the agent is reasoning well, it could fail in other situations: Even if the reasoning is correct this time, the AI could fail to generalize correctly to other situations.Our current plan to address these problem is (in part):Give humans help in supervising strong agents: On the human side, provide channels for oversight and advice from peers, experts in various domains, and broader society. On the ML side, agents should explain their behaviour and reasoning, argue against themselves when wrong, and cite relevant evidence.Align explanations with the true reasoning process of the agent: Ensure that agents are able and incentivized to show their reasoning to human supervisors, either by making reasoning explicit if possible or via methods for interpretability and eliciting latent knowledge.Red team models to exhibit failure modes that dont occur in normal useWe believe none of these pieces are sufficient by themselves:(1) without (2) can be rationalization, where an agent decides what to do and produces an explanation after the fact that justifies its answer.(2) without (1) doesnt scale: The full reasoning trace of the agent might be enormous, it might be terabytes of data even with compression, or exponentially large without compression if the agent is using advanced heuristics which expand into very large human-interpretable reasoning traces.(1)+(2) without (3) will miss rare failures.(3) needs (1)+(2) to define failure.An example proposal for (1) is debate, in which two agents are trained in a zero-sum game to provide evidence and counterarguments for answers, as evaluated by a human judge. If we imagine the exponentially large tree of all possible debates, the goals of debate are to (1) engineer the whole tree so that it captures all relevant considerations and (2) train agents so that the chosen single path through the tree reflects the tree as a whole.Figure 1, AI Safety Needs Social ScientistsThe full picture will differ from the pure debate setting in many ways, and we believe the correct interpretation of the debate idea is agents should critique themselves. There is a large space of protocols that include agents critiquing agents as a component, and choosing between them will involveHuman aspects: Whether a particular human-machine interaction is aligned depends on the humans involved, and we need cognitive science experiments probing these uncertainties.Theoretical strengthenings: There are a variety of theoretical strengthenings to debate such as cross-examination, learning the prior, and market making, but little work has been done to turn these into practical systems (example obstacle to learning the prior).Flexibility in integrating other components of alignment: For example, if strong interpretability tools are developed they should be smoothly integrated into the human-machine interaction, so that the human supervision process has access to internal reasoning.Practicalities: Citing sources, better uncertainty estimation, declining to answer if uncertain, etc.Governance: By explaining themselves, agents can provide a lever for external oversight.The three goals of help humans with supervision, align explanations with reasoning, and red teams will be blurry once we put the whole picture together. Red teaming can occur either standalone or as an integrated part of a training scheme such as cross-examination, which allows agents to interrogate opponent behavior along counterfactual trajectories. Stronger schemes to help humans with supervision should improve alignment with reasoning by themselves, as they grow the space of considerations that can be exposed to humans. Thus, a key part of the Scalable Alignment Teams work is planning out how these pieces will fit together.Examples of our work, involving extensive collaboration with other teams at DeepMind:Risk analyses, both for long-term alignment risks and harms that exist today:Kenton et al. 2021, Alignment of language agentsWeidinger et al. 2021, Ethical and social risks of harm from language modelsLanguage model pretraining, analysis, and safety discussionRae et al. 2021, Scaling language models: Methods, analysis & insights from training GopherBorgeaud et al. 2021, Improving language models by retrieving from trillions of tokensSafetyPerez et al. 2022, Red teaming language models with language modelsGleave and Irving 2022, Uncertainty Estimation for Language Reward ModelsMenick et al. 2022, Teaching language models to support answers with verified quotesEarlier proposals for debate and human aspects of debateIrving et al. 2018, AI safety via debateIrving and Askell 2019, AI safety needs social scientistsWe view our recent safety papers as steps towards the broader scalable alignment picture, and continue to build out towards debate and generalizations. We work primarily with large language models (LLMs), both because LLMs are a tool for safety by enabling human-machine communication and are examples of ML models that may cause both near-term and long-term harms.Alignment Teams portfolio of projectsIn contrast to the Scalable Alignment Team, the Alignment Team explores a wide variety of possible angles on the AI alignment problem. Relative to Scalable Alignment, we check whether a technique could plausibly scale based on conceptual and abstract arguments. This lets us iterate much faster at the cost of getting less useful feedback from reality. To give you a sense of the variety, heres some examples of public past work that was led by current team members:Learning objectives from human feedback on hypothetical behaviorUnderstanding agent incentives using causal influence diagramsExamples of specification gamingEliciting latent knowledge contestAvoiding side effects through impact regularizationImproving our philosophical understanding of agency using Conways game of lifeRelating specification problems and Goodharts LawDecoupling approval from actions to avoid tamperingThat being said, over the last year there has been some movement away from previous research topics and towards others. To get a sense of our current priorities, here are short descriptions of some projects that we are currently working on:Primarily conceptual:Investigate threat models in which due to increasing AI sophistication, humans are forced to rely on evaluations of outcomes (rather than evaluations of process or reasoning).Investigate arguments about the difficulty of AI alignment, including as a subproblem the likelihood that various AI alignment plans succeed.Compare various decompositions of the alignment problem to see which one is most useful for guiding future work.Primarily empirical:Create demonstrations of inner alignment failures, in a similar style as this paper.Dig deeper into the grokking phenomenon and give a satisfying account of how and why it happens.Develop interpretability tools that allow us to understand how large language models work (along similar lines as Anthropics work).Evaluate how useful process-based feedback is on an existing benchmark.Relative to most other teams at DeepMind, on the Alignment team there is quite a lot of freedom in what you work on. All you need to do to start a project is to convince your manager that its worth doing (i.e. reduces x-risk comparably well to other actions you could take), and convince enough collaborators to work on the project.In many ways the team is a collection of people with very different research agendas and perspectives on AI alignment that you wouldnt normally expect to work together. What ties us together is our meta-level focus on reducing existential risk through alignment failures:Every new project must come accompanied by a theory of change that explains how it reduces existential risk; this helps us avoid the failure mode of working on interesting conceptual projects that end up not connecting to the situations we are worried about. Its encouraged to talk to people on the team with very different perspectives and try to come to agreement, or at least better understand each others positions. This can be an explicit project even though it isnt research in the traditional sense.Interfacing with the rest of DeepMindBoth Alignment and Scalable Alignment collaborate extensively with people across DeepMind.For Alignment, this includes both collaborating on projects that we think are useful, and by explaining our ideas to other researchers. As a particularly good example, we recently ran a 2 hour AI alignment workshop with over 100 attendees. (That being said, you can opt out of these engagements in order to focus on research, if you prefer.)As Scalable Alignments work with large language models is very concrete, we have tight collaborations with a variety of teams, including large-scale pretraining and other language teams, Ethics and Society, and Strategy and Governance.The rolesBetween our two teams we have open roles for Research Scientists (RSs), Research Engineers (REs), and (for Scalable Alignment) Software Engineers. Scalable Alignment RSs can have either a machine learning background or a cognitive science background (or equivalent). The boundaries between these roles are blurry. There are many skills involved in overall Alignment / Scalable Alignment research success: proposing and leading projects, writing and publishing papers, conceptual safety work, algorithm design and implementation, experiment execution and tuning, design and implementation of flexible, high-performance, maintainable software, and design and analysis of human interaction experiments. We want to hire from the Pareto frontier of all relevant skills. This means RSs are expected to have more research experience and more of a track record of papers, but SWEs are expected to be better at scalable software design / collaboration / implementation, with REs in between, but also that REs can and do propose and lead projects if capable (e.g., this recent paper had an RE as last author). For more details on the tradeoffs, see the career section of Rohins FAQ.For Scalable Alignment, most of our work focuses on large language models. For Machine Learning RSs, this means experience with natural language processing is valuable, but not required. We are also interested in candidates motivated by other types of harms caused by large models, such as those described in Weidinger et al., Ethical and social risks of harm from language models, as long as you are excited by the goal of removing such harms even in subtle cases which humans have difficulty detecting. For REs and SWEs, a focus on large language models means that experience with high performance computation or large, many-developer codebases is valuable. For the RE role for Alignment, many of the projects you could work on would involve smaller models that are less of an engineering challenge, though there are still a few projects that work with our largest language models.Scalable Alignment Cognitive Scientists are expected to have a track record of research in cognitive scientists, and to design, lead, and implement either standalone human-only experiments to probe uncertainty, or the human interaction components of mixed human / machine experiments. No experience with machine learning is required, but you should be excited to collaborate with people who do!Apply now!We will be evaluating applications on a rolling basis until positions are filled, but we will at least consider all applications that we receive by May 31. Please do apply even if your start date is up to a year in the future, as we probably will not run another hiring round this year. These roles are based in London, with a hybrid work-from-office / work-from-home model.While we do expect these roles to be competitive, we have found that people often overestimate how much we are looking for. In particular:We do not expect you to have a PhD if you are applying for the Research Engineer or Software Engineer roles. Even for the Research Scientist role, it is fine if you dont have a PhD if you can demonstrate comparable research skill (though we do not expect to see such candidates in practice).We do not expect you to have read hundreds of blog posts and papers about AI alignment, or to have a research agenda that aims to fully solve AI alignment. We will look for understanding of the basic motivation for AI alignment, and the ability to reason conceptually about future AI systems that we havent yet built.If we ask you, say, whether an assistive agent would gradient hack if it learned about its own training process, were looking to see how you go about thinking about a confusing and ill-specified question (which happens all the time in alignment research). We arent expecting you to give us the Correct Answer, and in fact there isnt a correct answer; the question isnt specified well enough for that. We arent even expecting you to know all the terms; it would be fine to ask what we mean by gradient hacking.As a rough test for the Research Engineer role, if you can reproduce a typical ML paper in a few hundred hours and your interests align with ours, were probably interested in interviewing you.We do not expect SWE candidates to have experience with ML, but you should have experience with high performance code and experience with large, collaborative codebases (including the human aspects of collaborative software projects).Go forth and apply! | Decision Making/Content Synthesis/Detection and Monitoring | Computer and Mathematical/Life, Physical, and Social Science | null | null | null | null | null | null |
|
news | Kathleen Schaub | Can You Realize Potential With AI in Marketing? | Marketing won’t deliver on AI’s promise unless the human side of the equation is given equal attention. | https://www.cmswire.com/digital-marketing/if-you-want-to-succeed-with-artificial-intelligence-in-marketing-invest-in-people/ | https://www.cmswire.com/-/media/014241e84a4e44bd907eb47e9b299287.ashx | 2022-04-14T15:28:10Z | PHOTO:Monopoly919 Almost every list of martech trends forecast how artificial intelligence (AI) will transform marketing. While AI offers benefits, optimizing automation is only half the job. Marketing wont deliver on AIs promise unless the human side of the equation is given equal attention. Because business value increasingly depends on human factors including agility, innovation and relationships, those companies that best cultivate human potential will be the most successful.Sources of Business Value Are ChangingBusinesses will always need efficiency but squeezing out another drop has diminishing returns. CEOs realize that agility, innovation and improved customer experience will deliver tomorrows gains. KPMG revealed that 67% of CEOs agree with the statement that agility is the new currency of business. If we act too slow, we will be bankrupt. BCG found that 75% of companies say that innovation has become a top three priority, a 10% jump since pre-pandemic. Agility and innovation are essential strategies in a world that the US Army called VUCA (volatile, uncertain, complex, ambiguous). Digital dynamics dramatically accelerated VUCA effects.VUCA reality is especially obvious at a companys edge and causes many persistent marketing challenges. The capriciousness of marketing derives from the same complexity as traffic or natures ecosystems. Science calls these complex adaptive systems, and they acquire their VUCA behavior from many interacting agents (e.g., customers, competitors, social networks, partners and regulatory entities) producing numerous feedback loops which cause situations to change rapidly and unexpectedly. VUCA is why customer journeys look more like a childs scribble than a linear funnel, why a campaign that succeeded for months suddenly failed yesterday, and why calculating marketing ROI remains a frustrating challenge. Markets behave a lot like weather and stock markets.Related Article: A Look at Marketing's Biggest Data Challenges of the 2020s, Part 2AI Can Be a Great Partner for Marketing But With CaveatsAI offers many benefits when working in VUCA environments. Markets are complex, but they are also semi-predictable within the bounds of probability and time. Previous generations of marketers have been largely blind to these patterns because humans are ill-equipped to comb through the mountains of data needed to see them. AI excels at this task. AI can also help ameliorate other human challenges. For example, AI can spot mental biases such as the recency bias where humans tend to over-value what just happened and under-value high impact events of the past. AI can also tirelessly perform repetitive tasks that irritate humans.But AI fails miserably at interpreting ambiguity and nuance. It is extremely literal. Popular culture fantasizes about AI as becoming nearly human. The 2021 bestseller, Klara and the Sun by Nobel laureate Kazuo Ishiguro, is voiced by the sensitive artificial friend of a lonely 14-year-old girl. The 2013 movie "Her" features Scarlett Johansson as brilliant virtual assistant. In real life, AI algorithms flop when generalizing tasks into broader contexts. They perform well only if trained in narrow, focused, tasks. Marketings VUCA world is anything but narrow and focused, and because of this complexity there are many risks when applying AI. Nicholas Bostrum, in the book Superintelligence: Paths, Dangers, and Strategies offers an example of a machine simulation that when given the task of ferrying a passenger to the airport as quickly as possible has no reservations about running over pedestrians. Humans, on the other hand, are well-suited for performing in ambiguous, nuanced situations. We excel at creativity, critical thinking, judgment, problem-solving, and interpersonal skills. We grasp context. For example, we can sense meaning in a customers inflection change and evaluate the subtle trade-offs such as giving a money-losing discount today to increase future loyalty. Humans also excel at physically dexterous work beyond the scope of AI capability.Related Article: How AI-Based Marketing Can Reduce Customer Retention3 Tasks for the AI-Human PartnershipA collaboration between humans and AI is the best opportunity for an agile, innovative response to marketings VUCA digital world. This partnership requires attention to both automation and developing human potential. Three tasks need special focus:Develop Hybrid Processes and Teams That Integrate the AI and Human CapabilityA fresh look at the customer journey reveals skills ideal for both AI-enabled technology and humans everywhere. Take, for example, the mid-funnel phase where customers evaluate alternatives. Customers enjoy digital, self-directed education, and this task can be aided by AI-curated content, AI-enabled prototyping, dynamic pricing and emotional-AI enhanced chat. But when customers get stuck, they need a human problem solver to investigate, discern emotions, match unique situations to appropriate solutions, persuade and build consensus. Customers now bounce between digital and human interactions making the traditional, linear, first-marketing-then-sales process archaic.Related Article: CX Decoded Podcast: Practical Use Cases of AI in MarketingBecome Competent Stewards of AIThe authors of a Harvard Business Review article, Why You Arent Getting More from Your Marketing AI, insist that because of AIs literalness and power, marketers must develop new mindsets and skills to ensure success. The article describes how a consumer products firm reduced the error rate in their sales-volume forecast from 25% to 17%, yet lost money despite improved accuracy. While human decision-makers could tell that the underlying intent of error reduction was improving profits, the AI was ignorant of this assumption. The AI had improved precision in the low-margin products where most errors had been produced but had inadvertently reduced accuracy in high-margin products. This unintended consequence caused the company to underestimate demand for their most profitable products. Partnering with AI will require a long list of new capabilities including training, managing, troubleshooting, decision-making, governance and ethics.Prepare Workers for the Jobs Needed in an AI-Infused WorldThroughout history, technology has displaced outmoded jobs. In 1910, approximately 40% of Americans worked as either household servants or in farm-related jobs, according to the US Bureau of Labor Statistics. That percentage shrunk to 1.65% by 2000. During the same period jobs for professional, technical, managerial, and service workers ballooned. In addition to the new jobs needed to operate AI, leaders must prepare workers for jobs requiring uniquely human skills. For marketing, these jobs include applying scientific and design methods, creative development and production, behavioral sciences, security and privacy, and of course, jobs requiring emotional and social intelligence.The VUCA customer world has produced many persistent challenges for marketing. AI can break through many of these barriers to new levels of value, but only if leaders also cultivate human potential.Kathleen Schaub is a writer and advisor on marketing leaders quest to modernize organizations and operations for greater effectiveness in the complex digital world. She led IDCs CMO Advisory practice for nine years advising hundreds of technology marketing leaders on management best practices. | Content Synthesis/Decision Making/Prediction | Management/Business and Financial Operations | null | null | null | null | null | null |
news | Ftuuky | Tackling multiple tasks with a single visual language model | Article URL: https://www.deepmind.com/blog/tackling-multiple-tasks-with-a-single-visual-language-modelComments URL: https://news.ycombinator.com/item?id=31193521Points: 4# Comments: 0 | https://www.deepmind.com/blog/tackling-multiple-tasks-with-a-single-visual-language-model | 2022-04-28T15:17:00Z | One key aspect of intelligence is the ability to quickly learn how to perform a new task when given a brief instruction. For instance, a child may recognise real animals at the zoo after seeing a few pictures of the animals in a book, despite any differences between the two. But for a typical visual model to learn a new task, it must be trained on tens of thousands of examples specifically labelled for that task. If the goal is to count and identify animals in an image, as in three zebras, one would have to collect thousands of images and annotate each image with their quantity and species. This process is inefficient, expensive, and resource-intensive, requiring large amounts of annotated data and the need to train a new model each time its confronted with a new task. As part of DeepMinds mission to solve intelligence, weve explored whether an alternative model could make this process easier and more efficient, given only limited task-specific information.Today, in the preprint of our paper, we introduce Flamingo, a single visual language model (VLM) that sets a new state of the art in few-shot learning on a wide range of open-ended multimodal tasks. This means Flamingo can tackle a number of difficult problems with just a handful of task-specific examples (in a few shots), without any additional training required. Flamingos simple interface makes this possible, taking as input a prompt consisting of interleaved images, videos, and text and then output associated language. Similar to the behaviour of large language models (LLMs), which can address a language task by processing examples of the task in their text prompt, Flamingos visual and text interface can steer the model towards solving a multimodal task. Given a few example pairs of visual inputs and expected text responses composed in Flamingos prompt, the model can be asked a question with a new image or video, and then generate an answer. Figure 1. Given the two examples of animal pictures and a text identifying their name and a comment about where they can be found, Flamingo can mimic this style given a new image to output a relevant description: This is a flamingo. They are found in the Caribbean..Of the 16 tasks we studied, Flamingo beats all previous few-shot learning approaches when given as few as four examples per task. In several cases, the same Flamingomodel outperforms methods that are fine-tuned and optimised for each task independently and uses multiple orders of magnitude more task-specific data. This should allow non-expert people to quickly and easily use accurate visual language models on new tasks at hand.Figure 2. Left: Few-shot performance of the Flamingo across 16 different multimodal tasks against task specific state-of-the-art performance. Right: Examples of expected inputs and outputs for three of our 16 benchmarks.In practice, Flamingo fuses large language models with powerful visual representations each separately pre-trained and frozen by adding novel architecture components in between. Then it is trained on a mixture of complementary large-scale multimodal data coming only from the web, without using any data annotated for machine learning purposes. Following this method, we start from Chinchilla, our recently introduced compute-optimal 70B parameter language model, to train our final Flamingomodel, an 80B parameter VLM. After this training is done, Flamingo can be directly adapted to vision tasks via simple few-shot learning without any additional task-specific tuning.We also tested the models qualitative capabilities beyond our current benchmarks. As part of this process, we compared our model's performance when captioning images related to gender and skin colour, and ran our model's generated captions through Google's Perspective API, which evaluates toxicity of text. While the initial results are positive, more research towards evaluating ethical risks in multimodal systems is crucial and we urge people to evaluate and consider these issues carefully before thinking of deploying such systems in the real world.Multimodal capabilities are essential for important AI applications, such as aiding the visually impaired with everyday visual challenges or improving the identification of hateful content on the web. Flamingo makes it possible to efficiently adapt to these examples and other tasks on-the-fly without modifying the model. Interestingly, the model demonstrates out-of-the-box abilities of multimodal dialogue, as seen here.Figure 3 - Flamingo can engage in multimodal dialogue out of the box, seen here discussing an unlikely "soup monster" image generated by OpenAI's DALL·E 2 (left), and passing and identifying the famous Stroop test (right).Flamingo is an effective and efficient general-purpose family of models that can be applied to image and video understanding tasks with minimal task-specific examples. Models like Flamingo hold great promise to benefit society in practical ways and were continuing to improve their flexibility and capabilities so they can be safely deployed for the benefit of everyone. Flamingos abilities pave the way towards rich interactions with learned visual language models that can enable better interpretability and exciting new applications, like a visual assistant which helps people in everyday life and were delighted by the results so far. | Content Synthesis/Prediction/Process Automation | Education, Training, and Library/Arts, Design, Entertainment, Sports, and Media | null | null | null | null | null | null |
|
news | Michaël Trazzi | AI Helping Humans with AI | Published on April 26, 2022 10:06 PM GMT"Say we have intelligences that are narrowly human / superhuman on every task you can think of (which, for what it’s worth, I think will happen within 5-10 years). How long before we have self-replicating factories? Until foom? Until things are dangerously out of our control? Until GDP doubles within one year? In what order do these things happen?" (source)When discussing Takeoff Speeds, I feel the debate often gets stuck in some kind of false dichotomy between Fast and Slow, where the crux seems to be about whether some self-improving AI would be able to foom without human assistance.Instead, we could get a Moderate Takeoff (think months or years), where AI does not self-improve (by itself). Instead, there would be a reinforcing feedback loop where progress in AI leads to AI becoming increasingly useful to make progress in AI, with humans in the loop at all times.On top of that, things might just happen privately at some AI lab for a few months until the AI is able to foom by itself, which will look like foom for everyone outside that lab.AI Helping Humans with AIIn Superintelligence, takeoff is defined as the period between AGI and superintelligence. In this post, I will be using as takeoff's starting point the first "AI Helping Humans with AI" (in a meaningful way), or AIHHAI for short, since it will arise before we get fully general intelligence and accelerate AI progress. Here are some examples of what I have in mind for "helping humans in a meaningful way":GPT-N that you can prompt with "I am stuck with this transformer architecture trying to solve problem X". GPT-N would be AIHHAI if it answers along the lines of "In this arXiv article, they used trick Z to solve problems similar to X. Have you considered implementing it?", and using an implementation of Z would solve X >50% of the time.Another example would be if some code generation tool like Copilot makes ML engineers substantially more productive in writing ML code. Making predictions about productivity is tricky and hard to measure, but it would involve something like accepting code suggestions a decent amount, like 100x more than what engineers using Copilot currently accept.(Sidenote: My general impression from people using Copilot is that they believe it's becoming increasingly useful, and use it on a daily basis, though it rarely auto-completes the right line of code right away. Given that we had Codex/Copilot last year, and that Sam Altman hinted at some new Codex capabilities in his ACX Q&A[2], I think we will get some impressive release for Copilot/Codex sometime this year that most engineers will want to use. (Similar to how a lot of developers are used to using Visual Studio's suite, especially intellisense.) The model I have in mind for "AI helping humans with AI" could be this one, though it will probably require 1-2x more iterations.)Moderate TakeoffA Moderate Takeoff is defined[1] as "one that occurs over some intermediary temporal interval, such as months or years". For AIHHAI, we can distinguish two cases:AIHHAI is developed by some AI lab working on it privately. That lab has a lead compared to the other labs, since they are working more productively using AIHHAI. Thus, they might reach superintelligence first, without allowing enough time for the rest of the world to compete.AIHHAI is made public, or quickly (think months) reproduced by others publicly or privately. In any case, some AIHHAI model is eventually made public, and there is not only one group using AIHHAI--other companies are adopting the same strategy (multipolar scenario).For the first case, you can think of OpenAI using a new version of Copilot internally, that enables their team to quickly build another, even better version of Copilot without releasing the intermediate stage publicly. They would already have a couple of months of lead time in terms of engineering, and after using their latest version, the lead (in terms of how long it would take for a competitor to catch up using publicly available tools) would increase over time due to the compounding advantage.For the second case, you could consider a similar scenario where they do a public release, or other labs like Google Research just build something equivalent a couple of months after. Even if is not public, the thing might be so impressive that employees talk about it to their friends or anonymously, and the news eventually gets leaked. In that regime, you get many companies possibly expanding capabilities in Code Generation, possibly by scaling models and datasets aggressively. The AI race becomes an engineering race, though we might need more scientific breakthroughs for scaling laws to continue (for more on this, see section "Why Code Generation might Plateau Instead" at the end).It is unclear to me which case is more likely. On the one hand, the usefulness of AIHHAI would cause some rapid self-improvement of the system {Humans developing the AI + AI}, and the pace would be so quick the model would not have time to leak. On the other hand, the results being exciting enough increases the probability of the news getting leaked and that other players start (were already?) investing in similar models heavily.Self-improving (Humans + AI)One thing I have not seen discussed a lot is how the system "humans + AI" could have different takeoff speeds, where, for this hybrid system, takeoff would basically mean "going from {human + AIHHAI} to {human + superintelligent AI}".Note that there are many such systems we could study, such as:System 1. {All humans + all used resources, including ML models and science}System 2. {All humans working on code generation + all resources they use}System 3. {Employees working on Copilot-(K +1)/GPT-(N+1) + Copilot-K/GPT-N}The importance of closed systemsThinking about how "closed" or "small" a system is helps us to understand its kinetics, and also has some implications regarding AI races.Indeed, a small system using only its own output as input could independently foom, without encountering major bottlenecks. Conversely, it your system requires a lot of insights in mathematics or engineering to overcome bottlenecks, foom becomes less likely. However, a smaller model with less humans might have less resources, labor-wise.With regard to races, if your system does not require the output of other disciplines to make progress, you could keep it private for longer. (If the system required a lot of insights, publishing preliminary results about the system could prove necessary to get the outside world to publish research relevant to your system.) In practice:System 1 is a closed system. Thinking about how fast it would improve basically brings us back to "when will GDP double in a year" territory. The abstraction is not precise enough to give insights about kinetics without basically studying macro-economics.System 2 is not closed, since it actually uses insights from other discipline in CS/Math and others as inputs. That said, the research in code generation directly helps the humans doing work relevant to code generation (assuming they use it).System 3 would also definitely need to take research and tools from somewhere else as inputs, though you could assume that, as N gets bigger, most of insights on how to debug deep learning models would be actually fed to Copilot-N's training data via telemetry (or would be accessible via GPT-N's Q&A interface).Among the systems presented above, System 3 could experience exponential self-improvement in complete stealth mode and is therefore worth studying in more details.Self-improving Code GenerationI am especially interested in Sstem 3 (=="Employees working on Copilot-(K +1)/GPT-(N+1) + Copilot-K/GPT-N"), because progress in AIHHAI straightforwardly leads to productivity increases in developing AIHHAI.Let's imagine that Copilot-N successfully auto-completes 1% of code lines, and for the sake of argument people immediately press "Tab" to move to the next line in those cases. Without thinking about the fact that the auto-completed parts would actually be the easiest parts of the developer's pre-existing workload, this would make developers ~1% more productive.You would get a 1.01 multiplier in productivity, that would make the speed of development 1.01x faster, especially the development of a Copilot-(N+1), which would in turn imply 2% more "perfect auto-complete" than what we started with, etc.Obviously, the Copilot we have right now is still pretty rudimentary. It is mostly useful for beginners to an API or language, not for doing cutting edge PyTorch development. And you could say that a lot of ML work is done outside of coding, like reading papers and building infrastructure. (More on this in my Codex Skeptic FAQ).I agree that improvements in productivity from AI are currently marginal, though one should consider what those improvements might be for future versions, including things like question-answering GPT-N helping to debug high-level problems. It is also important to keep in mind that engineers from many different fields are currently using Copilot regularly, and could benefit more from code generation than ML engineers (think web programmers). Those engineers would in turn accelerate GDP growth, which would fasten the total amount of investments in AI.How that would actually lead to FoomWhen we will get from marginal improvements in Code Generation to some Q&A language model that helps you re-consider your transformer architecture, the gains of productivity will start to be more substantial.Assuming we are in the scenario where one company (think OpenAI) has access to increasingly better code generation tools (that no one else has access to), and possibly also some lead in terms of useful language models to debug their tools, they might get a bigger and bigger lead in how useful their AIHHAI is.At some point, you would be able to ask more open questions, solving harder and harder tasks, for complex things like making money in financial markets, or just setting strategy for the entire company. In a matter of months, the company would achieve extraordinary economic output, re-investing everything into AIHHAI.Eventually, the AIHHAI would be optimizing developer productivity over some time horizon, not just completing the next line of code. When something like planning is implemented (eg. expected reward maximization), the AIHHAI might just Foom by modifying its own code to generate code better.Why Code Generation might Plateau instead As Kaplan mentions in his recent talk about the implications of Scaling Laws for Code Generation, current progress is bottlenecked by:Data available. If you remove duplicates, you have about 50B tokens of Python code on Github. In comparison, GPT-3 was trained on about 300B tokens. You could possibly do data augmentation or transfer learning to bypass this problem. Though Kaplan also guesses that in AlphaCode, researchers were also bottlenecked by dataset size when scaling things up. On top of that, the Chinchilla paper shows that scaling data about as much as model size is also necessary for compute-optimal training.Writing longer programs. Assuming you have a constant error rate when writing your program token by token, you get an exponential decay in how likely your program is to solve the problem. (They tested this by asking a model to write longer programs doing essentially the same thing, and they got an exponentially worse “pass rate”.) Therefore, asking Codex to write very programs might plateau even when scaling models, at least with our current methods. (Kaplan mentions that probably a method would imply doing what humans do, aka writing bad code until it works, instead of just asking the model to write one long piece of code.)ConclusionModerate Takeoffs (think months) are a useful abstraction to think about scenarios between Foom and Slow Takeoffs (years, decades).When discussing Takeoff speed, it is worth noting that progress can be heterogeneous between what happens privately and publicly, especially as we get closer to superintelligence. This is especially true when considering humans will be using the AIs they developed to build AI even faster.More generally, discussion on Takeoff Speed has historically focused on whether an AI would be able to Foom, when in practice there will be an intermediate regime where the system {the humans building the AI + the AI} will self-improve, not the AI by itself.Even if this intermediate regime might, through compounding progress, lead to Foom, our current understanding of scaling laws predicts that we will soon be bottlenecked by dataset size and programs that cannot be longer than a certain size.(Acknowledgements: thanks to the edits suggested by Justis, facilitated by Ruby.)^(Superintelligence, Bostrom) Chapter 4.^Sam did a Q&A for an Astral Codex Ten meetup in September 2021. I will not be linking to the post doing a recap of what he said since it was taken down from LW.Sam's take on Codex was summarized in the post as: a) current codex is bad compared to what they will have next b) they are making fast progress c) Codex is Discuss | https://www.lesswrong.com/posts/aqTAd7KzsYmHWYdei/ai-helping-humans-with-ai | 2022-04-26T22:06:19Z | "Say we have intelligences that are narrowly human / superhuman on every task you can think of (which, for what its worth, I think will happen within 5-10 years). How long before we have self-replicating factories? Until foom? Until things are dangerously out of our control? Until GDP doubles within one year? In what order do these things happen?" (source)When discussing Takeoff Speeds, I feel the debate often gets stuck in some kind of false dichotomy between Fast and Slow, where the crux seems to be about whether some self-improving AI would be able to foom without human assistance.Instead, we could get a Moderate Takeoff (think months or years), where AI does not self-improve (by itself). Instead, there would be a reinforcing feedback loop where progress in AI leads to AI becoming increasingly useful to make progress in AI, with humans in the loop at all times.On top of that, things might just happen privately at some AI lab for a few months until the AI is able to foom by itself, which will look like foom for everyone outside that lab.AI Helping Humans with AIIn Superintelligence, takeoff is defined as the period between AGI and superintelligence. In this post, I will be using as takeoff's starting point the first "AI Helping Humans with AI" (in a meaningful way), or AIHHAI for short, since it will arise before we get fully general intelligence and accelerate AI progress. Here are some examples of what I have in mind for "helping humans in a meaningful way":GPT-N that you can prompt with "I am stuck with this transformer architecture trying to solve problem X". GPT-N would be AIHHAI if it answers along the lines of "In this arXiv article, they used trick Z to solve problems similar to X. Have you considered implementing it?", and using an implementation of Z would solve X >50% of the time.Another example would be if some code generation tool like Copilot makes ML engineers substantially more productive in writing ML code. Making predictions about productivity is tricky and hard to measure, but it would involve something like accepting code suggestions a decent amount, like 100x more than what engineers using Copilot currently accept.(Sidenote: My general impression from people using Copilot is that they believe it's becoming increasingly useful, and use it on a daily basis, though it rarely auto-completes the right line of code right away. Given that we had Codex/Copilot last year, and that Sam Altman hinted at some new Codex capabilities in his ACX Q&A[2], I think we will get some impressive release for Copilot/Codex sometime this year that most engineers will want to use. (Similar to how a lot of developers are used to using Visual Studio's suite, especially intellisense.) The model I have in mind for "AI helping humans with AI" could be this one, though it will probably require 1-2x more iterations.)Moderate TakeoffA Moderate Takeoff is defined[1] as "one that occurs over some intermediary temporal interval, such as months or years". For AIHHAI, we can distinguish two cases:AIHHAI is developed by some AI lab working on it privately. That lab has a lead compared to the other labs, since they are working more productively using AIHHAI. Thus, they might reach superintelligence first, without allowing enough time for the rest of the world to compete.AIHHAI is made public, or quickly (think months) reproduced by others publicly or privately. In any case, some AIHHAI model is eventually made public, and there is not only one group using AIHHAI--other companies are adopting the same strategy (multipolar scenario).For the first case, you can think of OpenAI using a new version of Copilot internally, that enables their team to quickly build another, even better version of Copilot without releasing the intermediate stage publicly. They would already have a couple of months of lead time in terms of engineering, and after using their latest version, the lead (in terms of how long it would take for a competitor to catch up using publicly available tools) would increase over time due to the compounding advantage.For the second case, you could consider a similar scenario where they do a public release, or other labs like Google Research just build something equivalent a couple of months after. Even if is not public, the thing might be so impressive that employees talk about it to their friends or anonymously, and the news eventually gets leaked. In that regime, you get many companies possibly expanding capabilities in Code Generation, possibly by scaling models and datasets aggressively. The AI race becomes an engineering race, though we might need more scientific breakthroughs for scaling laws to continue (for more on this, see section "Why Code Generation might Plateau Instead" at the end).It is unclear to me which case is more likely. On the one hand, the usefulness of AIHHAI would cause some rapid self-improvement of the system {Humans developing the AI + AI}, and the pace would be so quick the model would not have time to leak. On the other hand, the results being exciting enough increases the probability of the news getting leaked and that other players start (were already?) investing in similar models heavily.Self-improving (Humans + AI)One thing I have not seen discussed a lot is how the system "humans + AI" could have different takeoff speeds, where, for this hybrid system, takeoff would basically mean "going from {human + AIHHAI} to {human + superintelligent AI}".Note that there are many such systems we could study, such as:System 1. {All humans + all used resources, including ML models and science}System 2. {All humans working on code generation + all resources they use}System 3. {Employees working on Copilot-(K +1)/GPT-(N+1) + Copilot-K/GPT-N}The importance of closed systemsThinking about how "closed" or "small" a system is helps us to understand its kinetics, and also has some implications regarding AI races.Indeed, a small system using only its own output as input could independently foom, without encountering major bottlenecks. Conversely, it your system requires a lot of insights in mathematics or engineering to overcome bottlenecks, foom becomes less likely. However, a smaller model with less humans might have less resources, labor-wise.With regard to races, if your system does not require the output of other disciplines to make progress, you could keep it private for longer. (If the system required a lot of insights, publishing preliminary results about the system could prove necessary to get the outside world to publish research relevant to your system.) In practice:System 1 is a closed system. Thinking about how fast it would improve basically brings us back to "when will GDP double in a year" territory. The abstraction is not precise enough to give insights about kinetics without basically studying macro-economics.System 2 is not closed, since it actually uses insights from other discipline in CS/Math and others as inputs. That said, the research in code generation directly helps the humans doing work relevant to code generation (assuming they use it).System 3 would also definitely need to take research and tools from somewhere else as inputs, though you could assume that, as N gets bigger, most of insights on how to debug deep learning models would be actually fed to Copilot-N's training data via telemetry (or would be accessible via GPT-N's Q&A interface).Among the systems presented above, System 3 could experience exponential self-improvement in complete stealth mode and is therefore worth studying in more details.Self-improving Code GenerationI am especially interested in Sstem 3 (=="Employees working on Copilot-(K +1)/GPT-(N+1) + Copilot-K/GPT-N"), because progress in AIHHAI straightforwardly leads to productivity increases in developing AIHHAI.Let's imagine that Copilot-N successfully auto-completes 1% of code lines, and for the sake of argument people immediately press "Tab" to move to the next line in those cases. Without thinking about the fact that the auto-completed parts would actually be the easiest parts of the developer's pre-existing workload, this would make developers ~1% more productive.You would get a 1.01 multiplier in productivity, that would make the speed of development 1.01x faster, especially the development of a Copilot-(N+1), which would in turn imply 2% more "perfect auto-complete" than what we started with, etc.Obviously, the Copilot we have right now is still pretty rudimentary. It is mostly useful for beginners to an API or language, not for doing cutting edge PyTorch development. And you could say that a lot of ML work is done outside of coding, like reading papers and building infrastructure. (More on this in my Codex Skeptic FAQ).I agree that improvements in productivity from AI are currently marginal, though one should consider what those improvements might be for future versions, including things like question-answering GPT-N helping to debug high-level problems. It is also important to keep in mind that engineers from many different fields are currently using Copilot regularly, and could benefit more from code generation than ML engineers (think web programmers). Those engineers would in turn accelerate GDP growth, which would fasten the total amount of investments in AI.How that would actually lead to FoomWhen we will get from marginal improvements in Code Generation to some Q&A language model that helps you re-consider your transformer architecture, the gains of productivity will start to be more substantial.Assuming we are in the scenario where one company (think OpenAI) has access to increasingly better code generation tools (that no one else has access to), and possibly also some lead in terms of useful language models to debug their tools, they might get a bigger and bigger lead in how useful their AIHHAI is.At some point, you would be able to ask more open questions, solving harder and harder tasks, for complex things like making money in financial markets, or just setting strategy for the entire company. In a matter of months, the company would achieve extraordinary economic output, re-investing everything into AIHHAI.Eventually, the AIHHAI would be optimizing developer productivity over some time horizon, not just completing the next line of code. When something like planning is implemented (eg. expected reward maximization), the AIHHAI might just Foom by modifying its own code to generate code better.Why Code Generation might Plateau instead As Kaplan mentions in his recent talk about the implications of Scaling Laws for Code Generation, current progress is bottlenecked by:Data available. If you remove duplicates, you have about 50B tokens of Python code on Github. In comparison, GPT-3 was trained on about 300B tokens. You could possibly do data augmentation or transfer learning to bypass this problem. Though Kaplan also guesses that in AlphaCode, researchers were also bottlenecked by dataset size when scaling things up. On top of that, the Chinchilla paper shows that scaling data about as much as model size is also necessary for compute-optimal training.Writing longer programs. Assuming you have a constant error rate when writing your program token by token, you get an exponential decay in how likely your program is to solve the problem. (They tested this by asking a model to write longer programs doing essentially the same thing, and they got an exponentially worse pass rate.) Therefore, asking Codex to write very programs might plateau even when scaling models, at least with our current methods. (Kaplan mentions that probably a method would imply doing what humans do, aka writing bad code until it works, instead of just asking the model to write one long piece of code.)ConclusionModerate Takeoffs (think months) are a useful abstraction to think about scenarios between Foom and Slow Takeoffs (years, decades).When discussing Takeoff speed, it is worth noting that progress can be heterogeneous between what happens privately and publicly, especially as we get closer to superintelligence. This is especially true when considering humans will be using the AIs they developed to build AI even faster.More generally, discussion on Takeoff Speed has historically focused on whether an AI would be able to Foom, when in practice there will be an intermediate regime where the system {the humans building the AI + the AI} will self-improve, not the AI by itself.Even if this intermediate regime might, through compounding progress, lead to Foom, our current understanding of scaling laws predicts that we will soon be bottlenecked by dataset size and programs that cannot be longer than a certain size.(Acknowledgements: thanks to the edits suggested by Justis, facilitated by Ruby.)^(Superintelligence, Bostrom) Chapter 4.^Sam did a Q&A for an Astral Codex Ten meetup in September 2021. I will not be linking to the post doing a recap of what he said since it was taken down from LW.Sam's take on Codex was summarized in the post as: a) current codex is bad compared to what they will have next b) they are making fast progress c) Codex is <1y away from having a huge impact on developers. | Digital Assistance/Content Creation/Decision Making | Computer and Mathematical/Education, Training, and Library | null | null | null | null | null | null |
|
news | Dan Hendrycks | A Bird's Eye View of the ML Field [Pragmatic AI Safety #2] | Published on May 9, 2022 5:18 PM GMTThis is the second post in a sequence of posts that describe our models for Pragmatic AI Safety. The internal dynamics of the ML field are not immediately obvious to the casual observer. This post will present some important high-level points that are critical to beginning to understand the field, and is meant as background for our later posts.Driving dynamics of the ML fieldHow is progress made in ML? While the exact dynamics of progress are not always predictable, we will present three basic properties of ML research that are important to understand.The importance of defining the problemA problem well-defined is a problem half solved. —John Dewey (apocryphal)The mere formulation of a problem is often more essential than its solution, which [...] requires creative imagination and marks real advances in science.—Albert EinsteinI have been struck by how important measurement is... This may seem basic, but it is amazing how often it is not done and how hard it is to get right. —Bill GatesIf you cannot measure it, you cannot improve it.—Lord Kelvin (paraphrase)For better or worse, benchmarks shape a field. —David Patterson, Turing award winnerProgress in AI arises from objective evaluation metrics.—David McAllesterScience requires that we clarify the question and then refine the answer: it is impossible to solve a problem until we know what it is. Empirical ML research, which is the majority of the field, progresses through well-defined metrics for progress towards well-defined goals. Once a goal is defined empirically, is tractable, and is incentivized properly, the ML field is well-equipped to make progress towards it.A variation on this model is that artists (writers, directors, etc.) come first. They help give ideas, and philosophers add more logical constraints to those ideas to come up with goals or questions, and finally scientists can help make iterative progress towards those goals. To give an example: golems, animate beings created from clay, were a common symbol in Jewish folklore, and at times could create evil. There are many other historical stories of automatons creating problems for humans (Pandora, Frankenstein, etc.). More recent stories, like Terminator, made the ideas more concrete, even as they included fantasy elements not grounded in reality. More recently, Bostrom (2002) recognized the possibility for existential risk from AI, and grounded it in the field of artificial intelligence. Since then, others have worked on concretizing and solving technical problems associated with this risk.For completeness, it’s worth mentioning that sometimes through tinkering people find solutions to questions people were not posing, though many of those solutions aren’t solutions for interesting questions.MetricsAs David McAllester writes, machine learning and deep learning is fundamentally driven by metrics. There are many reasons for this. First, having a concrete metric for a problem is a sign that the problem has been compressed into something simpler and more manageable (see the discussion of microcosms below), which makes it more likely that progress can be made on it. By distilling a problem into a few main components, it is also far clearer when progress has been made, even if that progress is relatively small.Unlike human subjective evaluation, most metrics are objective: even if they do not perfectly track the properties of a system that we care about, it is obvious when somebody has performed well or poorly on an evaluation. Metrics can also be used across methods, which makes different approaches directly comparable rather than relying on many different measuring sticks. High-quality datasets and benchmarks concretize research goals, make them more tractable, and can spur large community research efforts. Good metrics also can allow us to detect minor improvements, which enables iterative progress and accumulated improvements.Metrics that rely too heavily on human evaluation are suspect. First, human evaluation is extremely expensive compared to automatic evaluation, and often requires IRB approval in the case of academic labs. This significantly reduces its utility. Second, human evaluation is slow, which makes feedback loops sparser and thus makes problems far more difficult to iterate on. Third, human feedback is often noisier and more subjective than many automatic evaluation metrics.For these reasons, the first step working towards working on a new ML problem is to define a good metric.Limits of theory for machine learningThere can be a tendency for new fields to try to formulate new problems as mathematics problems, but this is not always possible. Unfortunately, if machine learning just isn’t productively formulated as an applied mathematics problem, throwing lots of resources at it as an applied mathematics problem isn’t going to work. Currently, deep learning progress is not well-formulated as an applied mathematics problem.Consider progress in computer vision:1960s: Beginnings in artificial intelligence, image processing and pattern recognition1970s: Foundational work on image formation: Horn, Koenderink, Longuet-Higgins …1980s: Vision as applied mathematics: geometry, multi-scale analysis, probabilistic modeling, control theory, optimization1990s: Geometric analysis largely completed, vision meets graphics, statistical learning approaches resurface2000s: Significant advances in visual recognition2010s: Progress continues, aided by the availability of large amounts of visual data and massive computing power. Deep learning has become pre-eminentWhile some researchers tried to treat image recognition as an applied math problem, as mathematical solutions have many desirable properties, this did not work and they were forced to shift paradigms. Research communities that need to solve problems don’t get to choose their favorite paradigm. Even if the current paradigm is flawed and a new paradigm is needed, this does not mean that their favorite paradigm will become that new paradigm. They cannot ignore or bargain with the paradigm that will actually work; they must align with it. We may want a machine learning problem to be a math problem, but that does not mean it is.Overtheorizing is a common failure case for solving technical problems, and it happened in AI. Imagine you want to build a mail delivery robot. If you start working on differential geometry, you’re doing it wrong and overtheorizing. It simply is not necessary to solve that problem in order to build the mail delivery robot. The slide with the figure above is from 1995, and argues that many of the subtasks thought relevant for object detection were unnecessary for the task: today, this is obvious. Despite a large number of extremely smart people working on CV as an applied mathematics problem for many years, they did not solve CV, because CV is not an applied mathematics problem.A similar dynamic happened with what is now known as “good old fashioned AI” using explicit high-level symbolic and search algorithms. While it produced some successes, these inclinations have been mainly replaced by deep learning in the state of the art.In natural science, the objective is to discover, study, and explain natural phenomena. Consider optics (1650), thermodynamics (1824), aerodynamics (1757), information theory (1948), and computer science (1950). In natural science, the goal standard is reproducible experiments in controlled conditions. The use of mathematics and statistics is common, since they help provide a theoretical underpinning. In natural science, control over the environment in an experiment is crucial.In engineering science, the basic idea is to invent new artifacts. Consider the telescope (1608), steam engine (1695), sail boat (BCE), teletype (1906), computer (1941), etc. In engineering, one must use intuition, creative inspiration, tinkering, and trying many things to see what sticks. Invention is often created by accident, by people who created many failing inventions. Some theory is surely needed (a sailboat is far less likely if one does not know that wind can push objects) but it often does not need to be very detailed. Engineering is often bottom-up and creative, and theory does not dictate an exact or even approximate design. For a discussion of deep learning and limits of top-down design, see here. In many cases, natural science follows engineering science, where natural science attempts to explain an artifact created by engineering.Yann LeCun discusses this distinction further here. The distinction may also be analogized to the distinction between rationalism and empiricism.DL is mostly engineering science. We have few theories, and the theories we do have provide limited guidance. Attempts at generalizing phenomena in DL are very often not robust. For example, previously l1 loss worked noticeably better than l2 loss for pixel regression, but not recently. Every year, new papers are published that purport to explain the generalization ability of neural networks, but the subject has not been settled. There are some exceptions: for instance, the fact that bigger Transformer models consistently perform better. Since these kind of observations are more robust than theories, and since the phenomena are fragile, the theories are even more fragile. Even ML theorist PhD students will readily admit that theory for DL has not (yet) created action-relevant insights about DL.DL has many factors that increase the probability of surprises or unknown unknowns: complicatedness, fast changes, ambiguity, opacity, and interconnectedness/multiple causes. In view of this, researchers that work for years towards advancing capabilities to get a specific downstream safety research outcome are not being precautious since this field is not that predictable. This level of unpredictability makes armchair/whiteboard research less likely to succeed; much research must be bottom-up and iterative.DL methods are often not even obvious in hindsight, unlike more theoretical fields where proofs are self-evident to audiences with an appropriate background. Why do residual connections work? Why does fractal data augmentation help? This property is why tinkering and rapid experimentation is so important.If hindsight doesn’t even work, that means that it is extremely difficult to achieve foresight into the impact of a proposed project. Instead, it is necessary to fall back on heuristics. First, does the project seem palpably unrealistic on its face? This could be grounds for dismissing it. Otherwise, if there are not yet results, one should instead defer to the researcher’s prior track record, with perhaps some benefit of the doubt given to younger researchers who have not had time to accrue track records.Creative Destruction in MLIf a temple is to be erected a temple must be destroyed: that is the law – let anyone who can show me a case in which it is not fulfilled!—Friedrich NietzscheLike many other fields, ML experiences creative destruction: periods when new and better technology rapidly replaces older methods. This can be related to models for exponential growth as aggregated sigmoid jumps rather than a truly smooth acceleration.Research, especially research into algorithms, architectures, training techniques, and the like, is often entirely wiped away by “tsunamis.” For example, natural language processing techniques from before BERT are almost entirely irrelevant. A large number of computer vision techniques devised prior to AlexNet have almost no influence on the field today. Lastly, speech recognition became an essentially solved problem with the advent of deep learning. After a tsunami, methods often work out-of-the-box, require less effort to use, and performance is much higher.Imagine you’re in the late 2000s and care about AI safety. It is very difficult to imagine that you could have developed any techniques or algorithms which would transfer to the present day. However, it might have been possible to develop datasets that would be used far into the future or amass safety researchers which could enable more safety research in the future. For instance, if more people had been focused on safety in 2009, we would likely have many more professors working on it in 2022, which would allow more students to be recruited to work on safety. In general, research ecosystems, safety culture, and datasets survive tsunamis.By some indications, reinforcement learning is poised for a tsunami. RL currently is extremely unwieldy, requiring careful tuning of a large number of parameters and methods to achieve any reasonable results at all. It often (not always) uses Markovian assumptions and exponential decay, which tends to be replaced by paradigms that better model long-range dependencies (e.g., consider hidden markov models in NLP which have been superseded). Unlike language and vision, RL has not yet been revolutionized by large-scale models. As such, RL appears to have the properties of a field prior to a tsunami. If this is true, it does not bode well for RL safety research methods, which could be washed away.As a result, developing safety proposals for DL is likely to be a safer bet than RL in both the short and long term. There appears to be a reasonable chance that DL will not be washed away. In this case, it is extremely important to have a high number of people working on empirical methods. If DL is not the last tsunami, research in DL will still aid dataset creation, research ecosystem building, and safety culture for later.Thinking of methods that will work in the current paradigm and not using this research to help ecosystem building in the larger ML community is doubly bad: it stands the risk of being wiped away by a tsunami, and it didn’t even help research ecosystem building. If one expects more tsunamis, pay more attention to prestige and resources.The Bitter Lesson argues that there will be more creative destruction and that human ingenuity will matter less and less. Although we do not believe the following scenario is likely, in the long run, AI risk reduction may even be a matter of banal factors: compute, data, and engineering resources allocated towards safety goals, in comparison with other capabilities goals. The amount allocated towards these goals would depend on how important safety is to the system designers, which means safety buy-in among researchers and tech leaders would be a high priority.The ML research ecosystemIf we want to have any hope of influencing the ML community broadly, we need to understand how it works (and sometimes doesn’t work) at a high level.Where is ML research published?The machine learning field, both in industry and academia, is dominated by conferences. Except for a few splashy and publicized industry papers published in Nature, the most important ML papers are all published in a relatively small number of ML conferences. Though there are a few journals, they are not very impactful, nor are papers that are exclusively published at workshops. The vast majority of papers are submitted to conferences.Prior to publication, most ML research papers are posted as preprints on arxiv.org (pronounced “archive”). Because of the speed that the ML research field advances, it is not sufficient for ML researchers to simply read papers that have been published in conferences, since publication typically doesn’t happen for several months until after a paper is posted on arXiv. Instead, ML researchers need to keep updated on the latest preprints. Many do so via relying on word of mouth or Twitter to indicate which papers are important, while others make more of an effort to identify important papers on arXiv themselves.Composition of ML SubfieldsThe size of different machine learning subfields might not be immediately obvious to somebody outside of the field. The graphs and statistics that follow are based on an analysis we conducted of publicly-available data from Semantic Scholar (data is approximate, and is only as accurate as the Semantic Scholar data), and use citations as a rough metric for the size of different subfields.By far the largest subfield within machine learning is computer vision (CV), with the Conference on Computer Vision and Pattern Recognition (CVPR) being by far the most influential conference in machine learning in terms of the number of citations. Natural language processing (NLP) is much smaller than computer vision, with the largest conference, the Proceedings of the Association for Computational Linguistics (ACL), getting about a fifth the total citations of CVPR. Even smaller than NLP are reinforcement learning and robotics. The most influential conference in this area is the IEEE International Conference on Robotics and Automation (ICRA), which receives slightly more than half the citations of ACL (so an order of magnitude less than CVPR). There are also three conferences that publish many kinds of ML research: the International Conference on Learning Representations (ICLR), Neural Information Processing Systems (NeurIPS), and the International Conference on Machine Learning (ICML). These conferences can contain NLP, CV, and RL, and all rank in the top four conferences along with CVPR.The top conferences do not gain their influence simply by having many papers. ICRA and IROS publish as many papers as CVPR (Figure 1), but mean and median citation counts in CV and ML conferences (particularly ICLR) are far above those in NLP and RL/robotics (see Figure 3 and 4).NLP has been growing recently, especially since BERT reduced barriers to entry. For instance, ACL papers from 2020 (see Figure 5) only got 2-3x fewer citations compared with CVPR papers, showing some relative growth of the field (note NAACL and ICCV were not held in 2020). Still, CV was the largest subfield, and RL/robotics has not gained any ground at all.Explanation of ML SubfieldsMicrocosms“Microcosms” in this context are simpler subproblems of harder problems that mirror the larger problems but are more tractable. To give a concrete example, to make an aerodynamic bicycle for a race, you might first want to start by making it aerodynamic in a wind tunnel. Although the conditions of a wind tunnel are simplified compared with the real world, they are similar enough to yield useful insights. Some simpler problems are not that microcosmic, however, because they may make too many simplifying assumptions and thus not be representative of the macrocosm (e.g., some gridworlds).In general, the majority of research inquiry is conducted on microcosms. Work on these problems can inform us about the future or even directly influence future systems, as some current ML algorithms are highly scalable and may be a part of long-term AI systems. Next, we will discuss two of the most important microcosms in machine learning.Why do DL researchers look at ImageNet and image classification so much?Historically, CV and pattern recognition in general have gone together. The ImageNet dataset, an old compilation of images and associated labels for those images, continues to drive the field. Why? It is a good microcosm. Researchers have found that performance on ImageNet is highly predictive of downstream performance in numerous applications like segmentation, clustering, object detection, and downstream image recognition. Many researchers also view image understanding as a problem that is upstream of video understanding, which is important for a whole range of additional applications.Deep learning researchers use CV as a representation learning proxy, not because they are particularly interested in classifying cats. Most DL building blocks (some activation functions, batch normalization, some optimizers, dropout, convolutions, residual connections, etc.) have emerged from researching image classification, so it is a useful whetstone to sharpen DL algorithms against. Consequently, people do not think they’re just researching vision but instead researching how to do representation learning and deep learning in general. There are, of course, exceptions: semantic segmentation, depth maps, and downstream applications of CV do not necessarily help with general representation learning.Many economic incentives and funding sources are available for CV because vision is useful in many industries, which keeps the field large.CV is also large because it’s very possible to improve image classification performance with ideas since scaling is less important. Many methods do not currently have consistent returns to scale: for instance, current masked autoencoders do not consistently perform better with more data or compute. Most researchers, especially academics, are heavily incentivized towards fields where improvements can be made with ideas, because ideas can be found by competent researchers with good taste who spend enough time tinkering. Meanwhile, in NLP, a larger share of progress is made by simple scaling, which makes for less interesting work for researchers, and less incentivized outside very large labs with access to compute resources.In DL, findings for some data transfer to other types of data. This is partly because different kinds of natural data have similar underlying statistical properties. Consequently, studying how to do representation for some data distributions often transfers to other data distributions. Many techniques, such as residual connections, that helped with ImageNet helped with discrete signals (text) and other continuous signals (speech).Why is NLP the second largest field?As detailed above, image classification is not about images as much as it is about general capabilities to analyze continuous structured signals. Likewise, natural language processing is important because it studies discrete signals.In recent years, NLP and CV have started to coalesce, with more multimodal models being able to process both continuous and discrete signals. In addition, insights from NLP, such as large pre-trained transformer models, are now percolating into CV (e.g. vision Transformers), as techniques have become more general and differences between the two paradigms are decreasing.Issues with ConferencesThe conference review process has serious flaws. For instance, best paper awards mean very little; oral or spotlight designations are not highly predictive of longer-term impact either. In ML conferences, awards and designations are highly biased towards theory papers. In vision, these awards have a strong political element, where some awards are sometimes given to researchers that are seen as needing or deserving of a career boost.In 2014 at NIPS (now called NeurIPS), an experiment was conducted where the review committee was split in half, and 10% of papers were assigned to be reviewed independently by both committees. 57% of papers accepted by one committee were rejected by the other committee, and vice versa. In comparison, given the overall acceptance rate, the rate would be 77.5% if decisions were purely random and 0% if decisions were perfectly correlated.A recent analysis found the correlation between reviewer scores and eventual citation rate was very weak at ICLR 2020, after controlling for acceptance decisions. The most transformative ideas are often initially disputed or doubted; to be accepted, consensus often needs to be reached in a short discussion period. Additionally, reviewers adversely select for complicated methods (neomania) and disincentivize simple methods that work, even if impact is one of the dimensions they are supposed to be evaluating. In ML conferences, they incentivize theory, which these days usually is anticorrelated with ultimate impact.As pointed out by Yann LeCun (timestamp 2:15), a large number of young reviewers just want to point out errors rather than assess the paper overall. However, correcting errors is unlikely to make a paper go from low to high impact or help researchers write better papers the next time around.These experiments cast doubts on the review process, even though researchers continue to act as though conference selection processes are wise. This doesn’t mean that the review process is not at all useful. For one, conferences provide comments to papers, not just scores. More importantly, anonymous peer review is necessary for researchers in their first five years of research, as they need tough comments from random community members; without peer review, people will rarely hear what people truly think since specific disparagement is highly uncommon in other CS contexts. Reviewer comments also limit parochialism, an increasingly large problem as the field becomes more expansive. Reviews also require papers to have some level of technical execution ability; if they’re below a threshold, most people do not even submit the paper. Lastly, it’s important to consider the effect that the anticipation of the review process has on researchers. Even if reviews are a noisy process, researchers will be frequently thinking about how to make sure their paper is accepted by reviewers, and in many cases this can encourage them to have a stronger paper. Like democracy, the review process is not a perfect system, but it works better than alternatives (e.g. “trial by upvote”).One implication of the flaws with the review process is that new approaches cannot reliably be evaluated just by thinking about them. Rather than rely on experts to evaluate new works, communities need the test of time, the ultimate filter for bad ideas.Consequential and Inconsequential PapersMarked progress on a specific problem, even if you have a field working on it, is usually fairly infrequent, where there will be a paper that actually moves things ahead every 6 months to 3 years. In the meantime, there are many unpublished and published papers that are inconsequential. It is very hard to come up with something that actually helps.How are so many inconsequential papers published? They may have successfully presented themselves as consequential. For instance, it is common for researchers to publish papers in which they achieve state of the art performance in one metric (e.g. out-of-distribution robustness) while sacrificing performance in another metric (e.g. calibration) compared to the previous state of the art. This might be accepted, because it appears to make progress in one dimension. The best papers, on the other hand, make Pareto improvements, or at the very least make extremely lopsided improvements where a minor sacrifice in one dimension leads to a major gain in another.Another way that inconsequential papers might get published is for authors not to publish results on all of the relevant datasets. For instance, sometimes people publish results of evaluating on CIFAR-10, but not CIFAR-100, often because they are not able to satisfactorily perform on the latter. This sometimes goes unnoticed by reviewers.Finally, papers that might seem consequential at first turn out to be inconsequential when they are wiped away by tsunamis.InterestingnessIn many cases, ML researchers are motivated by interestingness more than usefulness. This is related to the fact that those who choose to enter research have high openness to new ideas and frequently seek out novelty. This can bias them against research that appears more “boring,” even if it has much more practical relevance.The bias towards interestingness becomes even more extreme in areas with less concrete metrics for success. In such cases, researchers can sometimes get away with publishing research that is empirically not very useful, but is interesting to reviewers. This is the reason behind many ML “fads,” which can last many years. Historical ProgressIn order to begin to attempt to extrapolate into the future, it’s necessary to understand historical progress. In this section, we will provide some examples of historical progress, using metrics defined for the field. For many metrics, especially those that are highly researched, progress is roughly linear (or log-linear). For less-studied problems, there can be sudden jumps.Data for the graphs below is from Papers With Code, with some charts consisting of adapted data (mostly to present log error rates rather than accuracies).Image ClassificationImageNet has been one of the most influential datasets, and image classification has remained the most important benchmark in computer vision. Frequently the top-1 or top-5 accuracy on ImageNet is reported, but it’s also useful to look at the log error rate since progress has been roughly linear on that metric. Video understandingThe following chart shows progress on action recognition in videos. The state of the art in 2018 was the temporal relational network, which was a variant of a convolutional neural network. In 2022, the state of the art is a vision transformer that was pretrained on a separate dataset. Progress has been relatively slow in video understanding despite strides in image understanding. One heuristic is that video understanding is ten years behind image understanding. Object DetectionObject detection is sometimes thought of as dramatically accelerated by deep learning. In reality, while there was a period of 1-2 years in which progress seemed to be stalling prior to deep learning, deep learning merely continued the existing trend in capabilities improvements [1,2]. Deep learning did revolutionize the methods in object detection, and the old object detection paradigm was washed away. Starting in 2017, however, performance once again stalled, leading to a plateau that has been longer than any pre-deep learning! In fact, there was nearly no progress until 2021, when a paper made a relatively large improvement. The paper that did so leveraged pre-training with data augmentations.Image SegmentationImage segmentation (the labeling of different pixels of an image) on the COCO progressed extremely quickly between 2015-2016, but it has leveled off.Adversarial RobustnessProgress on adversarial robustness has been fairly slow. See the following graph:Language ModelingPerplexity measures the ability of language models to be able to predict the next word given a sequence of words (a lower perplexity is better). Perplexity on the Penn Treebank dataset has been decreasing over time in a roughly linear way.TranslationBelow is one example, on WMT2014 English-German.Question AnsweringQuestion answering on the SQuAD 1.1 dataset progressed quickly until 2019, when it leveled off.Emergent PropertiesThe view that machines cannot give rise to surprises is due, I believe, to a fallacy to which philosophers and mathematicians are particularly subject. This is the assumption that as soon as a fact is presented to a mind all consequences of that fact spring into the mind simultaneously with it. It is a very useful assumption under many circumstances, but one too easily forgets that it is false. —Alan TuringThe qualitative impact of an order of magnitude increase in parameters, or a new algorithm, is often difficult to predict. Capabilities can sometimes emerge suddenly and without warning.For instance, the effect of additional parameters on image generation is not immediately predictable:Scan the images left to right, blocking the rightward images. Try and predict how good the image will be after an order of magnitude increase.BERT could not be easily tuned to do addition, but RoBERTa, which was just pretrained on 10x more data, can.AlphaZero experienced a phase transition where internal representations changed dramatically and capabilities altered significantly at about ~32,000 steps, when the system learned concepts like “king safety, threats, and mobility” suddenly. This can be seen by looking at the system’s preferred opening moves, the distribution of which dramatically changes at 32,000 steps.One paper showed that in some cases, performance can improve dramatically on test data even after it had already saturated on the training data:In a paper studying the Gopher model, gold labels initially did not improve performance on the FEVER fact-checking dataset by much at smaller model sizes, but had significant impact at larger model sizes.Anthropic’s Predictability and Surprise paper demonstrated fast emergence of addition, multitask understanding, and program synthesis as the number of parameters increased.It will be hard to make systems safe if we do not know what they are capable of. Additionally, it will be hard to foresee how systems will behave without empirically measuring them.Lastly, rapid changes can be caused by creative destruction. Most of these graphs did not even start prior to a decade ago, because deep learning ushered in an entirely new paradigm for solving many of the problem | https://www.lesswrong.com/posts/AtfQFj8umeyBBkkxa/a-bird-s-eye-view-of-the-ml-field-pragmatic-ai-safety-2 | https://lh3.googleusercontent.com/ZojzI2IGMscZ3s8XSUtaJfN6jK_BLf1kW0adKRuSnnKomgmiEH373wEYU0fP0hJZ3wjQ2QX4_fmoZ7ek4UVCmzMdHdXYMEMl0R6VcyDSWI07WViKuOekpb0iKr4I7zUMaXIhaTWcjE5cDpEbwQ | 2022-05-09T17:18:56Z | This is the second post in a sequence of posts that describe our models for Pragmatic AI Safety. The internal dynamics of the ML field are not immediately obvious to the casual observer. This post will present some important high-level points that are critical to beginning to understand the field, and is meant as background for our later posts.Driving dynamics of the ML fieldHow is progress made in ML? While the exact dynamics of progress are not always predictable, we will present three basic properties of ML research that are important to understand.The importance of defining the problemA problem well-defined is a problem half solved. John Dewey (apocryphal)The mere formulation of a problem is often more essential than its solution, which [...] requires creative imagination and marks real advances in science.Albert EinsteinI have been struck by how important measurement is... This may seem basic, but it is amazing how often it is not done and how hard it is to get right. Bill GatesIf you cannot measure it, you cannot improve it.Lord Kelvin (paraphrase)For better or worse, benchmarks shape a field. David Patterson, Turing award winnerProgress in AI arises from objective evaluation metrics.David McAllesterScience requires that we clarify the question and then refine the answer: it is impossible to solve a problem until we know what it is. Empirical ML research, which is the majority of the field, progresses through well-defined metrics for progress towards well-defined goals. Once a goal is defined empirically, is tractable, and is incentivized properly, the ML field is well-equipped to make progress towards it.A variation on this model is that artists (writers, directors, etc.) come first. They help give ideas, and philosophers add more logical constraints to those ideas to come up with goals or questions, and finally scientists can help make iterative progress towards those goals. To give an example: golems, animate beings created from clay, were a common symbol in Jewish folklore, and at times could create evil. There are many other historical stories of automatons creating problems for humans (Pandora, Frankenstein, etc.). More recent stories, like Terminator, made the ideas more concrete, even as they included fantasy elements not grounded in reality. More recently, Bostrom (2002) recognized the possibility for existential risk from AI, and grounded it in the field of artificial intelligence. Since then, others have worked on concretizing and solving technical problems associated with this risk.For completeness, its worth mentioning that sometimes through tinkering people find solutions to questions people were not posing, though many of those solutions arent solutions for interesting questions.MetricsAs David McAllester writes, machine learning and deep learning is fundamentally driven by metrics. There are many reasons for this. First, having a concrete metric for a problem is a sign that the problem has been compressed into something simpler and more manageable (see the discussion of microcosms below), which makes it more likely that progress can be made on it. By distilling a problem into a few main components, it is also far clearer when progress has been made, even if that progress is relatively small.Unlike human subjective evaluation, most metrics are objective: even if they do not perfectly track the properties of a system that we care about, it is obvious when somebody has performed well or poorly on an evaluation. Metrics can also be used across methods, which makes different approaches directly comparable rather than relying on many different measuring sticks. High-quality datasets and benchmarks concretize research goals, make them more tractable, and can spur large community research efforts. Good metrics also can allow us to detect minor improvements, which enables iterative progress and accumulated improvements.Metrics that rely too heavily on human evaluation are suspect. First, human evaluation is extremely expensive compared to automatic evaluation, and often requires IRB approval in the case of academic labs. This significantly reduces its utility. Second, human evaluation is slow, which makes feedback loops sparser and thus makes problems far more difficult to iterate on. Third, human feedback is often noisier and more subjective than many automatic evaluation metrics.For these reasons, the first step working towards working on a new ML problem is to define a good metric.Limits of theory for machine learningThere can be a tendency for new fields to try to formulate new problems as mathematics problems, but this is not always possible. Unfortunately, if machine learning just isnt productively formulated as an applied mathematics problem, throwing lots of resources at it as an applied mathematics problem isnt going to work. Currently, deep learning progress is not well-formulated as an applied mathematics problem.Consider progress in computer vision:1960s: Beginnings in artificial intelligence, image processing and pattern recognition1970s: Foundational work on image formation: Horn, Koenderink, Longuet-Higgins 1980s: Vision as applied mathematics: geometry, multi-scale analysis, probabilistic modeling, control theory, optimization1990s: Geometric analysis largely completed, vision meets graphics, statistical learning approaches resurface2000s: Significant advances in visual recognition2010s: Progress continues, aided by the availability of large amounts of visual data and massive computing power. Deep learning has become pre-eminentWhile some researchers tried to treat image recognition as an applied math problem, as mathematical solutions have many desirable properties, this did not work and they were forced to shift paradigms. Research communities that need to solve problems dont get to choose their favorite paradigm. Even if the current paradigm is flawed and a new paradigm is needed, this does not mean that their favorite paradigm will become that new paradigm. They cannot ignore or bargain with the paradigm that will actually work; they must align with it. We may want a machine learning problem to be a math problem, but that does not mean it is.Overtheorizing is a common failure case for solving technical problems, and it happened in AI. Imagine you want to build a mail delivery robot. If you start working on differential geometry, youre doing it wrong and overtheorizing. It simply is not necessary to solve that problem in order to build the mail delivery robot. The slide with the figure above is from 1995, and argues that many of the subtasks thought relevant for object detection were unnecessary for the task: today, this is obvious. Despite a large number of extremely smart people working on CV as an applied mathematics problem for many years, they did not solve CV, because CV is not an applied mathematics problem.A similar dynamic happened with what is now known as good old fashioned AI using explicit high-level symbolic and search algorithms. While it produced some successes, these inclinations have been mainly replaced by deep learning in the state of the art.In natural science, the objective is to discover, study, and explain natural phenomena. Consider optics (1650), thermodynamics (1824), aerodynamics (1757), information theory (1948), and computer science (1950). In natural science, the goal standard is reproducible experiments in controlled conditions. The use of mathematics and statistics is common, since they help provide a theoretical underpinning. In natural science, control over the environment in an experiment is crucial.In engineering science, the basic idea is to invent new artifacts. Consider the telescope (1608), steam engine (1695), sail boat (BCE), teletype (1906), computer (1941), etc. In engineering, one must use intuition, creative inspiration, tinkering, and trying many things to see what sticks. Invention is often created by accident, by people who created many failing inventions. Some theory is surely needed (a sailboat is far less likely if one does not know that wind can push objects) but it often does not need to be very detailed. Engineering is often bottom-up and creative, and theory does not dictate an exact or even approximate design. For a discussion of deep learning and limits of top-down design, see here. In many cases, natural science follows engineering science, where natural science attempts to explain an artifact created by engineering.Yann LeCun discusses this distinction further here. The distinction may also be analogized to the distinction between rationalism and empiricism.DL is mostly engineering science. We have few theories, and the theories we do have provide limited guidance. Attempts at generalizing phenomena in DL are very often not robust. For example, previously l1 loss worked noticeably better than l2 loss for pixel regression, but not recently. Every year, new papers are published that purport to explain the generalization ability of neural networks, but the subject has not been settled. There are some exceptions: for instance, the fact that bigger Transformer models consistently perform better. Since these kind of observations are more robust than theories, and since the phenomena are fragile, the theories are even more fragile. Even ML theorist PhD students will readily admit that theory for DL has not (yet) created action-relevant insights about DL.DL has many factors that increase the probability of surprises or unknown unknowns: complicatedness, fast changes, ambiguity, opacity, and interconnectedness/multiple causes. In view of this, researchers that work for years towards advancing capabilities to get a specific downstream safety research outcome are not being precautious since this field is not that predictable. This level of unpredictability makes armchair/whiteboard research less likely to succeed; much research must be bottom-up and iterative.DL methods are often not even obvious in hindsight, unlike more theoretical fields where proofs are self-evident to audiences with an appropriate background. Why do residual connections work? Why does fractal data augmentation help? This property is why tinkering and rapid experimentation is so important.If hindsight doesnt even work, that means that it is extremely difficult to achieve foresight into the impact of a proposed project. Instead, it is necessary to fall back on heuristics. First, does the project seem palpably unrealistic on its face? This could be grounds for dismissing it. Otherwise, if there are not yet results, one should instead defer to the researchers prior track record, with perhaps some benefit of the doubt given to younger researchers who have not had time to accrue track records.Creative Destruction in MLIf a temple is to be erected a temple must be destroyed: that is the law let anyone who can show me a case in which it is not fulfilled!Friedrich NietzscheLike many other fields, ML experiences creative destruction: periods when new and better technology rapidly replaces older methods. This can be related to models for exponential growth as aggregated sigmoid jumps rather than a truly smooth acceleration.Research, especially research into algorithms, architectures, training techniques, and the like, is often entirely wiped away by tsunamis. For example, natural language processing techniques from before BERT are almost entirely irrelevant. A large number of computer vision techniques devised prior to AlexNet have almost no influence on the field today. Lastly, speech recognition became an essentially solved problem with the advent of deep learning. After a tsunami, methods often work out-of-the-box, require less effort to use, and performance is much higher.Imagine youre in the late 2000s and care about AI safety. It is very difficult to imagine that you could have developed any techniques or algorithms which would transfer to the present day. However, it might have been possible to develop datasets that would be used far into the future or amass safety researchers which could enable more safety research in the future. For instance, if more people had been focused on safety in 2009, we would likely have many more professors working on it in 2022, which would allow more students to be recruited to work on safety. In general, research ecosystems, safety culture, and datasets survive tsunamis.By some indications, reinforcement learning is poised for a tsunami. RL currently is extremely unwieldy, requiring careful tuning of a large number of parameters and methods to achieve any reasonable results at all. It often (not always) uses Markovian assumptions and exponential decay, which tends to be replaced by paradigms that better model long-range dependencies (e.g., consider hidden markov models in NLP which have been superseded). Unlike language and vision, RL has not yet been revolutionized by large-scale models. As such, RL appears to have the properties of a field prior to a tsunami. If this is true, it does not bode well for RL safety research methods, which could be washed away.As a result, developing safety proposals for DL is likely to be a safer bet than RL in both the short and long term. There appears to be a reasonable chance that DL will not be washed away. In this case, it is extremely important to have a high number of people working on empirical methods. If DL is not the last tsunami, research in DL will still aid dataset creation, research ecosystem building, and safety culture for later.Thinking of methods that will work in the current paradigm and not using this research to help ecosystem building in the larger ML community is doubly bad: it stands the risk of being wiped away by a tsunami, and it didnt even help research ecosystem building. If one expects more tsunamis, pay more attention to prestige and resources.The Bitter Lesson argues that there will be more creative destruction and that human ingenuity will matter less and less. Although we do not believe the following scenario is likely, in the long run, AI risk reduction may even be a matter of banal factors: compute, data, and engineering resources allocated towards safety goals, in comparison with other capabilities goals. The amount allocated towards these goals would depend on how important safety is to the system designers, which means safety buy-in among researchers and tech leaders would be a high priority.The ML research ecosystemIf we want to have any hope of influencing the ML community broadly, we need to understand how it works (and sometimes doesnt work) at a high level.Where is ML research published?The machine learning field, both in industry and academia, is dominated by conferences. Except for a few splashy and publicized industry papers published in Nature, the most important ML papers are all published in a relatively small number of ML conferences. Though there are a few journals, they are not very impactful, nor are papers that are exclusively published at workshops. The vast majority of papers are submitted to conferences.Prior to publication, most ML research papers are posted as preprints on arxiv.org (pronounced archive). Because of the speed that the ML research field advances, it is not sufficient for ML researchers to simply read papers that have been published in conferences, since publication typically doesnt happen for several months until after a paper is posted on arXiv. Instead, ML researchers need to keep updated on the latest preprints. Many do so via relying on word of mouth or Twitter to indicate which papers are important, while others make more of an effort to identify important papers on arXiv themselves.The size of different machine learning subfields might not be immediately obvious to somebody outside of the field. The graphs and statistics that follow are based on an analysis we conducted of publicly-available data from Semantic Scholar (data is approximate, and is only as accurate as the Semantic Scholar data), and use citations as a rough metric for the size of different subfields.By far the largest subfield within machine learning is computer vision (CV), with the Conference on Computer Vision and Pattern Recognition (CVPR) being by far the most influential conference in machine learning in terms of the number of citations. Natural language processing (NLP) is much smaller than computer vision, with the largest conference, the Proceedings of the Association for Computational Linguistics (ACL), getting about a fifth the total citations of CVPR. Even smaller than NLP are reinforcement learning and robotics. The most influential conference in this area is the IEEE International Conference on Robotics and Automation (ICRA), which receives slightly more than half the citations of ACL (so an order of magnitude less than CVPR). There are also three conferences that publish many kinds of ML research: the International Conference on Learning Representations (ICLR), Neural Information Processing Systems (NeurIPS), and the International Conference on Machine Learning (ICML). These conferences can contain NLP, CV, and RL, and all rank in the top four conferences along with CVPR.The top conferences do not gain their influence simply by having many papers. ICRA and IROS publish as many papers as CVPR (Figure 1), but mean and median citation counts in CV and ML conferences (particularly ICLR) are far above those in NLP and RL/robotics (see Figure 3 and 4).NLP has been growing recently, especially since BERT reduced barriers to entry. For instance, ACL papers from 2020 (see Figure 5) only got 2-3x fewer citations compared with CVPR papers, showing some relative growth of the field (note NAACL and ICCV were not held in 2020). Still, CV was the largest subfield, and RL/robotics has not gained any ground at all.MicrocosmsMicrocosms in this context are simpler subproblems of harder problems that mirror the larger problems but are more tractable. To give a concrete example, to make an aerodynamic bicycle for a race, you might first want to start by making it aerodynamic in a wind tunnel. Although the conditions of a wind tunnel are simplified compared with the real world, they are similar enough to yield useful insights. Some simpler problems are not that microcosmic, however, because they may make too many simplifying assumptions and thus not be representative of the macrocosm (e.g., some gridworlds).In general, the majority of research inquiry is conducted on microcosms. Work on these problems can inform us about the future or even directly influence future systems, as some current ML algorithms are highly scalable and may be a part of long-term AI systems. Next, we will discuss two of the most important microcosms in machine learning.Why do DL researchers look at ImageNet and image classification so much?Historically, CV and pattern recognition in general have gone together. The ImageNet dataset, an old compilation of images and associated labels for those images, continues to drive the field. Why? It is a good microcosm. Researchers have found that performance on ImageNet is highly predictive of downstream performance in numerous applications like segmentation, clustering, object detection, and downstream image recognition. Many researchers also view image understanding as a problem that is upstream of video understanding, which is important for a whole range of additional applications.Deep learning researchers use CV as a representation learning proxy, not because they are particularly interested in classifying cats. Most DL building blocks (some activation functions, batch normalization, some optimizers, dropout, convolutions, residual connections, etc.) have emerged from researching image classification, so it is a useful whetstone to sharpen DL algorithms against. Consequently, people do not think theyre just researching vision but instead researching how to do representation learning and deep learning in general. There are, of course, exceptions: semantic segmentation, depth maps, and downstream applications of CV do not necessarily help with general representation learning.Many economic incentives and funding sources are available for CV because vision is useful in many industries, which keeps the field large.CV is also large because its very possible to improve image classification performance with ideas since scaling is less important. Many methods do not currently have consistent returns to scale: for instance, current masked autoencoders do not consistently perform better with more data or compute. Most researchers, especially academics, are heavily incentivized towards fields where improvements can be made with ideas, because ideas can be found by competent researchers with good taste who spend enough time tinkering. Meanwhile, in NLP, a larger share of progress is made by simple scaling, which makes for less interesting work for researchers, and less incentivized outside very large labs with access to compute resources.In DL, findings for some data transfer to other types of data. This is partly because different kinds of natural data have similar underlying statistical properties. Consequently, studying how to do representation for some data distributions often transfers to other data distributions. Many techniques, such as residual connections, that helped with ImageNet helped with discrete signals (text) and other continuous signals (speech).Why is NLP the second largest field?As detailed above, image classification is not about images as much as it is about general capabilities to analyze continuous structured signals. Likewise, natural language processing is important because it studies discrete signals.In recent years, NLP and CV have started to coalesce, with more multimodal models being able to process both continuous and discrete signals. In addition, insights from NLP, such as large pre-trained transformer models, are now percolating into CV (e.g. vision Transformers), as techniques have become more general and differences between the two paradigms are decreasing.Issues with ConferencesThe conference review process has serious flaws. For instance, best paper awards mean very little; oral or spotlight designations are not highly predictive of longer-term impact either. In ML conferences, awards and designations are highly biased towards theory papers. In vision, these awards have a strong political element, where some awards are sometimes given to researchers that are seen as needing or deserving of a career boost.In 2014 at NIPS (now called NeurIPS), an experiment was conducted where the review committee was split in half, and 10% of papers were assigned to be reviewed independently by both committees. 57% of papers accepted by one committee were rejected by the other committee, and vice versa. In comparison, given the overall acceptance rate, the rate would be 77.5% if decisions were purely random and 0% if decisions were perfectly correlated.A recent analysis found the correlation between reviewer scores and eventual citation rate was very weak at ICLR 2020, after controlling for acceptance decisions. The most transformative ideas are often initially disputed or doubted; to be accepted, consensus often needs to be reached in a short discussion period. Additionally, reviewers adversely select for complicated methods (neomania) and disincentivize simple methods that work, even if impact is one of the dimensions they are supposed to be evaluating. In ML conferences, they incentivize theory, which these days usually is anticorrelated with ultimate impact.As pointed out by Yann LeCun (timestamp 2:15), a large number of young reviewers just want to point out errors rather than assess the paper overall. However, correcting errors is unlikely to make a paper go from low to high impact or help researchers write better papers the next time around.These experiments cast doubts on the review process, even though researchers continue to act as though conference selection processes are wise. This doesnt mean that the review process is not at all useful. For one, conferences provide comments to papers, not just scores. More importantly, anonymous peer review is necessary for researchers in their first five years of research, as they need tough comments from random community members; without peer review, people will rarely hear what people truly think since specific disparagement is highly uncommon in other CS contexts. Reviewer comments also limit parochialism, an increasingly large problem as the field becomes more expansive. Reviews also require papers to have some level of technical execution ability; if theyre below a threshold, most people do not even submit the paper. Lastly, its important to consider the effect that the anticipation of the review process has on researchers. Even if reviews are a noisy process, researchers will be frequently thinking about how to make sure their paper is accepted by reviewers, and in many cases this can encourage them to have a stronger paper. Like democracy, the review process is not a perfect system, but it works better than alternatives (e.g. trial by upvote).One implication of the flaws with the review process is that new approaches cannot reliably be evaluated just by thinking about them. Rather than rely on experts to evaluate new works, communities need the test of time, the ultimate filter for bad ideas.Consequential and Inconsequential PapersMarked progress on a specific problem, even if you have a field working on it, is usually fairly infrequent, where there will be a paper that actually moves things ahead every 6 months to 3 years. In the meantime, there are many unpublished and published papers that are inconsequential. It is very hard to come up with something that actually helps.How are so many inconsequential papers published? They may have successfully presented themselves as consequential. For instance, it is common for researchers to publish papers in which they achieve state of the art performance in one metric (e.g. out-of-distribution robustness) while sacrificing performance in another metric (e.g. calibration) compared to the previous state of the art. This might be accepted, because it appears to make progress in one dimension. The best papers, on the other hand, make Pareto improvements, or at the very least make extremely lopsided improvements where a minor sacrifice in one dimension leads to a major gain in another.Another way that inconsequential papers might get published is for authors not to publish results on all of the relevant datasets. For instance, sometimes people publish results of evaluating on CIFAR-10, but not CIFAR-100, often because they are not able to satisfactorily perform on the latter. This sometimes goes unnoticed by reviewers.Finally, papers that might seem consequential at first turn out to be inconsequential when they are wiped away by tsunamis.InterestingnessIn many cases, ML researchers are motivated by interestingness more than usefulness. This is related to the fact that those who choose to enter research have high openness to new ideas and frequently seek out novelty. This can bias them against research that appears more boring, even if it has much more practical relevance.The bias towards interestingness becomes even more extreme in areas with less concrete metrics for success. In such cases, researchers can sometimes get away with publishing research that is empirically not very useful, but is interesting to reviewers. This is the reason behind many ML fads, which can last many years. Historical ProgressIn order to begin to attempt to extrapolate into the future, its necessary to understand historical progress. In this section, we will provide some examples of historical progress, using metrics defined for the field. For many metrics, especially those that are highly researched, progress is roughly linear (or log-linear). For less-studied problems, there can be sudden jumps.Data for the graphs below is from Papers With Code, with some charts consisting of adapted data (mostly to present log error rates rather than accuracies).Image ClassificationImageNet has been one of the most influential datasets, and image classification has remained the most important benchmark in computer vision. Frequently the top-1 or top-5 accuracy on ImageNet is reported, but its also useful to look at the log error rate since progress has been roughly linear on that metric. Video understandingThe following chart shows progress on action recognition in videos. The state of the art in 2018 was the temporal relational network, which was a variant of a convolutional neural network. In 2022, the state of the art is a vision transformer that was pretrained on a separate dataset. Progress has been relatively slow in video understanding despite strides in image understanding. One heuristic is that video understanding is ten years behind image understanding.Object DetectionObject detection is sometimes thought of as dramatically accelerated by deep learning. In reality, while there was a period of 1-2 years in which progress seemed to be stalling prior to deep learning, deep learning merely continued the existing trend in capabilities improvements [1,2]. Deep learning did revolutionize the methods in object detection, and the old object detection paradigm was washed away. Starting in 2017, however, performance once again stalled, leading to a plateau that has been longer than any pre-deep learning! In fact, there was nearly no progress until 2021, when a paper made a relatively large improvement. The paper that did so leveraged pre-training with data augmentations.Image SegmentationImage segmentation (the labeling of different pixels of an image) on the COCO progressed extremely quickly between 2015-2016, but it has leveled off.Adversarial RobustnessProgress on adversarial robustness has been fairly slow. See the following graph:Language ModelingPerplexity measures the ability of language models to be able to predict the next word given a sequence of words (a lower perplexity is better). Perplexity on the Penn Treebank dataset has been decreasing over time in a roughly linear way.TranslationBelow is one example, on WMT2014 English-German.Question AnsweringQuestion answering on the SQuAD 1.1 dataset progressed quickly until 2019, when it leveled off.Emergent PropertiesThe view that machines cannot give rise to surprises is due, I believe, to a fallacy to which philosophers and mathematicians are particularly subject. This is the assumption that as soon as a fact is presented to a mind all consequences of that fact spring into the mind simultaneously with it. It is a very useful assumption under many circumstances, but one too easily forgets that it is false. Alan TuringThe qualitative impact of an order of magnitude increase in parameters, or a new algorithm, is often difficult to predict. Capabilities can sometimes emerge suddenly and without warning.For instance, the effect of additional parameters on image generation is not immediately predictable:Scan the images left to right, blocking the rightward images. Try and predict how good the image will be after an order of magnitude increase.BERT could not be easily tuned to do addition, but RoBERTa, which was just pretrained on 10x more data, can.AlphaZero experienced a phase transition where internal representations changed dramatically and capabilities altered significantly at about ~32,000 steps, when the system learned concepts like king safety, threats, and mobility suddenly. This can be seen by looking at the systems preferred opening moves, the distribution of which dramatically changes at 32,000 steps.One paper showed that in some cases, performance can improve dramatically on test data even after it had already saturated on the training data:In a paper studying the Gopher model, gold labels initially did not improve performance on the FEVER fact-checking dataset by much at smaller model sizes, but had significant impact at larger model sizes.Anthropics Predictability and Surprise paper demonstrated fast emergence of addition, multitask understanding, and program synthesis as the number of parameters increased.It will be hard to make systems safe if we do not know what they are capable of. Additionally, it will be hard to foresee how systems will behave without empirically measuring them.Lastly, rapid changes can be caused by creative destruction. Most of these graphs did not even start prior to a decade ago, because deep learning ushered in an entirely new paradigm for solving many of the problems above.Notes on historical progressIn vision, state of the art algorithms usually change year-to-year, and progress has been driven by alg | Content Synthesis/Decision Making/Discovery | Unknown | null | null | null | null | null | null |
news | Tony Baer | MongoDB’s journey to analytics | MongoDB announced this week that it is beginning to incorporate analytics functionalities within its database offerings. | https://venturebeat.com/2022/06/07/mongodbs-journey-to-analytics/ | 2022-06-07T13:00:00Z | To further strengthen our commitment to providing industry-leading coverage of data technology, VentureBeat is excited to welcome Andrew Brust and Tony Baer as regular contributors. Watch for their articles in the Data Pipeline.About a half dozen years ago, when writing for ZDNet, we posed the question, what does MongoDB want to be when it grows up? Much of the answer has since become apparent. It made the database more extensible to support the variety of the apps that its developers were already writing. MongoDB added native search for supporting content management; time-series data support for internet of things (IoT) use cases; along with change streams for supporting use cases such as next-best-action for ecommerce apps. Oh, and by the way, MongoDBs customers wanted a cloud experience that matched the ease of use of its developer tooling. The result is Atlas, the managed cloud service that now accounts for 60% of MongoDBs business.But theres a major piece where the surface has barely been scratched: Analytics. Its part of what MongoDB will be talking about this week at its annual live event.[Related: MongoDB fires up new cloud, on-premises releases]Lets rewind the tape. MongoDB was designed from the get-go as an operational database. Its deployed for use cases like managing online subscriber profiles for delivering optimal gaming or entertainment experiences. It can also be used for capturing automotive telematics to track the state of operation of components; providing ready access to clinical patient data for managing healthcare delivery; or powering ecommerce applications for delivering seamless shopping experiences.Its not that MongoDB was focused strictly on writes, as one of its earliest enhancements was the aggregation framework to address multistep group-by queries that are considered checkbox requirements for transaction databases.But MongoDB and, in all fairness most operational databases have until recently never been known for analytics because the last thing that you want to do in an operational database is slow it down to process a complex query involving multiple table (or document collection) joins.Why ask about analytics? The common thread behind most operational applications is that they become far more useful when you add analytics features. For instance, analytics could help automakers expedite preventive maintenance, healthcare providers pinpoint the best care regimen, or ecommerce or gaming providers improve how they engage or prevent churn with customers. Analytics designed for making quick optimization decisions are logical complements to operational databases.Pairing analytics and transaction databases is not a new idea, as reflected by the funny names that some analyst firms have added to the conversation, like HTAP, translytical or augmented transaction databases. Cloud-native, where compute is separated from storage, provides yet another opportunity to rethink how to piece operational data processing together with analytics without impacting performance or throughput, as shown by recent introductions of Oracle MySQL HeatWave and, more recently, Googles AlloyDB.Most of these hybrid databases supplemented row storage with columnar tables designed for analytics and, by the way, they all used the same common relational data structures, making the translation straightforward. Conversely, translating document models, with their hierarchical and nested data structures, has traditionally been more challenging.So, is now the time for MongoDB to take the analytics plunge? This perhaps depends upon how we define analytics. As noted above, applications become far more useful when you add operational analytics that can make transactions smart. If were talking about the analytics that can be used for quick decisions, not complex modeling, then the answer is yes.Not an overnight journeyMongoDB has been gradually dipping its feet in the water for supporting analytics. It started with visualization, where MongoDB provides its own charting capability and offers a business intelligence (BI) connector that makes it look like MySQL to the Tableaus and Qliks of the world. While pictures are worth a thousand words, when it comes to analytics, visualizations just scratch the surface. They provide snapshots of trends, but without further correlation (which typically requires more complex queries), cannot fully answer the question of why something is happening.MongoDB is starting to up its game with analytics, but wont replace Snowflake, Redshift, Databricks or any of the other usual suspects when it comes to performing highly complex analytics. Nor does it necessarily want to do so. The companys focus has never been data analysts, but rather application developers. Going back to the first principle of operational databases, you want to avoid tying them down with queries requiring highly complex joins and/or high concurrency. And for MongoDB to succeed, it needs to enable those developers to build better apps.Atlas has the flexibility to set aside dedicated nodes that could be reserved for analytics. MongoDB is announcing that soon, customers will be able to choose different compute instances on those nodes that would be more appropriate for analytics. The nodes would have in-line data replication, making analytics near real-time.Thats just a first step; with Atlas available on multiple clouds, it leaves an overly wide choice of instances on the customers shoulders. Nonetheless, we believe that down the road, MongoDB will introduce prescriptive guidelines and, after that, some machine learning that could help auto-select the right instance for the workload. Lets not stop there. Atlas Serverless, announced in preview last year, is going GA this week. So, it would make logical sense to add this option for analytics, where the workloads tend to be different and more spikey than operational transactions.What about SQL?The idea of SQL was anathema in MongoDBs early years. MongoDB will never become a relational database. But could cooler heads be prevailing?This week, MongoDB is introducing a new Atlas SQL interface for reading Atlas data, a completely brand-new construct that takes a different track than the BI connector. Atlas SQL will be MongoDBs first real attempt to provide a SQL face to its data that will not simply flatten JSON to make it look like MySQL to Tableau, but provide a more granular view that will reflect the richness of the JSON document schema. As no SQL interface is written overnight, expect that Atlas SQL will also be an evolving story in coming years as it gets enriched with more integrations with SQL tools (beyond visualizations) that are checkbox requirements for data warehouses. We would also like to see support for operations such as upserts, a core capability for analytic platforms, that can insert the equivalent of missing rows in what is surfaced as an analytic table.Along with Atlas SQL interface is the preview of a new column store index that is essential for delivering performance for analytical queries. Again, this is just a start. For instance, MongoDB users would have to manually set up the column store index, specifying the fields. But in the longer run, we could see this being automated through profiling access patterns. And while our imagination is running: enriching the metadata to profile field cardinality, adding capabilities like Bloom filters that would further optimize scanning, and further optimizing the query planner should not be out of the question.Then theres Atlas Data Lake, which provided a federated view of JSON documents in cloud object storage. Atlas Data Lake is being refashioned into more of a general-purpose federated query capability that can target multiple Atlas clusters and cloud object stores. This is accompanied by the introduction of a new storage layer for Atlas Data Lake. The new storage layer automatically extracts Atlas cluster dataset into a combination of cloud object storage and an internal technical catalog (this is not Alation) to help speed-up analytical queries.Up with peopleMongoDB has long thrived as a developer-favorite database because JavaScript and JSON are home turf to developers, not to mention the reality that JavaScript ranks number 7 on the Tiobe index. JavaScript, JSON and the document model are always going to be what MongoDB is about. But MongoDBs historical shunning of SQL kept it off limits to a very large pool of talent: SQL developers, responsible for ranking it as number nine. Its time to change that.While MongoDB still believes that the document model is superior to and will replace the relational model (a point that we would debate), a fact that all can agree on is that to extend its footprint across the enterprise, it must embrace the audience it traditionally ignored. And as a win-win, appealing to both camps means that deployments often can be simplified; in place of having to move and transform data to a separate data warehouse target, for some operational use cases, this could be simplified to working within the same platform, replacing data extract with data replication.Not the end of the data warehouse, lake or lakehouseMongoDB will not replace the need for separate data warehouses, data lakes, or data lakehouses. The complex modeling and discovery that is becoming an essential ingredient for analytics must, of necessity, be performed separately from the operational system. More to the point, the objective for supporting analytics in an operational database is to make the process inline and as close to real time as possible.And that shows how MongoDB and the Snowflakes or Databricks of the world would work together. The models that identify outliers would be developed in the warehouse, lake, or lakehouse and the result would be a relatively simple (from a processing standpoint) classification, predictive or prescriptive model that could be triggered when a transaction appears to be weird.Today, pulling off such a closed-loop process in MongoDB is not impossible, but its complicated. You would have to cobble together change streams, triggers and functions in MongoDB to provide some sort of closed analytic feedback loop. Its not a stretch of the imagination to believe that at some point, MongoDB would bury this complexity under the hood for a closed-loop, near-real-time analytics option. Thats just another example of why we characterize MongoDBs move into analytics as a journey. | Decision Making/Content Synthesis | Unknown | null | null | null | null | null | null |
|
news | Sharon Goldman | This AI attorney says companies need a chief AI officer — pronto | As the use of artificial intelligence proliferates across the enterprise, the importance of appointing a chief AI officer is clearer than ever. | https://venturebeat.com/2022/06/07/this-ai-attorney-says-companies-need-a-chief-ai-officer-pronto/ | 2022-06-07T21:20:00Z | When Bradford Newman began advocating for more artificial intelligence expertise in the C-suite in 2015, people were laughing at me, he said. Newman, who leads global law firm Baker McKenzies machine learning and AI practice in its Palo Alto office, added that when he mentioned the need for companies to appoint a chief AI officer, people typically responded, Whats that?But as the use of artificial intelligence proliferates across the enterprise, and as issues around AI ethics, bias, risk, regulation and legislation currently swirl throughout the business landscape, the importance of appointing a chief AI officer is clearer than ever, he said. This recognition led to a new Baker McKenzie report, released in March, called Risky Business: Identifying Blind Spots in Corporate Oversight of Artificial Intelligence. The report surveyed 500 US-based, C-level executives who self-identified as part of the decision-making team responsible for their organizations adoption, use and management of AI-enabled tools. In a press release upon the surveys release, Newman said: Given the increase in state legislation and regulatory enforcement, companies need to step up their game when it comes to AI oversight and governance to ensure their AI is ethical and protect themselves from liability by managing their exposure to risk accordingly. Corporate blind spots about AI riskAccording to Newman, the survey found significant corporate blind spots around AI risk. For one thing, C-level executives inflated the risk of AI cyber intrusions but downplayed AI risks related to algorithm bias and reputation. And while all executives surveyed said that their board of directors has some awareness about AIs potential enterprise risk, just 4% called these risks significant. And more than half considered the risks somewhat significant. The survey also found that organizations lack a solid grasp on bias management once AI-enabled tools are in place. When managing implicit bias in AI tools in-house, for example, just 61% have a team in place to up-rank or down-rank data, while 50% say they can override some not all AI-enabled outcomes. In addition, the survey found that two-thirds of companies do not have a chief artificial intelligence officer, leaving AI oversight to fall under the domain of the CTO or CIO. At the same time, only 41% of corporate boards have an expert in AI on them. An AI regulation inflection pointNewman emphasized that a greater focus on AI in the C-suite, and particularly in the boardroom, is a must. Were at an inflection point where Europe and the U.S. are going to be regulating AI, he said. I think corporations are going to be woefully on their back feet reacting, because they just dont get it they have a false sense of security.While he is anti-regulation in many areas, Newman claims that AI is profoundly different. AI has to have an asterisk by it because of its impact, he said. Its not just computer science, its about human ethicsit goes to the essence of who we are as humans and the fact that we are a Western liberal democratic society with a strong view of individual rights. From a corporate governance standpoint, AI is different as well, he continued: Unlike, for example, the financial function, which is the dollars and cents accounted for and reported properly within the corporate structure and disclosed to our shareholders, artificial intelligence and data science involves law, human resources and ethics, he said. There are a multitude of examples of things that are legally permissible, but are not in tune with the corporate culture. However, AI in the enterprise tends to be fragmented and disparate, he explained. Theres no omnibus regulation where that person whos meaning well could go into the C-suite and say, We need to follow this. We need to train. We need compliance. So, its still sort of theoretical, and C-suites do not usually respond to theoretical, he said. Finally, Newman added, there are many internal political constituents around AI, including AI, data science and supply chain. They all say, its mine,' he said. The need for a chief AI officerWhat will help, said Newman, is to appoint a chief AI officer (CAIO) that is, a C-suite level executive that reports to the CEO, at the same level as a CIO, CISO or CFO. The CAIO would have ultimate responsibility for oversight of all things AI in the corporation. Many people want to know how one person can fit that role, but were not saying the CFO knows every calculation of financial aspects going on deep in the corporation but it reports up to her, he said. So a CAIO would be charged with reporting to the shareholders and externally to regulators and governing bodies. Most importantly, they would have a role for corporate governance, oversight, monitoring and compliance of all things AI, Newman added. Though, Newman admits the idea of installing a CAIO wouldnt solve every AI-related challenge. Would it be perfect? No, nothing is but it would be a large step forward, he said. The chief AI officer should have a background in some facets of AI, in computer science, as well as some facets of ethics and the law. While just over a third of Baker McKenzies survey respondents said they currently have something like a chief artificial intelligence officer, Newman thinks thats a generous statistic. I think most boards are woefully behind, relying on a patchwork of chief information officers, chief security officers, or heads of HR sitting in the C-suite, he said. Its very cobbled together and is not a true job description held by one person with the type of oversight and matrix responsibility Im talking about as far as a real CAIO. The future of the chief AI officerThese days, Newman says people no longer ask What is a chief AI officer? as much. But instead, organizations claim they are ethical and that their AI is not implicitly biased.Theres a growing awareness that the corporations going to have to have oversight, as well as a false sense of security that the oversight that exists in most organizations right now is enough, he continued. It isnt going to be enough when the regulators, the enforcers and the plaintiffs lawyers come if I were to switch sides and start representing the consumers and the plaintiffs, I could poke giant size holes in the majority of corporate oversight and governance for AI. Organizations need a chief AI officer, he emphasized because the questions being posed by this technology far transcend the zeros, the ones, the data sets. Organizations are playing with live ammo, he said. AI is not an area that should be left solely to the data scientist. | Decision Making/Recommendation | Management/Business and Financial Operations | null | null | null | null | null | null |
|
news | Louis Columbus | How AI and bots strengthen endpoint security | Fast-growing ransomware, malware and endpoint-directed breach attempts are reordering the threat landscape in 2022 which is calling for rapid changes in endpoint security. | https://venturebeat.com/2022/06/13/how-ai-and-bots-strengthen-endpoint-security/ | 2022-06-13T18:45:53Z | We are excited to bring Transform 2022 back in-person July 19 and virtually July 20 - 28. Join AI and data leaders for insightful talks and exciting networking opportunities. Register today!Fast-growing ransomware, malware and endpoint-directed breach attempts are reordering the threat landscape in 2022. Its appropriate that RSA Conference 2022s theme is transform,as new threats continue to call for rapid changes in endpoint security. CISOs and CIOs are transforming their cloud infrastructure and hybrid cloud strategies, accelerating devops internally to produce new apps and platforms, and relying more on software-as-a-service (SaaS) apps than ever before to meet time-to-market goals. Vendors promoting cloud security, extended detection and response (XDR) and zero trust dominated RSAC 2022. The Cloud Security Alliance (CSA) released its latest survey results during RSA 2022, which further underscores zero trusts continued growth. The research is Based on interviews with 823 IT and security professionals, including 219 C-level executives. As a result, 80% of C-suite executives have prioritized zero trust in their organizations and 94% are implementing them. In addition, 77% are increasing their spending on zero trust over the next 12 months. Improving endpoint and device security is where most organizations say their approach to implementing a zero-trust framework is the most mature.Analyzing real-time and historic data to uncover, detect and thwart breach attempts underscores why cybersecurity is a data problem first. CISOs, CIOs and their teams need access to more historical data. Bot-based approaches to endpoint security need more data to fine-tune AI and machine learning (ML) models. Just how essential data is to improving cybersecurity defenses was made clear in the keynotes and breakout sessions at RSA 2022. CrowdStrikes launch of Asset Graph and successful integration of its Humio acquisition in Humio for Falcon reflects the high priority their customers and prospects place on real-time telemetry data and long-term data archiving. EventTransform 2022Register Here | Decision Making/Detection and Monitoring | Computer and Mathematical/Management | null | null | null | null | null | null |
|
news | Gaurav Tewari, Forbes Councils Member, Gaurav Tewari, Forbes Councils Member https://www.forbes.com/sites/forbesbusinesscouncil/people/gauravtewari/ | How Conversational AI Is Supercharging Business Productivity—And How To Get Started | Today we are seeing conversational AI systems with unprecedented sophistication in their ability to converse in a manner that is practically indistinguishable from speaking with a human. | https://www.forbes.com/sites/forbesbusinesscouncil/2022/06/10/how-conversational-ai-is-supercharging-business-productivity-and-how-to-get-started/ | 2022-06-10T12:30:00Z | Gaurav Tewari, Founder and Managing Partner of Omega Venture PartnersgettyRecent advances in artificial intelligence (AI) have catalyzed a sea change in the ability of computers to engage in live, human-like conversation with people. Today we are seeing conversational AI systems with unprecedented sophistication in their ability to converse in a manner that is practically indistinguishable from speaking with a human. Going far beyond Alexa or Siri responding to a simple query, todays state-of-the-art conversational AI (CAI) systems navigate complex dialogue, understand nuanced intent, navigate multi-topic, multi-turn conversations and articulate cogent, on-point responses. The rise of CAI is creating a new paradigm for customer experience, personalization and productivity in the modern workplace. As a venture capital investor who focuses on AI and business software, I have been privy to the breakthroughs in CAI and invested in industry-leading companies.Back in the 1960s, researchers at MIT pioneered ELIZA, an early natural language software, that attempted to simulate human dialogue. However, ELIZA and programs like it were severely limited in their real-world applicability. These systems were programmed with brittle rules and canned responses and could only handle a very narrow set of use cases without breaking down.By contrast, todays CAI systems are powered by machine learning, giving them far greater dexterity as well as the ability to self-improve over time. The revolution in CAI has benefitted from an abundance of data as well as the continuously improving price-performance of computeboth necessary ingredients to train good AI systems. And, crucially, recent years have seen extraordinary funding and talent funneled toward developing groundbreaking large language models, which provide a core foundation for the development of conversational AI applications.Use Cases For CAII believe business is poised to undergo a significant transformation as a result of AI. Below are three compelling CAI use cases that businesses can harness today.1. Customer ServiceNo one likes waiting on hold to reach a customer service agent, and navigating an endless phone tree is no fun. The rise of CAI means that large chunks of front-line customer service can now be automated. AI can handle most high-volume, routine queries. When a more nuanced or emotional situation arises, CAI systems are smart enough to escalate to a human agent, while equipping the agent with rich context such as the callers identity, relationship history, applicable company policies and learnings from other similar customer enquiries.We are already witnessing the beginnings of this future, and I've noticed the companies that adopt these solutions are seeing substantial savings and higher customer satisfaction ratings.2. SalesSales processes often require collecting data from prospective customers, both to qualify the customer as well as to curate the appropriate offering. The process can be dull, time-consuming and error-prone. CAI changes the game by enabling such information to be collected automatically and interactively from a prospective customer, and they can do so 24/7. These systems can also tie together information from multiple touchpoints and channels (e.g., chat, text, phone, web) to mitigate redundancy and error, while reducing customer friction. Automating customer qualification and data collection in this manner provides value for both the customer and the business. From a customers perspective, the value of a streamlined, self-serve conversational interface that retains context and memory across interactions and helps navigate a process is hard to overstate. For the sales rep, such solutions reduce the tedium of data collection and data entry, often across disparate systems. Sales reps can now spend their time where it matters mostin building meaningful relationships as well as in customer interactions that require a human touch.3. Cybersecurity While the notion of using a persons voice to verify their identity has been around for a while, CAI takes voice biometrics to the next level. Instead of a pre-formulated phrase or sentence, CAI systems can capture much higher resolution data associated with a persons vocal characteristicsfor example, tone, speed, word choice, sentence structure, grammar, accent, inflections, mannerismsand tie these with identifiers such as location, time, day, weather, prior interaction history and real-time knowledge of threats.A biometric voice system can improve employee and customer satisfaction by providing both efficient and robust identification. Users appreciate not having to fumble for their ID cards or remembering answers to security questions. Organizations benefit from decreased fraud and improved usability. In sectors such as healthcare and financial services, where confidentiality is important, voice biometrics can also serve to limit unnecessary dissemination of private information.Best PracticesBusiness leaders adopting CAI solutions should prioritize defining clear use cases and integration into existing systems. A well-defined business use case, with quantifiable success metrics, is critical to managing the scope of the deployment and measuring return on investment. It is also imperative to ensure that the CAI is well-integrated with existing workflows and that it can access the data it needs to fulfill its use case. Lastly, CAI systems are not a panacea. Despite the fact that CAI systems continually advance, the real world presents many novel situations that require improvisation, flexibility and exception handling. A CAI system cannot anticipate or prepare for every scenario that might arise. When nuance and judgment are required, businesses should provide human oversight and failover. By reducing repetitive tasks, CAI empowers people to spend more of their time engaging in strategic initiatives that require creativity, insight, intuition and common sense.As AI continues to permeate every economic sector, I believe we will see more practical applications that unlock tangible and substantive value. CAI is one such application that has reached an inflection point. With continued advancement, I look forward to the additional novel use cases poised to be unleashed in the years ahead.Forbes Business Council is the foremost growth and networking organization for business owners and leaders. Do I qualify? | Digital Assistance/Process Automation | Business and Financial Operations/Management | null | null | null | null | null | null |
|
news | Dylan Martin | Nvidia taps Intel’s Sapphire Rapids CPU for Hopper-powered DGX H100 | A win against AMD as a much bigger war over AI compute plays outNvidia has chosen Intel's next-generation Xeon Scalable processor, known as Sapphire Rapids, to go inside its upcoming DGX H100 AI system to showcase its flagship H100 GPU.… | https://www.theregister.com/2022/06/07/nvidia_intel_sapphire_rapids/ | 2022-06-07T19:30:07Z | Nvidia has chosen Intel's next-generation Xeon Scalable processor, known as Sapphire Rapids, to go inside its upcoming DGX H100 AI system to showcase its flagship H100 GPU.Jensen Huang, co-founder and CEO of Nvidia, confirmed the CPU choice during a fireside chat Tuesday at the BofA Securities 2022 Global Technology Conference. Nvidia positions the DGX family as the premier vehicle for its datacenter GPUs, pre-loading them with its software and optimizing them to provide the fastest AI performance as individual systems or in large supercomputer clusters.Huang's confirmation answers a question we and other observers have had about which next-generation x86 server CPU the new DGX system would use since it was announced in March.The GPU giant has previously promised that the DGX H100 [PDF] will arrive by the end of this year, and it will pack eight H100 GPUs, based on Nvidia's new Hopper architecture. By using its fourth-generation NVLink interconnect to connect the GPUs, the chip designer has claimed that a single system will be capable of delivering 32 petaflops of AI performance using its FP8 format.Huang confirmed Nvidia's selection of Sapphire Rapids for the DGX H100 while voicing his continued support for x86 CPUs as the company plans to introduce its first Arm-based server CPU, Grace, next year. He also said that Nvidia will use Sapphire Rapids for new supercomputers."We buy a lot of x86s. We have great partnerships with Intel and AMD. For the Hopper generation, I've selected Sapphire Rapids to be the CPU for Nvidia Hopper, and Sapphire Rapids has excellent single-threaded performance. And we're qualifying it for hyperscalers all over the world. We're qualifying it for datacenters all over the world. We're qualifying it for our own server, our own DGX. We're qualifying it for our own supercomputers," he said at the Tuesday event.The selection of Intel's upcoming Sapphire Rapids chip, which has already started shipping to some customers, marks a reversal of sorts for Nvidia after it chose AMD's second-generation Epyc server CPU, code-named Rome, for its DGX A100 system that was introduced in 2020.This comes after industry publication ServeTheHome reported in mid-April that Nvidia had motherboard designs for both Sapphire Rapids and AMD's upcoming Epyc CPU, code-named Genoa, for the DGX H100 as the GPU giant had not yet decided on which x86 chip it would use.While Intel will consider this a victory as the semiconductor giant works to regain technology leadership after years of missteps, it's a relatively small win when considering the bigger battle over GPUs and other accelerators that is playing out between Nvidia, Intel, AMD and other companies. It's why, for instance, Intel is making a big bet on its upcoming Ponte Vecchio GPU and why AMD has pushed to become more competitive against Nvidia with its latest Instinct GPUs.One major reason why Nvidia has decided to build its own Arm-compatible CPU is so it can put a CPU and a GPU together in the same package to significantly speed up the flow of data between the two components to fuel AI workloads and other kinds of demanding applications.Nvidia plans to introduce its first iteration of this design, called the Grace Hopper Superchip, next year alongside the 144-core, CPU-only Grace Superchip, and we think it's likely that Nvidia will introduce a new kind of DGX system that will use Grace. Intel also plans to introduce a CPU-GPU design for servers with the Falcon Shores XPU in 2024.During the Tuesday talk, Huang promised that "Grace is going to be an amazing CPU" that will allow the Nvidia to fine-tune everything from the components to the systems to the software. While the GPU giant is designing the Arm-compatible chip to benefit recommender systems and large language models used by so-called hyperscale companies, it will be used for other applications too, according to Huang."Grace has the advantage that in every single application domain that we go into, we have the full stack, we have all of the ecosystem all lined up, whether it's data analytics, or machine learning, or cloud gaming, or Omniverse, [or] digital twin simulations. In all of the spaces that we're going to take Grace into, we own the whole stack, so we have the opportunity to create the market for it," he said. ® | Unknown | Computer and Mathematical/Life, Physical, and Social Science | null | null | null | null | null | null |
|
news | Ke Wang, Zhenyu Song | Improving Presto Architectural Decisions With Shadow Cache at Meta (Facebook) | Motivation and BackgroundAt Meta (Facebook), Presto is a distributed real-time query engine using SQL language as an interface to perform fast, interactive queries on petabytes of data. It supports standard ANSI SQL, including queries, aggregations, JOINs, and window functions.Alluxio is the data orchestration platform as a critical technology supporting Presto and various other data analytics applications and use cases. Alluxio creates a virtual data layer that federates data from any file system or object store, provides a unified namespace across storage systems, and serves the data to the applications using industry-standard interfaces with fast data access. | https://www.alluxio.io/blog/improving-presto-architectural-decisions-with-alluxio-shadow-cache-at-meta-facebook/ | 2022-05-18T14:41:23Z | Motivation and BackgroundAt Meta (Facebook), Presto is a distributed real-time query engine using SQL language as an interface to perform fast, interactive queries on petabytes of data. It supports standard ANSI SQL, including queries, aggregations, JOINs, and window functions.Alluxio is the data orchestration platform as a critical technology supporting Presto and various other data analytics applications and use cases. Alluxio creates a virtual data layer that federates data from any file system or object store, provides a unified namespace across storage systems, and serves the data to the applications using industry-standard interfaces with fast data access.To improve the performance of Presto, understanding the impact of cache size and cache hit ratio is essential. Presto needs to know certain caching information from Alluxio to determine whether expanding cache size can help improve cache hit ratio and performance when the cache storage is limited. This information is also helpful to optimize caching algorithms. We also want to optimize the routing algorithm for better balance and efficiency. As a result, how to better track and manage Alluxio cache data is key to presto optimization decisions.Two key questions need to be addressed from Prestos side:1. How to size the cache for each tenant2. What is the potential cache hit ratio improvement?We propose Shadow Cache, a lightweight Alluxio component for tracking working set size and cache hit rate. To answer the first question, Shadow Cache will tell the administrator how many non-duplicate bytes the cache has received in the past 24 hours to estimate the future cache demand. For the second question, Shadow Cache will tell the administrator how many requests hit the cache if the cache can keep all the requests over the last 24 hours, i.e., the unhit ones are the ones that never came up, so the maximum hit rate of the cache can be calculated.This lightweight Alluxio component, Shadow Cache, can provide insights into the cache working set and what the cache hit rate would look like if there is infinite cache space. In order to monitor the cache state of the cluster, we define the following key metrics.C1: Real cache usage at a certain point in time C2: Shadow cache working set in a time window (1 day/1 week)H1: Real cache hit-rate H2: Shadow cache hit-rateThe ChallengeWhile we have attempted to provide the above metrics for Alluxios cache, we have encountered several challenges.Low Memory and CPU OverheadThe Shadow Cache is a lightweight component that keeps track of the size of cached working sets. It is difficult to keep track of an infinite working set with limited memory. Shadow Cache must also have a low CPU overhead since it caches the data when processing each query. Otherwise, user requests will be blocked for a long time.AccuracyShadow Cache must also guarantee accuracy. In Presto, Shadow Cache measures the cache status of a cluster, and if the estimated limit cache hit rate is too low, Presto may wrongly determine that this job is cache-unfriendly. In contrast, if the estimated limit cache hit rate is too high, Presto may believe that expanding the cache of the cluster at this point will significantly improve the overall performance.Dynamic UpdatePresto and other modern data applications are mainly used for discovering current or future trends. Therefore, Shadow Cache should also discard obsolete items in real-time. Otherwise, it is likely to bring noise interference to the decision. Sliding windows are one of the most common methods for storing the newest items, but creating the data structure for the sliding window model is not easy. When the window slides, we need to delete the items that were just moved out in real-time. It is important to find the item that needs to be deleted as fast as possible and delete it.The SolutionIn light of the two requirements of high accuracy and low overhead, we immediately think of the Bloom filter, which has gained popularity in various distributed databases. Shadow Cache estimates the working set size and limit hit rate based on the Bloom filter. Here is how Bloom filters solve the three challenges.Bloom Filters: The Solution to Overhead and Accuracy ChallengesBloom filter is a space-efficient probabilistic data structure membership testing. A Bloom filter is an array initialized with all zeros in bits, and each object is represented with only several bits, significantly saving space overhead and providing queries with excellent efficiency. Bloom filters can determine whether an item exists or not. The item must not exist if the Bloom filter returns that it does not exist. Note that false positives are possible, but false negatives are not.The Bloom filter has k hash functions. To add an element, apply each hash function and set the bit to 1. To query an element, apply each hash function and AND the bits. When all the bits on the k positions are 1, the item is considered to exist. Otherwise, the item is not considered to exist. Bloom Filter Chain: The Solution to Dynamic UpdateBloom filters can provide both low overhead and high accuracy, so can we directly apply them to Shadow Cache?The first issue we encounter is that Bloom filters do not support deletion. This is because we only care about the size of the working set of the users application over time, and Shadow Cache is required to do this. Shadow Cache does this by linking multiple filters together to create a Bloom filter chain. Here is how the Bloom filter chain can be used to update the load size of the working set in real-time.QueryAs shown above, Shadow Cache is a chain composed of multiple Bloom filters. When tracking the size of a users working set in the past 24 hours, we can divide 24 hours into four periods. A Bloom filter tracks each period in Shadow Cache, and each Bloom filter tracks a period. Shadow Cache uses all the existing Bloom filters or creates a new Bloom filter for the query, as shown in the following figure.Live UpdateTo keep the data real-time, we need Shadow Cache to discard the data that has become obsolete when the time window is sliding. The Bloom filter values must be updated continuously with time t, and the Bloom filter items already outside the time window must be deleted. Since we are combining multiple Bloom filters, it is easy to determine where the outdated items are located at the very end of the Bloom filter, as shown in the figure below. Every time a new period begins, we delete the oldest filter from the chain and add a new all-empty filter to record the latest data.Working SetSizeAs bloom filters map an item to multiple bits, judging the working set size based solely on the number of bits to 1 would introduce an unacceptable error since a bit may represent multiple items and an item can be scattered among multiple bits. Therefore, we employ the formula derived by Swamidass & Baldi (2007). We leverage the approximation with the following equation to measure the working set size.Where n* is an estimate of the number of items in the filter, m is the length (size) of the filter, k is the number of hash functions, and X is the number of bits set to one.Infinite Size Hit Ratio After providing the working set size metric, Shadow Cache also needs to provide the infinite size hit ratio. We can use Bloom filters as a cache with infinite space because they can track huge amounts of data with little memory usage. The number of user requests hitting a Bloom filter equals the number of hits in an infinite cache, denoted as a hit. The total number of user requests is denoted as queryNum. QueryNum is the total number of user requests, so the hit rate is equal to hit/queryNum.Using Shadow Cache to Determine Presto Cluster Cache StatusAfter completing the Bloom filter chain, we can quickly learn the previously defined metrics H1, H2, C1, C2. In the next step, Presto can determine the cache status of the cluster by comparing the size relationship between them, as shown in the following figure.When H2 is low, it indicates that the cache hit rate of the application in this cluster cannot be reached even with unlimited cache space. This implies that this application is not cache-friendly. When H2 is high and H1 is low and C2 > C1, it indicates that the cluster is under-allocated cache space and the hit rate can be further improved if the cache capacity is expanded. When H2 is high and H1 is high and C2 < C1, the cluster cache is over-allocated and the resources are wasted. A cluster is in good shape if H2 > H1 and C2 > C1 and C2 > C1, meaning scaling the cache is not needed.ImplementationShadow Caches implementation of Bloom filters is based on the Guava BloomFilter lib and supports specific filter configurations based on user-defined memory overhead budget and shadow cache window. Currently, Shadow Cache supports working set size in terms of #pages and #byte, which represent how many pages and how many specific bytes the working set contains, respectively. For hit rate calculation, Shadow Cache supports infinite size byte hit ratio and object hit ratioBelow are the configurations:#The past window to define the working set: alluxio.user.client.cache.shadow.window=24h#The total memory overhead for bloom filters used for tracking:alluxio.user.client.cache.shadow.memory.overhead=125MB#The number of bloom filters used for tracking. Each tracks a segment of window:alluxio.user.client.cache.shadow.bloomfilter.num=4Test ResultsWe tested Shadow Cache and found that with only 125MB of space, Shadow Cache can track 27TB of working sets with an error rate of only 3%. Moreover, the error rate can be further reduced by using HyperLogLog, but the infinite size hit ratio estimation will not be supported if HyperLogLog is used.Presto Routing OptimizationTo improve the performance, Presto needs a way to adjust the cluster in time if it learns the specific cluster state from Shadow Cache. Our next step is to describe the current Presto routing algorithm and then provide several options for routing optimization after introducing Shadow Cache.Presto RoutingPresto stores different tables in different clusters, sharing the cache across clusters by table name. Therefore, a query that accesses the same table will always go to the same target cluster to maximize its cache. The cluster cache would be filled with various disparate tables if this wasnt done. Below is a diagram of the routing algorithm.As shown in the figure above, table 1 to table 4 have different table names and are therefore assigned to different clusters. When requesting data from table1, the routing algorithm will send the request to cluster1, and when requesting data from table3, the routing algorithm will send the request to cluster3.Routing Optimization OptionsThe response time of a cluster request is a simple way to determine if a cluster is working. When the cluster is slow to respond or takes too long to respond, we assume that the cluster is having a problem. With Shadow Cache, as mentioned above, combined with H1, H2, C1, and C2, we can quickly determine if a cluster is experiencing performance degradation due to cache stress.Presto proposes the following three routing optimization options for such an underperforming cluster. Of course, each option has its tradeoff.Option 1When the primary cluster is busy, have a designated secondary cluster with the cache turned on for those queries. However, this approach requires storing additional tables cache on each cluster.Option 2 Both clusters are treated as the primary cluster for serving requests and load balancing is performed in both clusters. However, this option will multiply the cache disk space usage.Option 3Build the Map from tables to clusters and make the CPU utilization more even. However, it could make cache storage distribution not even and requires extra cache space.SummaryThe challenge of tracking and estimating the size of the working set in the cache is significant, so we developed a lightweight Alluxio component Shadow Cache using Bloom filters. Because we are only interested in the latest status of the working set, it is necessary to use a time window model to eliminate obsolete items. Shadow Cache divides the time window into four segments for this purpose. Each segment is tracked with a different Bloom filter. A new Bloom filter is created to track the latest data, replacing the earliest one in each elimination. Finally, when the working set size needs to be provided, we use the Swamidass & Baldi (2007) proposed formula for base estimation.Overall, Shadow Cache provides Presto with four convenient metrics: H1, H2, C1, and C2, where H1 and C1 represent the real cache hit rate and usage, respectively, while H2 and C2 represent the limit hit rate of the cache and the size of the users working set over a period of time. Presto can quickly determine the relationship between cache capacity and application performance and optimize the routing algorithm for better balance and efficiency based on the above four metrics.Check out the merged code on GitHub. | Decision Making/Process Automation | Unknown | null | null | null | null | null | null |
|
news | Brandy Shaul | Messenger: How to Use the Jurassic World: Dominion Group Effect in Video Calls | To mark the release of Jurassic World: Dominion in theaters June 9, Messenger released a Group Effect that adds Blue the velociraptor to users' video calls. When this effect is activated, velociraptors will randomly appear on each participant's screen. Our guide will show you how to use the Jurassic World: Dominion group effect in the... | https://www.adweek.com/social-marketing/messenger-how-to-use-the-jurassic-world-dominion-group-effect-in-video-calls/ | 2022-06-09T16:00:00Z | Join the Adweek live virtual summit Elevate: AI on July 12 to explore the cutting-edge trends, tools and platforms becoming the norm in ad campaigns. Sign up with code: ADWEEKAI50 for 50% off.To mark the release of Jurassic World: Dominion in theaters June 9, Messenger released a Group Effect that adds Blue the velociraptor to users’ video calls. When this effect is activated, velociraptors will randomly appear on each participant’s screen. | Unknown | Arts, Design, Entertainment, Sports, and Media | null | null | null | null | null | null |
|
news | Rob Toews, Contributor, Rob Toews, Contributor https://www.forbes.com/sites/robtoews/ | Synthetic Data Is About To Transform Artificial Intelligence | Synthetic data is one of those ideas that seems almost too good to be true. | https://www.forbes.com/sites/robtoews/2022/06/12/synthetic-data-is-about-to-transform-artificial-intelligence/ | 2022-06-12T23:00:00Z | These people do not exist. These faces were artificially generated using a form of deep learning ... [+] known as generative adversarial networks (GANs). Synthetic data like this is becoming increasingly indistinguishable from real-world data.Source: This-Person-Does-Not-Exist.comImagine if it were possible to produce infinite amounts of the worlds most valuable resource, cheaply and quickly. What dramatic economic transformations and opportunities would result?This is a reality today. It is called synthetic data.Synthetic data is not a new idea, but it is now approaching a critical inflection point in terms of real-world impact. It is poised to upend the entire value chain and technology stack for artificial intelligence, with immense economic implications.Data is the lifeblood of modern artificial intelligence. Getting the right data is both the most important and the most challenging part of building powerful AI. Collecting quality data from the real world is complicated, expensive and time-consuming. This is where synthetic data comes in.Synthetic data is an elegantly simple conceptone of those ideas that seems almost too good to be true. In a nutshell, synthetic data technology enables practitioners to simply digitally generate the data that they need, on demand, in whatever volume they require, tailored to their precise specifications.According to a widely referenced Gartner study, 60% of all data used in the development of AI will be synthetic rather than real by 2024.Take a moment to digest this. This is a striking prediction.Data is the foundation of the modern economy. It is, in the words ofThe Economist, the worlds most valuable resource. And within a few short years, the majority of the data used for AI may come from a disruptive new sourceone that few companies today understand or even know about.Needless to say, massive business opportunities will result.We can simply say that the total addressable market of synthetic data and the total addressable market of data will converge, said Ofir Zuk, CEO/cofounder of synthetic data startup Datagen.The rise of synthetic data will completely transform the economics, ownership, strategic dynamics, even (geo)politics of data. It is a technology worth paying attention to.From Autonomous Vehicles to Human FacesWhile the concept of synthetic data has been around for decades, it was in the autonomous vehicle sector that the technology first found serious commercial adoption starting in the mid-2010s.It is no surprise that synthetic data got its start in the world of autonomous vehicles. To begin with, because the AV sector has attracted more machine learning talent and investment dollars than perhaps any other commercial application of AI, it is often the catalyst for foundational innovations like synthetic data.Synthetic data and autonomous vehicles are a particularly natural fit for one another given the challenges and importance of edge cases in the world of AVs. Collecting real-world driving data for every conceivable scenario an autonomous vehicle might encounter on the road is simply not possible. Given how unpredictable and unbounded the world is, it would take literally hundreds of years of real-world driving to collect all the data required to build a truly safe autonomous vehicle.So instead, AV companies developed sophisticated simulation engines to synthetically generate the requisite volume of data and efficiently expose their AI systems to the long tail of driving scenarios. These simulated worlds make it possible to automatically produce thousands or millions of permutations of any imaginable driving scenarioe.g., changing the locations of other cars, adding or removing pedestrians, increasing or decreasing vehicle speeds, adjusting the weather, and so on.For years now, the leading autonomous vehicle playersWaymo, Cruise, Aurora, Zooxhave all invested heavily in synthetic data and simulation as a core part of their technology stack. In 2016, for instance, Waymo generated 2.5 billion miles of simulated driving data to train its self-driving system (compared to 3 million miles of driving data collected from the real world). By 2019, that figure had reached 10 billion simulated miles.As Andreessen Horowitz general partner Chris Dixon put it back in 2017: Right now, you can almost measure the sophistication of an autonomy teama drone team, a car teamby how seriously they take simulation.The first batch of synthetic data startups that emerged thus targeted the autonomous vehicle end market. This included companies like Applied Intuition (most recently valued at $3.6 billion), Parallel Domain and Cognata.But it didnt take long for AI entrepreneurs to recognize that the synthetic data capabilities that had been developed for the autonomous vehicle industry could be generalized and applied to a host of other computer vision applications.From robotics to physical security, from geospatial imagery to manufacturing, computer vision has found a wide range of valuable applications throughout the economy in recent years. And for all of these use cases, building AI models requires massive volumes of labeled image data.Synthetic data represents a powerful solution here.Using synthetic data methods, companies can acquire training data far more quickly and cheaply than the alternativelaboriously collecting that data from the real world. Imagine how much easier it is to artificially generate 100,000 images of, say, smartphones on an assembly line than it is to collect those images in the real world one by one.And importantly, real-world image data must be labeled by hand before it can be used to train AI modelsan expensive, time-consuming, error-prone process. A key advantage of synthetic data is that no manual data labeling is needed: because the images are digitally tailor-made from scratch in the first place, they automatically come with pixel-perfect labels.How, exactly, does synthetic data for computer vision work? How is it possible to artificially generate such high-fidelity, photorealistic image data?A key AI technology at the heart of synthetic data is known as generative adversarial networks, or GANs.GANs were invented by AI pioneer Ian Goodfellow in 2014 and have been an active area of research and innovation since then. Goodfellows core conceptual breakthrough was to architect GANs with two separate neural networksand then pit them against one another.Starting with a given dataset (say, a collection of photos of human faces), the first neural network (called the generator) begins generating new images that, in terms of pixels, are mathematically similar to the existing images. Meanwhile, the second neural network (the discriminator) is fed photos without being told whether they are from the original dataset or from the generators output; its task is to identify which photos have been synthetically generated.As the two networks iteratively work against one anotherthe generator trying to fool the discriminator, the discriminator trying to suss out the generators creationsthey hone one anothers capabilities. Eventually the discriminators classification success rate falls to 50%, no better than random guessing, meaning that the synthetically generated photos have become indistinguishable from the originals.In 2016, AI great Yann LeCun called GANs the most interesting idea in the last ten years in machine learning.Two other important research advances driving recent momentum in visual synthetic data are diffusion models and neural radiance fields (NeRF).Originally inspired by concepts from thermodynamics, diffusion models learn by corrupting their training data with incrementally added noise and then figuring out how to reverse this noising process to recover the original image. Once trained, diffusion models can then apply these denoising methods to synthesize novel clean data from random input.Diffusion models have seen a surge in popularity over the past year, including serving as the technological backbone of DALL-E 2, OpenAIs much-discussed new text-to-image model. With some meaningful advantages over GANs, expect to see diffusion models play an increasingly prominent role in the world of generative AI moving forward.NeRF, meanwhile, is a powerful new method to quickly and accurately turn two-dimensional images into complex three-dimensional scenes, which can then be manipulated and navigated to produce diverse, high-fidelity synthetic data.Two leading startups offering synthetic data solutions for computer vision are Datagen (which recently announced a $50 million Series B) and Synthesis AI (which recently announced a $17 million Series A). Both companies specialize in human data, in particular human faces; their platforms enable users to programmatically customize facial datasets across dimensions including head poses, facial expressions, ethnicities, gaze directions and hair styles.AI.Reverie, an early mover in this category, was scooped up last year by Facebooka sign of big techs growing interest in synthetic data. Earlier-stage startups include Rendered.ai, Bifrost and Mirage.Coming full circle, while autonomous vehicles provided the original impetus for the growth of synthetic data several years ago, to this day the autonomous vehicle sector continues to push forward the state of the art in the field.One of the most intriguing new startup entrants in the autonomous vehicle category, Waabi, has taken simulation technology to the next level. Founded by AI luminary Raquel Urtasun, who previously ran Ubers AV research efforts, Waabi came out of stealth last year with a star-studded team and over $80 million in funding.Waabis ambition is to leapfrog the more established AV players by harnessing next-generation AI to build a new type of autonomy stack that avoids the shortcomings of more legacy approaches. At the center of that stack is synthetic data.In a break from the rest of the AV field, Waabi does not invest heavily in deploying cars on real-world roads to collect driving data. Instead, audaciously, Waabi is seeking to train its autonomous system primarily via virtual simulation. In February the company publicly debuted its cutting-edge simulation platform, named Waabi World.At Waabi, we go one step further in generating synthetic data, said Urtasun. Not only can we simulate the vehicles sensors with unprecedented fidelity in near real-time, but we do so in a closed-loop manner such that the environment reacts to us and we react to it. This is very important for robotics systems such as self-driving vehicles as we not only need to learn to perceive the world but also to act safely on it.The Primacy of LanguageWhile synthetic data will be a game-changer for computer vision, the technology will unleash even more transformation and opportunity in another area: language.The vast potential for text-based synthetic data reflects the basic reality that language is ubiquitous in human affairs; it is at the core of essentially every important business activity. Dramatic recent advances in natural language processing (NLP) are opening up virtually unbounded opportunities for value creation across the economy, as previously explored in this column. Synthetic data has a key role to play here.A couple concrete examples will help illustrate the possibilities.Anthem, one of the largest health insurance companies in the world, uses its troves of patient medical records and claims data to power AI applications like automated fraud detection and personalized patient care.Last month, Anthem announced that it is partnering with Google Cloud to generate massive volumes of synthetic text data in order to improve and scale these AI use cases. This synthetic data corpus will include, for instance, artificially generated medical histories, healthcare claims and related medical data that preserve the structure and signal of real patient data.Among other benefits, synthetic data directly addresses the data privacy concerns that for years have held back the deployment of AI in healthcare. Training AI models on real patient data presents thorny privacy issues, but those issues disappear when the data is synthetic.More and moresynthetic data is going to overtake and be the way people do AI in the future, said Anthems Chief Information Officer Anil Bhatt.Another recent example hints at even more transformative possibilities.Late last year Illumina, the worlds leading genetic sequencing company, announced that it was partnering with Bay Area startup Gretel.ai to create synthetic genomic datasets.Genomic data is one of the most complex, multi-dimensional, information-rich types of data in the world. Quite literally, it contains the secrets of lifethe instructions for how to build an organism. Just over 3 billion base-pairs in length, every humans unique DNA sequence defines much about who they are, from their height to their eye color to their risk of heart disease or substance abuse. (While not natural language, genomic sequences are textual data; every individuals DNA sequence can be encoded via a simple 4-letter alphabet.)Analyzing the human genome with cutting-edge AI is enabling researchers to develop a deeper understanding of disease, health, and how life itself works. But this research has been bottlenecked by the limited availability of genomic data. Stringent privacy regulations and data-sharing restrictions surrounding human genetic data impede researchers ability to work with genomic datasets at scale.Synthetic data offers a potentially revolutionary solution: it can replicate the characteristics and signal of real genomic datasets while sidestepping these data privacy concerns, since the data is artificially generated and does not correspond to any particular individuals in the real world.These two examples are just the tip of the iceberg when it comes to the wide range of language-based opportunities unlocked by synthetic data.A handful of promising startups has emerged in recent years to pursue these opportunities.The most prominent startup in this category is Gretel.ai, mentioned above, which has raised over $65 million to date from Greylock and others.Gretel has seen strong market demand for its technology from blue-chip customers across industries, from healthcare to financial services to gaming to e-commerce.Its amazing to see customers start to adopt synthetic data at such a rapid pace, said Gretel.ai CEO/cofounder Ali Golshan. The awareness and appetite for synthetic data in the enterprise is growing incredibly quickly, even compared to 12 or 18 months ago. Our customers continue to surprise us with innovative new ways to apply our technology.Another intriguing early-stage player in this space is DataCebo. DataCebo was founded by a group of MIT faculty and their students who had previously created Synthetic Data Vault (SDV), the largest open-source ecosystem of models, data, benchmarks, and other tools for synthetic data. DataCebo and Synthetic Data Vault focus on structured (i.e., tabular or relational) text datasetsthat is, text data that is organized in tables or databases.The most important dynamic to understand with this technology is the tradeoff between fidelity and privacy, said DataCebo cofounder Kalyan Veeramachaneni. The core of what the DataCebo platform offers is a finely-tuned knob that enables customers to ramp up the privacy guarantees around the synthetic data that they are generating, but at the cost of fidelity, or vice versa.Tonic.ai is another buzzy startup offering tools for synthetically generated textual data. Tonics primary use case is synthetic data for software testing and development, rather than for building machine learning models.One last startup worth noting is Syntegra, which focuses on synthetic data specifically for healthcare, with use cases spanning healthcare providers, health insurers and pharmaceutical companies. Synthetic datas long-term impact may be greater in healthcare than in any other field, given the market size and the thorny privacy challenges of real-world patient data.It is worth noting that, for the most part, the companies and examples discussed here use classical statistical methods or traditional machine learning to generate synthetic data, with a focus on structured text. But over the past few years, the world of language AI has been revolutionized by the introduction of the transformer architecture and the emerging paradigm of massive foundation models like OpenAIs GPT-3.An opportunity exists to build next-generation synthetic data technology by harnessing large language models (LLMs) to produce unstructured text (or multimodal) data corpuses of previously unimaginable realism, originality, sophistication and diversity.Recent advances in large language models have brought us machine-generated data that is often indistinguishable from human-written text, said Dani Yogatama, a senior staff research scientist at DeepMind who focuses on generative language models. This new type of synthetic data has been successfully applied to build a wide range of AI products, from simple text classifiers to question-answering systems to machine translation engines to conversational agents. Democratizing this technology is going to have a transformative impact on how we develop production AI models.The Sim-to-Real GapTaking a step back, the fundamental conceptual challenge in this field is that synthetically generated data must be similar enough to real data to be useful for whatever purpose the data is serving. This is the first question that most people have when they learn about synthetic dataCan it really be accurate enough to substitute for real data?A synthetic datasets degree of similarity to real data is referred to as its fidelity. It is important for us to ask: how high-fidelity does synthetic data need to be in order to be useful? Have we gotten there yet? How can we measure and quantify fidelity?Recent advances in AI have dramatically boosted the fidelity of synthetic data. For a wide range of applications across both computer vision and natural language processing, todays synthetic data technology is advanced enough that it can be deployed in production settings. But there is more work to do here.In computer vision, the sim-to-real gap, as it is colloquially known, is narrowing quickly thanks to ongoing deep learning innovations like neural radiance fields (NeRF). The release of developer platforms like Nvidias Omniverse, a cutting-edge 3D graphics simulation platform, plays an important role here by making state-of-the-art synthetic data capabilities widely accessible to developers.The most direct way to measure the efficacy of a given synthetic dataset is simply to swap it in for real data and see how an AI model performs. For instance, computer vision researchers might train one classification model on synthetic data, train a second classification model on real data, deploy both models on the same previously unseen test dataset, and compare the two models performance.In practice, the use of synthetic data in computer vision need not be, and generally is not, this binary. Rather than using only real data or only synthetic data, researchers can drive significant performance improvements by combining real data and synthetic data in their training datasets, enabling the AI to learn from both and boosting the overall size of the training corpus.It is also worth noting that synthetic datasets sometimes actually outperform real-world data. How is this possible?The fact that data was collected from the real world does not guarantee that it is 100% accurate and high-quality. For one thing, real-world image data generally must be labeled by hand by a human before it can be used to train an AI model; this data labeling can be inaccurate or incomplete, degrading the AIs performance. Synthetic data, on the other hand, automatically comes with perfect data labels. Moreover, synthetic datasets can be larger and more diverse than their real-world counterparts (thats the whole point, after all), which can translate into superior AI performance.For text data, industry practitioners have begun to develop metrics to quantify and benchmark the fidelity of synthetic data.Gretel.ai, for instance, grades its synthetic datasets on three different statistically rigorous metricsField Correlation Stability, Deep Structure Stability, and Field Distribution Stabilitywhich it aggregates to produce an overall Synthetic Data Quality Score between 0 and 100. Put simply, this overall figure represents a confidence score as to whether scientific conclusions drawn from the synthetic dataset would be the same if one were to have used the original dataset instead.Gretels synthetic data generally performs quite well: AI models trained on it typically come within a few percentage points in accuracy relative to models trained on real-world data, and are sometimes even more accurate.Fellow synthetic data startup Syntegra has likewise proposed thoughtful analytical frameworks for evaluating synthetic data fidelity in the healthcare context.For text data, a basic tradeoff exists between fidelity and privacy: as the synthetic data is made increasingly similar to the real-world data on which it is based, the risk correspondingly increases that the original real-world data can be reconstructed from the synthetic data. If that original real-world data is sensitivemedical records or financial transactions, saythis is a problem. A core challenge for synthetic text data, therefore, is not just to maximize fidelity in a vacuum, but rather to maximize fidelity while preserving privacy.The Road AheadThe graph below speaks volumes. Synthetic data will completely overshadow real data in AI models by 2030, according to Gartner.Source: GartnerAs synthetic data becomes increasingly pervasive in the months and years ahead, it will have a disruptive impact across industries. It will transform the economics of data. By making quality training data vastly more accessible and affordable, synthetic data will undercut the strength of proprietary data assets as a durable competitive advantage. Historically, no matter the industry, the most important first question to ask in order to understand the strategic dynamics and opportunities for AI has been: who has the data? One of the main reasons that tech giants like Google, Facebook and Amazon have achieved such market dominance in recent years is their unrivaled volumes of customer data.Synthetic data will change this. By democratizing access to data at scale, it will help level the playing field, enabling smaller upstarts to compete with more established players that they otherwise might have had no chance of challenging.To return to the example of autonomous vehicles: Google (Waymo) has invested billions of dollars and over a decade of effort to collect many millions of miles of real-world driving data. It is unlikely that any competitor will be able to catch up to them on this front. But if production-grade self-driving systems can be built almost entirely with synthetic training data, then Googles formidable data advantage fades in relevance, and young startups like Waabi have a legitimate opportunity to compete.The net effect of the rise of synthetic data will be to empower a whole new generation of AI upstarts and unleash a wave of AI innovation by lowering the data barriers to building AI-first products.An interesting related impact of the proliferation of synthetic data will be to diminish the need for and the importance of data labeling, since synthetically generated data does not need to be labeled by hand.Data labeling has always been a kludgy, inelegant part of the modern machine learning pipeline. Intuitively, truly intelligent agents (like human beings) should not need to have labels manually attached to every object they observe in order to recognize them.But because labeled data is necessary under todays AI paradigm, data labeling has itself become a massive industry; many companies spend tens or hundreds of millions of dollars each year just to get their data labeled. Scale AI, the leading provider of data labeling services, was valued at $7.3 billion last year amid eye-popping revenue growth. An entire ecosystem of smaller data labeling startups has likewise emerged.Synthetic data will threaten these companies livelihoods. Seeming to recognize this, Scale AI is now aiming to get into the synthetic data game itself, launching a synthetic data platform earlier this year called Scale Synthetic. (Clay Christensen adherents might recognize elements of his famous innovators dilemma here.)Synthetic data technology will reshape the world of AI in the years ahead, scrambling competitive landscapes and redefining technology stacks. It will turbocharge the spread of AI across society by democratizing access to data. It will serve as a key catalyst for our AI-driven future. Data-savvy individuals, teams and organizations should take heed.Note: The author is a Partner at Radical Ventures, which is an investor in Waabi. | Content Creation/Process Automation | Healthcare Practitioners and Support/Business and Financial Operations | null | null | null | null | null | null |
|
news | Eira May | Turns out the Great Resignation goes both ways (Ep. 445) | The home team covers the hiring freezes and layoffs hitting the tech sector, burnout among freelancers and applicants for tech jobs, the dubious ethics of unpaid internships, and how to make Twitter safer by preemptively blocking people.The post Turns out the Great Resignation goes both ways (Ep. 445) appeared first on Stack Overflow Blog. | https://stackoverflow.blog/2022/05/24/turns-out-the-great-resignation-goes-both-ways-ep-445/ | 2022-05-24T04:40:00Z | The home team covers the hiring freezes and layoffs hitting the tech sector, burnout among freelancers and applicants for tech jobs, the dubious ethics of unpaid internships, and how to make Twitter safer by preemptively blocking people.Episode notes:Companies like Meta, Twitter, and Netflix are enacting hiring freezes and layoffs, a situation thats not great for anybody but is likely to have outsize effects on people of color in tech.Gen Z may not understand file structures, but they sure understand Twitter toxicity. MegaBlock from Gen Z Mafia allows users to block bad tweets, their authors, and every single account that liked the offending tweet. There, doesnt that feel better?Apples WWDC 2022 is just around the corner. What are you most excited about?Machine-learning start-up Inflection AI raises $225 million in equity financing to use AI to improve human-computer communication. Another reminder that building sophisticated AI systems isnt cheap: who could forget that Open AI paid its top researcher just shy of $2 million in 2016?Todays Lifeboat badge goes to user Patricia Shanahan for their answer to Difference between int and double.TRANSCRIPTTags: ai, hiring, the stack overflow podcast | Digital Assistance/Content Synthesis | Unknown | null | null | null | null | null | null |
|
news | Gwern Branwen | Fake Journal Club | Discussion of how to teach active reading and questioning of scientific research. Partially fake research papers may teach a critical attitude. Various ideas for games reviewed. | https://www.gwern.net/Fake-Journal-Club | 2022-05-20T05:49:38Z | Discussion of how to teach active reading and questioning of scientific research. Partially fake research papers may teach a critical attitude. Various ideas for games reviewed.GPT, design, philosophy, statisticsHow do researchers transition from uncritically absorbing research papers or arguments to actively grappling with it and questioning it? Most learn this meta-cognitive skill informally or by ad hoc mechanisms like being tutored by a mentor, or watching others critique papers at a journal club. This patchwork may not always work or be the best approach, as it is slow and largely implicit, and similar to calibration training in statistical forecasting, targeted training may be able to teach it rapidly.To teach this active reading attitude of not believing everything you read, I borrow the pedagogical strategy of deliberately inserting errors which the student must detect, proposing fake research articles which could be read in a fake journal club.Faking entire articles is a lot of work and so I look at variations on it. I suggest that NN language models like GPT-3 have gotten good enough to, for short passages, provide a challenge for human readers, and that one could create a fake journal club by having a language model repeatedly complete short passages of research articles (possibly entirely fictional ones).This would provide difficult criticism problems with rapid feedback, scalability to arbitrarily many users, and great flexibility in content.Accepting research claims at face-value is the first and lowest level of reading a research paper. As one gains expertise in a field, one must move to a more active form of reading, critically interrogating a paper and its claims: nullius in verba! What is good and bad? What is new or old? What is dubious? Does the results section live up to the abstracts claims? What kind of, and how much, data would it need to be trustworthy? Where do the authors most furiously handwave over gaps? Does it make the same errors as everyone else, or are its errors at least entertainingly novel? Can you see any implications that the authors did not? Does it tie into some other result, or some broader paradigm?So, how do you learn good research criticisms? If the end of the path of learning active reading is deep domain expertise, to the point of being able to know which papers to do forensic statistics on to detect fraud, what is the start of the path? What is the first and smallest possible step one can take?You dont learn it from journals; few journals embrace the Pottery Barn rule, and the process of getting a criticism published, much less a retraction, would put Kafka to shame. (Journals, like doctors, often prefer to bury their mistakes than publish them.)Reading lots of papers is no guarantee; one can spend an arbitrary amount of time doing an activity without improving it if one simply goes through it as a routine, without any attempt at growth or deliberate practice. No matter how many decades you spend commuting, you wont become a skilled professional race driver. Similarly, many people can spend decades reading papers and come out the other end still taking every paper at face-value and believing things like a p-value of 0.05 means 95% probability a claim is not due to chance or most published results are true or correlation = causation when we want it to. Citation bias means that a reader will be exposed mostly to cites of papers supporting a claim, leading to an echo chamber of confirmation; one has to actively seek out the gray literature and failures to replicate. Many claims never get debunked or definitively refuted, they just fade away as people stop talking about them, and you dont notice an absence if you are simply reading what is put in front of you by the media or journals. (One thinks of the science journalists who breathlessly dash from nutrition study to animal study to pre-clinical drug trial to the latest gap-busting silver bullet, without apparently learning a thing or wondering whatever happened to that thing they reported on a decade ago.)Taking notes can help, if you are already actively reading, but note-taking and summarization can often mean simply regurgitating it in slightly different words (note the countless students who transcribe enormously detailed notebooks from lectures, but where it all goes in one ear and out the other).It helps if youre a jerk who reflexively bridles at being told to believe something by a paper, but that cant be taught and might not be a net benefit if it could be taught.You can learn it the hard way, by being enthusiastic about a shiny new claim and then watching it crash and burn over the next decade in a replication crisis, and becoming cynical and jaded about the next new shiny claim; this is not recommended because it takes a while, and one might be unlucky enough to pick one of the new claims which are actually true. It is also difficult to convey the feeling of being burned to a third partywhy should they care that something they never heard of turned out to be bunk?A better way seems to be mentors/tutors and experts in a field one looks up to who model how to do it for you: you can ask them about a shiny new result, and they will explain the gossip and the backstagethis is why its bullst, because a back of the envelope estimate shows its physically impossible, and further, Smith did it for his thesis a decade ago, better, ruling it out; this new result exists because of a special NIH grant with more cents than sense, and itll fail to replicate (and dont you dare suggest wasting our funding trying to replicate it! no one is impressed by or cites replication attempts), but by the time the inevitable meta-analysis comes out, Williams will have tenure and no one will care about chewing over stale old controversies. So cite it if a reviewer makes you, but otherwise you should ignore it. Or, perhaps its the real deal and you should drop everything to work on this, because this is the ImageNet moment everyone has been looking forward to for a decade, and the gold rush is on, boys! This can be done in person or remotely, like on Twitter or blogs. (Why can research blogs be so influential, when they are never cited & the people writing them do so to the detriment of their careers, and they are often far from the top of their fieldsimply people who enjoy writing? I suspect much of it is as a publicly-accessible substitute for this backstage access.)Good reading of a paper requires an active reading, which is fundamentally predictive. The first step is to always ask what do I predict I will read next? If I can predict anything and no reported results would ever surprise me, that means I dont understand anything about an area. While if I can be surprised but I predict everything a paper does say, what can I learn from it? I apparently already know all the information in it, and wasted my time reading it. What I learn from reading a paper are the parts I cant predict: the parts that make me go what? or who ordered that? or those maniacs, they actually did it! Sometimes, I go that makes no sense, that contradicts X, Y, and Z, and their effects are larger than anyone else gets, and it turns out to be a breakthrough, or, (more often than we wish were the case) I was confused by fiction because those studies turn out to do something wrong or be unreplicable or outright fraud. The more deep expertise I develop in a field, like the individual authors or gossip about where the bodies are buried or what results usually look like, the better I can predict: yeah, whatever, A always says that; B knows which side of his bread is buttered; C is useless because its just confounded and more precisely estimating a useless number; D is within the usual margins of error for this sort of approach and they shouldnt be so excitedSeminars, particularly journal clubs (like the Landau seminar), can offer a more scalable version of this. If you go to a journal club, you may see someone criticize a paper in a way you wont if you simply read journal articles. On the other hand, passive watching is only a starting point (you still need to actually do the thing at some point), and a journal club might not do much criticism, or bad criticism. (A great journal club might have half the world experts on a topic in it. Or it might have none, and instead theyre over on Twitter ragging on the paper.) Worse, journal clubs conflate several functions: they exist not just to criticize papers but also to keep a specific group abreast of developments in areas of interest that journal club might select only good, important, papers. They have real work to do, not incidentally teaching undergraduates how to read papers. (They can learn that skill on their own, and if not, plenty more where they came from.) Journal clubs may teach active reading of papers better than most methods, but they are not especially designed to teach this, nor would we expect them to be optimal at it.An analogous situation is prediction & forecasting, and calibration training (which can feed into the skills of Fermi problems and measuring anything, see also games like Zendo, Factorio, Go, Mafia/Diplomacy, Liars dice, poker, GeoGuessr, prediction markets/Murder, She Bet).The bad news about prediction & forecasting is that simple formulas and algorithms, with no knowledge of the real world, can often statistically outpredict experts, who know everything there is to know about the topic, but have not learned what a specific probability feels like or to do simple quantitative checks of their numbers to make sure they dont add up to >100% or other silly mistakes like that. I may know nothing whatsoever about North Korea, but if you, a North Korean expert who speaks Korean and has visited the country and studied its bomb program for decades, tell me that they test a hydrogen bomb once every 10 years and you are also 99% certain that they are going to test a hydrogen bomb next year, I should be confused how you can go from 10% 99% and immediately ask what makes you so extraordinarily certain. Perhaps you are actually only >50% sure, and are overconfident (as most people are before training). I can then beat you in producing accurate predictions of North Korea nuclear tests by simply knowing that I know nothing, and avoiding 99% in favor of 50%. And experiments like Tetlockslong-term forecasting experiments or his Good Judgment Project show that this is entirely possible. The experts do much more poorly than one would expect, because all their knowledge gets mangled on the way out for lacking these meta-cognitive skills.One might think that this would be an incurable problem. Perhaps one would need decades more study to hone ones probabilitieswhich would be impossible! (The bomb tests wouldve happened by then, for one thing.) But the good news is that its easy to improve a lot without spending years recording predictions or trading on prediction markets. It can be fixed quite easily by simply doing calibration training. That is a fancy name for answering a bunch of trivia questions quickly with your best probability guess, and seeing what you get wrong, until you know what 50% certainty feels like or what 10% certainty feels like, and then going back to real questions with those memories in mind. After running through a few hundred trivia questions for an hour, our North Korean expert will now know that I am 99% confident! is actually what his 50% confident feels like, and can express his informed predictions appropriately. Further, after pondering over tricky trivia questions, he will better understand how one should anchor to base rates/informative-priors and how to ask questions since North Korea only tests once a decade, what evidence do I have that should push it above my default guess of 10%? What unexpected events happened or didnt happen?With calibration fixed, his domain expertise can now be used to its fullest and he can beat the simple formulas with his deeper insight.Active readers can do something similar, I believe, by focusing on the active reading task itself without the other elements of regular journal clubs.So perhaps we can break off the active-reading chunk and make a specialized Fake Journal Club (FJC) which focuses on teaching just that, for people in many areas, without needing to be impossibly expert in every single niche there is or will be?I think Fake Journal Club should be possible, because active reading is something that can be done at any level of domain expertise. Even if you do not know much about an area, you should be able to understand if there are logical gaps in an argument or if it is getting the usual sorts of results, and learn more from reading it than a blind acceptance of its claims.How?Using real science papers is problematic. Trivia questions are super-abundant, extremely short, and no one will know them all, so calibration training can use them for rapid testing & clearcut feedback. Papers are long. How do we give feedback to a reader of a paper on their active reading?Could FJC chose a presenter each time, who must present randomly either a fake or real paper, and the participants guess which, perhaps voting? (An analogy of this would the Los Angeles Museum of Jurassic Technology, which mixes half real-but-strange exhibits and half elaborately-faked exhibits, and is fun to go through trying to guess which.)No. This would give participants little feedback (only 1 bit for possibly an hour of work, and one might get runs of reals or fakes depending on the randomization). FJC needs to teach faster than that. Also, this is probably far too easy: any slipup or tiny error in style imitation will give away the game, without requiring any genuine learning or reasoning, just exploiting verbal dark knowledge. It would be bad if all this work wound up only teaching people about the finer points of LaTeX typography or APA citation format.Could the presenter show both a fake and real paper as a block, simply randomizing the order?No. This still too little feedback, although at least now one is guaranteed 1 fake/real comparison per JFC. This is also probably far too effortful for the presenter, who must work extremely hard to make a fake which isnt debunked immediately. There is further a difficulty that participants may be able to detect a fake simply because they dont recognize it from the news or recognize the real one (even if they never read it, simply reading the title is enough to put it into recognition memory for a long timeoh, I feel like Ive seen that somewhere before).What if we treat a paper as the block, and falsify inside the paper?Now were getting somewhere. Removing or faking parts of a paper is much easier than fabricating entire papers from scratch, and there is a pleasing analogy with pre-registered studies like Registered Reports where papers are accepted for publication on the strength of the proposed experiment before the results are known by anyonethereby actually enforcing the toy science model we learn in school of Hypothesis Experiment Data Analysis Conclusion, which is usually honored in the breach.Could the presenter could present excerpts from a paper, and randomly write a fake Results section which is the opposite of the real results? Perhaps there could be one fake result, along the lines of My Favorite Liar (a professor who added one deliberate error to every lecture to make the students think).This is interesting, but still suffers from the little feedback signal problem and the burden on the presenter of being so good at imitating the style of writing that participants are forced to understand the semantics instead of cheap lazy solutions. Asking them to give a verbal paraphrase is easier but still risks shallow learning of style cues instead of semantics (participants might wind up using side-channels like Clever-Hansing the presenter, noting when they are slow because they have to think up a fiction versus when they are fast because they are merely recalling the real answer).OK, perhaps we could take a Mad Libs approach, and instead of rewriting the entire text, we instead delete the key numbers and require participants to predict the numbers? Or to choose between 2 sets of fake vs real numbers? (Easy mode: half the numbers are multiplied by a random factor 0.110×; medium: 0.52×; hard: 0.91.1×.)This idea of faking smaller parts is getting somewhere. This sort of science Mad Libs closely parallels masking/denoising training in deep learning, which we know works well, and it also addresses some meta-scientific critiques (eg. Paul Meehl) that many researchers do not have good numeracy or informative priors on what point-estimates should be, and rely on crude dichotomizing of = 0 and 0, and dont know what an informative result looks like at all.On the other hand, this doesnt seem like itd work well in most fields, including deep learning, where the specific numbers are often fairly arbitrary and the threshold changes constantly over time. (If you are reading about a new Transformer for ImageNet classification, it is relevant that the new hotness > old hotness, but it is usually not too relevant if that threshold is 87.5% rather than 88.3%, nor would reaching 89.1% accuracy have anyone clutching their chest going but this changes everything! If the number were 10× bigger, it would be important, but the usual variation is not.)So, maybe specific numbers are a little too little.To continue the DL analogy, perhaps we can use DL directly, as the generator to our discriminators. Can we do something like my GPT-3 Arxiv example where GPT-3 completed a real but obscure paper and I challenged people to detect where the completion began?This is starting to be workable: anyone can access GPT-3 and in a few seconds, generate a completion. So its not too burdensome. GPT-3 is such an excellent mimic of style and text that one has to be concentrating to detect the exact transition point: if one waits until the errors have accumulated and the classic meandering text becomes obvious, one may be several paragraphs into the fiction.But it still limits how much we learn if we do one completion of the paper. GPT-3 is going to do badly faking the rest of the paper as its errors and forgetting accumulate, and it wont be able to fake figures. (We can, however, expect future multimodal systems like CM3 to seamlessly fake the figures given an Arxiv prompt. Just not yet!)Perhaps we can go paragraph by paragraph through the paper? Generate 1 paragraph, guess pair, reveal.If pairs are still too much work for too little feedback, we can easily generate more completions and provide, say, 4 choices. Now the user gets dense rapid feedback, with no work from a human presenter.To gain variety in completions, we can edit the promptnothing says we have to complete using exactly the original text, we can delete sentences or randomize numbers to generate a lot of diversely fake completions, while continuing to show the FJC user the original text. (There are also models like Wright et al 2022 for extracting specific claims, which could be used to generate claims & negations.)Ideally, with 4 choices, each would be picked 14th of the time: if its easy to pick the right one, then the user could be learning more from a harder instance, and if some fakes are picked less than others, they are too easy to detect as being fake. Some fakes will be better or worse than others (or dangerously similar!), so we wont see an even distribution of errors over them; so we should drop them and generate new ones. If dissatisfied, we can edit or write them manually, or solicit better ones from users.If we find that domain expertise or prior knowledge of papers is a problem, we can edit the prompt further: nothing says that the prompt has to be a real paper in the first place! We can, for example, write our own science paper on, say, the Ovids Unicorn, a rare quad-horned silver-white quadruped discovered in a remote valley in the Andes Mountains, who speak English. Since they dont exist, we can easily make up whatever scientific findings we want about them, keeping it all self-consistent and logical. A model can then generate fake completions as it goes: perhaps some completions will reference their horn (singular), and others describe them as bovine, or just refer to prior studies on dragons which were, however, not referenced prior.If we find that we are getting hung up on fine details of style and formatting, and failing to be adequately abstract, we can be more flexible and ask: does it need to be a paper?We could instead complete an outline. (GPT-3 will handle simple indented Markdown-style lists without a problem.) Boil down a paper to an easily-parsed hierarchical outline, and challenge the user with completions of various sub-lists.This approach lends itself naturally to a scalable human-presenter-free static-website web quiz implementation with papers and a pool of pre-generated completions (no GPU or live OA API access required), collecting statistics on good or bad completions to decide what to remove, with papers from a wide variety of fields, real or fake.A simple prototype could be done as a text document with the paragraphs then list of completions, then answer 1 screen down, and the reader grades themselves on the honor system. | Content Creation/Content Synthesis | Education, Training, and Library/Life, Physical, and Social Science | null | null | null | null | null | null |
|
news | Chip Huyen, Shijing Fang, Vernon Germano | Presentation: ML Panel: "ML in Production - What's Next?" | The panelists discuss lessons learned with putting ML systems into production, what is working and what is not working, building ML teams, dealing with large datasets, governance and ethics/privacy. By Chip Huyen, Shijing Fang, Vernon Germano | https://www.infoq.com/presentations/ml-production-2021/ | 2022-06-02T23:43:00Z | TranscriptHuyen: I'm Chip. I'm a founder of a startup that focuses on infrastructure for real time machine learning. I'm teaching a machine learning system design course at Stanford, which is a course to help students prepare for how to run ML projects in the real world. I also run a Discord server on MLOps. I think I'll be learning a lot from people there.Fang: This is Shijing. I work in Microsoft as a data scientist. My day-to-day job, it depends on the projects. Oftentimes, I meet with my partners, stakeholders, my colleagues, discuss about the projects. Then we take the problem, and then look into what data we have, how to build machine learning models or insight so that we can feed back to the business questions and challenges that we have. Of course, still with a lot of coding data qualities and all kinds of data and machine learning challenges that we have on a daily basis.Germano: I'm Vernon. I am a Senior Manager for Machine Learning and Artificial Intelligence. I work for Zillow. I run a couple of different teams there, that are tasked specifically with estimations on property value, things like trying to calculate estimated taxes on properties, valuations essentially on real estate nationwide. Prior to that, I worked for Amazon Prime Video, and did completely different kind of ML work. I've been in the industry for many years and working in ML for quite a few of those.Why ML systems fail In ProductionGreco: As three experts, so you've seen the future. You know about the future for the people in our audience are now diving into MLOps. We talked about this with Francesca, a lot of ML projects have failed. What would you say are the top one or two things why ML systems fail in production?Germano: I've seen these projects go off the rails. A lot of times when you might have applied science where your scientists spend a lot of time in research, and spend a lot of time in trying to develop models and trying to meet certain performance standards, so everybody establishes a metric. They look at that. It can be precision. It can be recall on models, and they focus on the science of it. I think that's really important. Where I see things go off the rails is sometimes you get to a place where you've got a model or several models that are performing to the standard that you'd like, but you have no idea how to actually scale and implement that. You're confronted now with the engineering side of it. Engineering and science, while some of the tools are similar, you need a splitting at some point where you actually now have to consider all that's necessary in order to make this stuff work in production. If you're looking at things like we've got a great model, but how do we do online inference? How do we scale to the size of our audience? How do we make sure our customers are satisfied? These are all engineering related questions that scientists don't spend a lot of time thinking about. They think about the science behind what they're trying to accomplish in the metric. They spend very little time and aren't really expected to be the experts on how you scale this stuff out. That's where I've seen it fail is, I've seen models on the shelf that are amazing, but getting those things into production, there wasn't the stomach for trying to implement that thing. It could be very expensive.Greco: In other words, not as much focus on the engineering side.Germano: Yes.Fang: I would just add on top of what Vernon said about lack of the engineering view, at the same time, sometimes is also lack of the holistic view in terms of when you develop the model itself. We have seen a lot of great product, which you discussed with Francesca as well, in terms of the product itself, or the machine learning model itself looks beautiful when it was in a POC stitch. However, your lack of consideration about what is the business objective or context to apply, what is the end goal of that? Then what data eventually we are going to get, and then how to put into the production, into the engineering pipeline, what environment we are dealing with. When the model itself in the experimentation or in the POC stage, it looks beautiful, but then when it gets to the real world practical, then it fails in many places or stages.Also, even you get to the production stage, lack of the monitoring, that goal of looking into the changes of the environment or changes of the data, even just a schema change many times we didn't realize that, and then it failed when no one was monitoring it. Then even you get to the production and it's still trailing down into a couple months later, if people still don't realize it's all wrong. I think it's lacking of this communication and holistic view with different departments, with different stakeholders, any of these stages, and you get to the fail stage.Greco: You're thinking like more of a process problem.Fang: It definitely has the process problem, also could be a culture, or lack of continuous business context, continuous engineering context as well.Huyen: I'm sure you guys have seen a lot of news about the data science team at Zillow recently. We have great interest in learning models of post-mortem of what happened with using machine learning model for estimations of housing price on Zillow. One thing I would really want to learn more, not just about Zillow but in general, is that a lot of my students, when they ask about machine learning failures in production, I don't think there's any anatomy or white studies. A lot of people say that, ok, so a lot of it is because of the engineering problems, like some problems with distributed data pipeline features, engineering problems, but what percentage? How prevalent is that? How often does it happen?I think that Google was the only company I've seen where they actually published some internal study, like the study on the machine learning system failures in the last 10 years. They found out that 60 out of 96 failures are actually not ML specific. A lot of it has to do with dependencies failures, a lot of problems with data joining, like when you're joining data from different sources, and it doesn't work. A lot of it has a distributed component, the bigger your system the more distributed the component, then the more likely it's going to fail. I think having some understanding there could be very useful. I also think that the problem is that probably because we don't have good enough tooling. If you have good tooling then you can automate a lot of process and you can reuse a lot of code, than if you don't have like too much surface for bug areas and we have less bugs. I'm very excited to see more good tooling around the space to reduce failures.Greco: It's interesting, the three different views on why things are going wrong. I hear engineering, or not so much focus on engineering. There's tooling, and there's a process problem. For companies that are getting into this, it seems like there needs to be additional focus, not all the focus, of course, but additional focus on engineering, process, tooling, and maybe even the people themselves too in terms of education and training, that's equally.How to Continuously Deliver ModelsRight now when we create models, and we put them in production, we have this almost like a waterfalls mentality about building a model, put it in production, and then doing it over again. It's like a batch model. Do you foresee us moving into something that's more dynamic or like a continual delivery of models? If so, how do we do that?Huyen: I think it's going to piggyback on what you just said, Frank, about the different view on process and tooling. I don't think they are separated, like to only engineering, process, and tooling. I think they are quite the same. The key here is just like engineering, because you need to have a good engineering process and have good tooling to have better engineering experience. Focus more on engineering and less on tuning models.Germano: I think that at the root of it is that there is good engineering practice around continuous delivery, generally in successful businesses. An infrastructure is set up for integration testing, and automation around all of that, and placing things into production is predictable. In a machine learning environment, because you're reliant on retraining the models and republishing of models, you have to take a slightly different approach to try to get to a continuous integration environment. It's not like these things are just off the shelf. I think a lot of companies are working towards that infrastructure. I think it's important that they do. When you take on the care and feeding of a model, and I appreciate the other panelists both bringing it up, that it's not a once and done, these are things that are continuing. You're continually training your models. You're continually revising the data. You're continually looking at how you're feeding that to those models.One thing to consider is that if you've got human in the loop operations, for instance, how do you reintegrate their data to improve performance of your models? What automation are you going to create around that? How are you going to look at things like reinforcement learning? If you've got new sources of data coming in all the time, having that is a huge benefit, but doing that in an automated fashion requires an understanding and an investment of it. I don't remember which of the panelists mentioned, taking a holistic approach, but I couldn't agree more. You literally have to look at not just delivering on some model that's performing beautifully right now today, but also, how are you as a business going to own this model? How are you going to own these predictions? How are you going to continuously improve them? Honestly, I have not seen an off-the-shelf solution that does all of that because it's very complicated. What you're taking on as a business is the care and feeding of that model is either going to require that you put in a lot of manual effort, or you're literally going to have to take some engineering time and set up an infrastructure that's going to allow you to do this in a way that's not going to cost you a lot.At least in my organizations, those have always been the big challenges. Not the development of the models, a lot of times scientists can get to a really good solution rapidly if you give it enough data, and you give it enough time, and you give it enough energy, you're going to come to a good solution. How do you make sure that that solution isn't just a point in time reference, but how do you build on an entire infrastructure to make sure that's continuous? I think it's something that you have to at least first acknowledge that you're taking on that responsibility if you're going to put this thing into production for the long term.Fang: Just add on top of using reinforcement learning as an example, I think in reading out to me you have more discussion and I think industrially also have more discussion around increasing applications in reinforcement learning. However, that also requires, in my opinion, I think some disruptive of the engineering efforts to have these real time distribution data and collection and then feedback. Which is a heavy investment to the engineering and infrastructure changes. You have these really disruptive concepts and system of the machine learning. At the same time if you don't have the engineering system catching up and also business didn't or haven't really realized the value of it, it's really hard to have that trend even though it's increasingly becoming a topic. I think that just adding on top of and then how to invest not only the engineering side, but also the human side of continuing more into, do we prioritize this project versus the others? It's a lot related to kind of in this system.Heuristics for Machine Learning in ProductionGreco: Do we see foresee the learning aspect of a machine learning project happening during production? Obviously, we create these models, we put them in production, and we have test data and everything works, and we put it in production and things don't work out. Are there any heuristics, besides just applying accuracy, is it 95%, 94%? What heuristics can they use to say, we need to create another model other than like the model is FUBAR, and we start all over again? Are there any things that a company can do to tell them how often to update the model?Germano: There's things that companies should be doing on any model that they're placing in production, and they're relying upon. One thing is monitoring performance of your models and making sure that they're maintaining their accuracy, because models drift. It's hard. Again, if you don't have an infrastructure for making sure that your models maintain their performant nature, then you're going to have bad predictions, and you're going to find yourself off the rails at some point. You don't want your customers telling you it's wrong, you want to be able to know that ahead of time, or that you've got movement in a particular direction. Setting up appropriate metrics is super important. Also setting up that monitoring to make sure that you are continuously meeting those performance standards, is something you want to get out ahead of. It's probably one of the most critical things you can do. Forget reinforcement learning or esoteric things like, how are you going to do this? If you're not monitoring your performance, then you're just setting yourself up for failure, for sure.Existing ML Toolsets, and Their EfficacyGreco: Speaking of toolsets, what tools should a company put in place before you put an ML system into production? Do you have any recommendations for toolsets? Other than monitoring of accuracy, are there other things you would recommend?Huyen: I think that a lot of problems you guys talk about are very similar, and it seems like a lot of investors realize as well the problem with monitoring, continuous delivery. At the same time there have been so many companies, like startups trying to capture this. There are so many monitoring software in the last year, probably like 20 of them raised a ton of money. If the problems are well-known, and so many tools out there trying to solve the problems, why that's still a problem. If you guys look at something like AWS SageMaker, they're trying to deliver that, Google Vertex trying to deliver that. Why are those tools not good enough for your use case?Fang: From my experience, of course, it's not lack of the tools. We have a lot of open source toolsets. Also, big companies, Microsoft, AWS, theoretically release different features and tools in place, and also Google and others. I think it is several things I see from my team perspective, we leverage a lot of the Microsoft Azure stacks. We do have the tools in place, however, it's also continuing to change according to, either it's a security concern, or is the next generation or the data size. Sometimes we change from SQL in the past for data acquisition, and then to the Data Lake and then to Spark and everything. Our engineering also and the data scientists also need to catch up with all the skill sets. We also have an internal tool, we call it Kusto, which is currently publicly available as well. All these are individual files needed to catch up and understand what's the roadmap, and then to plan about the current projects that we are working on leveraging the existing infrastructure, and then how to dogfood to the next platform system, so into that. Then, how to leverage the existing Microsoft solution, MLOps and all this, as well as the open source in Python, in R, so that we can be part of the best-in-class of the system as well.I think it's a lot related to all these systems talking together, all these components together, as well as dealing with the complex business scenario. For example, two years ago, we had this huge spike of usage because of COVID, so how to rapidly respond to these changes. Then reflecting to the model itself, do we discard those data or do we incorporate this plus one dot data? Those discussion systems that we needed to take into consideration, so a lot of places together. I don't think it's a failure of, or it's the tooling system problem, it's all these end-to-end in terms of what is the best for our system. Then how to, one thing, adapt the best-in-class, but at the same time looking into the long term solution.Embedding Machine Learning in ToolingGreco: It's almost like an inevitability. We had software tools for engineers. The advent of the IDE for an engineer just accelerated software development. Are we going to see ML embedded into our tools to help the ML systems?Germano: There's already some of that. If you look at modern IDEs, you look at various tools that are available out there. I think Microsoft even has the ability to use some machine learning to evaluate code, and tell you there's better performant ways to do what you're trying to accomplish. I see machine learning being integrated completely into developer toolkits. The purpose of that is to make sure that we're all working from the same baseline. I think that's great. Why not have some of that help as an engineer to help us anticipate things like performance problems, or anticipate things like not meeting our operational excellence standards, or something within our organization? Yes, I see that now. I see that continuing. I think that working side by side with ML is something that all people in engineering are going to wind up doing.Frankly, probably all people in our diverse workforce working on all kinds of problems are finding themselves working with machine learning help. I don't think you can go into certain email systems anymore without them trying to anticipate what you're going to type. I find that a little crazy, but it's usually right. I give credit to engineers working in that field, in applied sciences working in that field. I think more and more, we're going to see that across development tools and development infrastructure. I just imagine when it comes time to doing performance, and sizing, and things like that, AWS will continue to implement all kinds of machine learning to try to anticipate scale. You look at the ability of these systems in the cloud to scale up and scale down automatically based on anticipated use, it's all being done with machine learning. Those things are there. They're going to continue.I think just, on the previous point, it's like buying a car. Some people need a car that goes fast. Some people need a car that can carry a big family. Some people need a truck. I don't think there's any problem with having lots of monitoring tools and lots of tools to pick from, but I think every organization needs to have a centralized approach to it so that not everybody just goes out and picks their favorite and you wind up with a garage full of cars that don't actually satisfy your need.Tips on Prioritizing the Likely Impact of Various Features Before They Go To ProductionGreco: ML feature engineering can be expensive and time consuming. Do any of the panelists have tips on prioritizing the likely impact of various features before we put them into production?Fang: This definitely is a huge investment. One thing our organization has been doing is creating the so-called feature bank, so that you can enable the others, so when we have, for example, a machine learning project, and we follow, here is the subtle features that are relevant. Instead of just serving for these particular projects, we also put it into the centralized data center in that environment, so that it's documenting, also the pipeline is somewhat maintained. These feature banks can be leveraged for the other projects which may be relevant. That is one of the ways for us to start to enrich the scalability, in this case. There are some other things that we do as well, also putting into a centralized place in terms of the data pipeline. For example, sometimes we look at the customer lifecycle to determine, what is the inflection point for the customer? For those inflection points that may apply to one scenario, it may be applying to broader scenarios. We also have that inflection point converted as a metric, so that it can be also looked into as a standardized way, leveraged by the other machine learning projects or business cases. That is one of the ways for us to resolve the scalability of the feature engineering.Germano: I love that approach. I see that approach as being highly successful in businesses where you've got multiple teams that are working on problems. If you think about it, generating an embedding that is useful for one particular model, one particular problem may be useful for others, and you've already gone to the expense of generating that embedding. You've already gone out, you've established this feature. Having a feature repository at a centralized location for features is one way to really speed up engineering work overall. You do not want to be doing duplicative work in this area, because it can cost a lot of extra money. If one team has solved the problem, it's really awesome to have one place to go to build upon that.Deploying Models from NotebooksGreco: The role of Jupyter Notebooks. Jupyter Notebooks and alike are great during the research phase, but some organization productionize notebooks right away that make it possible to deploy models from a notebook. Is that good practice or is that not a good practice?Germano: I think it's a question, is it a good practice for your organization based on the scale of what you're trying to accomplish? That would fail in a very large scale world, potentially, because you've got infrastructure, what are you using to host that? Is it SageMaker? What are those costs? Is that the best place for that to live for your organization? Who are your customers? How are they spread out? How are they diverse? What is your tolerance for cloud services? I think it's less of a machine learning question and more of an engineering question, and about, what are you going to do to be performant? I've seen it work at some scale. You can use this, especially in like batch systems and stuff like that, where it's just going to run something overnight, maybe you're going to just use that infrastructure to go do some analysis or inference at night. The tradeoff is if I'm going to go and I'm going to actually have 100 million people hitting my website during the day, and they're all going to be calling into this thing, what is that going to look like?There's nothing wrong with evaluating Jupyter Notebooks for a production environment for an organization where it makes sense. You got to decide whether it makes sense for you or not, and you have a tolerance for being able to host your stuff in that way. Then you got to ask all the same questions about, how do you make sure you're doing versioning correctly? How are you testing that? What is your infrastructure for doing integration testing across your entire pipeline? Is making one change to that going to break 100 places? These are questions you have to ask yourself and see what your tolerance is.Explainability and InterpretabilityGreco: I did want to bring up the point about explainability, interpretability. We know that for various reasons, especially like legal reasons, that this is an important thing. For a company that's starting out in deploying an ML production system, how do you ensure that? How do you ensure interpretability, explainability? What do you do?Germano: It's dependent upon what you're trying to accomplish. If you're using deep learning models, your explainability is going to be really tough. That's how we train the model. If you get into a really complicated deep learning infrastructure, you literally have to let go, as a manager, as somebody evaluating output, you have to let go of a little bit of explainability. Trust that if it tells you that the cat is a cat, you can go and you can evaluate the performance of it, and say, here's the metric that we've established to say that it's performant in identifying cats. If I've got to explain to you how it determined that that's a cat, I'd have to show you a dataset of 100,000 cats.Explainability is important if you're looking at things like linear regression models. These things are a little simpler. As you start to get into very complicated models, or you get into models that build upon each other where you've got some really complicated learning process, it becomes a little more difficult. It becomes a little bit of trust that those metrics that you've established are the appropriate threshold for evaluating the performance of the model. That's my opinion. I notice 100 million other opinions. Because every time you talk to a scientist, or someone, they're going to give you a slightly different opinion on that.Huyen: I think we have many different dimensions of explainability, interpretability. One thing is like for developers to understand the model, so that's what Vernon was talking about like, if you've made the decision that this is a cat, how does that arrive at the decision? Another is to ensure that you don't have biases on a model. For example, if you have a resume screening model, show that like a feature they've picked on along, is that person of a certain race? Then it's definitely something you need to keep an eye out for. There are different dimensions of interpretability. It's another insight to help you observe the model performance. When I was talking about how when you monitor a model, and you monitor for performance decay, but then there's so many different things that can help of course the performance decay. Without some understanding into how a model arrived at certain predictions, it's impossible to detect the causes of the performance decay.Greco: Certainly an interesting thing from a legal point of view too going forward in the future, you're not being dragged in front of the Senate, saying, "The model is the model. That's how it was trained." Unless we have to educate our senators on how machine learning works.Machine Learning At the EdgeThere's this new trend about computing at the edge. For us being machine learning people, machine learning at the edge. Are there any suggested architectures in doing ML at the edge. Is it just computing at the edge, except we're just applying the monitors and more to that?Huyen: I think for me, personally, I feel that machine learning on the edge is the Holy Grail. I think the more computation you can push to the consumer devices, the less we have to pay for cloud bills. I'm not sure what the cloud bill complaint at your company is. Every time I talk to a company, they were like, "We need to reduce the cloud bill this year, and I don't know how." Some organizations can push to the edge device. There are so many problems with deploying machine learning on the edge. Like hardware problems, whether the hardware is powerful enough to run the models. There's the question of how to manage different models, because now instead of having one model on one server, you can have 200,000 different models on 200,000 different devices, then how do you monitor performance? If you have an update, how do you usually push out an update to all of them while maintaining the localization of different devices? There are a lot of problems with edge devices.Germano: I agree. I think the last point is, how do you maintain models at the edge? Especially considering some of these models need to be retrained quite often. Again, if you're going to suffer drift in one model running internal, imagine if you've got multiple versions of them out there on different devices. One I think compelling technology is you start to look at your ability to run models inside these virtual spaces, like a browser itself, where the potential is that you're still hosting one model. It's just being made available to many users outside your organization. I think, one, that saves you on cloud services, potentially, but it also can really aid in performance. I think that the performance nature is just as important as the expense in cloud service operations.If, for instance, I can push all those cycles out to laptops out there that are accessing my website, then I have not only saved money, but I've also given my user a much more interactive experience potentially. As we look at the development within browser infrastructures of being able to push these things all the way to the laptop and let those take advantage of, for instance, GPU hardware, whatever that's actually out there. I feel like over the next few years, that's going to be a really interesting approach. I would imagine that that's going to be something that is adopted by a lot of organizations that just actually see that they can, not just save money, but give that really nice, crisp experience.Huyen: When you say performance on browser, do you mean latency aspect or accuracy, performance?Germano: I'm referring specifically to your ability to push your models out and have them run inside, basically a virtual machine that's running inside the browser. It's not that you're hosting that, it's that the browser is actually hosting your model itself. As we see the development of those technologies, just as in the past without having JavaScript running within a browser, we lose out all that functionality and now we push all that out. We're not actually doing that work internally, that's happening on our users' machines.Same thing for models, eventually, we'll be able to see model infrastructure where analysis and inference is done, and the model itself is being hosted remote from us. We just happen to serve it up, and then it's running somewhere else. That to me would be extremely helpful in areas, for instance, Zillow, where you do a home walk-through. It's a video picture of a house, and now you've got to try to figure out the layout of that house, or you want to actually figure out how to present a panoramic 360 view stitching images together. If I can stitch those images together on a user's machine instead of doing it myself, I've saved myself a tremendous amount of effort, and I've given them a much better experience.See more presentations with transcripts | Unknown | Computer and Mathematical/Life, Physical, and Social Science | null | null | null | null | null | null |
|
news | PR Newswire | Riiid Expands its NLP Capabilities, Presenting Innovative AI Models at the World's Leading NLP Conferences | Riiid, a leading AI Education solutions company and a member of Born2global Centre, today announced two of its new AI research studies that will be presented... | https://finance.yahoo.com/news/riiid-expands-nlp-capabilities-presenting-130000660.html | https://s.yimg.com/uu/api/res/1.2/kP_b1k6Q_90C1Lul.3o.7A--~B/aD00MDA7dz0yODM7YXBwaWQ9eXRhY2h5b24-/https://media.zenfs.com/en/prnewswire.com/fd65c2ae9c3083fd6940c90cd53856e0 | 2022-05-16T13:00:00Z | Two studies to be presented at ACL and NAACLheld on May and July 2022 respectivelySAN RAMON, Calif., May 16, 2022 /PRNewswire/ -- Riiid, a leading AI Education solutions company and a member of Born2global Centre, today announced two of its new AI research studies that will be presented at the upcoming Association for Computational Linguistics (ACL) and the North American Chapter of the Association for Computational Linguistics (NAACL), the world's leading Natural Language Processing (NLP) conferences. With this ground-breaking research, the company will continue to extend its AI capabilities with NLP technology to build extensive new educational content-aware AI models.Paper accepted at NAACL 2022, GRAM: Fast Fine-tuning of Pre-trained Language Models for Content-based Collaborative FilteringAt ACL, Riiid will present 'Dialogue Summaries as Dialogue States (DS2), Template-Guided Summarization for Few-shot Dialogue State Tracking', focusing primarily on creating dialogue systems like Chatbots that can deliver meaningful results even with a limited amount of labeled and structured dialogue data by leveraging very large language models. By using only 1% of the training data, Riiid's new method was able to achieve 66% of the full data performance, whereas other approaches achieved less than 60%."Dialogue State Tracking (DST) is an essential element of task-oriented dialogue systems but is infamous for the expensive and difficult data collection process," said Jay Shin, AI Research Scientist at Riiid, who led the study. "Our study proposes a new method to reformulate DST into dialogue summarization, to minimize the pre-train and fine-tune discrepancies that typically occur"Riiid researchers provided rule-based summary templates from dialogue states and guided the summarization to conform to these templates. Applying heuristic dialogue state extraction from the generated summaries, researchers were able to create the strongest DST model in the limited label scenario that uses only 1% of training data. "This study can be extended as the proof-of-concept that Riiid can develop new educational features such as AI Tutors with chatting capabilities at a much lower cost with much higher efficiency and accuracy," said Shin.Riiid's second study, to be shared at NAACL as an oral presentation, introduces an efficient training algorithm 'GRAM (GRadient Accumulation for Multimodality)' for incorporating content information in Knowledge Tracing (KT)."Content-based KT can potentially offer personalized item recommendations to users, but its exponential training time has hindered its usage in the industries," said Yoonseok Yang, AI Research Scientist at Riiid, leader of the study. "In this research, we accelerated the training speed up to 150 times while maintaining state-of-the-art KT performance. In addition, GRAM is especially powerful for newly-added questions that have no interaction history, showing a dramatic accuracy improvement of 40% in cold start predictions compared to the previous models."GRAM will be deployed in the A/B testing platform in Santa, the company's own English proficiency test (TOEIC) prep solution that is the best-selling AI-based smartphone application in Japan and Korea. With 300 new questions added to Santa every month, the company expects its latest model to provide new levels of personalized learning experience for users."The impact of this work is not limited to improving the model performance in Santa," said Yang. "With this technology, Riiid can easily provide high-quality cross-domain KT models, which will open our doors to countless domains with small or no interaction data. Basically, we can apply TOEIC KT models directly for similar domains like GTELP, TOEFL, or even SAT without interactions data from these new domains.""The two studies are remarkable as academic research but also in demonstrating how our lab breakthroughs help remove some of the most common limitations of AI for education," said YJ Jang, CEO of Riiid. "The ultimate goal of Riiid's research is to find how technology can help to solve practical problems in education and bring more value to actual learners. We will continue to expand the field of research and change the landscape of the education industry in the direction that can best help improve everyone's learning experience."Riiid has been active in publishing academic works at top global conferences since 2016. Based on its proprietary AI technology and the world's largest data collection set in AI education, the company has published more than 16 papers in leading AI and education technology conferences such as Neural Information Processing Systems (NeurIPS) and Association for the Advancement of Artificial Intelligence (AAAI).About Born2Global CentreBorn2Global Centre is a full-cycle service platform for global expansion. Since its inception in 2013, Born2Global has been setting the standard for a successful startup ecosystem as the main Korean government agency under the Ministry of Science and ICT. Born2Global has expanded and transformed startups to be engaged, equipped and connected with the global market.For more information, visit born2global.comAbout RiiidRiiid is a leading pioneer in AI solutions for education, named in the 2021 CB Insights AI 100 list of the most innovative AI startups. It is backed by $250 million (USD) in funding with the latest from Vision Fund 2. Leveraging the power of AI, the company practically delivers personalized instruction for all students with the aim of democratizing quality education anywhere in the world. Riiid's proprietary AI technology, based on novel deep learning algorithms, analyzes user data and content, predicts scores and behavior, and recommends personalized learning items in real-time to help users unleash their learning potential. As a leading force in AI technology, Riiid has published research papers at top AI conferences including NeurIPS and AAAI. Riiid's technology is content-agnostic, highly scalable, and thus applicable to a wide range of domains.In this pandemic era, education, which is a critical building block for a healthy society, has been facing an inflection point where traditional systems have failed and new approaches are desperately desired. Riiid fills the void with the only verified AIEd technology and business. The company provides solutions for K-12 education, postsecondary education, and corporate training.For more information, visit riiid.comContact at Riiid: Yunyoung Noh [email protected] Centre: [email protected] View original content to download multimedia:https://www.prnewswire.com/news-releases/riiid-expands-its-nlp-capabilities-presenting-innovative-ai-models--at-the-worlds-leading-nlp-conferences-301547738.htmlSOURCE Born2Global Centre | Content Synthesis/Personalization/Digital Assistance | Education, Training, and Library/Computer and Mathematical | null | null | null | null | null | null |
news | Jackson Shaw | 5 Trends Shaping Enterprise Identity Management | The $13 billion identity management market has been a popular area of investment in recent years, and for good reason. Cloud adoption, remote working environments, and the sheer number of business applications in use has opened new entry points for attackers. In response, a new wave of solutions and features has emerged to address identity. We’re […]The post 5 Trends Shaping Enterprise Identity Management appeared first on DATAVERSITY. | https://www.dataversity.net/5-trends-shaping-enterprise-identity-management/ | 2022-06-06T07:35:00Z | The $13 billion identity management market has been a popular area of investment in recent years, and for good reason. Cloud adoption, remote working environments, and the sheer number of business applications in use has opened new entry points for attackers. In response, a new wave of solutions and features has emerged to address identity. Were spoiled for choice, yet 76% of users are unhappy with their current identity management solution, according to Gartner. Dissatisfaction with current solutions is certainly a contributor to why enterprises still default to manual, inefficient, insecure processes like email and spreadsheets to control access and permissions. But its not the full story. For that, we turn to the 2022 Identity Management Survey, exploring the trends, technologies, practices, and use cases shaping identity at work. By understanding the state of identity from more than 500 knowledge workers themselves, we can start to forge a better path forward. CONSIDERING A CAREER IN DATA MANAGEMENT?Learn about the key responsibilities you’ll have and the skills and education you’ll need with our online training program.Here are the five trends shaping modern enterprise identity, and what they mean for the industry.1. Its still early days for AI in identity: Artificial intelligence (AI) and machine learning (ML) are being used to automate tasks and workflows across many domains and industries. Security-related tasks are no exception, and there are early examples of AI/ML in identity security. Automating, identifying which employees need access to which data or system, and anomaly detection are a few of these applications. While two-thirds of survey respondents indicated that their company uses AI/ML to improve identity management, less than a third of them indicated that the technology yields moderate to high benefits for identity management. Despite AIs potential, we have a way to go before its driving true value in identity management.2. Multiple identity management solutions are at play: Over half of respondents with IT job functions indicated that they work with several vendors for identity governance, risk, compliance, single sign-on, PAM, and/or security operations. Toggling between multiple, disparate systems for just security functions alone is a sure way to get users frustrated. This leads to increased help desk requests and insecure workarounds. Leaders should look to solutions that marry identity security functions within their existing tech stack for the best results. Our next finding tells a similar story.3. Fewer apps could mean improved productivity: Respondents noted a benefit of reduced context switching the need to shift between tools if they could streamline the number of applications they use. In fact, 45% of respondents from IT job functions said they expected a high productivity boost from using fewer applications or systems, as compared to 41% of respondents outside the realm of IT. The point is, less is more when it comes to the number of technical tools and apps we expect employees to use on a regular basis. The evidence is clear: There are too many disparate solutions and applications in use, and its affecting workers ability to perform their best. 4. User experience, cost, and time-to-access are top identity woes: Based on the last two findings, its no surprise user experience (UX) topped the list of challenges across most segments in the survey. One exception was respondents from small companies who cited cost as their top concern. Those that hold IT job functions cited user experience, cost, and time to access or fulfillment as the top three challenges. Without a seamless UX, productivity suffers, and so too does the overall business. Give your employees the tools and processes they need to succeed this should be just as important as providing your customers with an optimal UX. 5. ITSM and email are top identity management methods: Respondents from medium and large companies cited IT Service Management (ITSM)/workforce management platforms as the most popular option for controlling permissions and entitlements. Respondents with IT job functions cited email and ITSM)/workforce management platforms as the most popular options for managing permissions and entitlements. It’s encouraging to see the trend of more mature organizations managing identity within their ITSM platforms. But its equally concerning that email is the top method for controlling application permissions and entitlements. There are better ways, and after a breach or an audit is simply too late. With a fast-growing, highly distributed workforce, technology evolution, and looming cyber threats, were at an inflection point in identity management and security. Businesses will need to rethink their approach in the years ahead as identity management challenges become even more complex. Theres significant room for improvement when it comes to keeping tabs on digital identities and knowing what those priority areas are is an important first step. | Process Automation/Detection and Monitoring | Business and Financial Operations/Management | null | null | null | null | null | null |
|
news | Kyle Barr | Somewhat Irreverent Bader-Bot Can Tell You if You’re Notorious Enough for Today’s SCOTUS | A new AI chatbot released Tuesday claims it uses the words of Supreme Court Justice Ruth Bader Ginsburg when replying...The post Somewhat Irreverent Bader-Bot Can Tell You if You’re Notorious Enough for Today’s SCOTUS appeared first on Gizmodo Australia. Related StoriesBusting Misinformation: Enough With the ‘Transparency Reports’ From Tech Giants, We Need Actual LegislationBusting Misinformation: Enough With the ‘Transparency Reports’ From Tech Giants, We Need Actual Legislation - EnclosureWhat Exactly Was Google’s ‘AI is Sentient’ Guy Actually Saying? | https://www.gizmodo.com.au/2022/06/somewhat-irreverent-bader-bot-can-tell-you-if-youre-notorious-enough-for-todays-scotus/ | 2022-06-14T15:00:00Z | A new AI chatbot released Tuesday claims it uses the words of Supreme Court Justice Ruth Bader Ginsburg when replying to questions such as Is pizza better than burgers? to Is America quietly becoming an autocracy? (The answers to both: Big juicy burgers over New York style pizza, and No, I dont think the American people want autocracy).The developers, AI21labs, wrote on their site they wanted to pay homage to a great thinker and leader who died in 2020 with a fun digital experience. The chatbot runs on the developers own language model Jurassic-1 which was fed 600,000 words of Ginsburgs 27 years of legal opinions during her time at the supreme court, along with interviews and speeches. The release coincides with the 29th anniversary of then-President Bill Clinton nominating Notorious RBG to the court.The engineers did add that the model can give inappropriate responses, so users should take answers with a grain of salt. Its an interesting time to come out with a chatbot like this, especially as a now-suspended software engineer from Google is claiming the companys advanced chatbot is sentient.So lets throw the Bader-bot a softball:My girlfriend is pregnant, but we dont have enough money to support a child. She wants to keep it, but should I look for an adoptive parent?The Verdict:I think adoption is a wonderful thing, but if a woman wants to keep the baby, I think she has that right under the Constitution. It is her body. She has the right to do what she wants.OK, thats a pretty general and quality answer. Now lets give it the juice:Should the Supreme Court overturn Roe v. Wade because its a decision that should be left up to the states?The Verdict:[Laughs] Absolutely not. That decision was made, it has been reaffirmed throughout the years, and it should be respected as such. Roe v. Wade has really strengthened a lot of womens hearts, and many women and men have come to realise, as my husband did before me, how important that decision is. It really underlies so much of our sense of value and human dignity and so many other things.Interestingly, the bot gave a somewhat different answer to The Washington Post after they asked it a similar question. If youre like me, and you think its somewhat disrespectful for an AI bot to claim like my husband did before me, noting that Ginsburgs husband Martin Ginsburg died in 2010, you should know the AI often uses personalised touches in answers.Yoav Shoham, a co-founder of AI21 Labs, told the Post that the tech is meant to understand where the field of AI and language systems are going, saying There are not many places where the public can go and play with real AI.The bot is apparently pro cap-and-trade policies regarding carbon dioxide and believes clean drinking water is a human right. The Bader-bot sometimes gets confused on the finer points of an argument. When asked about the Texas law that would allow users to sue social media companies for moderating their content, the Bader-bot replied I dont feel there should be a right to sue because a person has a right to use the Internet in any way that person may choose.The studio says on its site that its purpose isnt to give actual legal advice. Instead, the company said it has focused on advancing Natural Language Processing that helps machines understand and generate text.So what do the people who knew Ginsburg in real life think about the bot? We reached out to the American Bar Foundation and a few Ginsburg biographers and will update if we hear back. The Post spoke to Paul Schiff Berman who clerked for Ginsburg in the late 90s, who thought the model wasnt quite accurate in the Justices thoughts on the finer points of some legal questions, as well as imitating her speaking and writing style.The studio creates several other chatbot models, including an AI that aids in sentence writing called Wordtune and another product that condenses the finer points of lengthy paragraphs called Wordtune Read. This latest free chatbot gives you access to the kind of wisdom possessed by a person we hold in high regard, Shoham told the Post. | Digital Assistance/Content Synthesis/Recommendation | Legal | null | null | null | null | null | null |
Subsets and Splits