source
stringclasses
2 values
author
stringlengths
0
824
title
stringlengths
0
475
description
stringlengths
0
32.8k
url
stringlengths
0
713
urlToImage
stringlengths
0
2k
publishedAt
stringlengths
20
20
content
stringlengths
0
32.8k
category_nist
stringlengths
5
160
category
stringlengths
5
239
id
stringlengths
6
7
subreddit
stringlengths
3
21
score
int64
0
30.2k
num_comments
int64
0
2.27k
created_time
timestamp[ns]
top_comments
stringlengths
1
25.4k
news
Jean-Luc Aufranc (CNXSoft)
Rockchip RKLLM toolkit released for NPU-accelerated large language models on RK3588, RK3588S, RK3576 SoCs
Rockchip RKLLM toolkit (also known as rknn-llm) is a software stack used to deploy generative AI models to Rockchip RK3588, RK3588S, or RK3576 SoC using
https://www.cnx-software.com/2024/07/15/rockchip-rkllm-toolkit-npu-accelerated-large-language-models-rk3588-rk3588s-rk3576/
https://www.cnx-software…ckchip-RKLLM.png
2024-07-15T11:00:32Z
Rockchip RKLLM toolkit (also known as rknn-llm) is a software stack used to deploy generative AI models to Rockchip RK3588, RK3588S, or RK3576 SoC using the built-in NPU with 6 TOPS of AI performance.We previously tested LLM’s on Rockchip RK3588 SBC using the Mali G610 GPU, and expected NPU support to come soon. A post on X by Orange Pi notified us that the RKLLM software stack had been released and worked on Orange Pi 5 family of single board computers and the Orange Pi CM5 system-on-module.The Orange Pi 5 Pro‘s user manual provides instructions on page 433 of the 616-page document, but Radxa has similar instructions on their wiki explaining how to use RKLLM and deploy LLM to Rockchip RK3588(S) boards.The stable version of the RKNN-LLM was released in May 2024 and currently supports the following models:TinyLLAMA 1.1BQwen 1.8BQwen2 0.5BPhi-2 2.7BPhi-3 3.8BChatGLM3 6BGemma 2BInternLM2 1.8BMiniCPM 2BYou’ll notice all models have between 0.5 and 3.8 billion parameters except for the ChatGLM3 with 6 billion parameters. By comparison, we previously tested Llama3 with 8 billion parameters on the Radxa Fogwise Airbox AI box with a more powerful 32 TOPS AI accelerator.The screenshot above shows the TinyLLMA 1.1B running on the Radxa ROCK 5C at 17.67 token/s. That’s fast but obviously, it’s only possible because it’s a smaller model. It also supports Gradio to access the chatbot through a web interface. As we’ve seen in the Radxa Fogwise Airbox review, the performance decreases as we increase the parameters or answer length.Radxa tested various models and reported the following performance on Rockchip RK3588(S) hardware:TinyLlama 1.1B – 15.03 tokens/sQwen 1.8B – 14.18 tokens/sPhi3 3.8B – 6.46 tokens/sChatGLM3 – 3.67 tokens/sWhen we tested Llama 2 7B on the GPU of the Mixtile Blade 3 SBC, we achieved 2.8 token/s (decode) and 4.8 tokens/s (prefill). So it’s unclear whether the NPU does provide a noticeable benefit in terms of performance, but it may consume less power than the GPU and frees up the GPU for other tasks. The Orange Pi 5 Pro’s user manual provides additional numbers for performance, CPU and NPU loads, and memory usage.While the “reasoning” (decoding) performance may not be that much better than on the GPU, it looks like pre-fill is significantly faster. Note that this was all done on the closed-source NPU driver, and work is being done for an open-source NPU driver for the RK3588/RK3576 SoC for which the kernel driver was submitted to mainline last month.Jean-Luc started CNX Software in 2010 as a part-time endeavor, before quitting his job as a software engineering manager, and starting to write daily news, and reviews full time later in 2011.Support CNX Software! Donate via cryptocurrencies, become a Patron on Patreon, or purchase goods on Amazon or Aliexpress
Unknown
Unknown
null
null
null
null
null
null
news
Nick Evanson
I downloaded Intel's new AI Playground beta and may have finally found something Arc is honestly good at
It turns out that Intel's graphics cards are better at doing AI than they are at churning out ultra-high frame rates in games.
https://www.pcgamer.com/hardware/graphics-cards/i-downloaded-intels-new-ai-playground-beta-and-may-have-finally-found-something-arc-is-honestly-good-at/
https://cdn.mos.cms.futu…fJX9-1200-80.png
2024-07-23T12:08:05Z
Proponents of generative AI would often have you believe that such systems are all about making serious improvements to your productivity and work. But for owners of Arc graphics cards, Intel believes you should just have fun and mess around with it. To that end, it has released a beta version of AI Playground, a free and open-source application that lets you use your Alchemist GPU to generate and edit AI images and talk nonsense with a chatbot.Intel first teased AI Playground at Computex, earlier this year, and now a public beta version of the app is available to download for Windows. You'll need an Arc graphics card with 8 GB or more of VRAM—it won't run on any other GPU, unfortunately.The installation will pull a copy of Python off the Internet if you don't have it already installed. If you do have Python on your gaming PC and AI Playground crashes when you run it, it's probably a version conflict and the only option to resolve the problem, at the moment, is to remove all instances of Python you currently have.AI Playground's main features are the text-to-image and image-to-image generative tools that use Stable Diffusion 1.5 as the primary model of choice. You can try that system without using Intel's app, of course, via Hugging Face's online generator but the idea behind AI Playground is that it's all run locally—slower than a dedicated server perhaps but certainly a lot more private and secure.I gave it a go using an Arc A770, which has 16 GB of VRAM, and the first thing I noticed was the installation process is pretty long, as it needs to pull all the files and models it needs off various servers. If you have a slow net connection, be prepared to give some time to finish the whole process. Once done, you can fire up the app, wait a few seconds for it to initialise, and then you're free to mess about.Well, not immediately. The moment I entered some text to generate an image, the app spat out a message saying I didn't have the right AI model installed. Another download (this one is a bit quicker than the others) and bingo! I had my first piece of generative AI art nonsense.The result from just using 'an astronaut riding a horse' wasn't too bad but the real surprise was just how fast it was—less than four seconds to generate the 512 x 512 image. Interestingly, Intel's system automatically includes two negative prompts, bad hands and nfsw, so you won't be churning unsavoury images without being highly creative with the prompts.Keep up to date with the most important stories and the best deals, as picked by the PC Gamer team.By submitting your information you agree to the Terms & Conditions and Privacy Policy and are aged 16 or over.From here, you can pass your creation onto the image-to-image generator, which offers upscaling, inpainting, and outpainting to enhance what you've made (or any other picture, for that matter). Each system requires a further model to be downloaded, which got a tad annoying after a while—it would be much better to have these as options during the main, first installation procedure.These tools are a little slower than the standard image generator but I was still impressed by how well the Arc A770 breezed through the tasks. Intel's Alchemist GPUs aren't the fastest in gaming but thanks to being packed to the hilt with matrix units, they're cracking little AI machines.Lastly, there's the chatbot, which uses Microsoft's Phi 3 model to handle all the generative malarky. It's a little buggy and takes a couple of goes before it actually works and spits out a reply. 
I'm not especially interested in such systems—I can admire the technical wizardry behind them all, but using them doesn't do anything for me.But the fact that this is all free, local, and fast (with the right Arc graphics card) makes Intel's AI Playground a nice little tool to mess around with for a while. For the right person, though, it could be useful and it's nice to give Intel a bit of praise for something. Beep bop, I am not a robot.
Content Creation/Image Analysis
Arts, Design, Entertainment, Sports, and Media/Computer and Mathematical
null
null
null
null
null
null
news
Suri Nuthalapati
Exploring the Evolution of Transformers: From Basic To Advanced Architectures
In their seminal 2017 paper, "Attention Is All You Need," Vaswani et al. introduced the Transformer architecture, revolutionizing not only speech recognition technology but many other fields as well. This blog post explores the evolution of Transformers, tracing their development from the original design to the most advanced models, and highlighting significant advancements made along the way.The Original TransformerThe original Transformer model introduced several groundbreaking concepts:
https://dzone.com/articles/exploring-the-evolution-of-transformers
https://dz2cdn1.dzone.co…804919-thumb.jpg
2024-07-25T19:00:00Z
In their seminal 2017 paper, "Attention Is All You Need," Vaswani et al. introduced the Transformer architecture, revolutionizing not only speech recognition technology but many other fields as well. This blog post explores the evolution of Transformers, tracing their development from the original design to the most advanced models, and highlighting significant advancements made along the way.The Original TransformerThe original Transformer model introduced several groundbreaking concepts:Self-attention mechanism: This lets the model determine how important each component is in the input sequence.Positional encoding: Adds information about a token's position within a sequence, enabling the model to capture the order of the sequence.Multi-head attention: This feature allows the model to concurrently focus on different parts of the input sequence, enhancing its ability to understand complex relationships.Encoder-decoder architecture: Separates the processing of input and output sequences, enabling more efficient and effective sequence-to-sequence learning.These elements combine to create a powerful and flexible architecture that outperforms previous sequence-to-sequence (S2S) models, especially in machine translation tasks.Encoder-Decoder Transformers and BeyondThe original encoder-decoder structure has since been adapted and modified, leading to several notable advancements:BART (Bidirectional and auto-regressive transformers): Combines bidirectional encoding with autoregressive decoding, achieving notable success in text generation.T5 (Text-to-text transfer transformer): Recasts all NLP tasks as text-to-text problems, facilitating multi-tasking and transfer learning.mT5 (Multilingual T5): Expands T5's capabilities to 101 languages, showcasing its adaptability to multilingual contexts.MASS (Masked sequence to sequence pre-training): Introduces a new pre-training objective for sequence-to-sequence learning, enhancing model performance.UniLM (Unified language model): Integrates bidirectional, unidirectional, and sequence-to-sequence language modeling, offering a unified approach to various NLP tasks.BERT and the Rise of Pre-TrainingBERT (Bidirectional Encoder Representations from Transformers), launched by Google in 2018, marked a significant milestone in natural language processing. BERT popularized and perfected the concept of pre-training on large text corpora, leading to a paradigm shift in the approach to NLP tasks. Let's take a closer look at BERT's innovations and their impact.Masked Language Modeling (MLM)Process: Randomly masks 15% of tokens in each sequence. 
The model then attempts to predict these masked tokens based on the surrounding context.Bidirectional context: Unlike previous models that processed text either left-to-right or right-to-left, MLM allows BERT to consider the context from both directions simultaneously.Deeper understanding: This approach forces the model to develop a deeper understanding of the language, including syntax, semantics, and contextual relationships.Variant masking: To prevent the model from over-relying on [MASK] tokens during fine-tuning (since [MASK] does not appear during inference), 80% of the masked tokens are replaced by [MASK], 10% by random words, and 10% remain unchanged.Next Sentence Prediction (NSP)Process: The model receives pairs of sentences and must predict whether the second sentence follows the first in the original text.Implementation: 50% of the time, the second sentence is the actual next sentence, and 50% of the time, it is a random sentence from the corpus.Purpose: This task helps BERT understand relationships between sentences, which is crucial for tasks like question answering and natural language inference.Subword TokenizationProcess: Words are divided into subword units, balancing the size of the vocabulary and the ability to handle out-of-vocabulary words.Advantage: This approach allows BERT to handle a wide range of languages and efficiently process morphologically rich languages.GPT: Generative Pre-Trained TransformersOpenAI's Generative Pre-trained Transformer (GPT) series represents a significant advancement in language modeling, focusing on the Transformer decoder architecture for generation tasks. Each iteration of GPT has led to substantial improvements in scale, functionality, and impact on natural language processing.GPT-1 (2018)The first GPT model introduced the concept of pre-training for large-scale unsupervised language understanding:Architecture: Based on a Transformer decoder with 12 layers and 117 million parameters.Pre-training: Utilized a variety of online texts.Task: Predicted the next word, considering all previous words in the text.Innovation: Demonstrated that a single unsupervised model could be fine-tuned for different downstream tasks, achieving high performance without task-specific architectures.Implications: GPT-1 showcased the potential of transfer learning in NLP, where a model pre-trained on a large corpus could be fine-tuned for specific tasks with relatively little labeled data.GPT-2 (2019)GPT-2 significantly increased the model size and exhibited impressive zero-shot learning capabilities:Architecture: The largest version had 1.5 billion parameters, more than 10 times greater than GPT-1.Training data: Used a much larger and more diverse dataset of web pages.Features: Demonstrated the ability to generate coherent and contextually relevant text on a variety of topics and styles.Zero-shot learning: Showed the ability to perform tasks it was not specifically trained for by simply providing instructions in the input prompt.Impact: GPT-2 highlighted the scalability of language models and sparked discussions about the ethical implications of powerful text generation systems.GPT-3 (2020)GPT-3 represented a huge leap in scale and capabilities:Architecture: Consisted of 175 billion parameters, over 100 times larger than GPT-2.Training data: Utilized a vast collection of texts from the internet, books, and Wikipedia.Few-shot learning: Demonstrated remarkable ability to perform new tasks with only a few examples or prompts, without the need for 
fine-tuning.Versatility: Exhibited proficiency in a wide range of tasks, including translation, question answering, text summarization, and even basic coding.GPT-4 (2023)GPT-4 further pushes the boundaries of what is possible with language models, building on the foundations laid by its predecessors.Architecture: While specific architectural details and the number of parameters have not been publicly disclosed, GPT-4 is believed to be significantly larger and more complex than GPT-3, with enhancements to its underlying architecture to improve efficiency and performance.Training data: GPT-4 was trained on an even more extensive and diverse dataset, including a wide range of internet texts, academic papers, books, and other sources, ensuring a comprehensive understanding of various subjects.Advanced few-shot and zero-shot learning: GPT-4 exhibits an even greater ability to perform new tasks with minimal examples, further reducing the need for task-specific fine-tuning.Enhanced contextual understanding: Improvements in contextual awareness allow GPT-4 to generate more accurate and contextually appropriate responses, making it even more effective in applications like dialogue systems, content generation, and complex problem-solving.Multimodal capabilities: GPT-4 integrates text with other modalities, such as images and possibly audio, enabling more sophisticated and versatile AI applications that can process and generate content across different media types.Ethical considerations and safety: OpenAI has placed a strong emphasis on the ethical deployment of GPT-4, implementing advanced safety mechanisms to mitigate potential misuse and ensure that the technology is used responsibly.Innovations in Attention MechanismsResearchers have proposed various modifications to the attention mechanism, leading to significant advancements:Sparse attention: Allows for more efficient processing of long sequences by focusing on a subset of relevant elements.Adaptive attention: Dynamically adjusts the attention span based on the input, enhancing the model's ability to handle diverse tasks.Cross-attention variants: Improve how decoders attend to encoder outputs, resulting in more accurate and contextually relevant generations.ConclusionThe evolution of Transformer architectures has been remarkable. From their initial introduction to the current state-of-the-art models, Transformers have consistently pushed the boundaries of what's possible in artificial intelligence. The versatility of the encoder-decoder structure, combined with ongoing innovations in attention mechanisms and model architectures, continues to drive progress in NLP and beyond. As research continues, we can expect further innovations that will expand the capabilities and applications of these powerful models across various domains.
Content Synthesis/Discovery
Unknown
null
null
null
null
null
null
news
Pablo Junco, Forbes Councils Member, Pablo Junco, Forbes Councils Member https://www.forbes.com/sites/forbestechcouncil/people/pablojunco/
The Power Of Diversity And Inclusion In Tech
Diversity and inclusion are not just ethical imperatives but strategic advantages that drive innovation in the tech industry.
https://www.forbes.com/sites/forbestechcouncil/2024/07/22/the-power-of-diversity-and-inclusion-in-tech/
https://imageio.forbes.c…=1600&fit=bounds
2024-07-22T13:45:00Z
Pablo Junco is Senior Director, Data, Analytics and AI at Microsoft, representing the LatinX leaders in Tech.gettyIn today's rapidly evolving technological landscape, the importance of diversity and inclusion (D&I) has never been more critical. Diverse teams bring a multitude of perspectives, experiences and ideas, fostering an environment where creativity and innovation thrive. As the tech industry continues to shape our world, ensuring that all voices are heard, including those from underrepresented groups like Hispanics and Latinos, becomes essential to drive groundbreaking advancements. The advent of generative AI (GenAI) offers unprecedented opportunities to amplify our creativity and productivity, enabling us to achieve more with less.The Role of Diversity And Inclusion In Accelerating InnovationDiverse Perspectives Fuel Creativity: Innovation often stems from the ability to view problems from various angles and devise unique solutions. Diverse teams naturally come equipped with various perspectives shaped by diverse backgrounds, cultures and experiences. This diversity of thought can lead to more creative solutions and innovative breakthroughs that might not emerge in a more homogeneous environment.Enhanced Problem-Solving Abilities: Inclusive teams perform better when tackling complex problems. Research shows that diverse groups excel at problem-solving and decision making because they bring together various viewpoints and cognitive approaches. This collaborative effort can uncover new insights and drive more effective, comprehensive solutions.Greater Market Reach: A diverse workforce can better understand and cater to a global customer base. By reflecting the diversity of their audience, organizations can develop products and services that resonate with a broader range of consumers. This inclusivity not only enhances customer satisfaction but also opens new market opportunities.Fostering A Culture Of Innovation: An inclusive environment where all team members feel valued and respected encourages participation and the free flow of ideas. When individuals believe their contributions are recognized and appreciated, they are more likely to engage actively and share innovative concepts. This culture of openness and respect is the breeding ground for innovation.How GenAI Technology Helps UsGenerative AI involves algorithms that can create content, designs and solutions by learning from existing data. Its a powerful technology that can be a game-changer, increasing the tools designed to enhance creativity and productivity in several ways.Boosting Creativity: GenAI can assist in brainstorming and ideation processes by generating relevant ideas based on discussions and given instructions. This helps individuals and teams explore a wide range of possibilities quickly, stimulating creative thinking and innovation.Automating Repetitive Tasks: GenAI frees up valuable time for more strategic and creative endeavors by automating routine and repetitive tasks. This increased efficiency allows us to focus on higher-value activities, driving productivity and innovation.Enhancing Learning And Skill Development: GenAI can provide personalized learning experiences, helping individuals acquire new skills and knowledge more effectively. For women and Latinos looking to advance in tech, this means access to tailored educational resources that can accelerate professional growth.Enabling Resource Optimization: With GenAI, organizations and individuals can optimize resources by making data-driven decisions. 
This capability is particularly beneficial for startups and smaller enterprises looking to maximize their impact with limited resources.Leveling The Playing Field For The Advantage Of The Underserved CommunitiesOne of GenAI's most transformative aspects is its democratizing effect. Access to innovative tools is no longer limited to large organizations, leveling the playing field and providing equal opportunities to innovate and succeed. This equal access is a significant change, allowing individuals and smaller organizations to compete effectively in the tech landscape.Underserved communities such as Hispanic and Latino have a unique edge, having been trained to deliver results with resilience and limited resources. Combined with access to powerful AI tools, these communities are positioned to leverage GenAI in exceptionally creative and efficient ways. The ability to maximize resourcefulness is ingrained in many underrepresented groups, making them naturally adept at harnessing AI to its full potential.Encouraging Participation In AI Development: Encouraging participation in developing and deploying AI technologies ensures that the unique perspectives and needs of various communities and cultures are considered, leading to more inclusive and effective AI solutions.Fostering Community Collaboration: Building networks and communities around GenAI can provide support, mentorship and collaborative opportunities. Communities such as Indigenous, Women in Tech or Latinx can be incubators for innovative ideas and ventures, leveraging collective knowledge and resources.Promoting Education And Training: It is crucial to invest in education and training programs focused on AI and technology. Scholarships, grants and partnerships with educational institutions can help members of underrepresented groups gain the skills needed to excel in the tech industry.Showcasing Success Stories: Highlighting the achievements of Women, LGBTQ+ and Latinos in Tech who have successfully leveraged generative AI can inspire others and demonstrate the potential of this technology. Success stories can serve as powerful motivators, showing that significant advancements are possible with the right tools and mindset.Enhancing Digital Collaboration And Inclusivity: Advanced technologies like OpenAI's GPT-4, Microsoft Phi-3, Databricks DBRX or Mistral offer significant advantages for enhancing digital collaboration and creating inclusive spaces. Real-time translation that maintains intonation and accents can facilitate communication across diverse teams, ensuring that language barriers do not hinder collaboration. This technology can help Latinos and other non-native English speakers in the U.S. participate more fully in global tech environments, fostering inclusivity and enriching the collaborative process.ConclusionDiversity and inclusion are not just ethical imperatives but strategic advantages that drive innovation in the tech industry. By embracing these principles, we can create a more dynamic and creative environment where all voices, including those of Hispanics and Latinos, contribute to groundbreaking advancements. Generative AI offers a unique opportunity for underrepresented groups to enhance creativity, productivity and impact, allowing them to achieve more with less and leading the way to a more inclusive and innovative future. 
With the resilience and resourcefulness that have long been hallmarks of these communities, coupled with equal access to advanced AI tools, the potential for transformative innovation is boundless.Forbes Technology Council is an invitation-only community for world-class CIOs, CTOs and technology executives. Do I qualify?
Process Automation/Content Creation/Content Synthesis
Computer and Mathematical/Business and Financial Operations
null
null
null
null
null
null
news
WOW! eBook
Mastering Transformers, Second Edition
eBook Details: Paperback: 462 pages Publisher: WOW! eBook; 2nd edition (June 3, 2024) Language: English ISBN-10: 1837633789 ISBN-13: 978-1837633784 eBook Description: Mastering Transformers, Second Edition: The Journey from BERT to Large Language Models and Stable Diffusion. Explore transformer-based language models from BERT to GPT, delving into NLP and computer vision tasks, while tackling challenges effectively. Transformer-based language models such as BERT, T5, GPT, DALL-E, and ChatGPT have dominated NLP studies and become a new paradigm. Thanks to their accurate and fast fine-tuning capabilities, transformer-based language models have been able to outperform traditional machine learning-based approaches for many challenging natural language understanding...
https://www.wowebook.org/mastering-transformers-second-edition/
null
2024-07-06T10:33:12Z
eBook Details:Paperback: 462 pagesPublisher: WOW! eBook; 2nd edition (June 3, 2024)Language: EnglishISBN-10: 1837633789ISBN-13: 978-1837633784eBook Description:Mastering Transformers, Second Edition: The Journey from BERT to Large Language Models and Stable Diffusion. Explore transformer-based language models from BERT to GPT, delving into NLP and computer vision tasks, while tackling challenges effectively.Transformer-based language models such as BERT, T5, GPT, DALL-E, and ChatGPT have dominated NLP studies and become a new paradigm. Thanks to their accurate and fast fine-tuning capabilities, transformer-based language models have been able to outperform traditional machine learning-based approaches for many challenging natural language understanding (NLU) problems.Aside from NLP, a fast-growing area in multimodal learning and generative AI has recently been established, showing promising results. Mastering Transformers will help you understand and implement multimodal solutions, including text-to-image. Computer vision solutions that are based on transformers are also explained in the book. Youll get started by understanding various transformer models before learning how to train different autoregressive language models such as GPT and XLNet. The Mastering Transformers, 2nd Edition book will also get you up to speed with boosting model performance, as well as tracking model training using the TensorBoard toolkit. In the later chapters, youll focus on using vision transformers to solve computer vision problems. Finally, youll discover how to harness the power of transformers to model time series data and for predicting.Focus on solving simple-to-complex NLP problems with PythonDiscover how to solve classification/regression problems with traditional NLP approachesTrain a language model and explore how to fine-tune models to the downstream tasksUnderstand how to use transformers for generative AI and computer vision tasksBuild transformer-based NLP apps with the Python transformers libraryFocus on language generation such as machine translation and conversational AI in any languageSpeed up transformer model inference to reduce latencyBy the end of this Mastering Transformers, Second Edition book, youll have an understanding of transformer models and how to use them to solve challenges in NLP and CV.DOWNLOAD
Content Synthesis/Recommendation/Discovery
Unknown
null
null
null
null
null
null
news
Yann Collette
Using Artificial Intelligence to set a guitar sound
Introduction A lot of tools have appeared for sound creation since the recent developments in artificial intelligence. In this presentation, I will show how to create a sound using NAM files and some plugins available in Guitarix. To manage guitar sound, today, we have several supported formats: AIDA-X [1], [19], [20], NAM and RTNeural. Guitarix […]
https://fedoramagazine.org/using-artificial-intelligence-to-set-a-guitar-sound/
https://fedoramagazine.o…guitar_sound.jpg
2024-07-29T08:00:00Z
A lot of tools have appeared for sound creation since the recent developments in artificial intelligence. In this presentation, I will show how to create a sound using NAM files and some plugins available in Guitarix.To manage guitar sound, today, we have several supported formats: AIDA-X [1], [19], [20], NAM and RTNeural. Guitarix [6] proposes a module to be able to use NAM or RTNeural amplifier models.If someone prefers to use the AIDA-X plugin, it’s still possible to add this plugin into Guitarix. This plugin is available in the Audinux COPR repository and also in the MOD effect pedal [18].The version of Guitarix we will use is the one from the Audinux COPR repository (see [10] and [11]) where you will find some other AI oriented plugins, too.To be able to use Guitarix in real time to process the sound of your guitar (with a latency below 10 ms), some steps are required:Add ‘preempt=full threadirqs‘ as options on the kernel command line (via grub just before booting a kernel). Check [22] for how to set up kernel optionsIn the file /etc/security/limits.d/25-pw-rlimits.conf (you must be a super user to be able to edit this file), replace the line “@pipewire   – rtprio  …” by “@pipewire   – rtprio  95“Add your user to the pipewire group. As a super user, enter the command “usermod -a -G pipewire <USERNAME>“Disconnect from your current window manager session and re-login to your session (so the last two steps take effect)Now, you are ready to use Guitarix in real-time mode.Connect your guitar to the input of your USB audio interface and your monitoring speakers to the output of your USB audio interface and that’s all.The remainder of this article will be software manipulations.Guitarix’s first commit dates back to November 2008. Today, with version 0.46.0, various plugins have been added. There is also a really good tube simulation module. All these are integrated into a main window. Guitarix also has a great tool which allows the translation of electronics schematics into LV2 plugins (only available in the Git repository). For example, the Tube Screamer plugin is a direct conversion of the Tube Screamer schematics into a LV2 plugin.Guitarix exists also in a VST3 version [15].On the screenshot above, you can see a plugin navigator on the left, a preset management section below and an area where plugins may be dropped. On the left of that area you have the mono plugins and on the right, you have the stereo plugins.Guitarix is represented as this rectangular block in Ray Session [7]. The blue connectors are related to the audio signal (one mono input and one stereo output) and the pink connectors are related to the MIDI signal. There is one MIDI input which allows connection to a MIDI pedal board and switch presets or to control some plugins like the wah wah plugin. When you click on a mouse center button on a Guitarix GUI element, a MIDI learn window pops up and allows you to define an association between MIDI event and the GUI element. There is also one MIDI output which is a MIDI through and which also handles a MIDI note generated by the Guitarix audio tuner.Guitarix works with Jack and works fine with Pipewire Jack too.The figure above represents a more or less standard way to organize effect pedals.First there is a sequence of pre-amplification/pitch shifting/compressor which prepares the “clean” sound. 
A noise gate can also be added to remove the noise generated by the guitar pickup but here the noise gate is placed after the distortion/saturation because these effects produce a lot of noise too, if a tube amplifier is used.After that, there is a sequence of distortion/saturation/wah wah effects. Some people prefer to put the wah wah pedal earlier but this pedal is a filter and a filter has less effect on a clean sound than on a saturated one.Next, there is a modulation section composed of flanger/chorus/delay/echo/equalization. Some equalization can be added at the beginning of the chain, as well.Last there is the volume reverberation and cabinet emulation/amplifier.Now, let’s define a saturated sound without using AI.A quick description of this stack of plugins:at the top, there are two plugins: GxDrop (LV2) (check the mod-gxpitchshifter package in the Audinux COPR repository) and Overdrive. This is mostly to detune the guitar for metal (some songs are played with guitars tuned down five semi-tones). I added a small distortion because the down pitch shifting leaves a hole in the high frequencies and the small distortion helps filling this hole. This part is optional for the sound;a compressor. This can be optional too but for some techniques, it’s a plus (tapping, sweeping);a Rat distortion;the tube emulator with some drive to reach a good distortion sound. Adding a good distortion before this amplifier helps to shape the sound;the next three plugins are for the solo sound and are controlled by a MIDI pedal board (like the Behringer Fcb1010 or the Line5 FBV Express MKII);a reverb plugin to add some “space” to the sound;and finally, a cabinet emulation. This is an emulation of speaker. Here, we use the 4 x 12” speakers. This part gives a good identity to your sound, too.With such a preset, we already have a good sound for a large set of music styles. Here is an example on Mercyful Fate / Lady In Black:Mercyful Fate / Lady In Black with a standard Guitarix presetWe will now use a Guitarix plugin which loads a NAM (Neural Amplifier Modeler – [8]) model. This project was created around the begining of 2019. As you can see on the web page of the project [8], this project is now supported by many companies.We will create a new sequence of plugins.The tube emulator has been put into pass through mode and is now a pure linear amplifier;we added a tube screamer (like the RAT distortion but with another style of distortion);and we have the NAM loader plugin. We used some pre-trained models available from ToneHunt [9];a deactivated set of delay / volume for the solo sound;a last set of cabinet / reverb plugin.The sound used for the amplifier emulation is Marshall_JCM_800_2203. This may be downloaded from ToneHunt. The archive contains several NAM files. Each file corresponds to a model for a given setting of knobs (Pre-Amp, Bass, Medium, Treble, Master Volume, Gain). 
So for the selected NAM archive, we have 30 NAM files.‘JCM800 2203 – P5 B5 M5 T5 MV5 G10 – AZG – 700.nam’‘JCM800 2203 – P5 B5 M5 T5 MV5 G1 – AZG – 700.nam’‘JCM800 2203 – P5 B5 M5 T5 MV5 G2 – AZG – 700.nam’‘JCM800 2203 – P5 B5 M5 T5 MV5 G3 – AZG – 700.nam’‘JCM800 2203 – P5 B5 M5 T5 MV5 G4 – AZG – 700.nam’‘JCM800 2203 – P5 B5 M5 T5 MV5 G5 – AZG – 700.nam’‘JCM800 2203 – P5 B5 M5 T5 MV5 G6 – AZG – 700.nam’‘JCM800 2203 – P5 B5 M5 T5 MV5 G7 – AZG – 700.nam’‘JCM800 2203 – P5 B5 M5 T5 MV5 G8 – AZG – 700.nam’‘JCM800 2203 – P5 B5 M5 T5 MV5 G9 – AZG – 700.nam’‘JCM800 2203 – P5 B5 M5 T5 MV6 G10 – AZG – 700.nam’‘JCM800 2203 – P5 B5 M5 T5 MV6 G1 – AZG – 700.nam’‘JCM800 2203 – P5 B5 M5 T5 MV6 G2 – AZG – 700.nam’‘JCM800 2203 – P5 B5 M5 T5 MV6 G3 – AZG – 700.nam’‘JCM800 2203 – P5 B5 M5 T5 MV6 G4 – AZG – 700.nam’‘JCM800 2203 – P5 B5 M5 T5 MV6 G5 – AZG – 700.nam’‘JCM800 2203 – P5 B5 M5 T5 MV6 G6 – AZG – 700.nam’‘JCM800 2203 – P5 B5 M5 T5 MV6 G7 – AZG – 700.nam’‘JCM800 2203 – P5 B5 M5 T5 MV6 G8 – AZG – 700.nam’‘JCM800 2203 – P5 B5 M5 T5 MV6 G9 – AZG – 700.nam’‘JCM800 2203 – P5 B5 M5 T5 MV7 G10 – AZG – 700.nam’‘JCM800 2203 – P5 B5 M5 T5 MV7 G1 – AZG – 700.nam’‘JCM800 2203 – P5 B5 M5 T5 MV7 G2 – AZG – 700.nam’‘JCM800 2203 – P5 B5 M5 T5 MV7 G3 – AZG – 700.nam’‘JCM800 2203 – P5 B5 M5 T5 MV7 G4 – AZG – 700.nam’‘JCM800 2203 – P5 B5 M5 T5 MV7 G5 – AZG – 700.nam’‘JCM800 2203 – P5 B5 M5 T5 MV7 G6 – AZG – 700.nam’‘JCM800 2203 – P5 B5 M5 T5 MV7 G7 – AZG – 700.nam’‘JCM800 2203 – P5 B5 M5 T5 MV7 G8 – AZG – 700.nam’‘JCM800 2203 – P5 B5 M5 T5 MV7 G9 – AZG – 700.nam’With the NAM plugin, some Jack settings may be required. A NAM file is around 300 ko and requires many computations. So, the Jack audio buffer must be adjusted to avoid Xruns (audio cracks during playback). We can use the following commands to adjust the sampling frequency and the buffer size:$ pw-metadata -n settings 0 clock.force-quantum 256$ pw-metadata -n settings 0 clock.force-rate 44100To check if you have some Xruns with this setting, use the command pw-top. This is a Pipewire command displaying a screen similar to top, but related to audio streams. The screen provides a lot of interesting information.Let’s test this new preset on the same song as before:Mercyfule Fate / Lady In Black with a Guitarix preset using a NAM model.To create your own NAM model, you will need a data set which is composed of an input signal that will go through your gear and an output file which is the recording of the gear’s output.The plugin neuralrecord[4] allows the automation of this process. This plugin is available in the Audinux COPR repository.Once you have your data set ready, it’s time to use the Python package neural-amp-modeler [12].$ pip install neural-amp-modelerOnce installed, follow the documentation to produce your model [13].A drawback of using a neural model is that the model is only available for a configuration of the amplifier. If you want to have a sound which will go from clean to distortion, you will need to load all the intermediate models to have the corresponding sound.A great tool which allows mixing some neural models exists: Ratatouille [21]. It’s a GUI which allows you to load two models and blend them using a knob.It supports various file formats and is available as a LV2 plugin.Guitarix provides other tools that let you try to mimic some sounds. For example, you have the good old IR (Impulse Response) model which can produce some interesting sounds.You can find these plugins in the “Reverb” section of Guitarix. 
There are two convolvers available:a mono convolver;a stereo convolver (the one with the icon next to it).Drag and drop one of the two plugins in the plugin are and then click on setup. The following window pops up:If you click on file, you will be able to select either a file to replicate the sound of an amplifier or a file to replicate the sound of a given guitarist.For the amplifier, these impulse responses are available:Blackat Leon S7Cicognani Imperivm LuxuryDV Mark Triple 6Engl Retro TubeEngl Special EditionEVH 5150 IIFortin NatasKaos Sludge 15Krank KrankensteinLaney ironheartMakosampCustomHatredMarshall JMP 2203  Jose Arredondo modMarshall MG 15Mesa Boogie Mark VPeavey Vypyr 15Randall SatanRandall thrasherSplawn NitroSplawn Quick RodTaurus StompheadAnd for the bands, the following impulse responses are available:Arch Enemy Doomsday MachineAs I lay dying AwakenedAsking Alexandria Stand up and screamAvenged sevenfold NightmareBelphegor Bondage goat zombieBlack dahlia murder DeflorateBlack label Society order of the BlackBullet for my Valentine Temper TemperCannibal Corpse VileCarnifex Until I feel nothingChildren of Bodom Are You Dead YetChimaira RessurrectionDAATH The HinderersDark Tranquility CharacterDecapitated Carnival Is ForeverDeicide The Stench Of RedemptionDevildriver The last kind wordsDimmu Borgir Enthrone Darkness TriumphantDisturbed IndestructibleDivine Heresy bringer of PlaguesDown NOLADream Theater Black Clouds & Silver LiningsDying Fetus War of AttritionEvile FIVE SERPENTS TEETHExodus Exhibit B The Human ConditionExploited beat the BastardsFear Factory DemanufactureFirewind Days of DefianceGodsmack The OracleGojira LEnfant Sauvagegreen day DookieHamlet Amnesiahatebreed The divinity of purposeHelloween better than RawHexen Being and NothingnessHora Zulu SiempreImpending Doom The Serpent ServantIn Flames WhoracleKillswitch Engage As Daylight DiesKreator Violent RevolutionLamb of God WrathLegion of the Damned Descent Into ChaosMachine head Through the ashes of empiresManowar Battle Hymns MMXIMastodon Curl of the BurlMegadeth YouthtanasiaNapalm Death Time Waits For No SlaveNEWSTEDOverkill The Years of DecayPan de HigoPantera Vulgar display of powerPapa roach Getting away with murderParadise lost Tragic IdolRed Fang Murder the MountainsSepultura AriseSlayer Christ illusionSlypknot IowaSoulfly EnslavedStone Sour House of Gold and BonesStratovarius legionsSymphony X IconoclastSystem of a Down ToxicityTestament dark Roots of EarthTextures SilhouettesThe Sorrow Misery EscapeTrivium In WavesVader NecropolisWarcry RevolucionWayne Static PighammerWhite Chappel Hate CreationThe use of an impulse response is far less computer intensive than the computation of a NAM model. So, if you can do it with an impulse response, do it with an impulse response!If you want to compute your own impulse response, check SpecMatch[14]. This tool (by the developer of Guitarix) helps you to compute this.Here is the result on the song Mercyful Fate / Lady In Black:Mercyful Fate / Lady In Black played with Guitarix and the Mastodon Curl of the Burl impulse responseA first way to fine tune your sound: your ear … But you can also rely on visualizing the spectrum of your signal. The are several plugins available to display a spectrum in Audinux. A nice one is LSP Spectrum Analyzer [17].You can also visualize the temporal representation of your signal via an oscilloscope. 
Guitarix has a plugin to do that but LSP Oscilloscope is a little bit more powerful for that [17].To be able to use such a tool, you first have to start carla which is a plugin rack compatible with LADSPA / LV2 / VST / VST3 plugins. Once carla is started, you have to load the ‘LSP Spectrum Analyzer x1‘ (with one signal displayed) into carla. And last, you have to connect the Guitarix output (blue ports in the Guitarix block) to the carla input (blue ports in the Carla block).This representation is copied from the Ray Session display. Ray Session is a nice tool. It allows starting a session which is composed of applications (started automatically by Ray Session) and Jack Audio connection (reconnected automatically by Ray Session once a session has been saved). Ray Session saves a lot of time because it sets everything for you.In this graphic, some other tools are interesting:the fluidsynth block corresponds to the TuxGuitar application;the timemachine block is a really simple Jack Audio recorder (records the audio input to a wav file in the user directory);the jack_mixer block is an audio mixer used here to balance the audio level between Guitarix and TuxGuitarthe ‘Audio interne stéréo analogique‘ is the audio inputs from the computer (one block for the inputs the other for the outputs);the ‘Midi Bridge‘ are the MIDI connections coming from ALSA and from Jack Audio (one block for the inputs the other for the outputs);the ‘Scarlett 18i8 Pro‘ block are the audio inputs from the USB audio interface (a Scarlett 18i8 USB interface) (one block for the inputs and one block for the outputs);the ‘Webcam 270 Mono‘ block is the mono audio input from the webcam.This is one of the big advantages of the Pipewire-Jack solution with respect to the standard Jack Audio (which was only able to see the inputs / outputs of the selected audio interface).As you can see, the tools we have at our disposal to generate guitar sounds are quite effective. We focused on the saturated sound of a guitar, but the different tools we covered allow you to model/imitate amplifiers, guitarist sounds, cabinets and reverberations. All of these tools offer great flexibility in constructing the sound we are looking for. These tools will be used more and more in the future. But let’s keep in mind that certain solutions (such as impulse responses) offer good performance/sound quality tradeoff and that the appearance of neural networks should not make us forget this technique. [1] https://github.com/AidaDSP/AIDA-X[2] https://github.com/mikeoliphant/neural-amp-modeler-lv2[3] https://github.com/brummer10/neural-amp-modeler-ui[4] https://github.com/brummer10/neuralrecord[5] https://github.com/brummer10/Ratatouille.lv2[6] https://github.com/brummer10/guitarix[7] https://github.com/Houston4444/RaySession[8] https://www.neuralampmodeler.com[9] https://tonehunt.org/all[10] https://audinux.github.io[11] https://copr.fedorainfracloud.org/coprs/ycollet/audinux[12] https://github.com/sdatkinson/neural-amp-modeler[13] https://neural-amp-modeler.readthedocs.io/en/latest[14] https://github.com/brummer10/SpecMatch[15] https://github.com/brummer10/guitarix.vst[16] https://github.com/falkTX/Carla[17] https://github.com/lsp-plugins/lsp-plugins[18] https://mod.audio[19] https://aida-x.cc[20] https://cloud.aida-x.cc/all[21] https://github.com/brummer10/Ratatouille.lv2[22] https://docs.fedoraproject.org/en-US/fedora/latest/system-administrators-guide/kernel-module-driver-configuration/Working_with_the_GRUB_2_Boot_Loader
Content Creation/Process Automation
Arts, Design, Entertainment, Sports, and Media
null
null
null
null
null
null
news
Mike Wheatley
Google rolls out a host of new features for AI developers
Google LLC today announced a host of new, developer-focused updates at its I/O Connect Bengaluru event in India today, including a new integration between Project IDX and Android Studio, so developers can build their apps using a web browser on any device. It also introduced Firebase AI Monitoring, a new dashboard that shows real-time insights […]The post Google rolls out a host of new features for AI developers appeared first on SiliconANGLE.
https://siliconangle.com/2024/07/17/google-rolls-host-new-features-ai-developers/
https://d15shllkswkct0.c…4/07/unnamed.png
2024-07-17T05:40:14Z
Google LLC today announced a host of new, developer-focused updates at its I/O Connect Bengaluru event in India today, including a new integration between Project IDX and Android Studio, so developers can build their apps using a web browser on any device.It also introduced Firebase AI Monitoring, a new dashboard that shows real-time insights into generative artificial intelligence performance and behavior, and Project Oscar, an AI-agent for maintaining open-source software projects.In a blog post, Google Indias developer relations lead Karthik Padmanabhan pointed out that Indias community of 4 million developers is at the forefront of AI development today, applying the technology to a wide range of problems, particularly those related to language. Google intends to provide them with all of the resources they need to aid in those efforts.For instance, the Google DeepMind India team today revealed a new initiative called Project Vaani, which is a collaboration with the Indian Institute of Science that aims to capture the diversity of the hundreds of spoken languages in that country.The project led to the creation of IndicGenBench, which is a benchmark developers can use to evaluate the capabilities of large language models trained on indic languages. In addition, it has also open-sourced the Composition of Language Models framework, known as CALM, which is for developers who want to combine specialized LLMs with Googles Gemma models.In the latest update to Project IDX, online integrated development environment based on Visual Studio Code, Padmanabhan said the company is launching an early preview of Android Studio on Project IDX that can run in any browser. In doing this, developers can quickly fire up a development workspace that runs entirely in their web browser, enabling them to get started with coding, building and testing native Android applications on any device and in any location.As for Firebase, Google’s mobile and web application development platform, its being improved with the introduction of support for the Go programming language in alpha test mode. The idea is to simplify the development of more efficient and scalable Go-based apps by bringing them into the more intuitive Firebase platform, Google explained.With the preview of the Firebase AI Monitoring dashboard, developers will be able to monitor the performance of their LLMs and the features they support, in production. The dashboard will provide alerts on how those features are being used, their behavior, as well as any inaccuracies and inconsistencies, Padmanabhan said.Meanwhile, Project Oscar is designed to help open-source project maintainers to keep tabs on the latest status of the projects they run. Padmanabhan pointed out that maintaining open-source projects can be a lot of work, with some of the most popular attracting thousands of contributors who publish hundreds of new commits on a daily basis.Project Oscar is a reference for an AI Agent that can keep track of all these updates, and is initially targeted at Go, which currently has more than 2,000 contributors and over 93,000 commits. However, Project Oscar can easily be adapted to support other open-source projects, Padmanabhan said.The Flutter software development kit for creating application user interfaces is also getting a dose of generative AI. 
The new AI Generated UI feature is able to dynamically compose and personalize UI elements based on natural language inputs, and is launching in early access preview today, the company said.Finally, Padmanabhan announced that its partnering with the Indian Ministry of Electronics and Information Technology to support 10,000 Indian generative AI startups. The support extends to Google Cloud credits, and an updated startup program curriculum to equip founders with the skills and knowledge they need to develop with AI. In addition, Google will also host an annual nationwide hackathon contest to identify Indias next generation of AI innovators.Image: GoogleYour vote of support is important to us and it helps us keep the content FREE.One click below supports our mission to provide free, deep, and relevant content.  Join the community that includes more than 15,000 #CubeAlumni experts, including Amazon.com CEO Andy Jassy, Dell Technologies founder and CEO Michael Dell, Intel CEO Pat Gelsinger, and many more luminaries and experts.TheCUBE is an important partner to the industry. You guys really are a part of our events and we really appreciate you coming and I know people appreciate the content you create as well – Andy JassyTHANK YOU
Unknown
Unknown
null
null
null
null
null
null
news
Elastic DevRel team
DevRel newsletter — July 2024
Hello from the Elastic DevRel team! In this newsletter blog, you’ll find information on upcoming events in your region, catch up on the latest content, and get a deep dive into the Elastic distributio...
https://www.elastic.co/blog/devrel-newsletter-july-2024
https://static-www.elast…s-04_(1)_(1).png
2024-07-18T00:00:00Z
semantic_text: Learn about the new semantic_text field type and semantic query with Carlos Delgado and Mike Pellegrini making semantic search in Elasticsearch just as simple as keyword search.Playground: Follow Joe McElroy and Serena Chou as they explain how to use Kibanas new Playground to experiment with retrieval augmented generation (RAG) applications using Elasticsearch. GPT models and Azure OpenAI consumption:Gain insight into OpenAI model inference and track metrics, such as token count and total request costs, from David Hope. This data is gathered via the OpenAI API and the OpenTelemetry Python library, which instrument external calls.Semantic and geo search: Learn from Madhusudhan Konda on how to combine multiple sparse, dense, and geo fields to build more expressive queries and the best way to do it.Elasticsearch vs. OpenSearch: Ugo Sangiorgi compares Elasticsearch and OpenSearch for vector search performance. Spoiler alert: The results show that Elasticsearch is up to 12x faster than OpenSearch for vector search.OpenTelemetry cloud resource attributes: Sylvain Juge explains how cloud resource attributes can be used and configured in Java applications using OpenTelemetry SDK/API and instrumentation agents.Data denormalization, semantic search, and generative AI:Check out Tim Brophys and Dani Aschwandens online banking prototype repo, which demonstrates how Elastic can improve both customer-facing and internal bank visibility and analytics.Elasticsearch basics: Jessica Garson wrote some thoughts on when to choose between an index-level API, a query, an aggregation, or ES|QL in Elasticsearch.RAG: Lloyd Chan contributed a Jupyter Notebook showing how to build a RAG application with Gemma, Elasticsearch, and Hugging Face to the Hugging Face generative AI cookbook.
Information Retrieval Or Search/Content Synthesis
Unknown
null
null
null
null
null
null
news
Evan Huang and TechNode Staff
Landing AI | In the AI era, where is venture capital headed?
Note: The article was first published on TechNode China written by Evan Huang and translated by Zinan Zhang. In the dynamic AI era, venture capital is increasingly attuned to the transformative potential of this technology. As generative AI advances in creating text, images, and videos, a plethora of opportunities and challenges are emerging. This article explores the pivotal role of the […]
http://technode.com/2024/07/22/landing-ai-in-the-ai-era-where-is-venture-capital-headed/
https://technode.com/wp-…07/WAIC-2024.jpg
2024-07-22T08:33:31Z
Every Wednesday and Friday, TechNodes Briefing newsletter delivers a roundup of the most important news in China tech, straight to your inbox.Note: The article was first published on TechNode China written by Evan Huang and translated by Zinan Zhang.In the dynamic AI era, venture capital is increasingly attuned to the transformative potential of this technology. As generative AI advances in creating text, images, and videos, a plethora of opportunities and challenges are emerging. This article explores the pivotal role of the Scaling Law, the emergence of super apps, and the promising future of AI-driven innovations. Highlighting insights from industry leaders, it underscores the potential for AI to revolutionize various sectors and entrepreneurial ventures, providing valuable directions for future venture capital investments.The training and inference stages of large models demand substantial computational resources. The Scaling Law suggests that significant advancements in intelligence are achieved through consistent investment in vast amounts of data and powerful computing, provided the algorithmic architecture remains stable.OpenAI, a strong proponent of the Scaling Law, has showcased the potential of generative AI across various fields by leveraging transformer architecture, extensive training data, and considerable computational resources.Recently, Kevin Scott, Microsoft CTO, mentioned in an interview with Pat Grady and Bill Coughran of Sequoia Capital that they have yet to observe diminishing returns from scaling. He announced that the next generation of OpenAI models would soon be available, offering cheaper, more powerful solutions capable of tackling more complex problems. This is the story with each generation of models as we scale up, he remarked.On May 18, Yang Zhilin, founder of Moonshot AI, discussed the computational aspects of the Scaling Law. He noted that initial improvements in model performance are driven by enhanced computational power and efficiency. However, further advancements require increased computational investment and ensuring that this investment effectively translates into intelligence. This involves two issues: sustaining computational investment and maximizing the intelligence output of each computation unit, he explained.On May 18, Yang Zhilin, founder of Moonshot AI, discussed the computational aspects of the Scaling Law. Credit: Moonshot AiIn an interview with TechNode, Wu Yunsheng, vice-president of Tencent Cloud, shared his perspective. Currently, there are different viewpoints, including realistic and idealistic views. Some believe the Scaling Law has reached a plateau, where continued investment yields diminishing returns. Others argue it is still in a phase of rapid development. He emphasized that the Scaling Law remains significant, citing rapid progress in multimodal research over the past year. In this field, various capabilities improve significantly with added data or computing power. We will continue to explore and observe its development and changes across different scenarios and technologies, he added.As of March 28, 2024, there are 117 large models registered with the Cyberspace Administration of China, including Baidus ERNIE Bot, Alibabas Tongyi Qianwen, and the open-source ChatGLM. The rapid development of AI large models is becoming a key driver of innovation and breakthroughs in super applications. 
As these large model technologies mature and improve, they are gradually permeating various industries, sparking a range of entrepreneurial opportunities. From healthcare to fintech, from smart manufacturing to cultural creativity, the application potential of AI is limitless. Zhou Zhifeng, Managing Partner of Qiming Venture Partners, pointed out at the World Artificial Intelligence Conference (WAIC) in Shanghai that, compared to the timeline of application deployment during the internet wave, he predicts that the explosion of applications in the current AI wave will occur significantly earlier. Currently, generative AI is gaining substantial user favor in three "C" fields (Copilot, Creativity, and Companionship), showing a development trajectory similar to internet applications and transitioning from efficiency-enhancing applications to those aimed at providing enjoyment. He noted that the internet reduced the marginal cost of information distribution to almost zero, while the core of generative AI is to reduce the marginal cost of digital content creation to nearly zero, indicating that AI technology is bound to release enormous value.

When discussing the future of AI-driven super apps, Zhang Fan, COO of Zhipu AI, expressed optimism, arguing that although creating super apps is not easy, the AI era will see many unimaginable applications emerge. This process requires advancements in computing power, networks, hardware, and user habits, following the principle of gradual development from small-scale applications. Zhang emphasized that by embracing and utilizing existing AI technologies to gradually transform current applications and products, the future will undoubtedly usher in super apps in the AI era.

Regarding the challenges of implementing generative AI applications, Zhou Zhifeng believes that reducing the cost of model usage necessary for the widespread adoption of generative AI, improving the effectiveness of large models, and enhancing user retention rates of generative AI applications are crucial. Since the growth period from zero to one for generative AI application companies is longer than in other fields, they need to overcome both TPF (Technology-Product Fit) and PMF (Product-Market Fit) challenges simultaneously. Therefore, the founding team needs greater patience, determination, and understanding of the technology, the product, and the world.

There were 45 intelligent robots, including 25 humanoid robots, showcased at WAIC this year. Credit: Evan Huang

There were 45 intelligent robots, including 25 humanoid robots, showcased at WAIC this year. A video of a humanoid robot walking on the Great Wall was repeatedly played at the event. The humanoid robot L2 in the video successfully conquered the steep slopes of the famous structure, achieving steady walking on it.

At the recent Huawei Developer Conference 2024, Zhang Pingan, Executive Director and CEO of Huawei Cloud, unveiled the Pangu Model 5.0. During the introduction of the Pangu model for embodied AI, he showcased the broad potential of the KUAVO humanoid robot, equipped with the Pangu model, in both industrial and household scenarios, attracting widespread attention.

Chen Jianyu, an assistant professor at Tsinghua University and founder of the humanoid robot company Robot Era, believes that humanoid robots will be the ultimate form of general-purpose robots.
This is not only because the pure humanoid form, with two legs and two arms, is more compatible with existing environments, but also because it is easier to transfer training data from the human world. Technically, an end-to-end integration of the brain and cerebellum will be a crucial research direction in the future. Using human language as the interface between the brain and cerebellum is limited, and it is better to borrow from the end-to-end joint training process of autonomous driving, where physical-layer data is directly fed back to the text and image models, significantly enhancing overall model performance.

Last week, Tencent, in collaboration with Shanghai Jiao Tong University, released the "Top Ten Trends of Large Models 2024: Entering the Era of Machine External Brain" report, which pointed out that the combination of robot technology and large models provides a body for the machine's external brain. In the future, humanoid robots will not only be able to perform physical tasks but also interact with humans more naturally and intuitively, endowing physical products with intelligent brains.

The report states that the development of humanoid robots relies on two major technical pillars: motion control and task training. The application of large models has greatly improved robots' learning efficiency and ability to execute complex tasks. The integration of these technologies not only drives technological innovation in humanoid robots but also opens possibilities for their widespread deployment in practical applications. This also heralds a future of human-machine symbiosis, where humanoid robots will play increasingly important roles in various industries, from household services to high-risk industrial operations, showcasing their efficiency and safety. Through continuous technological innovation and application expansion, humanoid robots will play a key role in improving the quality of life and work efficiency, further integrating into human daily life as indispensable assistants and the ultimate carriers of artificial intelligence.

In conclusion, the era of AI is not just a technological revolution but a transformative force that is redefining the landscape of innovation and investment. As we look to the future, the challenges of implementing generative AI applications remain significant. The need to reduce costs, improve effectiveness, and enhance user retention rates is crucial for the widespread adoption of these technologies. However, the potential rewards are immense, offering a glimpse into a world where AI is not just a tool but an integral part of our daily lives, from household services to high-risk industrial operations.

In summary, the dynamic AI era presents a wealth of opportunities for venture capital and entrepreneurial ventures. As we continue to explore and invest in AI-driven innovations, the future holds huge promise for transforming industries, enhancing human-machine interactions, and ultimately improving the quality of life for all.

With a small team, TechNode provides timely news and thoughtfully researched articles for worldwide readers interested in learning more about the Chinese tech industry.
Content Creation/Decision Making/Discovery/Prediction/Recommendation
Business and Financial Operations/Arts, Design, Entertainment, Sports, and Media
null
null
null
null
null
null
news
Subhankar Ghosh
Addressing Hallucinations in Speech Synthesis LLMs with the NVIDIA NeMo T5-TTS Model
NVIDIA NeMo has released the T5-TTS model, a significant advancement in text-to-speech (TTS) technology. Based on large language models (LLMs), T5-TTS produces more accurate and natural-sounding…
https://developer.nvidia.com/blog/addressing-hallucinations-in-speech-synthesis-llms-with-the-nvidia-nemo-t5-tts-model/
https://developer-blogs.…lm-composite.png
2024-07-01T20:35:38Z
NVIDIA NeMo has released the T5-TTS model, a significant advancement in text-to-speech (TTS) technology. Based on large language models (LLMs), T5-TTS produces more accurate and natural-sounding speech. By improving alignment between text and audio, T5-TTS eliminates hallucinations such as repeated spoken words and skipped text. Additionally, T5-TTS makes up to 2x fewer word pronunciation errors compared to other open-source models such as Bark and SpeechT5. Listen to T5-TTS model audio samples.

NVIDIA NeMo is an end-to-end platform for developing multimodal generative AI models at scale anywhere: on-premises and on any cloud.

LLMs have revolutionized natural language processing (NLP) with their remarkable ability to understand and generate coherent text. Recently, LLMs have been widely adopted in the speech domain, using vast amounts of data to capture the nuances of human speech patterns and intonations. LLM-based speech synthesis models produce speech that is not only more natural but also more expressive, opening up a world of possibilities for applications in various industries. However, similar to their use in the text domain, speech LLMs face the challenge of hallucinations, which can hinder their real-world deployment.

The T5-TTS model leverages an encoder-decoder transformer architecture for speech synthesis. The encoder processes text input, and the auto-regressive decoder takes a reference speech prompt from the target speaker. The auto-regressive decoder then generates speech tokens by attending to the encoder's output through the transformer's cross-attention heads. These cross-attention heads implicitly learn to align text and speech. However, their robustness can falter, especially when the input text contains repeated words.

Figure 1. Overview of the NVIDIA NeMo T5-TTS model and its alignment process

Hallucination in TTS occurs when the generated speech deviates from the intended text, causing errors ranging from minor mispronunciations to entirely incorrect words. These inaccuracies can compromise the reliability of TTS systems in critical applications like assistive technologies, customer service, and content creation.

The T5-TTS model addresses this issue by more efficiently aligning textual inputs with corresponding speech outputs, significantly reducing hallucinations. By applying a monotonic alignment prior and a connectionist temporal classification (CTC) loss, the generated speech closely matches the intended text, resulting in a more reliable and accurate TTS system. For word pronunciation, the T5-TTS model makes 2x fewer errors compared to Bark, 1.8x fewer errors compared to VALLE-X (open-source implementation), and 1.5x fewer errors compared to SpeechT5 (Figure 2).

Figure 2. The intelligibility metrics of synthesized speech using different LLM-based TTS models on 100 challenging text inputs

The release of the T5-TTS model by NVIDIA NeMo marks a significant advancement in TTS systems. By effectively addressing the hallucination problem, the model sets the stage for more reliable and high-quality speech synthesis, enhancing user experiences across a wide range of applications.

Looking forward, the NVIDIA NeMo team plans to further refine the T5-TTS model by expanding language support, improving its ability to capture diverse speech patterns, and integrating it into broader NLP frameworks.

The T5-TTS model represents a major breakthrough in achieving more accurate and natural text-to-speech synthesis.
Its innovative approach to learning robust text and speech alignment sets a new benchmark in the field, promising to transform how we interact with and benefit from TTS technology. To access the T5-TTS model and start exploring its potential, visit NVIDIA/NeMo on GitHub. Whether you're a researcher, developer, or enthusiast, this powerful tool offers countless possibilities for innovation and advancement in the realm of text-to-speech technology. To learn more, see Improving Robustness of LLM-based Speech Synthesis by Learning Monotonic Alignment.

We extend our gratitude to all the model authors and collaborators who contributed to this work, including Paarth Neekhara, Shehzeen Hussain, Subhankar Ghosh, Jason Li, Boris Ginsburg, Rafael Valle, and Rohan Badlani.
Content Synthesis/Content Creation
Unknown
null
null
null
null
null
null
news
Pradeep Viswanathan
Microsoft now allows developers to fine-tune Phi-3-mini and Phi-3-medium models
Microsoft announced the availability of Phi-3-mini and Phi-3-medium small language models for fine-tuning on Azure, allowing developers to customize these models for various use cases. Read more...
https://www.neowin.net/news/microsoft-now-allows-developers-to-fine-tune-phi-3-mini-and-phi-3-medium-models/
https://cdn.neowin.com/n…models_story.jpg
2024-07-25T18:06:01Z
In April of this year, Microsoft first announced the Phi-3 family of SLMs (Small Language Models), which offer strong performance at a low cost and with low latency. Phi-3-mini is a 3.8B-parameter language model available in two context-length variants, 4K and 128K tokens. Phi-3-medium is a 14B-parameter language model, also available in the same two context-length variants.

Microsoft today announced that both Phi-3-mini and Phi-3-medium are available for fine-tuning on Azure. Fine-tuning allows developers to improve the base model's performance for different use cases. For example, a developer could fine-tune the Phi-3-medium model for student tutoring, or build a chat app with a particular tone or style of response. Leading organizations like Khan Academy are already using the Phi-3 model in real-world AI applications.

Microsoft also announced the general availability of its Models-as-a-Service (serverless endpoint) capability. As expected, the Phi-3-small model is now available via a serverless endpoint, allowing anyone to quickly develop AI applications without worrying about the underlying infrastructure. Phi-3-vision, the multi-modal model available through the Azure AI model catalog, will soon be available via a serverless endpoint as well.

Last month, Microsoft updated the Phi-3-mini model to offer significant improvements. As per industry benchmarks, Phi-3-mini-4k now scores 35.8 (previously 21.7) with the June 2024 update, and Phi-3-mini-128k scores 37.6 (previously 25.7).

Microsoft also highlighted new models that were recently made available on Azure. The new Azure AI Content Safety features, including prompt shields and protected material detection, are now enabled by default for the Azure OpenAI Service. Developers can use these features as content filters for any foundation model, including Phi-3, Llama, Mistral, and more.

With these updates and expansions, Microsoft is clearly demonstrating its commitment to advancing AI capabilities on Azure. By continuously investing in the availability of state-of-the-art AI models on Azure and providing accessible tools for fine-tuning and deployment, Microsoft is enabling developers to easily create AI solutions.

Source: Microsoft
Personalization/Content Creation
Unknown
null
null
null
null
null
null
news
r on Everyday Is A School Day
Llama, Llama, Oh Give Me A Sign. What’s In The Latest IDSA Guideline?
Wow, what a journey, and more to come! We learned how to perform simple RAG with an LLM and even ventured into LangChain territory. It wasn’t as scary as some people said! The documentation is fantastic. Best of all, we did it ALL in R with Reticulate...Continue reading: Llama, Llama, Oh Give Me A Sign. What’s In The Latest IDSA Guideline?
https://www.r-bloggers.com/2024/07/llama-llama-oh-give-me-a-sign-whats-in-the-latest-idsa-guideline/
https://www.kenkoonwong.…-rag/gpt4all.gif
2024-07-21T00:00:00Z
Wow, what a journey, and more to come! We learned how to perform simple RAG with an LLM and even ventured into LangChain territory. It wasn't as scary as some people said! The documentation is fantastic. Best of all, we did it ALL in R with reticulate, without leaving RStudio! Not only can we read IDSA guidelines, we can use an LLM to assist us with retrieving information!

TL;DR
I think everyone should learn the intuition of LLMs, prompt engineering, RAG, agents, etc. The concepts themselves, plus some trial and error, will give users a renewed perspective on how these things work, how helpful and beneficial they can be for us, and how they serve as a tool for learning and not a replacement. The simplest and most straightforward way of learning is to use the GPT4All GUI and LangChain's tutorial with a local LLM, and then give it a go! Yes, most of these are in Python, but that shouldn't prevent an R user like me from using them in R via reticulate!

Disclaimer
This is mainly for data science educational purposes only. This is NOT medical advice, nor is it medical education. Some medical statements here may be inaccurate. If you find any error in this article, please feel free to educate me.

Objectives:

Motivation
I have been wanting to learn LLMs for a while now. My first attempt was an utter failure. Not being proficient in Python, needing to install a bunch of Python packages, and the unfamiliar error lingo were quite discouraging. That said, it's always good to try something and fail, and then pick it up again some other time and invest more time chunks into learning the individual portions of it. This is exactly what happened! I'm glad that it did, and now I have a better understanding of it.

What really catalyzed this learning process was the recent AI Summit Conference. They provided a beginner Prompt Engineering session, and that really helped me want to learn more about LLMs. However, most LLM sessions and books involve a paid API such as OpenAI GPT-3.5/4o, Claude, Azure, etc. I wanted something local that does not involve paying for each token (in my case, each erroneous token) sent to the API and getting charged for my mistakes. Then we stumbled upon GPT4All. This was really something that kick-started the process of learning without requiring ANY code! Just download it, it has a GUI, attach local files, and then chat away, without internet!

Below is a GIF of the website and a snapshot of the GUI.

We won't be going through the details of using GPT4All here; it is quite intuitive. They also have a Discord channel if you have questions. Very nice and helpful people. Through the channel, I recently learnt that GPT4All does not have embedding implemented yet in the Python SDK. The most straightforward way to learn how to run an LLM locally is in a few simple steps:
- Download the app
- Open the app and select a model to download
- Attach a folder in LocalDoc (this can be tricky if the document is too large/long, but if you use nomic.ai's embedding, which requires sign-up and an API key, it's very fast)
- Start a chat, attach the LocalDoc folder of interest, and start chatting!

You will also need to change the settings for n_ctx (tokens that can be sent) and max_tokens (tokens the LLM returns in its response) if your question is long or you want a longer answer. I did not tweak other settings much; I didn't find the responses very accurate, though they were quick. Since it doesn't allow a whole lot of customization, I went to LangChain instead, since the mainstream uses it a lot and I can find more tutorials in that setting.
However, this really got me started, and I am forever grateful I found it and its community. Another thing that the community pointed out was that if the prompt template has something like

### Human:
%1
### Assistant:

that's not the right template, more so a placeholder, and you'd have to enter the model-specific prompt template for it to work, for example:

<|im_start|>system
Write your system prompt here
<|im_end|>

With LangChain, you don't need the above; most of it is done for you! More to come.

LangChain
The reasons I chose LangChain were the standardization of the lingo, functions, workflow, etc. It reminds me a whole lot of TidyModels. It has a specific workflow, but incorporates all the cool engines (ML methods) into the workflow. It doesn't matter whether you're using Llama 3, WizardLM, Gemma 2, etc.; the workflow and functions are all the same. Also, there are lots of resources out there that use it, which was extremely helpful for getting started. Not to mention, the documentation of LangChain is fantastic! Because this field is evolving so swiftly, some of the tutorial code is deprecated, but the documentation offers the current functions and points you in the right direction. I definitely enjoyed reading through it whenever I stumbled upon a problem. That said, it does not come without LOTS and LOTS of trial and error. Below I'll try to document the things I needed for this to run in R. It may not be exhaustive, but if you are stuck on any of the steps below, please let me know; I'll try to see if I can reproduce it and help you troubleshoot if I can. Here, I am using LlamaCpp because it is an efficient, open-source C++ implementation of Meta's LLaMA language model, designed for CPU-based inference. It allows users to run large language models on consumer-grade hardware with relatively low memory requirements, thanks to its support for various quantization levels.

Prerequisite
I assume you have Python and reticulate installed, and that your reticulate is pointing towards the Python you used to install the following packages:

pip install --upgrade langchain langchain-community langchain_core langchain_huggingface llama-cpp-python faiss-cpu sentence-transformers

Sorry if I missed anything. If, when you run the code, you notice an error about packages not being found, you can use that to troubleshoot.
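As a quick aside (my own addition, not from the original post), one way to confirm that reticulate is bound to the environment where those packages were installed is a short check like the sketch below; the virtualenv path is a made-up example, so adjust it to wherever you ran the pip install.

library(reticulate)
# Point reticulate at the environment used for the pip install above
# (the path here is hypothetical).
use_virtualenv("~/.virtualenvs/llm-rag", required = TRUE)
# Print which Python binary and site-packages reticulate has bound to.
py_config()
# Quick import test: this errors immediately if a package is missing.
langchain_community <- import("langchain_community")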
Let me know if I missed anything; I'll modify.

Code In Action – Explained

Load Packages

library(reticulate)
library(tidyverse)

# load modules
langchain_community <- import("langchain_community")
langchain <- import("langchain")
langchain_core <- import("langchain_core")
langchain_huggingface <- import("langchain_huggingface")

# load functions
### Documents
DirectoryLoader <- langchain_community$document_loaders$directory$DirectoryLoader
PyPDFLoader <- langchain_community$document_loaders$PyPDFLoader
RecursiveCharacterTextSplitter <- langchain$text_splitter$RecursiveCharacterTextSplitter
HuggingFaceEmbeddings <- langchain_huggingface$HuggingFaceEmbeddings

### Embedding / Vectorstorage / Retriever
FAISS <- langchain_community$vectorstores$FAISS

### Model
LlamaCpp <- langchain_community$llms$LlamaCpp
CallbackManager <- langchain_core$callbacks$CallbackManager
StreamingStdOutCallbackHandler <- langchain_core$callbacks$StreamingStdOutCallbackHandler

### Template
PromptTemplate <- langchain$prompts$PromptTemplate
ChatPromptTemplate <- langchain_core$prompts$ChatPromptTemplate

### Chain
create_retrieval_chain <- langchain$chains$create_retrieval_chain
create_stuff_documents_chain <- langchain$chains$combine_documents$create_stuff_documents_chain

This is quite self-explanatory. If you have questions, copy and paste it into an LLM and have it explain. Make sure to get your copy of the PDF here.

Load & Embed Document

loader = PyPDFLoader("amr-guidance-4.0.pdf")
documents = loader$load()
text_splitter = RecursiveCharacterTextSplitter(chunk_size=1000, chunk_overlap=200)
docs = text_splitter$split_documents(documents)
vectorstore = FAISS$from_documents(documents=docs, embedding=HuggingFaceEmbeddings())
retriever = vectorstore$as_retriever()

Explanation:
- Loads the contents of the PDF file named amr-guidance-4.0.pdf.
- Extracts the text content from the loaded PDF and stores it in the documents variable.
- Creates a text splitter that will divide the text into chunks of approximately 1000 characters each, with an overlap of 200 characters between adjacent chunks (to maintain context).
- Applies the splitter to the documents variable, breaking the text into smaller chunks stored in the docs variable.
- Initializes an embedding model from the Hugging Face library. Embeddings are numerical representations of text that capture semantic meaning.
- Creates a FAISS vector store (vectorstore). It takes the split text chunks (docs) and converts them into embeddings using the specified HuggingFaceEmbeddings model, then stores those embeddings in the vector store. When no model is specified, HuggingFaceEmbeddings defaults to the sentence-transformers/all-mpnet-base-v2 model.
- Creates a retriever object from the vectorstore. This retriever allows you to efficiently search the vector store for text chunks that are semantically similar to a given query.
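One optional addition of my own (not from the original post): the FAISS index can be persisted to disk so the PDF does not have to be re-split and re-embedded every session. A minimal sketch, assuming a recent LangChain version (newer releases require the allow_dangerous_deserialization flag when reloading a pickled index); the folder name is arbitrary.

# Persist the index alongside the project files
vectorstore$save_local("faiss_amr_index")

# In a later session, rebuild the retriever from disk instead of re-embedding
vectorstore = FAISS$load_local("faiss_amr_index",
                               HuggingFaceEmbeddings(),
                               allow_dangerous_deserialization = TRUE)
retriever = vectorstore$as_retriever()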
Embedding, huh?

Embedding is a technique used in natural language processing (NLP) to represent words, sentences, or documents as numerical vectors. These vectors capture the semantic meaning of the text and can be used for various NLP tasks, such as similarity search, text classification, and language generation. In this case, we are using the Hugging Face library to generate embeddings for the text chunks extracted from the PDF document. This tutorial has a great description of what embedding model -> vector storage means.

Example of an embedding vector:

embedding = HuggingFaceEmbeddings()
embedding$embed_query(text="can i use gentamicin for pseudomonas infection")

This embedding vector will then be compared with all of the chunk vectors using squared L2 distance (squared Euclidean distance):

\(d(a,b) = \sum_i (a_i - b_i)^2\)

For the actual Euclidean distance we would need to take the square root, but FAISS omits it for computational efficiency. The lower the number, the more similar the two vectors are, and this works the same whether you take the square root or not. More details in the proof.

Return the Best (lowest squared L2) Similarity

vectorstore$similarity_search_with_score(query = "can i use gentamicin for pseudomonas infection", k=as.integer(1))

Now let's calculate it by hand!

query_v <- embedding$embed_query(text="can i use gentamicin for pseudomonas infection")
page53_v <- embedding$embed_query(text=docs[[167]]$page_content)
sum((query_v - page53_v)^2)
## [1] 0.7473773

YES!!! Same number, awesome!!!

Model

llm = LlamaCpp(
  model_path="wizardlm-13b-v1.2.Q5_K_M.gguf",
  streaming=TRUE,
  callback_manager=CallbackManager(handlers = list(StreamingStdOutCallbackHandler())),
  n_ctx = as.integer(2048),
  max_tokens = as.integer(1024),
  temperature = 0,
  verbose = F
)

Explanation:
- Initializes a LlamaCpp object from the langchain_community$llms module.
- Selects the path of the GGUF model (see below for how to download one).
- Enables streaming output, allowing the model to generate text incrementally and send it back as it's produced, rather than waiting for the entire generation to finish.
- Creates a CallbackManager object, which allows you to register callbacks (functions) to be executed during the text generation process, here a StreamingStdOutCallbackHandler. This handler prints the generated tokens directly to the standard output (your console) as they are produced, providing a real-time view of the generation.
- Sets the maximum context window size to 2048 tokens. The context window is the amount of text the model can remember and use to generate its output.
- Limits the maximum number of tokens in the model's output to 1024. This prevents the model from generating overly long responses.
- Controls the creativity or randomness of the model's output. A temperature of 0 makes the model deterministic, always choosing the most likely next token.
- Disables verbose logging from the LlamaCpp library. I disabled this to make it more aesthetically pleasing for the blog; you should set it to TRUE to see detailed output during the generation process.

How to Download GGUF models?
- Go to Hugging Face; here I have pre-selected the text-generation models and the GGUF library for you.
- Select a model that piques your interest (try 7-15B params).
- Select "Files and versions".
- Select a model to download and download it. If the GGUF comes in part 1 of 2, make sure to download both parts and select the first part when you're assigning the model.

Please note that certain LLMs, such as Llama, Gemma, etc., require you to request permission to use their models. It is quite straightforward: read through their policy, request access by filling out the information, wait for approval, and then you're in!
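Before wiring the model into a RAG chain, it can be worth a quick smoke test on its own; this is my own addition rather than part of the original post, and the prompt is arbitrary.

# Sanity check: the GGUF file loads and the model generates text at all.
# Streaming output should appear in the console via the callback handler.
llm$invoke("In one sentence, what does antimicrobial resistance mean?")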
Prompt

system_prompt = " You are an expert for question-answering tasks. Use the following pieces of retrieved context to answer the question. When answering, also make sure to state the reason or rationale for that answer. Then question that reason or rationale critically once with a sentence. Then provide an answer to the critical appraisal. If you don't know the answer, say that you don't know.{context}"

prompt = ChatPromptTemplate$from_messages(
  list(
    tuple("system", system_prompt),
    tuple("user", paste0("Question: ", "{input}")),
    tuple("assistant", "")
  )
)

prompt

Make sure the system prompt goes to SystemMessage. I had to debug this for some time and finally realized that the ChatPromptTemplate$from_messages function takes a list of tuples in order for it to work. I found this out by reading the LangChain documentation. To be quite honest, I really find their documentation very helpful!

Chain or Runnables

question_answer_chain = create_stuff_documents_chain(llm, prompt)
rag_chain = create_retrieval_chain(retriever, question_answer_chain)

Explanation:
- create_stuff_documents_chain(): this function from LangChain creates a chain specifically designed for question-answering tasks. Your model llm will be used to generate answers, and prompt is the template that guides the LLM on how to structure its response. It sets up a chain that takes a question and some documents as input, passes them to your LLM, and the LLM uses the prompt to generate an answer based on the given information.
- create_retrieval_chain(): this function from LangChain creates a RAG chain. retriever is your retriever object (e.g., the one you created from your vector store), responsible for fetching relevant documents based on a query. question_answer_chain is the chain you just created in the previous line, which will be used to generate answers from the retrieved documents.

Then, we are ready to ask our questions!

result = rag_chain$invoke(dict("input" = "What guideline are we looking at today?"))
result

The output was saved in result as a list. As you can see, the first output was the question, and the 2nd, 3rd, and 4th were the context chunks that our retriever found most similar to our question.
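If you want to pull the pieces out programmatically rather than eyeballing the printed list, something like the following works under the same reticulate conversion used in the question loop later in the post (a sketch of my own, not from the original):

result$answer                      # the generated answer string
result$context[[1]]$page_content   # text of the top-ranked retrieved chunk
result$context[[1]]$metadata       # source file and page number of that chunk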
Lastly, we have our answer: "Guidance on the Management of Antimicrobial Resistance", though it seemed to have missed "Gram Negative".

Create A Vector of Questions

Let's create a vector of questions, run them past our LLM, and see what the responses are.

questions = c(
  "What is the preferred treatment of CRE?",
  "What is the preferred treatment of ESBL-E?",
  "Can we use fosfomycin in ESBL Klebsiella?",
  "Can we use fosfomycin in ESBL Ecoli?",
  "What is the preferred treatment of stenotrophomonas?",
  "What is the preferred treatment of DTR Pseudomonas?",
  "Which organisms require two active agent when susceptibility is known?",
  "Can we use gentamicin in pseudomonas infection?",
  "Can we use tobramycin to treat pseudomonas infection?",
  "Why is there carbapenemase non-producing organism?",
  "Can we use oral antibiotics for any of these MDRO?",
  "What is the preferred treatment of MRSA?",
  "What is the preferred treatment of CRAB?",
  "Can fosofmycin be used for pyelonephritis?",
  "Is IV antibiotics better than oral antibiotics?"
)

response = vector(mode = "character", length = length(questions))
source = vector(mode = "character", length = length(questions))

for (i in 1:length(questions)) {
  cat(rep("\n", 100))
  print(paste0("Question: ", questions[i]))
  result = rag_chain$invoke(dict("input" = questions[i]))
  response[i] = result$answer
  source[i] = map_chr(.x = result$context, .f = ~paste(.x)) |> paste(collapse = "\n\n##########\n\n")
  Sys.sleep(10)
}

Explanation:
- The questions variable contains a vector of the questions we are interested in asking.
- Create empty response and source vectors.
- Create a for loop that:
  - clears the console by printing lots of new lines
  - prints the question
  - invokes the i-th question
  - saves the i-th answer to the i-th element of the response vector
  - saves the i-th source to the i-th element of the source vector (this will tell us where the RAG got the source from in our documents), separating each source with \n\n##########\n\n
  - sleeps for 10 s, otherwise our machine heats up quite fast; this also helps us capture the screen of the response

Ready for the response? Let's go!

Questions to Our LLM

Anything in a baby blue highlight is the LLM's answer output; a light green highlight is the source for which RAG found the top-4 similarity to the question. If the GIF is too fast to follow, you can look at the actual output. I didn't return all sources unless it was necessary to investigate. The first question will have everything outputted. Below might be a bit clinical; please feel free to glance through the first few examples and move on if you're not interested.

What is the preferred treatment of CRE?

Response:
The preferred treatment for infections outside of the urinary tract caused by Carbapenem-Resistant Enterobacteriaceae (CRE) that produce Klebsiella pneumoniae carbapenemase (KPC) depends on several factors, including the specific type of infection, the patients clinical status, and the susceptibility patterns of the CRE isolate. According to the guidelines provided, the preferred antibiotics for such infections include:
1. Ceftazidime-avibactam, which inhibits KPC enzymes and other -lactamases.
2. Meropenem-vaborbactam, where vaborbactam stabilizes meropenem against certain -lactamases.
3. Imipenem-cilastatin-relebactam, which combines imipenem with the beta-lactamase inhibitor relebactam to protect against KPC and other carbapenemases.
4. Cefiderocol, a siderophore cephalosporin that can access bacterial iron centers and maintain activity against many Gram-negative bacteria, including those producing KPC.
5.
Aminoglycosides, which can be considered as alternative options for certain types of infections.It is important to note that the choice of antibiotic should be guided by susceptibility testing results, and the treatment regimen should be tailored to the individual patients needs. The continued use of a second agent beyond the initial therapy should be evaluated for additional benefit and potential risks such as increased likelihood of antibiotic resistance development.Critical Appraisal:The answer provided is based on current guidelines and evidence-based recommendations for treating CRE infections, particularly when KPC production is present. The rationale includes the use of antibiotics that have been shown to be effective against KPC-producing CRE, such as ceftazidime-avibactam, meropenem-vaborbactam, imipenem-cilastatin-relebactam, cefiderocol, and aminoglycosides. These recommendations are supported by clinical studies and expert opinion.However, it is crucial to consider that the treatment landscape for CRE infections may evolve with new data and emerging resistance patterns. Therefore, healthcare providers should always consult the most current guidelines and local antibiotic resistance profiles when making treatment decisions. Additionally, individual patient factors such as renal function, allergies, and concomitant medications must be taken into account to tailor the treatment plan appropriately.Answer:The preferred treatment for infections outside of the urinary tract caused by CRE that produce KPC, based on current guidelines and evidence, includes:1. Ceftazidime-avibactam, which inhibits KPC enzymes and other -lactamases.2. Meropenem-vaborbactam, where vaborbactam stabilizes meropenem against certain -lactamases.3. Imipenem-cilastatin-relebactam, which combines imipenem with the beta-lactamase inhibitor relebactam to protect against KPC and other carbapenemases.4. Cefiderocol, a siderophore cephalosporin that can access bacterial iron centers and maintain activity against many Gram-negative bacteria, including those producing KPC.5. Aminoglycosides, which can be considered as alternative options for certain types of infections.It is essential to perform susceptibility testing to guide the choice of antibiotic and to monitor for potential resistance development during treatment.Source:page_content=therapeutic agent for patients at risk for CRE infections is being administered, data do not indicate thatcontinued combination therapy once the -lactam agent has demonstrated in vitro activity offers anyadditional benefit379. Rather, the continued use of a second agent increases the likelihood of antibiotic - metadata={source: llm-langchain/idsa_data/amr-guidance-4.0.pdf, page: 52}##########page_content=alternative agents for the treatment of CRE bloodstream infections352. Tigecycline or eravacycline can beconsidered as alternative options for intra -abdominal infections, skin and soft tissue infections,osteomyelitis, and respiratory infections when optimal dosing is used ( Table 1 ). Nausea and emesis arereported in as many as 20-40% of patients receiving tetracycline -derivatives353-355. Of note, CLSI metadata={source: llm-langchain/idsa_data/amr-guidance-4.0.pdf, page: 50}##########page_content=Last updated December 31, 2023, and posted online at https://www.idsociety.org/practice -guideline/amr -guidance/ . Please check website for most updated version of this guidance.43CRE infections when susceptibility to meropenem or imipenem has not been demonstrated252,253. 
It isplausible that the addition of vaborbactam or relebactam may decrease MICs of meropenem orimipenem even in isolates without a carbapenemase because of other -lactamases (e.g., ESBLs) thatmay be overproduced253.Tigecycline or eravacycline are alternative options for the treatment of CRE infections notinvolving the bloodstream or urinary tract (Question 3.8). Their activity is independent of the presenceor type of carbapenemase.Question 3.4: What are the preferred antibiotics for the treatment of infections outside of the urinarytract caused by CRE if KPC production is present? metadata={source: llm-langchain/idsa_data/amr-guidance-4.0.pdf, page: 42}##########page_content=pyelonephritis or cUTI caused by CRE, if susceptibility is demonstrated. Ceftazidime -avibactam,meropenem -vaborbactam, imipenem -cilastatin -relebactam, and cefiderocol are also preferredtreatment options for pyelonephritis or cUTIs. Aminoglycosides are alternative options for thetreatment of pyelonephritis or cUTI caused by CRE.Rationale metadata={source: llm-langchain/idsa_data/amr-guidance-4.0.pdf, page: 39}Looking at output, its not too shabby. I did not specify exactly what condition and it picked out non-urinary source. When I saw aminoglycoside, it sounded wrong but it did say in certain condition, which it can be used in UTI/pyelo according to the guideline if susceptible, which also does not make sense in the LLM output as it already states not urinary source. As you can see from the last source, it did include cUTI/pyelo context.Note to self, More fine-tuning of the search_type might be helpful. Maybe set k=3 ? Lets continue to see the other responses.What is the preferred treatment of ESBL-E?The preferred treatment for Extended-Spectrum Beta-Lactamase (ESBL)-producing Enterobacterales (E) based on the provided context is carbapenem therapy. Specifically, meropenem or imipenem -cilastatin are recommended as first-line treatments. Ertapenem is also an option, particularly for patients who need to continue therapy in the outpatient setting when oral treatment options are not available. After clinical response is achieved and if susceptibility tests confirm it, a transition to oral therapies such as trimethoprim - sulfamethoxazole (TMP - SMX), ciprofloxacin, or levofloxacin may be considered.Critical Appraisal:The recommendation for carbapenem therapy is based on a large clinical trial that showed a notable direction of risk difference in 30-day mortality favoring carbapenems over piperacillin - tazobactam. However, the absolute risk difference was attenuated and no longer statistically significant after reanalyzing the data with more stringent inclusion criteria (clinical isolates with piperacillin - tazobactam MICs 16 µg/mL). This suggests that while carbapenems are currently preferred, further research may be needed to confirm their superiority, especially when considering the cost and potential for resistance development with widespread carbapenem use.Answer to Critical Appraisal:The critical appraisal raises a valid point about the reliance on a single clinical trial for the recommendation of carbapenems. While the initial trial showed promising results, the fact that the difference in mortality was not statistically significant after reanalysis suggests that there may be variability in outcomes. This highlights the importance of considering multiple studies and data sources when formulating treatment guidelines. 
Additionally, the potential for resistance development with carbapenems is a concern that must be balanced against the benefits of their use. Therefore, while carbapenems are currently recommended as first-line treatment, ongoing monitoring of clinical outcomes and resistance patterns is essential to ensure the most effective and safe treatment options for ESBL-E infections.page_content=hours or meropenem 1 g IV every eight hours, both as standard infusions (i.e., over 30 minutes). Theprimary outcome of 30 -day mortality occurred in 12% and 4% of patients receiving piperacillin -tazobactam and meropenem, respectively60. Trial data were subsequently reanalyzed only includingpatients with clinical isolates against which piperacillin -tazobactam MICs were 16 µg/mL by brothmicrodilution, the reference standard for AST72. Reanalyzing the data from 320 (82%) patients withclinical isolates available for retesting, 30 -day mortality occurred in 9% versus 4% of those in thepiperacillin -tazobactam and meropenem arms, respectively. Although the absolute risk difference wasattenuated and no longer significant in the reanalysis (i.e., the 95% confidence interval ranged from 1%to 11%)72, the panel still suggests carbapenem therapy as the preferred treatment of ESBL -producing metadata={source: llm-langchain/idsa_data/amr-guidance-4.0.pdf, page: 15}##########page_content=Suggested approach: Cephamycins are not suggested for the treatment of ESBL -E infections until moreclinical outcomes data using cefoxitin or cefotetan are available and optimal dosing has been defined.RationaleThe cephamycins are cephalosporins that are generally able to withstand hydrolysis from ESBLenzymes114,115. The cephamycins available in the United States are cefoxitin and cefotetan which areboth IV agents. At least ten observational studies have compared the clinical outcomes of patients with metadata={source: llm-langchain/idsa_data/amr-guidance-4.0.pdf, page: 20}##########page_content=to 11%)72, the panel still suggests carbapenem therapy as the preferred treatment of ESBL -producingbloodstream infections due to the notable direction of the risk difference. Limitations of piperacillin - metadata={source: llm-langchain/idsa_data/amr-guidance-4.0.pdf, page: 15}##########page_content=experiencing hypoalbuminemia, meropenem or imipenem -cilastatin are the preferred carbapenems.After appropriate clinical response is achieved, transitioning to oral TMP -SMX, ciprofloxacin, orlevofloxacin should be considered, if susceptibility is demonstrated.RationaleA carbapenem is recommended as first -line treatment of ESBL -E infections outside of theurinary tract, based primarily on data from a large clinical trial, as described below60. Meropenem,imipenem -cilastatin, or ertapenem are preferred agents; ertapenem offers a more convenient option forpatients needing to continue carbapenem therapy in the outpatient setting when oral treatment optionsare not available.For patients who are critically ill and/or experiencing hypoalbuminemia, meropenem orimipenem -cilastatin are the preferred carbapenems. Ertapenem, in contrast to meropenem and metadata={source: llm-langchain/idsa_data/amr-guidance-4.0.pdf, page: 14}Again, quite interesting, I did not provide the condition of treatment and it picked the broadest / most severe condition, and correctly de-esccalate to oral antibiotics if susceptible. 
It also mentioned about the piptazo susceptible ESBL and questioned itself.Can we use fosfomycin in ESBL Klebsiella?Based on the context provided, fosfomycin can be considered an alternative treatment option for ESBL-producing E. coli (EPEC) infections, particularly when preferred options such as carbapenems, trimethoprim-sulfamethoxazole (TMP-SMX), or fluoroquinolones are not tolerated or do not show susceptibility. However, the data for fosfomycins use against ESBL-producing Klebsiella (ESBL-K) is less clear. The context specifically mentions that fosfomycin is not suggested for prostatitis caused by gram-negative organisms other than E. coli due to the likely presence of the fosA gene, which can inactivate this agent. While fosfomycin has activity against some ESBL-producing Gram-negative bacteria, its use in ESBL-K infections should be approached with caution, and clinical outcomes data specific to this organism are lacking.Critical Appraisal:The rationale provided is based on the available evidence for fosfomycins use against EPEC rather than ESBL-K. The context suggests that while fosfomycin can be used for EPEC, its efficacy against ESBL-K is not well-established. The presence of the fosA gene in organisms other than E. coli may reduce the effectiveness of fosfomycin against these strains. Additionally, the clinical outcomes data specifically for ESBL-K are lacking, which means that any use of fosfomycin in this context would be off-label and should be carefully considered within the context of individual patient circumstances and in consultation with an infectious disease specialist.Answer to Critical Appraisal:The critical appraisal raises a valid point that the evidence for using fosfomycin against ESBL-K is indeed less robust compared to its use against EPEC. The presence of the fosA gene in ESBL-K, which can render fosfomycin ineffective, is a significant concern. Therefore, while fosfomycin may be an alternative for EPEC when other options are not suitable, its use in ESBL-K should be approached with caution and only after considering the potential for reduced efficacy due to resistance mechanisms like the fosA gene. Clinicians should exercise judgment and consider consulting with an infectious disease specialist when treating ESBL-K infections with fosfomycin, especially in the absence of strong clinical data supporting its use.page_content=of the additional days of oral fosfomycin or ciprofloxacin is unclear. Another clinical trial randomized 51patients with cUTI to 3 g of fosfomycin daily or 750 mg of levofloxacin daily for 5-7 days, after up to twodays of IV therapy33. Clinical cure at the end of therapy was similar in both treatment groups (69%versus 68%). In this study, 63% of infections were caused by E. coli but only one isolate in each arm wascaused by an ESBL -producing isolate.IV fosfomycin is not clinically available in the United States. Although some data suggest IVfosfomycin may have activity against organisms beyond E. coli , it is difficult to translate data from IVfosfomycin to oral fosfomycin given the limited oral bioavailability and lower daily dosages with oralfosfomycin52. Transitioning to daily oral fosfomycin needs further investigation before suggesting for or metadata={source: llm-langchain/idsa_data/amr-guidance-4.0.pdf, page: 13}##########page_content=fosfomycin52. 
Transitioning to daily oral fosfomycin needs further investigation before suggesting for oragainst this practice for the treatment of ESBL -E pyelonephritis or cUTI; however, it may be a reasonableoption when other preferred or alternative oral options are not available.Fosfomycin is an alternative option for the treatment of prostatitis caused by ESBL -producing E.coli when preferred options (i.e., carbapenems, TMP -SMX, or fluoroquinolones) cannot be tolerated ordo not test susceptible53-59. In an observational study, fosfomycin, dosed at 3 g orally daily for one week,followed by 3 g orally every 48 hours for 6 to 12 weeks, was associated with clinical cure in 36 (82%) of44 males with chronic bacterial prostatitis53. Fosfomycin is not suggested for prostatitis caused by gram -negative organisms other than E. coli due to the likely presence of the fosA gene and its ability toinactive this agent ( Question 1.1 ). metadata={source: llm-langchain/idsa_data/amr-guidance-4.0.pdf, page: 13}##########page_content=Last updated December 31, 2023, and posted online at https://www.idsoc
Content Synthesis/Decision Making
Education, Training, and Library/Life, Physical, and Social Science
null
null
null
null
null
null
news
Daniel Bryant
Podcast: Platforms, People and Process for Great Developer Experience
In this podcast Shane Hastie, Lead Editor for Culture & Methods spoke to Daniel Bryant, the News Manager at InfoQ, about engineering culture and developer experience. By Daniel Bryant
https://www.infoq.com/podcasts/platforms-people-process-devex/
https://res.infoq.com/po…20449171598.jpeg
2024-07-12T09:00:00Z
Wherever you work in the organization, you are trying to sustainably deliver business valueThe gap between the idea experiment hypothesis and running in an observable way in production is developer experienceCognitive overload is a real issue for developers todayGreat culture emergent and requires intentional behaviours and leadershipThe more context you have across the whole the organization, the better software you will deliverTranscriptShane Hastie: Good day, folks. This is Shane Hastie for the InfoQ Engineering Culture podcast. Today I have a fascinating and fun experience. I'm sitting down with Daniel Bryant. Now, Daniel is the News Manager at InfoQ. He's been track host at innumerable QCon conferences, and is, and here is my bias, an all around great guy. Daniel, welcome. Thanks for taking the time to talk to us.Daniel Bryant: Thank you very much, Shane. It's interesting to be on the other side and I very much appreciate that humbling introduction. Thank you.Shane Hastie: I'm going to go with how I normally open up. So who's Daniel? Tell us your background, what brought you to where you are today?Introductions [00:48]Daniel Bryant: Yes, sure thing, Shane. So I think the thing I always say in the professional context is, there's a few things that underpin all the choices in my career. I've always loved technology. So my first computer, for the folks listening, Amstrad CPC 464, back in the day, eight bits, 64K of RAM. Always loved programming, but I always enjoyed teaching other people even more, so bridging the people and the tech.I was never going to be the best programmer, though. It blew my mind when I discovered BBC BASIC and I discovered assembly, I could build games, but I enjoyed teaching my friends just as much as I did coding the things myself. So all throughout my career, I nearly became an academic. I really got into research. I did a PhD in AI, which I don't talk about too much these days, but with AI coming back probably should put that bit higher up on the resume. But it wasn't LLMs, it was diffusible logics I was studying.I nearly became a professor, that teaching aspect I've always enjoyed. But along the way of doing my PhD, I discovered coding in Java professionally, and that was fantastic. Faster feedback loops than the theoretical work I was doing. And now my career took off from there, from software developer to architect, then I went to platform engineer and a few other things along the way. I just always enjoyed that bigger picture from software to architecture, architecture to platforms. And I always enjoyed bringing folks along on the journey, meeting them where they're at, understanding whether they're junior developers, whether they're business folks as we call some people, and knitting together the value of technology in people.And it's been a lot of fun on that journey. And the journey's only halfway done I guess, right? Hopefully many more years to my career. But I see that theme running throughout the rest of the work I do, whether it's software development, or more product-focused like I am today, but I just love the people and love the technology.Shane Hastie: What got you involved with InfoQ and QCon?Getting involved with InfoQ & QCon [02:33]Daniel Bryant: Don't know if I've told the origin story of those, actually. It was very much motivated by my love of sharing knowledge and learning as well. 
Because selfishly, I was looking at InfoQ and thinking, "Oh these folks sharing all this knowledge, they must have to learn it of course before they can share it." And having some sort of forcing function if you like, be it writing for InfoQ, be it doing presentations. I thought this would be great to make me learn many different things just out of pure interest and also as I expand my career. So I was reading InfoQ pretty much from when it was created.I will shout out Ben Evans who's a longtime editor at InfoQ. Ben's one of my mentors back from the Java days, 15-plus years ago, I guess, it's been a while now. One day I was just chatting and I was saying, "I love InfoQ, love all the stories being talked about." SOA was the thing back then. SOA, a lot of Ruby I was reading about. And Ben was like, "Hey, I can introduce you to..." I think it was Floyd and a few other folks. So the founders of C4Media, which is the company behind InfoQ. And I'm like, "Yes, Ben, I'd love that." And the sweetener as well was I was just getting into the QCon conferences and I knew the connection between InfoQ and QCon at the time. Again, a lot of early service, I went architecture microservice stuff, this is what I was doing on my day job. And I saw at QCon and there was like Rebecca Parsons, there was Adrian Cockcroft, folks I'd love to speak to. And I thought if I can get in the door with InfoQ and with QCon, maybe it's an excuse, an icebreaker, to chat to these amazing people and learn from them.And that's pretty much what happened, Shane. Let's say Ben introed me, I met Floyd and several other folks, Dio. And then Charles I think was becoming the editor-in-chief at the time, met yourself and a few other folks as well. I just realized that this is a really cool community of folks. That is one of the things, several points in my career I'll shout out other than Java community, the LJC, the InfoQ community, the CNCF community, the QCon community. There's been certain communities that have really leveled me up. Do you know what I mean? Just by being around amazing people saying, "I want to be like you, I want to learn from you."And InfoQ is one of those early experiences of that. And yes, I've not looked back as in it's a fantastic excuse to learn lots of stuff and share stuff on podcasts, on articles, on many different formats and I've thoroughly enjoyed it.Shane Hastie: So let's dig into your last track at QCon London. The platforms, people and process, delivering great developer experiences.I want to go to the theme. What was the driver for this? What's the need?Platforms, people and process for great developer experience [05:04]Daniel Bryant: Yes, interesting. So it came actually I did the QCon San Francisco track, similar track last year. And Justin Cormack, who's the CTO at Docker, was the sponsor of that track. He reached out to me and said, "Hey Daniel, I'd love for you to connect up these folks and put together an interesting program." Because it's all about the speakers, I was just curating these things.But one thing Justin said, Justin and I have known each other for many years, and again, love Docker, so I've been working in Docker, Kubernetes, all that space. And we both love the tech, but over coffee we were like, "You know what, we're all talking about the tech a lot." Really, we all know that is the "easy part" in air quotes. The hard parts are the people, the process, all these things that go with a successful product, a successful project.Justin said to me, "Lean into the people side." 
So if you look at the QCon San Francisco track, that was some amazing folks on that track and basically the success of that track, I leveraged it into the QCon London track. You never are sure when you're putting together a QCon track, is the audience in the same place you're at? You've got to meet the audience where they are. Are they ready to hear some of these things? Do they like the story we're trying to tell? And they very much did.A few key things popped out, key themes throughout all the talks in QCon SF was empathy, was big picture thinking, clear goal setting, and people were tweeting, people were on LinkedIn, sharing, "Yes, this is great stuff." We were like, "You know what? Let's bring it over to the European market." Because sometimes, the Bay Area is slightly ahead in some of these things. Just it is what it is. And being a proud Londoner so to speak, a proud UK person, I'm always keen to bring a bit of the Bay Area over into London and Europe and beyond as well. And it went down really well, Shane.People in London, same kind of vibe, they totally understand the technology. They realized there's a lot of value in being intentional about cultivating a platform, building a platform. But many of them have tried, or are on the process of building a platform, and the sticking points are all around the people, the process, and these things. They're not so much around the tech. And that's why that track, I think, hits quite nicely in London and hopefully gave a lot of thinking points. Again, I'll shout out the amazing speakers. I do 5% of the work, if that, kind of thing, probably less than that. The amazing speakers are the people that really delivered all the value. And I was just sat there like Jessica, Gemma, Avrin, Anna, Andy, just did amazing work.Shane Hastie: So what is different or special about developer experience for a platform versus just DevX in general?What’s different about DevEx for a platform? [07:32]Daniel Bryant: A great question, Shane. And I do see this one quite a bit. And in reality, I think it's all the same thing. Do you know what I mean? The bottom line is, wherever you work in the organization, you are trying to sustainably deliver business value for want of a better phrase. And definitely, I even forgot that when I was perhaps a developer, I was like, "What cool framework can I play with? What latest code can I do?" But then good managers, good mentors along my journey are always saying to me and focus on delivering that business value.So developer experience for me is that how do I get these fantastic ideas and hypothesis experiments that we've got in our minds or discussing in the company level, how do I get that from delivering observable business value in production. And that gap between those two things, and I did a presentation back at GOTO Amsterdam many years ago on this, but that gap on the idea experiment hypothesis and that gap on running in an observable way in production is developer experience in my mind.And it touches everything, right? It touches the coding, it touches the architecture, touches the platform, it touches CICD, all these good things, observability as well. But it's the way typically we see it as developers, how do they experience all those things? We talk a lot about shifting left, which is great. I love the principle. Thinking about security earlier, thinking about scalability, resilience, observability. But poor developers, I've been there, you get what we now call cognitive overload on trying to balance all these things. 
Yes, it's a great idea to think about all those things, but without the right tooling, without the right platforms, and critically without the right leadership support, you are never going to do all those things as a developer early on. So my focus over the last few years has been creating tools in the cloud native space to help developers in particular work with all these important things for delivering that sustainable, observable business value via coding. Shane Hastie: Let's dig into that cognitive overload, because we've heard quite a lot recently, and probably over the last couple of years this has been bubbling up, that the cognitive load on developers today is substantively higher and harder than when I was coding in assemblers and COBOL and C++ in the eighties, nineties, even into the mid two thousands and early two thousand teens. But it seems that there's been, and maybe it only seems that way to me, so am I right, has there been an explosion of complexity that we have to cope with? Cognitive overload in the developer experience [10:10] Daniel Bryant: I think there has, Shane, because I can remember I bumped into some of the COBOL stuff and definitely assembler and things and C++ in my early days. I think the trend I'm definitely seeing is, one, the audience, and that includes us, is more demanding now. We've been exposed to the iPhone world, or pick your sort of favorite UX. When things work really well, we like it and we're like, why can't we have that experience in every bit of software? So back when you and I were building these more basic, perhaps, banking apps or whatever, I remember the interface was super clunky and you had to suck it up because you're paying money for this UX or whatever, it's business people that are going to use it, tough luck if you like it or not. You go with that. Nowadays the audience, even for B2B software, the audience is just more demanding, and that includes not only UX but things like security, and the threats have just blossomed. There's a whole cybercriminal market now that wasn't perhaps so big when, again, I was doing this in the nineties and two thousands. And I would say the rise of distributed systems has definitely made things more complex. Now the reason we've gone to that sort of distributed mindset, and again, my career started as the internet was really kicking off and it is just incredible. When I was coding on that Amstrad CPC 464 back in the day, it was a terminal in my parents' front room. There was no internet connectivity at all. And as I went to college, I saw the internet blossom and the possibilities, the benefits are innumerable, but there you are inherently dealing with distributed systems. So again, both the good UX, good experiences, the rise of all the challenges around these things, and again, dealing with the rise of distributed computing, I think the combination of those two things has just bumped up the complexity to a higher order and we haven't really got the abstractions or the tooling to push some of that complexity down into the platform, if that makes sense. Because I think where perhaps with C++ you were dealing literally with memory handling, Java came along and got rid of a lot of that, for example, and now we're seeing Rust and things like that.
We're kind of raising the abstraction levels, which helps me as a developer do my job without getting overloaded. But with all these changing requirements and distributed systems, I don't think we've quite caught up with the level of abstractions, and therefore a developer coming straight out of college these days has got to learn a myriad of things. Not only have they got to learn the actual programming language, they've got to learn architecture, the fallacies of distributed computing. There's just so much stuff you've got to learn now, which I think if you haven't been in the industry for a few years is a bit overwhelming. Shane Hastie: This touches on something that I would like to dig into a bit. We've got the tooling now, we've got the AI tools coming in, Copilot, that's supposed to be helping us. I'm hearing massive increases in productivity in some reports, and others where, yes, I might get 10%. How does somebody coming in new learn to be a good programmer today? How do people new to the industry learn to be a good programmer today? [13:11] Daniel Bryant: Yes, that's a fantastic question. You and I touched on this briefly at our end of year or beginning of year summary, and when you asked about this then, I was like, oh, that is a really good point. Because for folks that have been in the industry, ourselves included, for a while, having an AI Copilot is like a pair programmer. We know perhaps how to work with a pair. It's different than when you're coding solo. And we also know if you're pairing with an intern, it's very different than pairing with someone of your abilities too. You treat the pair differently. And I think we haven't quite figured that bit out yet in terms of the levels there. And if you are starting from a tabula rasa, like a blank slate, knowing what questions to ask is really hard. I can instantly look at a problem and I'm thinking, oh, with my pair I say, "Oh, we need to think about the second order ramifications of this change we're going to make." But that's just because I've got this sort of inbuilt pattern matching and years of just seeing these things. Whereas when you're starting off, you don't have that gut feel, sixth sense, spidey sense, call it what you will. And I'm with you, I genuinely think that's hard without doing some of the things and getting that experience and building that gut feeling. I'm not sure exactly yet if we've got the tooling to do that. I think AI could probably help accelerate some of those things, and maybe my mindset is just a bit stuck in the past. I will put my hands up and say that, because I'm almost applying my old mental models to this new world, but I wonder if we need to create some kind of training or some kind of support system that bootstraps people from that sort of one to a hundred or zero to a hundred, where when they get to a hundred, they're not necessarily the best programmer in the world, for example, but they know about fundamental tenets, coupling, cohesion, the single responsibility principle, many other patterns that many of us have gone through, and they've sort of been exposed to the ramifications of some of these patterns and so forth, so that they know what kind of questions to ask of their pair programmer. But again, this is a plan. I'm totally conscious that I'm applying my old mental models here; again, I need to chat to some folks fresh out of college and actually see how they're learning, because when I do, I mentor a few folks, and the way they use YouTube Shorts and TikTok and things.
I'm on those platforms and am playing around with them a little bit, but the way they learn is very different than mine. I love a good book and I love getting hands-on with the tech. The book gives me the big picture, I love reading, and the hands-on helps me build a mental model. But the junior folks I mentor say, "Don't give me the book, just give me the TikTok," or whatever. And I'm like, "But how are you going to get the big picture?" And I wonder if we're just going to maybe have to update the way we teach folks to build software. Shane Hastie: That's a challenge for the educational institutions, isn't it? Daniel Bryant: Yes, exactly. I agree. Shane Hastie: What does an intentional culture look like? Great culture is emergent and requires intentional behaviors [15:57] Daniel Bryant: Yes, this is great. Again, I've listened to many of your podcasts around this, Shane, and chatted to lots of interesting people at QCons over the years, and I think really it is about setting goals and guardrails, primarily. The best cultures I've worked in, they're not the ones with those kinds of cheesy things on the wall that say we value all these things. The cultural norms are somewhat emergent, but they're emergent from a very clear value of: we want to aim for these things, these are our goals, we collectively believe these things to get to this point. And then the guardrails. I definitely worked in some of the cultures where the cliche applies that culture is the worst behavior the leader tolerates. And I was definitely in a few of those situations where stuff would slide, and then no one really liked it, and then the culture unraveled from there. So I think for me it's one of those things you have to be on all the time. It's not like I write my values and stick them on the wall, done. We have to be day in, day out monitoring ourselves in terms of: are we subscribing to our stated values, our stated culture? Do we need to adapt to the culture? I definitely see, I've worked with folks over the years who've worked in, say, government organizations or big organizations like that where the culture doesn't change. And actually that's a bad thing. We all think, oh, it's a great culture, but it was great 10 years ago, 20 years ago, not so great with the challenges we've got now. So you sort of need to update these things. But I do think, yes, goals, guardrails, and that constant awareness of how is the culture interacting with new folks joining the team? Are we looking after everyone on the team, from the junior folks to the senior folks? These kinds of things. Shane Hastie: Jumping around a little bit, where is platform engineering going? What's happening with platform engineering? [17:45] Daniel Bryant: Yes, great question, Shane. I get all the DMs all the time because I'm on LinkedIn talking about platform engineering quite a lot. People were like, it's just DevOps. And there are folks that have been around even longer, they're like, it's just infrastructure. And I get it, as in I know friends who have had four different job titles and basically done the same job over the last 10, 20 years. But I think with a lot of technology we come up with different words, we talk about different sorts of approaches, but for me it's kind of like, I don't know, it's something like a pendulum swinging, or you can see it like a spiral. My take on it, we are getting better all the time, even if we're not exactly going in one straight line to better or success or whatever.
So I think for me, platform engineering is a collection of many good things that have come out of building infrastructure: things like site reliability engineering, SRE, which the Google folks championed, and things like CI/CD, which I learned a lot about from Dave Farley and Jez Humble back in the day, with their classic book. It's mashing all those things together with that cultural sprinkling that you and I have talked about a few times today, recognizing, hence the QCon track, that it's not just about the tech. If you want to allow developers to go fast, be safe, and build systems at scale, you have to provide a framework for them to do that. And framework, I'm deliberately keeping it a bit vague, but that is the culture, the tooling, the practices, all the things, if that makes sense, right? And for me, platform engineering is the label we're putting over that. And I think you and I have definitely seen, even with DevOps, the danger once we do have a label: it's like vendors co-opt it and people just misunderstand it. It's just human nature. Certainly, I'm sure I've played my part in that too. There is a bit of a danger of platform engineering going through that kind of hype cycle, as the Gartner folks might say; I've just been reading some Gartner reports about this. We are at the peak of inflated expectations according to Gartner, and then we drop down to the trough of disillusionment, and I'm fine with that. I think the Gartner folks are often onto something there, and then you have to go through this trough before you come out the other side with productivity. But I like it; again, it's an interesting community of people coming together around platform engineering. And that for me is the key thing. And most of us assume good intentions, even if we're not always making the right noises and always going in the right direction. But I think platform engineering is a way to ask: how do I build these frameworks, foundations, platforms, portals, all the things mashed together, to enable developers to have the best experience, to go faster, think about being safe, security, all these good things, and deliver systems at scale as well. Shane Hastie: Now you mentioned you've been dabbling in product management. Tell us about that. Dabbling in product management [20:15] Daniel Bryant: Yes, so in my career, Shane, I really enjoyed software delivery and architecture and platform building, but I fancied doing some product work. So I left the company I was working with, I'll shout out to the OpenCredo folks, fancied moving on from there about seven or so years ago, and moved into building tools. I worked with a company called Ambassador Labs. When I joined it, it was like seven folks in a Boston office, but ultimately we created Telepresence and Emissary Ingress, two open source projects that we donated to the CNCF. And now I've moved on from Ambassador Labs and am working on Kratix, which is an open source platform building tool. But along the way of building all these tools and working with the fantastic communities around them, I realized I had to learn some of the basics of product ownership, product management, project delivery, all these good things. Because the fantastic thing with open source is everyone can contribute, and literally we found everyone does contribute. When I worked on Telepresence back a few years ago now, we had people wanting to take the project in a different direction, and again, all great, great stuff. I looked at Marty Cagan's work, Inspired actually, and a bunch of other books I'm sure your listeners will recognize.
But those books really helped me understand how to build a good product, because if you listen to everyone's opinion, the product's going to be like a Swiss Army knife, but a negative sort of Swiss Army knife, if that makes sense, in the cloud native world. And yes, I just love learning, Shane, as we've sort of mentioned a few times today. And for me, learning these product management skills, I can see how they relate to all the things I've built in my previous careers. It's fascinating looking back sometimes, as with age comes wisdom, hopefully, right? And when I look back, I remember thinking, oh, that's why that manager said that, that's why that mentor did that. That's why that CEO was saying we should go in this direction. They were trying to meet business goals, they were sort of obsessing about the customer, or they knew there were clearly some challenges with building a sustainable product, these kinds of things. And as a sort of more naive software engineer, I was just like, why are people doing these things? And I get people on my team asking me that these days, "Why are you doing this?" And I'm like, "We're building a product, right? It is not perfect, but we've got to meet some user goals, get some revenue." These kinds of things opened my eyes. The Marty Cagan stuff in particular, and there are many other folks whose Substacks I read and whose podcasts I listen to. I've shared Lenny's podcasts. I love Lenny's stuff; he does fantastic work in the product space. For me it's just opening my eyes up to how I run communities, how I treat some of my career even. That product focus, it's actually a really powerful focus. Shane Hastie: So what should a software engineer understand about product management? What should a software engineer understand about product management? [22:47] Daniel Bryant: I definitely think reading some of Marty Cagan's work, Inspired, if you like reading, or listening to Lenny's podcast, is a great way. You can just tell those folks are really passionate about what they do, and that kind of seeps through to many of us. If you're a software engineer, you're probably a systems thinker, you're probably very curious about building mental models and learning. So I think you would naturally just gravitate to those kinds of resources that are out there. But I think the fundamental thing to learn is that you are trying to deliver business value. As silly as perhaps that may sound to a bunch of listeners, I guarantee you there are folks out there, because I was certainly one of them at one point, who don't fully make that connection that we are trying to deliver business value by solving a problem. Sounds really obvious, but I worked on a few government projects, and a few private projects I should say as well, where it was not super clear as an engineer what problem we were really solving. We knew we had a spec and we were building the web apps and so forth. But when I actually look back, if I'd known the business problem I was trying to solve, I might've made different suggestions. I might've sort of pushed back or implemented the software a bit differently. I think there was a danger at some points, and probably even now, where people want to hide the complexity or segment the work. But I think the more context we have throughout the organization, the better the things we will deliver.
So I think as a software engineer, understanding that you are solving business problems and understanding some of the constraints in your organization is really good. There are many analogies with programming here, but just understanding that you can build anything but you can't build everything. You have to prioritize, ruthlessly. I really enjoyed learning about prioritization within the product framework. And the last thing I'd say is running experiments. I think that's something that has come more into vogue over the last few years, but back when I started creating software applications, they were months and years of delivery. I remember being handed a telephone directory of requirements for my team back in the government, the UK government, my first gig, and I think that project took 18 months, two years, and didn't get deployed while I was actually there. My internship finished, I moved on. Whereas these days in startups we're pushing code constantly to validate hypotheses like, hey, we think if we had this feature, this small little feature, our customers will get more value from that in this way. I can create an experiment, I push some code out there, I get the metrics back, I look at it, test my hypothesis, validation or not. So I think that's a really key thing, that experimental mindset, which again many of us have as software engineers, but with a slightly different focus towards customer value, business value, is a really powerful thing to learn. Shane Hastie: Advice for young players, people who are early in their career and who are inspired by looking at what you've been doing and your journey. What advice would you give the young Daniel today? Advice for young players [25:38] Daniel Bryant: I love these thought-provoking questions, Shane. I would say there's one thing: I mentor a lot of folks these days, and one of the dangers I have, and I see other folks have, is wanting to do it all. I've had a very lucky 20-year career so far in tech, and I've had some amazing mentors and amazing opportunities along the way, and I'm conscious that I've done all these things and people are like, I want to do all the things. You cannot do all the things. Definitely pick the most important thing to you now, whether it's being a better software engineer, learning how to work with AI, understanding product. Do you want to become a startup founder? Do you want to be the CEO of a big org? Being super clear on some of your goals, and they're guaranteed to change, I can say that, but being clear on your current goals and then laser focusing on perhaps an area where you are strong or want to get stronger, or an area where you know there's a weakness, and really investing in that. I did a lot of mentoring last summer. I was very lucky, took a bit of time out, spent some time with the family and so forth. But I also mentored a bunch of folks on Zoom. I opened up my calendar and said, hey, jump on. And a lot of those conversations went how I imagine, and I've got no psychological training, the sort of conversations psychologists have go. A lot of it's like, so tell me why you think this is what you want to learn. Tell me what your next career step would be. And people would often rock up with a very clear, I think I need to do this, this, this. And then we'd actually have a chat around what do you really want to do? What's the most important thing? And break it down and then build it back up with actionable steps.
People walked away so much happier than when they rolled into the call, probably a classic case of cognitive overload, right? They were like, I need to read all these books, do all these things, learn all these things. And I'm like, trust me, you can get there, but that's a 10-year journey. You need to break it down and have clear, smaller goals along the way. So the biggest bit of advice I gave last year, and I'd give now on the podcast, is: be super clear on where you want to get to, but break it down into smaller steps, and recognize that work-life balance matters as well, because you can almost run yourself ragged. There's so much amazing content out there on the internet these days that you can consume it 24/7. You shouldn't consume it 24/7; you should definitely balance it with life in general. And I think being super clear on the goals will really help you prioritize what to read, what to learn, what to play around with. Shane Hastie: Great conversation. We do this far too infrequently. Daniel Bryant: Indeed. Thank you, Shane. Shane Hastie: I would typically at this point ask where people would find you, but of course you're on InfoQ. Daniel Bryant: That's it. Come and find me there. Yes, @danielbryantuk on most of the places, Shane. So I'm on LinkedIn, GitHub, and X, formerly Twitter; @danielbryantuk is where folks can find me, but InfoQ is the first place. Rock up, have a chat, find me there. Shane Hastie: Wonderful. Daniel, thanks so much. Daniel Bryant: Thanks a lot, Shane. Mentioned:
Unknown
Unknown
null
null
null
null
null
null
news
Dasha Zenkovich
Connect with Microsoft Security at Black Hat USA 2024​​
Join Microsoft Security leaders and other security professionals from around the world at Black Hat USA 2024 to learn the latest information on security in the age of AI, cybersecurity protection, threat intelligence insights, and more.​The post Connect with Microsoft Security at Black Hat USA 2024​​ appeared first on Microsoft Security Blog.
https://www.microsoft.com/en-us/security/blog/2024/07/17/connect-with-microsoft-security-at-black-hat-usa-2024/
https://www.microsoft.co…715-1024x574.png
2024-07-17T16:00:00Z
Black Hat USA 2024 is packed with timely, relevant information for today's security professionals. During the conference this August, we'll share our deep expertise in AI-first end-to-end security and extensive threat intelligence research. Join us as we present our main stage speaker Ann Johnson, Corporate Vice President and Deputy Chief Information Security Officer (CISO) of Microsoft Security, as she shares threat intelligence insights and best practices from the Office of the CISO in her conversation with Sherrod DeGrippo, Director of Threat Intelligence Strategy at Microsoft Threat Intelligence Center (MSTIC). Also at Black Hat, our Microsoft AI Red Team will be onsite holding training sessions, briefings, and panel discussions. And today, we're releasing a white paper to demonstrate the impact of red teaming in practice when incorporated in the AI development life cycle. The paper details our innovative Break-Fix approach to red teaming AI systems and our close collaboration with Microsoft's Phi-3 team, which allowed us to reduce the harms by 75% in Microsoft's state-of-the-art small language models.1 As a proud sponsor of the inaugural AI Summit at Black Hat, we're further investing in the community by sharing our learnings in both AI for Security and Securing AI. We'll be participating in a panel discussion titled "Balancing Security and Innovation: Risks and Rewards in AI-Driven Cybersecurity," where we'll debate the trade-offs between innovation in AI and security risks and share strategies to foster innovation while maintaining robust security postures. There's also a sponsored session titled Moonstone Sleet: A Deep Dive into their TTPs, presented by Greg Schloemer, Threat Intelligence Analyst at Microsoft, that takes a deep dive into cyber threat actors associated with the Democratic People's Republic of Korea (DPRK), as well as educational and engaging theater sessions in our Microsoft booth #1240. With a ton of critical security content to catch, all detailed below, we hope you'll make time to connect with us at Black Hat 2024. Join us for core Black Hat sessions, submitted for consideration by Microsoft subject matter experts and selected by the Black Hat content committee to be included in its main agenda:
- Saturday, August 3, to Tuesday, August 6, 2024: AI Red Teaming in Practice. Hands-on training on how to red team AI systems and strategies to find and fix failures in state-of-the-art AI systems. Speakers: Dr. Amanda Minnich, Senior Researcher, Microsoft; Gary Lopez, Researcher, Microsoft; Martin Pouliot, Researcher, Microsoft.
- Wednesday, August 7, 2024, 10:20 AM PT-11:00 AM PT: Breaching AWS Accounts Through Shared Resources. Presenting six critical vulnerabilities that we found in AWS, along with the stories and methodologies behind them. Speakers: Yakir Kadkoda, Lead Security Researcher, Aqua Security; Michael Katchinskiy, Security Researcher, Microsoft; Ofek Itach, Senior Security Researcher, Aqua Security.
- Wednesday, August 7, 2024, 12:40 PM PT-1:50 PM PT: Hacking generative AI with PyRIT. Understand the presence of security and safety risks within generative AI systems with PyRIT. Speaker: Raja Sekhar Rao Dheekonda, Senior Software Engineer, Microsoft.
- Wednesday, August 7, 2024, 3:20 PM PT: AI Safety and You: Perspectives on Evolving Risks and Impacts. Panel on the nuts and bolts of AI Safety and operationalizing it in practice. Speakers: Dr. Amanda Minnich, Senior Researcher, Microsoft; Nathan Hamiel, Senior Director of Research, Kudelski Security; Rumman Chowdhury; Mikel Rodriguez, Research Scientist, Google DeepMind.
- Wednesday, August 7, 2024, 1:30 PM PT-2:10 PM PT: Predict, Prioritize, Patch: How Microsoft Harnesses LLMs for Security Response. A crash course in leveraging Large Language Models (LLMs) to reduce the impact of tedious security response workflows. Speaker: Bill Demirkapi, Security Engineer, Microsoft Security Response Center.
- Wednesday, August 7, 2024, 3:20 PM PT-4:00 PM PT: Compromising Confidential Compute, One Bug at a Time. Review of the methodology and the emulation tooling developed for security testing purposes, and how it influenced our understanding and review strategy. Speakers: Ben Hania, Senior Security Researcher, Microsoft; Maxime Villard, Security Researcher, Microsoft; Yair Netzer, Principal Security Researcher, Microsoft.
- Thursday, August 8, 2024, 10:20 AM PT-11:00 AM PT: OVPNX: 4 Zero-Days Leading to RCE, LPE and KCE (via BYOVD) Affecting Millions of OpenVPN Endpoints Across the Globe. Microsoft identified vulnerabilities in OpenVPN that attackers could chain and remotely exploit to gain control over endpoints. Speaker: Vladimir Tokarev, Senior Security Researcher, Microsoft.
- Thursday, August 8, 2024, 1:30 PM PT-2:10 PM PT: Secure Boot, Insecure Reality: Peeling Back Layers of False Security. A deep dive into the systemic weaknesses which undermine the security of your boot environment. Speaker: Bill Demirkapi, Security Engineer, Microsoft Security Response Center.
At Black Hat 2024, Microsoft Security is here with security leaders and resources that include:
- Threat researchers and security experts from Microsoft Security, here to connect with the community and share insights.
- Live demos of Microsoft Copilot for Security, informed by the 78 trillion signals Microsoft processes daily, to help security pros be up to 22% faster.2
- Theater presentations of Microsoft's unified security operations experience, which brings together extended detection and response (XDR) and security information and event management (SIEM), so you get full visibility into cyberthreats across your multicloud, multiplatform environment.
- Hands-on experience with Microsoft Security solutions to help you adopt AI safely.
- Connect with Microsoft leaders and representatives to learn about our AI-first end-to-end security for all.
Additionally, you'll be able to view multiple demonstrations on a wide range of topics including threat protection, securing AI, multicloud security, Copilot for Security, data security, and advanced identity. You'll also be able to connect with our Microsoft Intelligent Security Association (MISA) partners during your visit: the top experts from across the cybersecurity industry with the shared goal of improving customer security worldwide. And if you have specific questions to ask, sign up for a one-on-one chat with Microsoft Security leaders. At the Theater in the Microsoft booth, watch our series of presentations and panels featuring Microsoft Threat Intelligence Center (MSTIC) experts and Microsoft Researchers. Half of the sessions will be presented by the MSTIC team. The Microsoft booth will also feature sessions from select partners from the Microsoft Intelligent Security Association (MISA). MISA is an ecosystem of leading security companies that have integrated their solutions with Microsoft Security technology with the goal of protecting our mutual customers from cybersecurity threats.
Twelve partners will showcase their solutions at our MISA demo station and five partners will be presenting their solutions in our mini-theater. We would love to see you there. Click here to view our full theater session schedule. The event will be co-hosted by Ann Johnson, Corporate Vice President and Deputy CISO of Microsoft Security, and Aarti Borkar, Vice President of Microsoft Security, Customer Success and Incident Response, and we are thrilled to have five MISA partners, Avertium, BlueVoyant, NCC Group, Trustwave, and Quorum Cyber, sponsoring our Microsoft Security VIP Mixer. The mixer is a great time to connect and network with fellow industry experts, and grab a copy of Security Mixology, a threat intelligence-themed cocktail and appetizer cookbook; you'll be able to meet some of the contributors! Drinks and appetizers will be provided. Reserve your spot to join us at this exclusive event. On Tuesday, August 6, 2024, from 11:10 AM PT to 11:50 AM PT, we'll be part of a panel discussion titled "Balancing Security and Innovation: Risks and Rewards in AI-Driven Cybersecurity." Microsoft is honored to be a VisionAIre sponsor for this event. Brandon Dixon, Partner Product Manager, Security AI Strategy, will debate the trade-offs between innovation in AI and security risks, share strategies to foster innovation while maintaining robust security, and more. Note: The AI Summit is a separate, one-day event featuring technical experts, industry leaders, and security tsars, designed to give attendees a comprehensive understanding of the potential risks, challenges, and opportunities associated with AI and cybersecurity. Security researchers are a critical part of the defender community, on the front lines of security response evolution, working to protect customers and the broader ecosystem. On Thursday, August 8, 2024, we'll host our invite-only Microsoft Researcher Celebration. And on August 6, 2024, the Microsoft Security Response Center (MSRC) will announce the annual top 100 Most Valuable Researchers (MVRs) who help protect our customers through surfacing and reporting security vulnerabilities under Coordinated Vulnerability Disclosure (CVD). Follow @msftsecresponse on X and Microsoft Security Response Center on LinkedIn for the MVR reveal. In the hands of security professionals and teams, AI can deliver the greatest advantage to organizations of every size, across every industry, tipping the scales in favor of defenders. Microsoft is bringing together every part of the company in a collective mission to advance cybersecurity protection to help our customers and the security community. We offer four powerful advantages to drive security innovation: large-scale data and threat intelligence; the most complete end-to-end protection; industry-leading, responsible AI; and the best tools to secure and govern the use of AI. Together we can propel innovation and create a safer world. We're excited to share the latest product news and Microsoft Security innovations during Black Hat 2024 and we hope to see you there. Join us at the Microsoft Security VIP Mixer. Don't miss this opportunity to connect with Microsoft Security experts and fellow industry leaders, and pick up your copy of Security Mixology! For more threat intelligence guidance and insights from Microsoft security experts, visit Security Insider. To learn more about Microsoft Security solutions, visit our website. Bookmark the Security blog to keep up with our expert coverage on security matters.
Also, follow us on LinkedIn (Microsoft Security) and X (@MSFTSecurity) for the latest news and updates on cybersecurity. Sources: 1. Phi-3 Technical Report: A Highly Capable Language Model Locally on Your Phone, Microsoft, April 2024. 2. Microsoft Copilot for Security is generally available on April 1, 2024, with new capabilities, Vasu Jakkal, March 13, 2024.
Process Automation/Decision Making
Computer and Mathematical/Business and Financial Operations
null
null
null
null
null
null
news
WAIC
WAIC 2024 Commences: Focusing on Global Governance, Industry Development, and Scientific Frontiers
Shanghai, China, July 08, 2024 (GLOBE NEWSWIRE) -- On July 4, the 2024 World AI Conference&High-Level Meeting on Global AI Governance (WAIC 2024) kicked off at the Shanghai Expo Center and the Shanghai Expo Exhibition Center. On the first day of the conference, over 180 overseas guests from 55 countries participated, engaging in in-depth discussions and exchanges on three key areas: global governance, industrial development, and scientific frontiers.
https://www.globenewswire.com/news-release/2024/07/08/2909902/0/en/WAIC-2024-Commences-Focusing-on-Global-Governance-Industry-Development-and-Scientific-Frontiers.html
https://ml.globenewswire.com/Resource/Download/58cd05f2-8001-4074-a30a-d6d0a696b128
2024-07-08T18:00:00Z
Shanghai, China, July 08, 2024 (GLOBE NEWSWIRE) -- On July 4, the 2024 World AI Conference & High-Level Meeting on Global AI Governance (WAIC 2024) kicked off at the Shanghai Expo Center and the Shanghai Expo Exhibition Center. On the first day of the conference, over 180 overseas guests from 55 countries participated, engaging in in-depth discussions and exchanges on three key areas: global governance, industrial development, and scientific frontiers. Keyword 1: Global Governance. Artificial intelligence represents a new frontier in human development, significantly impacting economic and social progress while posing unknown risks and complex challenges. Over 1,000 representatives from the United Nations and various governments, representatives from professional international organizations, globally renowned experts, entrepreneurs, and investors attended the plenary session. Under the global agenda of AI governance, they focused on development, security, and governance, and engaged in a series of international, cross-disciplinary, and multi-perspective in-depth discussions. Three globally renowned Turing Award winners, Chi-Chih Yao, Raj Reddy, and Manuel Blum, along with former Microsoft Executive Vice President and Foreign Member of the US National Academy of Engineering, Harry Shum, engaged in a high-level dialogue on collaborative governance and innovation. Through highly analytical discussions, they delved into the double-edged nature of artificial intelligence, its interpretability and predictability, as well as the rigorous foundation and transformative qualities of AI. These discussions provided valuable insights and perspectives on the global development and governance of artificial intelligence. At the Global Governance Forum, over 30 ministerial-level officials from various countries and heads of relevant international organizations exchanged in-depth views on AI development, security, and governance. Throughout WAIC 2024, ten forums related to global governance explored topics of "governance" from multiple perspectives. Leading forums held that day included the "ITU AI for Good: Innovate for Impact" and the "Frontier AI Safety and Governance Forum," focusing on critical issues such as AI safety and value alignment, AI for Good, and AI legislation. These forums brought together over 20 leading figures from around the world, including Chris Meserole, Zeng Yi, and Xue Lan, for East-West security dialogues and international, multi-dimensional discussions. The Forum on AI and Education invited representatives from academia and industry to discuss how to promote sustainable development in education equity and quality enhancement in the intelligent era from the perspective of global governance talent cultivation. Keyword 2: Industry Development. To comprehensively deconstruct the new elements, new business forms, and new models of the general AI industry, and to analyze the global strategic trends of the current AI industry development, WAIC 2024 hosted a series of "Industry Development" forums. These forums discussed hot topics such as AI application prospects and ecosystem construction. Additionally, guests were invited to interpret AI global governance from an industrial perspective, focusing on how "AI+" can help bridge the intelligence gap and enhance human well-being, working together with society to build a "people-centered" AI industry ecosystem. On the afternoon of the opening day, the Industry Development Main Forum was successfully held.
Over 20 top experts, industry leaders, emerging talents, and representatives from various sectors of the AI industry chain, including Social Data Lab founder Andreas Weigend, Baidu co-founder Robin Li, and Chinese Academy member Wang Jian, focused on cutting-edge topics such as large models, embodied AI, AI infrastructure, and intelligent terminals. They deeply analyzed global strategic trends in the AI industry, discussing AI application prospects, industry governance, and ecosystem construction. The release of the open-source general-purpose humanoid robot "Qinglong" also injected new vitality into the creation of an international digital industry cluster. Several parallel sub-forums were also held, including the AI for Science: Artificial Intelligence Empowering Chinese Space Astronomy Forum, focusing on the industrial development and achievements in satellite manufacturing; the Summit on Humanoid Robots and Embodied Intelligence Development, discussing the application issues and solutions of embodied AI in humanoid robots; and forums related to technology empowering various industries, addressing cutting-edge topics and attracting the attention of attendees. Humanoid robots were the biggest highlight of the exhibition. In the central hall of the Shanghai Expo Exhibition Center, an array of humanoid robots named "Eighteen Arhats," from 15 companies, made a joint appearance, showcasing collaborative control among 18 humanoid robots with different technological architectures. This year's exhibition focused on frontier AI technologies such as humanoid robots, virtual and augmented reality integration, autonomous driving, drones, and brain-machine interfaces, creating a new intelligent "technology feast" for practical applications. The intelligent terminal section highlighted intelligent robots, core components, and wearable devices, with a dedicated area for humanoid robots. The exhibition featured the debut of 56 products, including 42 intelligent robots, among which 22 were humanoid robots, demonstrating the potential applications of AI in fields such as healthcare, education, transportation, agriculture, and more. Keyword 3: Scientific Frontiers. The Scientific Frontiers Main Forum, themed "Next-Generation Trustworthy AI Technology", gathered top global scientists, researchers from various interdisciplinary fields, and young researchers, including Juergen Schmidhuber, Dawn Song, and Zhou Bowen, to address fundamental issues in AI and substantiate truly valuable topics for the sustainable development of artificial intelligence. To highlight the importance of foundational disciplines and infrastructure in technological development, the Smale Institute for Mathematics and Computing co-hosted the "Mathematics and Artificial Intelligence Forum." The forum brought together global elites from the fields of mathematics and AI, including experts and scholars like Manuel Blum, Jose A. Carrillo, Michael Bronstein, and Xuan Xiaohua, to delve into in-depth discussions on the mutual empowerment of mathematics and AI. To lay a solid foundation for technological development, the conference focused on trends, strategies, and breakthroughs in the development of large models. It held a series of sub-forums including the AI Infrastructure Forum, the OpenAtom Open Source Large Language Model Development Forum, and the Closed-Door Seminar on "Trends and Innovations in the Application of AI Large Models in Industry."
These discussions aimed to contribute positively to the advancement of technology at the forefront.
Unknown
Management/Life, Physical, and Social Science/Computer and Mathematical
null
null
null
null
null
null
news
Tanay Varshney
Develop Production-Grade Text Retrieval Pipelines for RAG with NVIDIA NeMo Retriever
Enterprises are sitting on a goldmine of data waiting to be used to improve efficiency, save money, and ultimately enable higher productivity. With generative AI, developers can build and deploy an…
https://developer.nvidia.com/blog/develop-production-grade-text-retrieval-pipelines-for-rag-with-nvidia-nemo-retriever/
https://developer-blogs.…ipelines-RAG.png
2024-07-22T21:52:33Z
Enterprises are sitting on a goldmine of data waiting to be used to improve efficiency, save money, and ultimately enable higher productivity. With generative AI, developers can build and deploy an agentic flow or a retrieval-augmented generation (RAG) chatbot, while ensuring the insights provided are based on the most accurate and up-to-date information. Building these solutions requires not only high accuracy from models in a retrieval pipeline, but also the necessary infrastructure to cost-effectively and reliably deploy these models. The NVIDIA NeMo Retriever collection of NIM inference microservices enables these solutions for text embedding and reranking. NeMo Retriever is part of the NeMo platform, used for developing custom generative AI, including large language models (LLMs), multimodal, vision, and speech AI, anywhere. Today, NVIDIA announced the availability of four new community-based NeMo Retriever NIMs. Three NeMo Retriever Embedding NIMs:
- NV-EmbedQA-E5-v5: an embedding model optimized for text question-answering retrieval.
- NV-EmbedQA-Mistral7B-v2: a multilingual model fine-tuned for text embedding and accurate question answering.
- Snowflake-Arctic-Embed-L: an optimized model for text embedding.
One NeMo Retriever Reranking NIM:
- NV-RerankQA-Mistral4B-v3: a model fine-tuned for text reranking and accurate question answering.
Before diving into the models, let's understand the function of embedding and reranking models in a retrieval pipeline (Figure 1). Embedding Models: While ingesting documents, embedding models are used to generate vector representations of text that encode its semantic meaning. These vectors are then stored in a vector database. When a user asks a question, the question is encoded into a vector using the same embedding model. This vector is then used to match against the vectors stored in the vector database to retrieve the most relevant information using a heuristic metric like Approximate Nearest Neighbor search. Reranking Models: Reranking models take in text chunks from the documents and the question, create a combined representation of both, and then score the relevance of the two texts. An embedding model, along with a heuristic metric, and a reranking model are all fundamentally doing the same thing: scoring how similar two pieces of information are. Given this overlap, one might reasonably ask why use them both, or which is the better model to use? The answer to these questions lies in the underlying trade-offs being made. Embedding models are considerably faster and cheaper than a reranking model, but a reranking model is significantly more accurate. To make the best of both models, the lighter and cheaper embedding models are used to identify a small pool of chunks relevant to a user's question, say 100. Then a heavier but more accurate reranking model is used to identify the most relevant chunks. If multiple retrieval pipelines are being used to extract the top five most relevant pieces of information from each source (think data coming from the web, local PDF files, structured databases, and so on), the reranking model is used to narrow down the best pieces of information from that given pool. For building enterprise-grade model inference pipelines, there are two primary considerations: cost and stability. Let's touch on how NeMo Retriever NIMs address them. Cost is impacted by time-to-market and the cost of running the models.
Time-to-market: NIMs are designed to be easy-to-use and scalable model inference solutions, enabling enterprise application developers to focus on their application logic rather than having to spend cycles on building and scaling out the infrastructure. NIMs are containerized solutions, which come with industry-standard APIs and Helm charts to scale. Maximizing utilization: NIMs use the full suite of NVIDIA AI Enterprise software to accelerate model inference, maximizing the value enterprises can derive from their models and in turn reducing the cost of deploying the pipelines at scale. NIMs are part of the NVIDIA AI Enterprise license, which offers API stability, security patches, quality assurance, and support for a smooth transition from prototype to production for enterprises that run their businesses on AI. While designing a retrieval pipeline, there are four factors to balance: accuracy, latency, throughput for data ingestion, and throughput in production. A suite of NIMs enables developers to balance the above considerations to build the pipeline most suited to their workload. The following are three general recommendations:
- Maximize throughput and minimize latency: The recommended path for maximizing throughput is to use the NV-EmbedQA-E5-v5 NIM, which enables inference for an optimized and fine-tuned lightweight embedding model.
- Optimize for low-volume, low-velocity databases: Low-volume and low-velocity databases tend to include critical documents that are referred to often and by a wide number of users. In these cases, it is recommended to use the NV-EmbedQA-Mistral7B-v2 NIM for both ingestion and production to maximize throughput and accuracy while retaining low latency.
- Optimize for high-volume and high-velocity data: High-volume and high-velocity data requires minimizing ingestion costs while maximizing accuracy. This can be achieved by using a lightweight embedding model to index data and then using a reranker to boost retrieval accuracy. It is recommended to use the NV-EmbedQA-E5-v5 NIM to ingest the documents, paired with NV-RerankQA-Mistral-4B-v3 for reranking.
Figure 3 and Figure 4 contain further information about the throughput and accuracy of the NeMo Retriever NIMs. While the accuracy numbers below showcase a few of the academic benchmarks which are good representations of the general enterprise retrieval use case, it is recommended to evaluate all the NIMs as every dataset has its own unique nuances. Refer to this article to learn more about the best practices for evaluating retrieval pipelines. Figure 4. Performance of the NeMo Retriever NIM pipeline shows improved accuracy on popular benchmark datasets like NQ, HotpotQA, FiQA and TechQA (Recall@5). Experience the NVIDIA NeMo Retriever NIMs today in the API catalog in our hosted environment. Explore NVIDIA generative AI examples, illustrating how to integrate NVIDIA NIMs to write sample applications. Apply for a lab to try the AI Chatbot with RAG workflow in NVIDIA LaunchPad. Customize and download the NIMs to deploy anywhere your data resides.
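To make the embed-then-rerank trade-off described above concrete, here is a minimal Python sketch of the two-stage pattern: a cheap embedding pass to shortlist candidates, followed by a more accurate reranking pass. The embed_text and rerank_score callables are hypothetical stand-ins for whatever embedding and reranking services are deployed (for example, an embedding NIM and a reranking NIM); they are not the actual NIM client API, and a production system would query a vector database's ANN index rather than re-embedding every chunk per query.

# Minimal sketch of a two-stage retrieve-then-rerank pipeline.
# embed_text() and rerank_score() are hypothetical stand-ins for an embedding
# service and a reranking service; they are NOT a real NIM client API.
from typing import Callable, List, Tuple
import numpy as np

def cosine_sim(a: np.ndarray, b: np.ndarray) -> float:
    # Cosine similarity used as the cheap first-stage relevance heuristic.
    return float(np.dot(a, b) / (np.linalg.norm(a) * np.linalg.norm(b) + 1e-9))

def retrieve_then_rerank(
    question: str,
    chunks: List[str],
    embed_text: Callable[[str], np.ndarray],    # fast, cheap embedding model
    rerank_score: Callable[[str, str], float],  # slower, more accurate reranker
    first_stage_k: int = 100,
    final_k: int = 5,
) -> List[Tuple[str, float]]:
    # Stage 1: embed the question and score every chunk by cosine similarity.
    # (In production the chunk vectors would already sit in a vector database
    # behind an approximate nearest neighbor index instead of being recomputed.)
    q_vec = embed_text(question)
    scored = [(chunk, cosine_sim(q_vec, embed_text(chunk))) for chunk in chunks]
    candidates = sorted(scored, key=lambda x: x[1], reverse=True)[:first_stage_k]

    # Stage 2: rescore only the shortlisted candidates with the heavier reranker
    # and keep the handful of chunks that will be passed to the LLM.
    reranked = [(chunk, rerank_score(question, chunk)) for chunk, _ in candidates]
    return sorted(reranked, key=lambda x: x[1], reverse=True)[:final_k]

The design choice mirrors the recommendation in the article: pay the cheap embedding cost across the whole corpus, and reserve the expensive cross-encoder scoring for a shortlist of roughly 100 candidates.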
Information Retrieval Or Search/Content Synthesis/Process Automation
Unknown
null
null
null
null
null
null
news
colin-balzer
Revisiting the NVIDIA Jetson AGX Orin: Tiny Package, Large Language Models
We revisit the Jetson AGX Orin and show how to get a Chat-GPT Type LLM on the low-power device.Editors note: We got the opportunity to sit down and re-dig into the NVIDIA Jetson platform with a new member of our team. Check out our article from last year where we ran a vision model on a final product version of Jetson, the Lenovo SE70With NVIDIA’s Jetson platform, developers are able to explore AI options specifically tailored for edge AI development. These systems allow for GPU-enabled server performance in a package you can hold in one hand. Huge thank you to NVIDIA for giving us the Jetson AGX Orin Development Kit to try out and see just how easy it can
https://www.storagereview.com/review/revisiting-the-nvidia-jetson-agx-orin-tiny-package-large-language-models
https://www.storagerevie…n-AGX-ORIN-6.jpg
2024-07-22T21:17:07Z
We revisit the Jetson AGX Orin and show how to get a ChatGPT-type LLM on the low-power device. Editor's note: We got the opportunity to sit down and re-dig into the NVIDIA Jetson platform with a new member of our team. Check out our article from last year where we ran a vision model on a final product version of Jetson, the Lenovo SE70. With NVIDIA's Jetson platform, developers are able to explore AI options specifically tailored for edge AI development. These systems allow for GPU-enabled server performance in a package you can hold in one hand. Huge thank you to NVIDIA for giving us the Jetson AGX Orin Development Kit to try out and see just how easy it can be to have your own local LLM. The Jetson AGX Orin DevKit comes in a small form factor design, only 11cm (about 4.3in) in width and length and 7.2cm (about 2.8in) tall. On the inside of the Jetson AGX Orin Dev Kit sits a 2048-core NVIDIA Ampere architecture GPU with 64 Tensor Cores and a max frequency of 1.3GHz. There is also a 12-core Arm Cortex-A78AE v8.2 64-bit CPU with 3MB L2 cache, 6MB L3 cache, and a max frequency of 2.20GHz. Those two power components, coupled with 64GB of LPDDR5 unified memory with speeds of 204.8GB/s, combine to create this small machine's most impressive feat: 275 TOPS in models with 64GB. That's 8.6 times the TOPS of NVIDIA's predecessor, the Jetson AGX Xavier, which only delivered 32 TOPS. Also under the hood are two M.2 slots: a PCIe Gen 4×4 Key M for any additional storage beyond the 64GB eMMC, and a Gen 4×1 Key E for wireless connections. Online connectivity isn't an issue though, with a 10 gigabit RJ45 connector. Plus there's a 40-pin header (for UART, SPI, I2S, I2C, CAN, PWM, DMIC, and GPIO), a 12-pin automation header, a 10-pin audio panel header, a 10-pin JTAG header, a 4-pin fan header, a 2-pin RTC battery backup connector, as well as a 16-lane MIPI CSI-2 connector for CSI cameras. There's no shortage of external connectivity either. There are six USB ports: two USB-A 3.2 Gen 2, two USB-A 3.2 Gen 1, and two USB-C 3.2 Gen 2 ports. Of those two USB-C ports, one can supply up to 20 Gbps speeds for flashing and data transfer, and the other is dedicated to the 60W power supply. If you require that extra USB-C port then there is an additional power supply hook-up through a DC power jack. However, the system is only shipped with its USB-C power supply. There is also a micro SD card slot for a quick storage option and a micro USB-B port to serve as a serial debug port. Hidden underneath a magnetic cover lies the external PCIe Gen 5×16 slot. In addition, the external PCIe slot supports up to a PCIe 4×8 connection. With no way to internally power a GPU, the slot is best suited for something like a high-speed NIC. For a dedicated display option, the Orin has a DisplayPort 1.4. Jetson AGX Xavier vs.
Jetson AGX Orin:
- Feature: Jetson AGX Xavier 64GB | Jetson AGX Orin 64GB Dev Kit
- AI Performance: 32 TOPS | 275 TOPS
- GPU: 512-core NVIDIA Volta GPU with 64 Tensor Cores | 2048-core NVIDIA Ampere GPU with 64 Tensor Cores
- GPU Max Frequency: Not specified | 1.3GHz
- CPU: 8-core NVIDIA Carmel Arm v8.2 64-bit CPU, 8MB L2 + 4MB L3 | 12-core Arm Cortex-A78AE v8.2 64-bit CPU, 3MB L2 + 6MB L3
- CPU Max Frequency: 2.2GHz | 2.2GHz
- DL Accelerator: 2x NVDLA v1 | Not specified
- DLA Max Frequency: 1.4GHz | Not specified
- Vision Accelerator: 2x PVA | 1x PVA v2
- Memory: 64GB LPDDR4x, 136.5GB/s | 64GB LPDDR5, 204.8GB/s
- Storage: 32GB eMMC 5.1, 64GB available in industrial version | Not specified
- Video Encode: 4x 4K60 (H.265), 8x 4K30 (H.265), 16x 1080p60 (H.265), 32x 1080p30 (H.265) | Not specified
- Video Decode: 2x 8K30 (H.265), 6x 4K60 (H.265), 12x 4K30 (H.265), 26x 1080p60 (H.265), 52x 1080p30 (H.265) | Not specified
- CSI Camera: Up to 6 cameras (36 via virtual channels), 16 lanes MIPI CSI-2, 8 lanes SLVS-EC, D-PHY 1.2 (up to 40 Gbps), C-PHY 1.1 (up to 62 Gbps) | Not specified
- PCIe: 1×8, 1×4, 1×2, 2×1 (PCIe Gen4, Root Port & Endpoint) | x16 PCIe slot supporting x8 PCIe Gen4, M.2 Key M slot with x4 PCIe Gen4, M.2 Key E slot with x1 PCIe Gen4
- USB: 3x USB 3.2 Gen2 (10 Gbps), 4x USB 2.0 | USB-C for power supply (15-60W), single USB-C for flashing and programming, Micro B for serial debug, 2x USB 3.2 Gen2 (USB Type-C), 2x USB 3.2 Gen2 (USB Type-A), 2x USB 3.2 Gen1 (USB Type-A), USB 2.0 (USB Micro-B)
- Networking: 1x GbE | RJ45 connector with up to 10 GbE
- Display: 3 multi-mode DP 1.4/eDP 1.4/HDMI 2.0 | 1x DisplayPort 1.4a (+MST) connector
- Other I/O: 5x UART, 3x SPI, 4x I2S, 8x I2C, 2x CAN, PWM, DMIC, GPIOs | 40-pin header (UART, SPI, I2S, I2C, CAN, PWM, DMIC, GPIO), 12-pin automation header, 10-pin audio panel header, 10-pin JTAG header, 4-pin fan header, 2-pin RTC battery backup connector, microSD slot, DC power jack, Power, Force Recovery, and Reset buttons
- Power: 10-30W | 15-60W (via USB-C)
AI Side/NVIDIA SDK Set-Up: Large Language Models (LLMs), such as the models behind ChatGPT or those served by Ollama, are AIs that have been trained on large quantities of data. In such a small footprint, it's hard to believe that you would be able to run a local, private AI model. Currently, we are seeing "AI PC" laptops popping up in the market from Intel, AMD, and Snapdragon with dedicated NPUs. Those devices, similar to the Jetson platform, run dedicated silicon on the die that has additional AI acceleration features. Conceptually, these components are designed to function similarly to our brain (hence the "neural" in NPU), and allow large amounts of data to be processed simultaneously. The inclusion of NPUs means that the CPU and GPU are freed up to process other tasks, leading to a far more efficient computer, both power- and processing-wise. However, the 40 TOPS produced by Intel's Lunar Lake, or AMD's 50 TOPS platform, is still not as great as the combined power of the Jetson Orin Devkit's GPU and CPU, which deliver an advertised 275 TOPS. There is more than enough power to have an AI locally in your office, or even in your house/homelab! Other components assisting with AI are the two NVDLA v2 Deep Learning (DL) accelerators, facilitating the speed at which the system is able to do AI processes, and a single Vision accelerator, which speeds up the rate at which computer vision is able to process images. Setting up the system to run AI is streamlined by NVIDIA's numerous guides. To get started, you must make sure that you flash your Jetson with Ubuntu, then follow these 6 steps: Step 1: Install NVIDIA SDK Manager. Full instructions and downloads will be available on the NVIDIA SDK site.
A free developer account is required for this process. Step 2: Open the NVIDIA SDK Manager installed on Ubuntu. Step 3: Development Environment. This step is to confirm that you have all of your ducks in a row. Confirm your product, system configurations, SDK version, and additional SDKs. For our setup, we used the Jetson AGX Orin Development Kit, Ubuntu 22.04, JetPack 6.0, and DeepStream 7.0. Step 4: Details and Licensing. This step serves as an installation screen, ensuring that all host components and target components are downloaded and installed. This is also the place to select the proper download location. The host system requires 15GB of storage and the target system requires 17GB of storage. Step 5: Setup Process. This step serves as a confirmation window to finish the setup. Here you will select the recovery mode, choosing whether it would be a manual or automatic forced recovery mode, automatic being for when you have already had the system flashed and running. From here, you can set up/confirm your IP address, add a username and password, choose your OEM configuration, and the target storage device. Once all of that is set, you'll be able to click the Flash option. Step 6: Summary Finalization. Finally, this step will run the system. After this, you will be able to run the code: jetson-containers run --name ollama $(autotag ollama). Running the first line of code will launch the Ollama LLM. Ollama is a popular platform that makes local setup and development of LLMs simple and easy, even being able to be set up inside or outside the container. It includes a built-in model library of pre-quantized weights, which will be automatically downloaded and run using llama.cpp behind the scenes as the inference engine. The Ollama container was compiled with CUDA support, making it perfect for use on the Jetson AGX Orin. Then, by running the code: docker run -it --rm --network=host --add-host=host.docker.internal:host-gateway ghcr.io/open-webui/open-webui:main you will be able to access the Open Web User Interface (OWUI) on the device IP or DNS address on port 8080, which will function as a chatbot. The OWUI serves as a plug-in to the API of the Ollama server, but can also use OpenAI's ChatGPT, Meta's Llama-3, or Microsoft's Phi-3 Mini as plug-ins. While on such a low power budget the time to first token for larger models is notably slow, the platform is still able to deliver acceptable performance once loaded. Conclusion: The Jetson AGX Orin Development Kit offers significant performance in a compact form factor. As AI PC solutions become increasingly relevant, the Jetson platform stands out, especially when considering the TOPS limitations of NPUs integrated into new CPU releases. The Jetson AGX Orin provides a robust stepping stone for developers, particularly those requiring ARM-native applications, aiding in model validation and refinement. While this is a development kit, its ease of use and ample power make it an excellent starting point for businesses embarking on their AI journey. The Jetson platform showcases the immense potential of small form factor AI solutions: elegantly designed, extremely power-efficient, and capable of delivering 275 TOPS of AI performance. This combination makes the Jetson platform comparable to much larger, rack-mounted AI servers. NVIDIA's comprehensive guides simplify the process of flashing and deploying a variety of AI models, with generative AI being just one piece of the puzzle.
For businesses ready to develop and deploy AI, the Jetson AGX Orin Development Kit offers a perfect blend of power efficiency, small footprint, and outstanding AI performance, making it an ideal choice for exploring and implementing AI technologies.Jetson AGX Orin Development KitEngage with StorageReviewNewsletter | YouTube | Podcast iTunes/Spotify | Instagram | Twitter | TikTok | RSS Feed
Content Creation/Content Synthesis/Process Automation
Unknown
null
null
null
null
null
null
news
Yan Ouyang, Xuewei Li, Wenjia Zhou, Wei Hong, Weitao Zheng, Feng Qi, Liming Peng
Integration of machine learning XGBoost and SHAP models for NBA game outcome prediction and quantitative analysis methodology
This study investigated the application of artificial intelligence in real-time prediction of professional basketball games, identifying the variations within performance indicators that are critical in determining the outcomes of the games. Utilizing games data from the NBA seasons 2021 to 2023 as the sample, the study constructed a real-time predictive model for NBA game outcomes, integrating the machine learning XGBoost and SHAP algorithms. The model simulated the prediction of game outcomes at different time of games and effectively quantified the analysis of key factors that influenced game outcomes. The study’s results demonstrated that the XGBoost algorithm was highly effective in predicting NBA game outcomes. Key performance indicators such as field goal percentage, defensive rebounds, and turnovers were consistently related to the outcomes at all times during the game. In the first half of the game, assists were a key indicator affecting the outcome of the game. In the second half of the games, offensive rebounds and three-point shooting percentage were key indicators affecting the outcome of the games. The performance of the real-time prediction model for NBA game outcomes, which integrates machine learning XGBoost and SHAP algorithms, is found to be excellent and highly interpretable. By quantifying the factors that determine victory, it is able to provide significant decision support for coaches in arranging tactical strategies on the court. Moreover, the study provides reliable data references for sports bettors, athletes, club managers, and sponsors.
https://journals.plos.org/plosone/article?id=10.1371/journal.pone.0307478
https://journals.plos.org/plosone/article/figure/image?id=10.1371/journal.pone.0307478.g009&size=inline
2024-07-23T14:00:00Z
AbstractThis study investigated the application of artificial intelligence in real-time prediction of professional basketball games, identifying the variations within performance indicators that are critical in determining the outcomes of the games. Utilizing games data from the NBA seasons 2021 to 2023 as the sample, the study constructed a real-time predictive model for NBA game outcomes, integrating the machine learning XGBoost and SHAP algorithms. The model simulated the prediction of game outcomes at different time of games and effectively quantified the analysis of key factors that influenced game outcomes. The studys results demonstrated that the XGBoost algorithm was highly effective in predicting NBA game outcomes. Key performance indicators such as field goal percentage, defensive rebounds, and turnovers were consistently related to the outcomes at all times during the game. In the first half of the game, assists were a key indicator affecting the outcome of the game. In the second half of the games, offensive rebounds and three-point shooting percentage were key indicators affecting the outcome of the games. The performance of the real-time prediction model for NBA game outcomes, which integrates machine learning XGBoost and SHAP algorithms, is found to be excellent and highly interpretable. By quantifying the factors that determine victory, it is able to provide significant decision support for coaches in arranging tactical strategies on the court. Moreover, the study provides reliable data references for sports bettors, athletes, club managers, and sponsors.Citation: Ouyang Y, Li X, Zhou W, Hong W, Zheng W, Qi F, et al. (2024) Integration of machine learning XGBoost and SHAP models for NBA game outcome prediction and quantitative analysis methodology. PLoS ONE 19(7): e0307478.https://doi.org/10.1371/journal.pone.0307478Editor: Haroldo V. Ribeiro, Universidade Estadual de Maringa, BRAZILReceived: April 8, 2024; Accepted: July 7, 2024; Published: July 23, 2024Copyright: © 2024 Ouyang et al. This is an open access article distributed under the terms of the Creative Commons Attribution License, which permits unrestricted use, distribution, and reproduction in any medium, provided the original author and source are credited.Data Availability: All the data can be found at https://www.basketball-reference.com/. For more detailed instructions, please see Data acquisition in the Data processing procedures and experimental results section.You can access the code using the following link: https://github.com/YanOuyang514/NBA-game-outcome-prediction-and-quantitative-analysis-methodology.git.Funding: This research was funded by the 14th Five-Year-Plan Advantageous and Characteristic Disciplines (Groups) of Colleges and Universities in Hubei Province (Grant Number: E Jiao Yan No. [2021] 5). The funder (Liming Peng) participated in the revision of the manuscript and in the decision to publish. Hubei Provincial Social Science Fund General Project Research on Personalized Recommendation of Online Sports Education Resources Based on Knowledge Graph (Grant Number: 2021330), and the Scientific and Technological Research Project of Hubei Provincial Education Department (Grant Number: B2021189). 
The funder (Xuewei Li) had participated in study design, data collection, and analysis, the decision to publish, as well as preparation of the manuscript.Competing interests: The authors have declared that no competing interests exist.1 IntroductionWith the rapid development of artificial intelligence and computer technology, data-driven sports training and athletic decision-making have now become a thriving focus of scholarly inquiry in competitive sports development. Scientific and accurate prediction of sports competitions and analysis of sports performance have also become increasingly important. Numerous studies have confirmed that machine learning techniques demonstrate significant potential in predicting game outcomes, analyzing sports performance, assessing physical fitness, and other related sports issues [19], making machine learning models a valuable tool for sports performance analysts.While machine learning methods are powerful due to the complexity of their models, they are still limited by the difficulty of direct interpretation, known as the so-called black box issue. Currently, simple models such as logistic regression and decision trees are widely applied due to their ease of understanding and strong interpretability. However, they often fall short in achieving high predictive accuracy. To overcome the issues of these black boxes, the Shapley Additive exPlanations (SHAP) method is utilized to interpret machine learning models and visualize individual variable predictions [10].This research is designed to construct a real-time model for predicting the winning team in basketball games by integrating machine learning algorithms and to perform a quantitative analysis of the key factors affecting the game outcomes. The innovation in our research methodology lies in the use of a grouping strategy to build real-time game prediction models, leveraging the SHAP algorithm to interpret the optimal model, and conducting a quantitative analysis of the winning factors at different time of basketball games. This provides significant decision support for coaches on-court dynamic decision-making and offers reliable data references for sports bettors, athletes, club managers, and sponsors.Following the introduction, Section 2 thoroughly reviews the existing machine learning research in the field of sports, with a particular focus on advancements in basketball, providing a solid foundation for this study. Subsequently, Section 3 details the designed solution and the essential key technologies required for its implementation. Section 4 elucidates the data processing procedures and presents the experimental results. Section 5 delves into the discussion and analysis of the key factors determining victory or defeat at different time of the game. Section 6 demonstrates the practical application value of the predictive model through case studies. Finally, Section 7 summarizes the research findings and considers the potential limitations of the study.2 Related studiesWith the rapid development of artificial intelligence technology, its application in assisting sports event analysis and decision-making has garnered increasing attention, especially in ball games such as basketball, soccer, and American football. The feasibility and benefits of using various machine learning algorithms to predict game outcomes have been extensively explored in numerous studies [1119]. 
For instance, Rodrigues and Pinto [11] (2022) employed machine learning algorithms such as Naive Bayes, K-Nearest Neighbors (KNN), Random Forest, Support Vector Machines (SVM), and Artificial Neural Networks (ANN) to analyze game data from five seasons of the English Premier League in order to construct a model for predicting game outcomes. By comparing the prediction accuracy of various models, they selected the one with the most superior performance and successfully validated the models economic benefits using bookmaking data. Ötting [12] (2021) utilized Hidden Markov Models for predicting National Football League games, employing the 2018 National Football League season data as a sample, achieving an accuracy rate of 71.6%. Yaseen et al. [13] (2022) employed Logistic Regression and SVM algorithms to analyze Major League Baseball (MLB) game data spanning the past 20 years, aiming to predict which teams would make it to the playoffs in 2019, with an accuracy rate of 77%.Çene [20] (2022) explored the performance of seven different machine learning algorithms in predicting European league games from the 20162017 to the 20202021 seasons. The findings indicated that logistic regression, SVM, and ANN were the most effective models, with an overall accuracy rate of approximately 84%.Cai et al. [21] (2019) constructed a hybrid ensemble learning framework based on the SVM using data from the Chinese Basketball Association (CBA) teams, achieving a prediction accuracy rate of 84%. Kaur and Jain [22] (2017) developed a hybrid fuzzy-SVM model (HFSVM) using regular season game data from 20152016 to predict NBA game outcomes. Comparative analysis revealed that this model outperformed the standard SVM model, with a prediction accuracy rate of 88.26%.Moreover, Pai et al. [23] (2017) employed a hybrid model that combines SVM algorithms with decision tree algorithms to predict NBA games. Their analysis revealed that the hybrid model leverages the unique strengths of SVM and decision trees in generating rules and predicting game outcomes, providing insights to assist coaches in devising strategies. Huang and Lin [24] (2020) and Shi and Song [25] (2021) utilized regression trees and finite state Markov chain models, respectively, to predict NBA games and, based on NBA season data, validated the models positive economic benefits in the betting market.Zhao et al. [26] (2023) applied Graph Neural Networks (GNN) to predict basketball game outcomes by transforming structured data into unstructured graphs to reveal the complex passing interactions among players. By integrating machine learning algorithms with Graph Convolutional Network (GCN) models, they achieved a prediction accuracy rate of 71.54%, offering a novel perspective on team interactions.Osken and Onay [27] (2022) utilized K-means and C-means clustering analysis to identify style types of NBA basketball players and constructed an ANN to predict NBA game outcomes based on player style types and game data. During the 20122018 NBA seasons, this method achieved a prediction accuracy rate of 76%.In addition to predicting the outcomes of games, many scholars have applied logistic regression and decision tree algorithms to study the factors contributing to success in basketball. For instance, Leicht et al. [28] (2017) constructed logistic regression and Conditional Inference (CI) decision tree algorithms using data from mens Olympic games between 2004 and 2016 to predict game outcomes. 
They found that the logistic regression prediction model had a higher accuracy rate than the CI decision tree model. However, the CI decision tree offered greater practicality for coaches by addressing non-linear phenomena. The combination of points per game, defensive rebounds, turnovers, and steals was found to effectively explain the results of games, providing important guidance for the formulation of training and game strategies, thereby increasing the probability of success in men's Olympic basketball competitions.

Thabtah et al. [29] (2019) utilized Naive Bayes, ANN and Decision tree algorithms to model the historical data of NBA Finals from 1980 to 2017 in the Kaggle dataset. They discovered that the most effective method was a decision tree based on logistic regression, achieving an accuracy rate of 83%. Through feature selection analysis, it was found that defensive rebounds (DRB) was the most significant feature affecting NBA game outcomes. Additionally, three-point percentage (TPP), free throw percentage (FT), field goal percentage (FGP), and total rebounds (TRB) were identified as important factors.

3 Applied algorithms and methods

Building on the insights from related studies, our research introduces an innovative real-time prediction method for NBA game outcomes. The research flowchart is shown in Fig 1. The method integrates machine learning XGBoost [30] with SHAP [10] models. Considering the practical significance of the prediction model, we designed two different approaches during the model construction process: a real-time prediction model based on the technical performance indicators from the first two quarters and the first three quarters of the game, and a post-game prediction model constructed based on the full-game technical performance indicators.

To optimize prediction accuracy, this study employed methods such as Bayesian optimization and grid search to fine-tune the hyperparameters of seven mainstream machine learning algorithm models, including KNN, LightGBM, SVM, Random Forest, Logistic Regression, and Decision Tree. The performance of various metrics such as accuracy, precision, recall, F1 Score, and AUC value was compared in a ten-fold cross-validation experiment, demonstrating the superiority of the XGBoost algorithm.

Furthermore, the SHAP algorithm was introduced to enhance the interpretability of the XGBoost model's prediction process, quantifying the relationship between game technical performance indicators and the outcomes of games. As an example, the XGBoost model's application in predicting the outcomes of NBA games, analyzing the key factors for victory, and devising targeted training strategies was demonstrated using Game 2 of the 2023 NBA Finals.

3.1 XGBoost algorithm

XGBoost, short for eXtreme Gradient Boosting, is an ensemble learning algorithm based on gradient boosting decision trees. Its core concept involves using decision trees as weak learners, iteratively fitting the residuals of the predictions from the previous iteration, ultimately forming a strong learner through weighted aggregation [30]. The basic formula for the model is presented in Eq (1):

$\hat{y}_i = \sum_{k=1}^{K} f_k(x_i)$ (1)

In Eq (1), K represents the total number of trees, and $f_k(x)$ denotes a specific decision tree. Each tree contributes to the final prediction by fitting the residuals of the previous iteration.
The predicted output result after the t-th iteration is represented by Eq (2):

$\hat{y}_i^{(t)} = \hat{y}_i^{(t-1)} + f_t(x_i)$ (2)

Therefore, the objective function of XGBoost, which aims to minimize the difference between predicted and actual values, is represented by Eq (3):

$Obj = \sum_{i=1}^{n} l(y_i, \hat{y}_i) + \sum_{k=1}^{K} \Omega(f_k)$ (3)

The objective function consists of two parts: the loss function and the regularization term $\Omega(f_k)$. Here, $y_i$ represents the actual outcome of the game, and $\hat{y}_i$ is the predicted outcome from the model. The regularization term $\Omega(f_k)$ helps to control the complexity of the model, preventing overfitting.

According to the forward stage-wise training process, the structure of the first t-1 trees is constant. Therefore, the objective function after t iterations can be rewritten as shown in Eq (4):

$Obj^{(t)} = \sum_{i=1}^{n} l\big(y_i, \hat{y}_i^{(t-1)} + f_t(x_i)\big) + \Omega(f_t) + \text{constant}$ (4)

To construct the optimal model by minimizing the objective function, a second-order Taylor expansion is used to approximate the loss function, leading to the updated objective function in Eq (5):

$Obj^{(t)} \simeq \sum_{i=1}^{n} \big[g_i f_t(x_i) + \tfrac{1}{2} h_i f_t^{2}(x_i)\big] + \Omega(f_t)$ (5)

Here, $g_i$ and $h_i$ represent the first and second-order derivatives of the loss function, respectively. The complexity of the tree model, denoted by $\Omega(f_t)$, is represented in Eq (6):

$\Omega(f_t) = \gamma T + \tfrac{1}{2}\lambda \sum_{j=1}^{T} w_j^{2}$ (6)

where T denotes the number of leaf nodes, $\gamma$ indicates the penalty for the number of leaf nodes, and $\lambda$ represents the L2 regularization term. The optimal weight $w_j^{*} = -G_j/(H_j + \lambda)$ for each leaf node j can be derived (with $G_j$ and $H_j$ the sums of $g_i$ and $h_i$ over the samples falling into leaf j), and thus the final optimized objective function is represented by Eq (7):

$Obj^{*} = -\tfrac{1}{2}\sum_{j=1}^{T} \frac{G_j^{2}}{H_j + \lambda} + \gamma T$ (7)

By iterating this process, XGBoost constructs an ensemble of decision trees that effectively minimizes the prediction error while maintaining model interpretability and complexity control.

3.2 SHAP algorithm

Machine learning algorithms have demonstrated excellent performance in predicting the outcomes of NBA games, but they also face the issue of interpretability. We often cannot comprehend the decision-making process of machine learning algorithms, which is commonly referred to as the black box model. In 2017, Lundberg and Lee [10] proposed an additive SHAP interpretation algorithm, inspired by game theory. The purpose of this algorithm is to calculate the Shap Values for each feature, reflecting the contribution of different features to the model. This allows for the interpretation of the prediction mechanism of complex machine learning algorithm models from both local and global perspectives [10,31]. The framework of the SHAP algorithm is shown in Fig 2.

SHAP interprets the predictions of a machine learning model as the sum of Shap Values for each input feature, as seen in Eq (8):

$\hat{y}_i = f_0 + \sum_{j} f(x_{ij})$ (8)

In the equation, $\hat{y}_i$ represents the predicted value of the model. $f_0$ is the average predicted value over all training data of the model, which is referred to as the base value. $f(x_{ij})$ is the SHAP value for each feature of each sample, which reflects the positive or negative influence of each feature on the model's predicted outcome.

Feature importance as a traditional method calculates the significance of each feature by altering one feature variable while holding all other feature variables constant. However, this method fails to reflect the relationship between features and the model's predictive outcomes. In contrast, SHAP is capable of calculating the contribution values of features and visualizing the results, which allows for a direct understanding of the positive or negative impact of input features on the predicted outcomes of games.
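To make the two preceding subsections concrete, the sketch below shows how a gradient-boosted tree classifier and its SHAP values might be computed with the xgboost and shap Python packages. This is only an illustration of the general workflow, not the authors' code; the feature matrix X stands in for the prepared game indicators described in the next section, and the random data is a placeholder.

```python
# Illustrative sketch: fit a gradient-boosted tree classifier and explain it with SHAP.
# X is a (games x indicators) matrix and y the home-win label (1 = win, 0 = loss).
import numpy as np
import shap
from xgboost import XGBClassifier

rng = np.random.default_rng(0)
X = rng.normal(size=(500, 11))   # placeholder for the 11 technical indicators
y = (X[:, 0] + 0.5 * X[:, 1] + rng.normal(size=500) > 0).astype(int)

model = XGBClassifier(
    n_estimators=200,   # K, the number of trees in Eq (1)
    max_depth=4,
    learning_rate=0.1,
    reg_lambda=1.0,     # the lambda (L2) term of Eq (6)
    gamma=0.0,          # the per-leaf penalty gamma of Eq (6)
)
model.fit(X, y)

# TreeExplainer produces the additive decomposition of Eq (8):
# prediction = base value + sum of per-feature SHAP values.
explainer = shap.TreeExplainer(model)
shap_values = explainer.shap_values(X)
print(shap_values.shape)   # (n_games, n_indicators)
```

Summaries of these per-feature contributions (for example, SHAP summary plots) are what the importance rankings reported later in the paper are built on.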
This, in turn, increases the explainability of the predictive model.4 Data processing procedures and experimental results4.1 Data processing procedures4.1.1 Data acquisition.This study utilized a Python web scraper to collect relevant data on over 3,800 NBA games from the 20202021 season to the 20222023 season, across three seasons, from the Basketball Reference website (https://www.basketball-reference.com/). After removing outliers, including data points from preseason games, All-Star games, and invalid erroneous entries, a final set of 3,710 valid data points was obtained. The data includes various indicators such as basic game information, team-specific technical statistics for individual quarters, and team-wide technical statistics for the entire game. The study does not involve human or animal subjects and does not include any personal or sensitive information. Therefore, approval from an Institutional Review Board (IRB) or ethics committee was not required [32].4.1.2 Reliability and validity of data.To validate the effectiveness of the data, a subset of 15 games (with at least five games from each season) was randomly selected. Two experienced basketball players (first-class athletes from China) reviewed the game footage and compared it with the data collected from the website. The results demonstrate a high level of data reliability, with an intraclass correlation coefficient (ICC) of 0.98.4.1.3 Data preparation.The game outcome prediction problem was transformed into a binary classification issue. The target value result represented the binary classification label for the home teams win or loss, with a home team win/loss converted to the numerical values 1/0, respectively. Additionally, the data from the first two quarters and the first three quarters of the game were summed up to create new feature variables, with the prefixes H2, H3, and game added to distinguish them. Definitions of these variables can be found in Table 1. Leveraging a thorough understanding of the sport of basketball and the specifics of game outcome prediction, and considering the characteristics of the data indicators, an approach was adopted where features representing identical technical indicators for both the home and away teams are subtracted from each other. This method helped to mitigate situations where the technical statistics of the home and away teams were closely matched, reducing the interference of redundant information. Concurrently, it served to lower the dimensionality of the data, thereby enhancing the performance and efficiency of the predictive model.Exploratory analysis was conducted by plotting a heatmap of feature correlations. The color of the heatmap indicates the correlation between two features: darker colors represent stronger positive correlations, while lighter colors represent stronger negative correlations. The numerical values represent the correlation coefficients between corresponding features. The asterisks reflect the significance levels of the correlation coefficients: no asterisk denotes p > 0.05, one asterisk denotes 0.01 < p < 0.05, two asterisks denote 0.001 < p < 0.01, and three asterisks denote p < 0.001. Taking the full-game technical statistics as an example, the correlation heatmap is shown in Fig 3.Fig 3. Heatmap of NBA game technical statistics data.The color of the heatmap indicates the correlation between two features: darker colors represent stronger positive correlations, while lighter colors represent stronger negative correlations. 
The numerical values represent the correlation coefficients between corresponding features. The asterisks reflect the significance levels of the correlation coefficients: no asterisk denotes p > 0.05, one asterisk denotes 0.01 < p < 0.05, two asterisks denote 0.001 < p < 0.01, and three asterisks denote p < 0.001.https://doi.org/10.1371/journal.pone.0307478.g003We observed that there are strong correlations (correlation coefficients 0.8) among variables such as game_FG and game_FG%, game_3PA and game_3P%, game_FTA and game_FT, and game_TRB and game_DRB, all with p-values less than 0.001. Consequently, we conducted an exploratory analysis of the full-game technical statistics dataset and plotted the corresponding scatter plots.Analysis of the scatter plots (Fig 4) shows that the number of attempts in two-point shots, three-point shots, and free throws is linearly and positively correlated with the number of successful shots. Additionally, the shooting accuracy improves with an increase in the number of successful shots. Furthermore, there is a linear positive correlation between defensive rebounds and total rebounds, while offensive rebounds tend to decrease as defensive rebounds increase.To avoid the high correlation between feature variables affecting the models predictive performance, feature selection was conducted using a heatmap of feature correlations. The total rebounds were calculated as the sum of offensive and defensive rebounds, and the field goal percentage was determined as the ratio of field goal attempts to field goals made. Since the field goal percentage more accurately reflects the teams strength and condition, features such as field goal makes, field goal attempts, two-pointer makes, two-pointer attempts, three-pointer makes, three-pointer attempts, free throw makes, free throw attempts, and total rebounds were removed. Additionally, irrelevant features from the basic game information were also eliminated, constructing a new set of features for the sample dataset. The describe function was used to perform descriptive statistics on the NBA sample data, and the analysis results are listed in Table 2 below.To ensure the robustness of our predictive model and to enhance the reliability and interpretability of our research findings, we employed logistic regression to test the significance of key performance Variables. Tables 35 present the logistic regression analysis results for different periods of the game.From the logistic regression results, it is evident that field goal percentage, two-point field goal percentage, three-point field goal percentage, free throw percentage, offensive rebounds, defensive rebounds, assists, personal fouls, and turnovers significantly impact the game outcome across different game periods. In contrast, blocks and steals do not significantly affect the game outcome. 
Additionally, personal fouls and turnovers have a negative impact on the game outcome. Given the crucial role of blocks and steals in influencing game results and their confirmed importance in previous research, we decided to retain all 11 indicators [28, 33–35], including blocks and steals, across the three distinct time period datasets.

4.2 Model training and experimental results

To enhance the practical significance and application value of the predictive model, this study grouped and merged the NBA game dataset according to the duration of the games, constructing datasets for the first two quarters, the first three quarters, and the full game to make real-time predictions of game outcomes at the end of the second and third quarters. An NBA game outcome prediction model was built based on the XGBoost algorithm. Hyperparameter tuning was conducted for the XGBoost algorithm and six other mainstream machine learning algorithms (KNN, LightGBM, SVM, Random Forest, Logistic Regression, and Decision Tree) using methods such as Bayesian optimization and grid search. The optimal predictive model architecture was obtained through a ten-fold cross-validation comparative experiment, combined with evaluation metrics. Finally, the SHAP algorithm was introduced for interpretability analysis of the best model, to uncover the key factors that determine the outcome of the games.

This study employs five classification performance metrics (AUC, F1 Score, accuracy, precision, and recall) to evaluate the quality of the NBA game outcome prediction model. When a model demonstrates good performance across these metrics in comparative experiments, it is considered to have superior predictive capabilities. Accuracy, precision, and recall respectively reflect the performance of a predictive model in being right, being precise, and being comprehensive. The F1 Score combines precision and recall, reflecting the robustness of the model. The probabilistic meaning of the AUC value is the probability that, when randomly selecting a pair of positive and negative samples, the positive sample will have a higher score than the negative sample. The AUC value ranges from 0 to 1, with a higher AUC indicating a higher predictive value of the model.

The classification prediction results for game outcomes are presented in the confusion matrix as shown in Table 6:
1. Games where the actual result was a home team win and were predicted as a home team win are True Positives (TP);
2. Games where the actual result was a home team loss but were predicted as a home team win are False Positives (FP);
3. Games where the actual result was a home team loss and were predicted as a home team loss are True Negatives (TN);
4. Games where the actual result was a home team win but were predicted as a home team loss are False Negatives (FN).

Based on the confusion matrix, the accuracy, precision, recall, and F1 Score can be calculated using the following equations:

$\text{Accuracy} = \frac{TP + TN}{TP + TN + FP + FN}$ (9)

$\text{Precision} = \frac{TP}{TP + FP}$ (10)

$\text{Recall} = \frac{TP}{TP + FN}$ (11)

$\text{F1 Score} = \frac{2 \times \text{Precision} \times \text{Recall}}{\text{Precision} + \text{Recall}}$ (12)

4.2.1 Ten-fold cross-validation comparative experiment.

After hyperparameter tuning using Bayesian optimization and grid search, the evaluation metric results of the predictive models for different time periods by various algorithms in the ten-fold cross-validation comparative experiments are listed in Tables 7–9 and shown in Fig 5: The XGBoost algorithm exhibits optimal performance in predicting the outcomes of NBA games.
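As an illustration of the evaluation protocol just described (ten-fold cross-validation scored with accuracy, precision, recall, F1 and AUC), the sketch below shows how such a comparison might be run with scikit-learn. It is not the authors' code: the feature matrix X and label vector y are placeholders standing in for the prepared game indicators and home-win labels.

```python
# Illustrative sketch: ten-fold cross-validation of an XGBoost classifier,
# scored with the five metrics used in the study (Eqs (9)-(12) plus AUC).
import numpy as np
from sklearn.model_selection import StratifiedKFold, cross_validate
from xgboost import XGBClassifier

rng = np.random.default_rng(0)
X = rng.normal(size=(500, 11))                        # placeholder feature matrix
y = (X[:, 0] + rng.normal(size=500) > 0).astype(int)  # placeholder home-win labels

model = XGBClassifier(n_estimators=200, max_depth=4, learning_rate=0.1)
cv = StratifiedKFold(n_splits=10, shuffle=True, random_state=42)

scores = cross_validate(
    model, X, y, cv=cv,
    scoring=["accuracy", "precision", "recall", "f1", "roc_auc"],
)
for metric in ["accuracy", "precision", "recall", "f1", "roc_auc"]:
    print(metric, round(scores[f"test_{metric}"].mean(), 3))
```

The same loop can be repeated for the other candidate algorithms to reproduce the kind of comparison summarized in Tables 7–9.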
In terms of the AUC and F1 Score metrics, the XGBoost algorithm performed excellently, consistently ranking in the top 2 across the ten-fold cross-validation comparative experiments for the three different time periods. Regarding the accuracy and precision metrics, the XGBoost algorithm consistently showed the best performance. However, in terms of recall, the XGBoost algorithm ranked 4th, 3rd, and 1st in the comparative experiments for the three different time periods, with recall values of 0.775, 0.807, and 0.939, respectively.Fig 5. Comparative chart of performance evaluation metrics for NBA game outcome prediction models.(a) first two quarters period, (b) first three quarters period, (c) full game period. SVM = Support Vector Machines; KNN = K-Nearest Neighbors.https://doi.org/10.1371/journal.pone.0307478.g005In summary, across the ten-fold cross-validation experiments for three different periods (first two quarters, first three quarters, and the full game), the XGBoost algorithm outperformed other mainstream machine learning algorithms in the comprehensive evaluation of five performance metrics, demonstrating the most ideal predictive effect. It effectively captures the complex nonlinear relationship between NBA game technical indicators and game outcomes. The XGBoost algorithm model optimizes the objective function through a second-order Taylor expansion, enhancing computational accuracy and iterative efficiency. Additionally, regularization terms are added to the loss function to control the models complexity and prevent overfitting.Overall, the LightGBM algorithm, an ensemble learning method, showed the closest performance to the XGBoost algorithm, reflecting the superiority of ensemble algorithms to some extent. Following these, the Logistic Regression algorithm performed slightly behind the LightGBM and XGBoost algorithms, which may be attributed to its sensitivity to multicollinearity among independent variables. The Decision Tree algorithm lagged behind the other machine learning algorithms in all metrics, possibly due to its difficulty in handling data with strong feature correlations. The research results indicate that, based on data preprocessing and feature extraction, the model presented in this study can better predict the outcomes of NBA games in real-time compared to existing studies.4.2.2 Analysis of factors influencing game outcomes at different time.SHAP provides powerful and diverse data visualization charts to demonstrate the interpretability of the model. Based on the XGBoost real-time game outcome prediction model discussed earlier, SHAP quantifies and ranks the importance of features that influence the outcome of the games, as listed in Table 10. The SHAP feature summary Fig 6 illustrates that field goal percentage, defensive rebounds, and turnovers consistently rank in the top four for SHAP importance across different time of the game. Additionally, assists rank 3rd in importance during the first two quarters, but drop to 8th and 9th during the first three quarters and the full game, respectively. Offensive rebounds and three-point shooting percentages rank 9th and 8th in the first two quarters, respectively, and then rise to 5th and 3rd during the first three quarters and full game. Blocks, on the other hand, consistently rank 11th in SHAP importance across all time of the game.5 DiscussionTeam technical statistics are one of the essential tools for analyzing games. 
Delving deeply and exploring technical statistics from multiple angles can help us understand the current state of play and the offensive and defensive technical characteristics and applications of our opponents [36]. This study found that field goal percentage, defensive rebounds, and turnovers are key factors influencing the outcome of games across different periods.The results indicated that Field goal percentage consistently holds the top spot in importance across three distinct periods within the game, corroborating prior studies that have emphasized the critical role of Field goal percentage in clinching victories in basketball contests [3740]. Previously, Gómez et al. [41] (2008) and Ibáñez et al. [42] (2019) have indicated that the essence of basketball is striving to score more points than the opponent. A higher Field goal percentage can lead to a greater scoring advantage over the adversary, thus increasing the likelihood of winning the games and competitions.Defensive rebounds have been proven to be a key winning factor in high-level basketball games [29, 3739, 43]. Specifically, a defensive rebound occurs when a defensive player secures the basketball after an unsuccessful shot by the offensive team, thereby gaining possession. Defensive rebounds reflect the teams overall defensive rotation and coordination, the execution of defensive tactics, as well as individual players one-on-one defensive abilities and attitudes. Sampaio et al. [44] (2010) have shown that defensive players should apply aggressive defense against their assigned opponents, forcing them into turnovers or contested shots, and actively fight for defensive rebounds. The protection of defensive rebounds is closely related to the rapid initiation of fast breaks; the direct purpose of contesting defensive rebounds is to transition from defense to offense, creating counterattacks and thus gaining opportunities to organize offensive plays and score. Additionally, contesting for defensive rebounds can limit the opponents second-chance points and increase ones own offensive opportunities.High-level basketball games are characterized by intense competition, and turnovers are an inevitable possibility. However, in basketball, turnovers signify a change of possession and a loss of offensive opportunities, directly impacting the games scoring and the teams morale and confidence, thereby reducing the probability of winning the game [45]. A research by Leicht et al. [28] (2017) indicates that in World Cup competitions, strong teams may have certain advantages in terms of physical fitness and tactical skills. However, frequent turnovers by players during a particular stage can create psychological pres
Prediction/Decision Making/Recommendation
Arts, Design, Entertainment, Sports, and Media/Business and Financial Operations
null
null
null
null
null
null
news
null
Sharing is caring: How to make the most of your GPUs (part 1 - time-slicing)
As artificial intelligence (AI) applications continue to advance, organizations often face a common dilemma: a limited supply of powerful graphics processing unit (GPU) resources, coupled with an increasing demand for their utilization. In this article, we'll explore various strategies for optimizing GPU utilization via oversubscription across workloads in Red Hat OpenShift AI clusters. OpenShift AI is an integrated MLOps platform for building, training, deploying and monitoring predictive and generative AI (GenAI) models at scale across hybrid cloud environments.GPU oversubscription is like "
https://www.redhat.com/en/blog/sharing-caring-how-make-most-your-gpus-part-1-time-slicing
https://www.redhat.com/t…social-share.jpg
2024-07-02T00:00:00Z
As artificial intelligence (AI) applications continue to advance, organizations often face a common dilemma: a limited supply of powerful graphics processing unit (GPU) resources, coupled with an increasing demand for their utilization. In this article, we'll explore various strategies for optimizing GPU utilization via oversubscription across workloads in Red Hat OpenShift AI clusters. OpenShift AI is an integrated MLOps platform for building, training, deploying and monitoring predictive and generative AI (GenAI) models at scale across hybrid cloud environments.

GPU oversubscription is like "carpooling" for your GPU: you're getting more people (processes) into the same car (GPU) to use it more efficiently. This approach helps you get more throughput, keeping the overall system latency under specific service level agreements (SLAs), and reducing the time the resources sit unused. Of course, there can be some traffic jams (too many processes racing for resources), but with the right strategies, and an understanding of your workloads, you can keep the systems performing consistently.

This is a series of articles where we will review the different strategies supported by the NVIDIA GPU operator to oversubscribe the available GPU resources. These strategies are tested in the context of the inference service distributed as part of the latest version of OpenShift AI, Text Generation Inference Service (TGIS).

The three mainstream strategies supported by NVIDIA's GPU Operator to oversubscribe GPUs are:

- Time-slicing: allowing multiple workloads to share GPUs by alternating execution time
- Multi-instance GPU (MIG) partitioning: dividing GPUs into isolated and static instances for concurrent usage by different applications
- Multi-Process Service (MPS): optimizing the execution of parallel GPU workloads by enabling concurrent kernel execution

There is a set of assumptions for the development of this article:

- The experiments and configurations are applied to an OpenShift 4.15 cluster
- The GPU used for the experiments is an NVIDIA A100 40GB PCIe
- The software stack deployed to run the experiments is Red Hat OpenShift AI 2.9 with the latest version of the NVIDIA GPU operator
- Red Hat OpenShift AI v2.X is used to serve models from the flan-t5 LLM family

In this first article we will look at time-slicing: how it is configured, how the models behave when doing inference with time-slicing enabled, and when you might want to use it.

Time-slicing

The simplest approach for sharing an entire GPU is time-slicing, which is akin to giving each process a turn at using the GPU, with every process scheduled to use the GPU in a round-robin fashion. This method provides access for those slices, but there is no control over how many resources a process can request, leading to potential out-of-memory issues if we don't control or understand the workloads involved.

Configuring the NVIDIA GPU operator

The NVIDIA GPU operator can be configured to use the Kubernetes device plugin to manage GPU resources efficiently within the cluster. The NVIDIA GPU operator streamlines the deployment and management of GPU workloads by automating the setup of the necessary drivers and runtime components. With the Kubernetes device plugin, the operator integrates with Kubernetes resource management capabilities, allowing for dynamic allocation and deallocation of GPU resources as needed by the workloads.

The Kubernetes device plugin is the interface used to apply the configuration changes in the nodes containing GPUs.
When configuring the NVIDIA GPU operator, the device plugin is responsible for advertising the availability of GPU resources to the Kubernetes API, making sure that these resources can be requested by pods and assigned accordingly. These changes can be applied per node.

Configuring time-slicing

The following custom resource (CR) example defines how we will be sharing the GPU in a config map (this won't have any effect on the cluster at the moment). In the CR we specify the sharing strategy for a specific key (this key is the GPU model, NVIDIA-A100-PCIE-40GB) and we allocate seven replicas for that resource.

cat << EOF | oc apply -f -
---
apiVersion: v1
kind: ConfigMap
metadata:
  name: time-slicing-config
  namespace: nvidia-gpu-operator
data:
  NVIDIA-A100-PCIE-40GB: |-
    version: v1
    sharing:
      timeSlicing:
        resources:
          - name: nvidia.com/gpu
            replicas: 7
EOF

With the resource created, we need to patch the initial ClusterPolicy from the GPU operator called gpu-cluster-policy. The changes need to be applied to the devicePlugin section.

oc patch clusterpolicy \
  gpu-cluster-policy \
  -n nvidia-gpu-operator \
  --type merge \
  -p '{"spec": {"devicePlugin": {"config": {"name": "time-slicing-config"}}}}'

To make sure the resources are configured correctly, we label a specific node stating that the device-plugin.config should point to the configuration we created in the previous steps. This also means that the configuration can be applied on a per-node basis.

oc label \
  --overwrite node this-is-your-host-name.example.com \
  nvidia.com/device-plugin.config=NVIDIA-A100-PCIE-40GB

After a few minutes, we can see that the GPU operator reconfigured the node to use time-slicing. We can verify that by running:

oc get node \
  --selector=nvidia.com/gpu.product=NVIDIA-A100-PCIE-40GB \
  -o json | jq '.items[0].status.capacity'

The output should look like:

{
  "cpu": "128",
  "ephemeral-storage": "3123565732Ki",
  "hugepages-1Gi": "0",
  "hugepages-2Mi": "0",
  "memory": "527845520Ki",
  "nvidia.com/gpu": "7",
  "pods": "250"
}

That means we have seven slices of the GPU ready to be used, identified as nvidia.com/gpu: 7.

Performance evaluation of time-slicing

Now that we have configured time-slicing, let's compare the performance of an inference workload when the GPU is used by only one replica (not shared) and when we allocate the GPU to multiple replicas of the same inference service (shared with time-slicing).

What is llm-load-test?

Red Hat's Performance and Scale (PSAP) team created llm-load-test, a tool for benchmarking the performance of large language models (LLMs). Reproducibility is critical when benchmarking, and llm-load-test helps users evaluate performance, enabling better consistency and reliability for LLMs across different environments. By providing a structured framework for performance testing, llm-load-test enables users to understand how their models behave under various loads, helping to identify potential bottlenecks and areas for optimization.

Configuring llm-load-test

In this experiment we will be measuring the throughput and latency of an LLM as the number of parallel queries increases. We will query an OpenShift AI inference service endpoint (TGIS standalone), where an LLM from the flan-t5 family (flan-t5-base) was loaded.
Once OpenShift AI has been installed, and the inference service is up and running, we should get a valid URL where we can ask the model for inference.

The first step is to download the latest version of llm-load-test:

git clone https://github.com/openshift-psap/llm-load-test.git
cd llm-load-test

Once in the root folder of the project we need to adjust the configuration YAML file. The following is an abstract of the example configuration file (config.yaml) with the parameters that will be modified for this article.

dataset:
  file: "datasets/openorca_large_subset_011.jsonl"
  max_queries: 1000
  min_input_tokens: 0
  max_input_tokens: 1024
  max_output_tokens: 256
  max_sequence_tokens: 1024
load_options:
  type: constant # Future options: loadgen, stair-step
  concurrency: 1
  duration: 600
plugin: "tgis_grpc_plugin"
# grpc/http
use_tls: True
streaming: True
model_name: "flan-t5-base"
host: "route.to.host"
port: 443

In the case of these experiments, we use a mix of:

- Concurrency: 1, 2, 4, 8, 16, 32 and 64 virtual users. This is the number of parallel queries llm-load-test will run against the API endpoint
- Duration: 100 seconds. This is how long llm-load-test will run
- Use tls: True. This is to make sure that we will query a TLS endpoint
- Max sequence tokens: 480. This is the total maximum number of tokens from both the input and the output when querying the endpoint
- Max input tokens: 200. This is the maximum number of tokens in the input when querying the endpoint

Now we run llm-load-test to get the benchmark results from the endpoint:

python3 load_test.py -c my_custom_config.yaml

Once the tests finish, the output should look like:

{
  "results": [ ... ],
  "config": {
    "load_options": {
      "concurrency": <value>
    }
  },
  "summary": {
    "tpot": {
      "percentile_95": <value>
    },
    "throughput": <value>
  }
}

We will be focusing on the throughput, time-per-output-token (TPOT, percentile 95), and concurrency values for each test to show how flan-t5-base behaves.

Evaluating large language model results from llm-load-test

To describe the results we will showcase throughput (x-axis) with respect to the time-per-output-token latency (y-axis), over the different numbers of virtual users used in llm-load-test to query the endpoints.

Figure 1. Unshared GPU - flan-t5-base throughput over time-per-output-token latency.

Figure 1 shows that the inference service tops out at 16 virtual users; beyond that, the TPOT latency spikes to values over the boundary SLA of 55 milliseconds per token (this 55 ms boundary ensures that the interaction feels immediate and natural, closely mimicking human conversation; higher values might disrupt the flow of interaction, leading to frustration and a poor user experience). The maximum throughput value is ~980 tokens per second.

Figure 2. GPU shared with time-slicing - flan-t5-base throughput over time-per-output-token latency with different inference service replicas.

Now let's introduce the time-slicing results. Figure 2 shows different configurations of virtual users, where for 1 and 2 replicas the inference service latency spikes when the number of virtual users is over 16, although the throughput for 2 replicas is ~2000 tokens per second, which is almost twice as much as for 1 replica. For the experiments with 4 and 8 replicas of the inference service, we were able to process 32 and 64 virtual users without the previously described spike in the latency values. There is also an increase in the throughput, but it is not as dramatic as when we compared 1 and 2 replicas of the same inference service.
With 8 replicas and 64 virtual users, the load per replica should be ~8 virtual users. This pushes the maximum throughput over 2000 tokens per second without hitting any out-of-memory issues, while operating under the SLA of 55 milliseconds.

This small test demonstrates that when sizing the infrastructure to run specific workloads, it is crucial to understand the resources they need to run, so you can configure and decide the best strategy to maximize the performance of any application running on the cluster.

Conclusion

When should you use time-slicing as an effective policy for oversubscribing the GPUs?

- When you need to deploy several small models
- When you know how many resources the models will use
- When the workloads are controlled
- When you need a simple configuration to start allocating workloads
- When you're using it for development, testing and staging environments
- When you're using it for workloads without strict latency requirements

In the second part of this series, we will review MIG partitioning, showcasing where it can be useful and the benefits and current drawbacks of that approach.
Unknown
Computer and Mathematical/Education, Training, and Library
null
null
null
null
null
null
news
Shashi Raina
Amazon SageMaker unveils the Cohere Command R fine-tuning model
AWS announced the availability of the Cohere Command R fine-tuning model on Amazon SageMaker. This latest addition to the SageMaker suite of machine learning (ML) capabilities empowers enterprises to harness the power of large language models (LLMs) and unlock their full potential for a wide range of applications. Cohere Command R is a scalable, frontier […]
https://aws.amazon.com/blogs/machine-learning/amazon-sagemaker-unveils-the-cohere-command-r-fine-tuning-model/
https://d2908q01vomqb2.c…d-r-1126x630.jpg
2024-07-17T16:49:00Z
AWS announced the availability of the Cohere Command R fine-tuning model on Amazon SageMaker. This latest addition to the SageMaker suite of machine learning (ML) capabilities empowers enterprises to harness the power of large language models (LLMs) and unlock their full potential for a wide range of applications.Cohere Command R is a scalable, frontier LLM designed to handle enterprise-grade workloads with ease. Cohere Command R is optimized for conversational interaction and long context tasks. It targets the scalable category of models that balance high performance with strong accuracy, enabling companies to move beyond proof of concept and into production. The model boasts high precision on Retrieval Augmented Generation (RAG) and tool use tasks, low latency and high throughput, a long 128,000-token context length, and strong capabilities across 10 key languages.In this post, we explore the reasons for fine-tuning a model and the process of how to accomplish it with Cohere Command R.Fine-tuning: Tailoring LLMs for specific use casesFine-tuning is an effective technique to adapt LLMs like Cohere Command R to specific domains and tasks, leading to significant performance improvements over the base model. Evaluations of fine-tuned Cohere Command R model have demonstrated improved performance by over 20% across various enterprise use cases in industries such as financial services, technology, retail, healthcare, legal, and healthcare. Because of its smaller size, a fine-tuned Cohere Command R model can be served more efficiently compared to models much larger than its class.The recommendation is to use a dataset that contains at least 100 examples.Cohere Command R uses a RAG approach, retrieving relevant context from an external knowledge base to improve outputs. However, fine-tuning allows you to specialize the model even further. Fine-tuning text generation models like Cohere Command R is crucial for achieving ultimate performance in several scenarios: Domain-specific adaptation RAG models may not perform optimally in highly specialized domains like finance, law, or medicine. Fine-tuning allows you to adapt the model to these domains nuances for improved accuracy.Data augmentation Fine-tuning enables incorporating additional data sources or techniques, augmenting the models knowledge base for increased robustness, especially with sparse data.Fine-grained control Although RAG offers impressive general capabilities, fine-tuning permits fine-grained control over model behavior, tailoring it precisely to your desired task for ultimate precision.The combined power of RAG and fine-tuned LLMs empowers you to tackle diverse challenges with unparalleled versatility and effectiveness. With the introduction of Cohere Command R fine-tuning on SageMaker, enterprises can now customize and optimize the models performance for their unique requirements. 
By fine-tuning on domain-specific data, businesses can enhance Cohere Command Rs accuracy, relevance, and effectiveness for their use cases, such as natural language processing, text generation, and question answering.By combining the scalability and robustness of Cohere Command R with the ability to fine-tune its performance on SageMaker, AWS empowers enterprises to navigate the complexities of AI adoption and use its transformative power to drive innovation and growth across various industries and domains.Customer data, including prompts, completions, custom models, and data used for fine-tuning or continued pre-training, remains private to customer AWS accounts and is never shared with third-party model providers.Solution overviewIn the following sections, we walk through the steps to fine-tune the Cohere Command R model on SageMaker. This includes preparing the data, deploying a model, preparing for fine-tuning, creating an endpoint for inference, and performing inference.Prepare the fine-tuning dataBefore you can start a fine-tuning job, you need to upload a dataset with training and (optionally) evaluation data.First, make sure your data is in jsonl format. It should have the following structure: messages This contains a list of messages of the conversation. A message consists of the following parts: role This specifies the current speaker. You can pick from System, User, or Chatbot. content This contains the content of the message.The following is an example that trains a chatbot to answer questions. For the sake of readability, the document spans over multiple lines. For your dataset, make sure that each line contains one whole example.{  "messages": [    {      "role": "System",      "content": "You are a large language model trained by Cohere."    },    {      "role": "User",      "content": "Hi! What were Time magazines top 10 cover stories in the last 10 years?"    },    {      "role": "Chatbot",      "content": "Time magazines top 10 cover stories in the last 10 years were:\\n\\n1. Volodymyr Zelenskyy\\n2. Elon Musk\\n3. Martin Luther King Jr.\\n4. How Earth Survived\\n5. Her Lasting Impact\\n6. Nothing to See Here\\n7. Meltdown\\n8. Deal With It\\n9. The Top of America\\n10. Bitter Pill"    },    {      "role": "User",      "content": "Who is Volodymyr Zelenskyy?"    },    {      "role": "Chatbot",      "content": "Volodymyr Zelenskyy is a Ukrainian politician who was elected President of Ukraine in 2019. He previously served as the Minister of Internal Affairs in the government of Prime Minister Volodymyr Groysman."    },    {      "role": "User",      "content": "Thank you!"    }  ]}Deploy a modelComplete the following steps to deploy the model:On AWS Marketplace, subscribe to the Cohere Command R modelAfter you subscribe to the model, you can configure it and create a training job.Choose View in Amazon SageMaker.Follow the instructions in the UI to create a training job.Alternatively, you can use the following example notebook to create the training job.Prepare for fine-tuningTo fine-tune the model, you need the following:Product ARN This will be provided to you after you subscribe to the product.Training dataset and evaluation dataset Prepare your datasets for fine-tuning.Amazon S3 location Specify the Amazon Simple Storage Service (Amazon S3) location that stores the training and evaluation datasets.Hyperparameters Fine-tuning typically involves adjusting various hyperparameters like learning rate, batch size, number of epochs, and so on. 
You need to specify the appropriate hyperparameter ranges or values for your fine-tuning task.

Create an endpoint for inference

When the fine-tuning is complete, you can create an endpoint for inference with the fine-tuned model. To create the endpoint, use the create_endpoint method. If the endpoint already exists, you can connect to it using the connect_to_endpoint method.

Perform inference

You can now perform real-time inference using the endpoint. The following is the sample message that you use for input:

message = "Classify the following text as either very negative, negative, neutral, positive or very positive: mr. deeds is , as comedy goes , very silly -- and in the best way."
result = co.chat(message=message)
print(result)

The following screenshot shows the output of the fine-tuned model. Optionally, you can also test the accuracy of the model using the evaluation data (sample_finetune_scienceQA_eval.jsonl).

Clean up

After you have completed running the notebook and experimenting with the Cohere Command R fine-tuned model, it is crucial to clean up the resources you have provisioned. Failing to do so may result in unnecessary charges accruing on your account. To prevent this, use the following code to delete the resources and stop the billing process:

co.delete_endpoint()
co.close()

Summary

Cohere Command R with fine-tuning allows you to customize your models to be performant for your business, domain, and industry. Alongside the fine-tuned model, users additionally benefit from Cohere Command R's proficiency in the most commonly used business languages (10 languages) and RAG with citations for accurate and verified information. Cohere Command R with fine-tuning achieves high levels of performance with less resource usage on targeted use cases. Enterprises can see lower operational costs, improved latency, and increased throughput without extensive computational demands.

Start building with Cohere's fine-tuning model in SageMaker today.

About the Authors

Shashi Raina is a Senior Partner Solutions Architect at Amazon Web Services (AWS), where he specializes in supporting generative AI (GenAI) startups. With close to 6 years of experience at AWS, Shashi has developed deep expertise across a range of domains, including DevOps, analytics, and generative AI.

James Yi is a Senior AI/ML Partner Solutions Architect in the Emerging Technologies team at Amazon Web Services. He is passionate about working with enterprise customers and partners to design, deploy and scale AI/ML applications to derive their business values. Outside of work, he enjoys playing soccer, traveling and spending time with his family.

Pradeep Prabhakaran is a Customer Solutions Architect at Cohere. In his current role at Cohere, Pradeep acts as a trusted technical advisor to customers and partners, providing guidance and strategies to help them realize the full potential of Cohere's cutting-edge Generative AI platform. Prior to joining Cohere, Pradeep was a Principal Customer Solutions Manager at Amazon Web Services, where he led Enterprise Cloud transformation programs for large enterprises. Prior to AWS, Pradeep held various leadership positions at consulting companies such as Slalom, Deloitte, and Wipro. Pradeep holds a Bachelor's degree in Engineering and is based in Dallas, TX.
Content Creation/Content Synthesis
Management/Business and Financial Operations
null
null
null
null
null
null
news
Sunita Nadampalli
Accelerated PyTorch inference with torch.compile on AWS Graviton processors
Originally PyTorch used an eager mode where each PyTorch operation that forms the model is run independently as soon as it’s reached. PyTorch 2.0 introduced torch.compile to speed up PyTorch code over the default eager mode. In contrast to eager mode, the torch.compile pre-compiles the entire model into a single graph in a manner that’s optimal for […]
https://aws.amazon.com/blogs/machine-learning/accelerated-pytorch-inference-with-torch-compile-on-aws-graviton-processors/
https://d2908q01vomqb2.c…ton-1116x630.jpg
2024-07-02T17:19:26Z
Originally PyTorch used an eager mode where each PyTorch operation that forms the model is run independently as soon as it's reached. PyTorch 2.0 introduced torch.compile to speed up PyTorch code over the default eager mode. In contrast to eager mode, torch.compile pre-compiles the entire model into a single graph in a manner that's optimal for running on a given hardware platform. AWS optimized the PyTorch torch.compile feature for AWS Graviton3 processors. This optimization results in up to 2x better performance for Hugging Face model inference (based on the geomean of performance improvement for 33 models) and up to 1.35x better performance for TorchBench model inference (geomean of performance improvement for 45 models) compared to the default eager mode inference across several natural language processing (NLP), computer vision (CV), and recommendation models on AWS Graviton3-based Amazon EC2 instances. Starting with PyTorch 2.3.1, the optimizations are available in torch Python wheels and the AWS Graviton PyTorch deep learning container (DLC). In this blog post, we show how we optimized torch.compile performance on AWS Graviton3-based EC2 instances, how to use the optimizations to improve inference performance, and the resulting speedups.
Why torch.compile and what's the goal?
In eager mode, operators in a model are run immediately as they are encountered. It's easier to use, more suitable for machine learning (ML) researchers, and hence is the default mode. However, eager mode incurs runtime overhead because of redundant kernel launch and memory read overhead. In torch compile mode, by contrast, operators are first synthesized into a graph, wherein one operator is merged with another to reduce and localize memory reads and total kernel launch overhead.
The goal for the AWS Graviton team was to optimize the torch.compile backend for Graviton3 processors. PyTorch eager mode was already optimized for Graviton3 processors with Arm Compute Library (ACL) kernels using oneDNN (also known as MKLDNN). So, the question was, how to reuse those kernels in torch.compile mode to get the best of graph compilation and the optimized kernel performance together?
Results
The AWS Graviton team extended the torch inductor and oneDNN primitives that reused the ACL kernels and optimized compile mode performance on Graviton3 processors. Starting with PyTorch 2.3.1, the optimizations are available in the torch Python wheels and AWS Graviton DLC. Please see the Running an inference section that follows for the instructions on installation, runtime configuration, and how to run the tests.
To demonstrate the performance improvements, we used NLP, CV, and recommendation models from TorchBench and the most downloaded NLP models from Hugging Face across Question Answering, Text Classification, Token Classification, Translation, Zero-Shot Classification, Summarization, Feature Extraction, Text Generation, Text2Text Generation, Fill-Mask, and Sentence Similarity tasks to cover a wide variety of customer use cases.
We started with measuring TorchBench model inference latency, in milliseconds (msec), for the eager mode, which is marked 1.0 with a red dotted line in the following graph. Then we compared the improvements from torch.compile for the same model inference; the normalized results are plotted in the graph.
You can see that for the 45 models we benchmarked, there is a 1.35x latency improvement (geomean for the 45 models).
Image 1: PyTorch model inference performance improvement with torch.compile on an AWS Graviton3-based c7g instance using the TorchBench framework. The reference eager mode performance is marked as 1.0. (higher is better)
Similar to the preceding TorchBench inference performance graph, we started with measuring the Hugging Face NLP model inference latency, in msec, for the eager mode, which is marked 1.0 with a red dotted line in the following graph. Then we compared the improvements from torch.compile for the same model inference; the normalized results are plotted in the graph. You can see that for the 33 models we benchmarked, there is around a 2x performance improvement (geomean for the 33 models).
Image 2: Hugging Face NLP model inference performance improvement with torch.compile on an AWS Graviton3-based c7g instance using Hugging Face example scripts. The reference eager mode performance is marked as 1.0. (higher is better)
Running an inference
Starting with PyTorch 2.3.1, the optimizations are available in the torch Python wheel and in the AWS Graviton PyTorch DLC. This section shows how to run inference in eager and torch.compile modes using torch Python wheels and benchmarking scripts from the Hugging Face and TorchBench repos.
To successfully run the scripts and reproduce the speedup numbers mentioned in this post, you need an instance from the Graviton3 family (c7g/r7g/m7g/hpc7g) of hardware. For this post, we used the c7g.4xl (16 vcpu) instance. The instance, the AMI details, and the required torch library versions are mentioned in the following snippet.
Instance: c7g.4xl instance
Region: us-west-2
AMI: ami-05cc25bfa725a144a (Ubuntu 22.04/Jammy with 6.5.0-1017-aws kernel)
# Install Python
sudo apt-get update
sudo apt-get install -y python3 python3-pip
# Upgrade pip3 to the latest version
python3 -m pip install --upgrade pip
# Install PyTorch and extensions
python3 -m pip install torch==2.3.1 torchvision==0.18.1 torchaudio==2.3.1
The generic runtime tunings implemented for eager mode inference are equally applicable to the torch.compile mode, so we set the following environment variables to further improve the torch.compile performance on AWS Graviton3 processors.
# Enable the fast math GEMM kernels, to accelerate fp32 inference with bfloat16 gemm
export DNNL_DEFAULT_FPMATH_MODE=BF16
# Enable Linux Transparent Huge Page (THP) allocations,
# to reduce the tensor memory allocation latency
export THP_MEM_ALLOC_ENABLE=1
# Set LRU Cache capacity to cache the primitives and avoid redundant
# memory allocations
export LRU_CACHE_CAPACITY=1024
TorchBench benchmarking scripts
TorchBench is a collection of open source benchmarks used to evaluate PyTorch performance. We benchmarked 45 models using the scripts from the TorchBench repo. The following code shows how to run the scripts for the eager mode and the compile mode with the inductor backend.
# Set OMP_NUM_THREADS to number of vcpus, 16 for c7g.4xl instance
export OMP_NUM_THREADS=16
# Install the dependencies
sudo apt-get install -y libgl1-mesa-glx
sudo apt-get install -y libpangocairo-1.0-0
python3 -m pip install psutil numpy transformers pynvml numba onnx onnxruntime scikit-learn timm effdet gym doctr opencv-python h5py==3.10.0 python-doctr
# Clone pytorch benchmark repo
git clone https://github.com/pytorch/benchmark.git
cd benchmark
# PyTorch benchmark repo doesn't have any release tags. So,
# listing the commit we used for collecting the performance numbers
git checkout 9a5e4137299741e1b6fb7aa7f5a6a853e5dd2295
# Setup the models
python3 install.py
# Collect eager mode performance using the following command. The results will be
# stored at .userbenchmark/cpu/metric-<timestamp>.json.
python3 run_benchmark.py cpu --model BERT_pytorch,hf_Bert,hf_Bert_large,hf_GPT2,hf_Albert,hf_Bart,hf_BigBird,hf_DistilBert,hf_GPT2_large,dlrm,hf_T5,mnasnet1_0,mobilenet_v2,mobilenet_v3_large,squeezenet1_1,timm_efficientnet,shufflenet_v2_x1_0,timm_regnet,resnet50,soft_actor_critic,phlippe_densenet,resnet152,resnet18,resnext50_32x4d,densenet121,phlippe_resnet,doctr_det_predictor,timm_vovnet,alexnet,doctr_reco_predictor,vgg16,dcgan,yolov3,pytorch_stargan,hf_Longformer,timm_nfnet,timm_vision_transformer,timm_vision_transformer_large,nvidia_deeprecommender,demucs,tts_angular,hf_Reformer,pytorch_CycleGAN_and_pix2pix,functorch_dp_cifar10,pytorch_unet --test eval --metrics="latencies,cpu_peak_mem"
# Collect torch.compile mode performance with inductor backend
# and weights pre-packing enabled. The results will be stored at
# .userbenchmark/cpu/metric-<timestamp>.json
python3 run_benchmark.py cpu --model BERT_pytorch,hf_Bert,hf_Bert_large,hf_GPT2,hf_Albert,hf_Bart,hf_BigBird,hf_DistilBert,hf_GPT2_large,dlrm,hf_T5,mnasnet1_0,mobilenet_v2,mobilenet_v3_large,squeezenet1_1,timm_efficientnet,shufflenet_v2_x1_0,timm_regnet,resnet50,soft_actor_critic,phlippe_densenet,resnet152,resnet18,resnext50_32x4d,densenet121,phlippe_resnet,doctr_det_predictor,timm_vovnet,alexnet,doctr_reco_predictor,vgg16,dcgan,yolov3,pytorch_stargan,hf_Longformer,timm_nfnet,timm_vision_transformer,timm_vision_transformer_large,nvidia_deeprecommender,demucs,tts_angular,hf_Reformer,pytorch_CycleGAN_and_pix2pix,functorch_dp_cifar10,pytorch_unet --test eval --torchdynamo inductor --freeze_prepack_weights --metrics="latencies,cpu_peak_mem"
On successful completion of the inference runs, the script stores the results in JSON format. The following is the sample output:
{
  "name": "cpu",
  "environ": {"pytorch_git_version": "d44533f9d073df13895333e70b66f81c513c1889"},
  "metrics": {"BERT_pytorch-eval_latency": 56.3769865, "BERT_pytorch-eval_cmem": 0.4169921875}
}
Hugging Face benchmarking scripts
The Google T5 Small Text Translation model is one of the around 30 Hugging Face models we benchmarked. We're using it as a sample model to demonstrate how to run inference in eager and compile modes. The additional configurations and APIs required to run it in compile mode are highlighted in the script that follows.
Save the following script as google_t5_small_text_translation.py.
import argparse
from transformers import T5Tokenizer, T5Model
import torch
from torch.profiler import profile, record_function, ProfilerActivity
import torch._inductor.config as config

# Additional configuration for compile mode: enable weight pre-packing and freezing
config.cpp.weight_prepack = True
config.freezing = True

def test_inference(mode, num_iter):
    tokenizer = T5Tokenizer.from_pretrained("t5-small")
    model = T5Model.from_pretrained("t5-small")

    input_ids = tokenizer("Studies have been shown that owning a dog is good for you", return_tensors="pt").input_ids  # Batch size 1
    decoder_input_ids = tokenizer("Studies show that", return_tensors="pt").input_ids  # Batch size 1

    # Additional API for compile mode: wrap the model with torch.compile
    if (mode == 'compile'):
        model = torch.compile(model)

    with torch.no_grad():
        for _ in range(50):
            outputs = model(input_ids=input_ids, decoder_input_ids=decoder_input_ids)

        with profile(activities=[ProfilerActivity.CPU]) as prof:
            with record_function("model_inference"):
                for _ in range(num_iter):
                    outputs = model(input_ids=input_ids, decoder_input_ids=decoder_input_ids)

    print(prof.key_averages().table(sort_by="self_cpu_time_total"))

def main() -> None:
    global m, args
    parser = argparse.ArgumentParser(__doc__)
    parser.add_argument("-m", "--mode", choices=["eager", "compile"], default="eager", help="Which test to run.")
    parser.add_argument("-n", "--number", type=int, default=100, help="how many iterations to run.")
    args = parser.parse_args()
    test_inference(args.mode, args.number)

if __name__ == "__main__":
    main()
Run the script with the following steps.
# Set OMP_NUM_THREADS to 4 vcpus because
# the scripts are running inference in sequence, and
# they don't need a large number of vcpus
export OMP_NUM_THREADS=4
# Install the dependencies
python3 -m pip install transformers
# Run the inference script in eager mode
# using number of iterations as 1 just to show the torch profiler output,
# but for the benchmarking, we used 1000 iterations.
python3 google_t5_small_text_translation.py -n 1 -m eager
# Run the inference script in torch compile mode
python3 google_t5_small_text_translation.py -n 1 -m compile
On successful completion of the inference runs, the script prints the torch profiler output with the latency breakdown for the torch operators.
The following is the sample output from the torch profiler:
# Torch profiler output for the eager mode run on c7g.xl (4vcpu)
---------------  ----------  ---------  -----------  ---------  ------------  ----------
Name             Self CPU %  Self CPU   CPU total %  CPU total  CPU time avg  # of Calls
---------------  ----------  ---------  -----------  ---------  ------------  ----------
aten::mm         40.71%      12.502ms   40.71%       12.502ms   130.229us     96
model_inference  26.44%      8.118ms    100.00%      30.708ms   30.708ms      1
aten::bmm        6.85%       2.102ms    9.47%        2.908ms    80.778us      36
aten::matmul     3.73%       1.146ms    57.26%       17.583ms   133.205us     132
aten::select     1.88%       576.000us  1.90%        583.000us  0.998us       584
aten::transpose  1.51%       464.000us  1.83%        563.000us  3.027us       186
---------------  ----------  ---------  -----------  ---------  ------------  ----------
Self CPU time total: 30.708ms
# Torch profiler output for the compile mode run for the same model on the same instance
-------------------------  ----------  ---------  -----------  ---------  ------------  ----------
Name                       Self CPU %  Self CPU   CPU total %  CPU total  CPU time avg  # of Calls
-------------------------  ----------  ---------  -----------  ---------  ------------  ----------
mkldnn::_linear_pointwise  37.98%      5.461ms    45.91%       6.602ms    68.771us      96
Torch-Compiled Region      29.56%      4.251ms    98.53%       14.168ms   14.168ms      1
aten::bmm                  14.90%      2.143ms    21.73%       3.124ms    86.778us      36
aten::select               4.51%       648.000us  4.62%        665.000us  1.155us       576
aten::view                 3.29%       473.000us  3.29%        473.000us  1.642us       288
aten::empty                2.53%       364.000us  2.53%        364.000us  3.165us       115
-------------------------  ----------  ---------  -----------  ---------  ------------  ----------
Self CPU time total: 14.379ms
What's next
Next, we're extending the torch inductor CPU backend support to compile the Llama model, and adding support for fused GEMM kernels to enable the torch inductor operator fusion optimization on AWS Graviton3 processors.
Conclusion
In this tutorial, we covered how we optimized torch.compile performance on AWS Graviton3-based EC2 instances, how to use the optimizations to improve PyTorch model inference performance, and demonstrated the resulting speedups. We hope that you will give it a try! If you need any support with ML software on Graviton, please open an issue on the AWS Graviton Technical Guide GitHub.
About the Author
Sunita Nadampalli is a Software Development Manager and AI/ML expert at AWS. She leads AWS Graviton software performance optimizations for AI/ML and HPC workloads. She is passionate about open source software development and delivering high-performance and sustainable software solutions for SoCs based on the Arm ISA.
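If you want a quick sense of the eager-versus-compile gap on your own machine without the full benchmark harness above, the following is a minimal, self-contained timing sketch. The toy model, batch size, and iteration counts are arbitrary illustrations and are not the benchmark configuration used in this post; results will vary by hardware and PyTorch build.

import time
import torch
import torch.nn as nn

# Small stand-in model; the post's numbers come from TorchBench and Hugging Face models instead.
model = nn.Sequential(nn.Linear(1024, 1024), nn.ReLU(), nn.Linear(1024, 1024)).eval()
x = torch.randn(32, 1024)

def bench(m, iters=100, warmup=10):
    # Warm-up iterations also trigger graph compilation for torch.compile'd modules
    with torch.no_grad():
        for _ in range(warmup):
            m(x)
        start = time.perf_counter()
        for _ in range(iters):
            m(x)
        elapsed = time.perf_counter() - start
    return elapsed / iters * 1000  # average latency in milliseconds

eager_ms = bench(model)
compiled_ms = bench(torch.compile(model))  # default inductor backend
print(f"eager: {eager_ms:.3f} ms/iter, torch.compile: {compiled_ms:.3f} ms/iter")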
Unknown
Computer and Mathematical/Life, Physical, and Social Science
null
null
null
null
null
null
news
Tobias Mann
Mistral Large 2 leaps out as a leaner, meaner rival to GPT-4-class AI models
It's not the size that matters, it's how you use it Mistral AI on Wednesday revealed a 123-billion-parameter large language model (LLM) called Mistral Large 2 (ML2) which, it claims, comes within spitting distance of the top models from OpenAI, Anthropic, and Meta.…
https://www.theregister.com/2024/07/25/mistral_large_2/
https://regmedia.co.uk/2…ck_208487719.jpg
2024-07-25T01:45:42Z
Mistral AI on Wednesday revealed a 123-billion-parameter large language model (LLM) called Mistral Large 2 (ML2), which, it claims, comes within spitting distance of the top models from OpenAI, Anthropic, and Meta.
The news comes a day after Meta launched the hotly anticipated 405-billion-parameter variant of Llama 3, with a 128,000 token context window (think of this as the model's short-term memory) and support for eight languages.
ML2 boasts many of these same qualities, including the 128,000 token context window, support for "dozens" of languages, and more than 80 coding languages. Language support has been one of Mistral's biggest differentiators compared to other open models, which are often English-only, and ML2 continues this trend.
If Mistral's benchmarks are to be believed, ML2 trades blows with OpenAI's GPT-4o, Anthropic's Claude 3.5 Sonnet, Meta's Llama 3.1 405B, and others across a number of language, coding, and mathematics tests. For instance, in the popular Massive Multitask Language Understanding (MMLU) benchmark, the French model builder's latest LLM achieves a score of 84 percent. By comparison, just yesterday Meta revealed Llama 3.1 405B achieved a score of 88.6 percent, while GPT-4o and Claude 3.5 Sonnet manage scores of 88.7 and 88.3 percent, respectively. Scientists estimate that domain experts (the human kind) would score in the neighborhood of 89.8 percent on the benchmark.
While impressive in its own right, the more important factor is that ML2 manages to achieve this level of performance using a fraction of the resources of competing models. ML2 is less than a third the size of Meta's biggest model and roughly one fourteenth the magnitude of GPT-4.
This has major implications for deployment, and will no doubt make ML2 a very attractive model for commercial applications. At the full 16-bit precision at which it was trained, the 123-billion-parameter model requires about 246GB of memory. For now, that's still too large to fit on a single GPU or accelerator from Nvidia, AMD, or Intel, but it could easily be deployed on a single server with four or eight GPUs without resorting to quantization.
The same can't necessarily be said of GPT-4, presumably Claude 3.5 Sonnet, or Meta's Llama 3.1 405B. In fact, as we discussed earlier this week, Meta opted to provide an 8-bit quantized version of the 3.1 model so it could run on existing HGX A100 and H100 systems. You can learn more about quantization in our hands-on guide here; in a nutshell, it's a compression method that trades model precision for memory and bandwidth savings.
But, as Mistral is keen to point out, ML2's smaller footprint also means it can achieve much higher throughput. This is because LLM performance, often measured in tokens per second, is dictated in large part by memory bandwidth. In general, for any given system, smaller models will produce responses to queries faster than larger ones, because they put less pressure on the memory subsystem.
If you happen to have a beefy enough system, you can try Mistral Large 2 for yourself by following our guide to running LLMs at home.
In its launch announcement, Mistral highlighted the model builder's efforts to combat hallucinations, where the model generates convincing but factually inaccurate information. This included fine-tuning the model to be more "cautious and discerning" about how it responds to requests.
Mistral also explained the model was trained to recognize when it doesn't know something, or if it has insufficient information to answer (there's perhaps a lesson in that for all of us). Mistral also contends that ML2 should be much better than past models at following complex instructions, especially in longer conversations.
This is good news, as one of the main ways in which people interact with LLMs is through prompts that dictate how the model should respond or behave in plain language. You can find an example of that in our recent AI containerization guide, in which we coax Microsoft's Phi 3 Mini into acting like a TV weather personality.
Additionally, Mistral claims ML2 has been optimized to generate succinct responses wherever possible. While it notes that long-form responses can result in higher scores in some benchmarks, they aren't always desirable in business contexts; they tend to tie up the compute for longer, resulting in higher operational costs.
While ML2 is open in the sense that it's freely available on popular repositories like Hugging Face, the model's license is more restrictive than those of many of Mistral's past models. For instance, the recently released Mistral-NeMo-12B model, which was developed in collaboration with Nvidia, bore an open source Apache 2 license. ML2, on the other hand, bears the far less permissive Mistral Research License [Markdown], which allows for use in non-commercial and research capacities, but requires a separate commercial license if you want to put it to work in a business setting.
Considering the amount of computational horsepower required to train, fine-tune, and validate larger models, this isn't all that surprising. It also isn't the first time we've seen model builders give away smaller models under common open source licenses only to restrict their larger ones. Alibaba's Qwen2 model, for instance, is licensed under Apache 2 with the exception of the 72B variant, which used its own Qianwen license. ®
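As a rough sanity check on the memory figure quoted above (about 246GB at 16-bit precision), a model's weight footprint is approximately its parameter count multiplied by the bytes used per parameter; the short Python sketch below works this out for a few precisions. It deliberately ignores activation memory and KV cache, so real deployments need additional headroom.

# Back-of-the-envelope weight memory for a 123-billion-parameter model.
# Ignores activations, KV cache, and framework overhead.
params = 123e9

for bits in (16, 8, 4):
    gb = params * (bits / 8) / 1e9       # decimal gigabytes, as used in the article
    gib = params * (bits / 8) / 2**30    # binary gibibytes, as reported by most tools
    print(f"{bits}-bit weights: ~{gb:.0f} GB (~{gib:.0f} GiB)")

# 16-bit works out to ~246 GB, matching the figure above; 8-bit roughly halves that.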
Unknown
Unknown
null
null
null
null
null
null
news
Dena De Angelo and Tapan Mehta
AI Provides an Rx for Cybersecurity in Healthcare
Tapan Mehta discusses the challenges, opportunities and future of AI in protecting healthcare infrastructure and data.The post AI Provides an Rx for Cybersecurity in Healthcare appeared first on Palo Alto Networks Blog.
https://www.paloaltonetworks.com/blog/2024/07/ai-provides-an-rx-for-cybersecurity-in-healthcare/
https://www.paloaltonetw…ge-324516-1.jpeg
2024-07-10T13:00:43Z
As cyberthreats evolve and proliferate, healthcare organizations are increasingly recognizing the need to embrace artificial intelligence (AI) in their cybersecurity efforts. This can be especially difficult due to the inherent challenges of integrating new technologies into their complex and often legacy-laden environments.And, as AI continues to evolve, its impact on healthcare cybersecurity is becoming increasingly significant. To explore this critical intersection, we spoke with Tapan Mehta, Healthcare and Pharma Life Sciences Executive, Strategy and GTM, about the challenges, opportunities and future of AI in protecting healthcare infrastructure and data. The Current State of AI in Healthcare SecurityWhile AI has made substantial inroads in medical diagnostics and treatment, its application in healthcare cybersecurity is still in its nascent stages. However, the potential for AI in healthcare security is immense. Mehta notes:"AI is a perfect match for ingesting [internet of things] IoT data, as the devices generate such huge amounts of data that we couldn't access before, or we couldn't access in real time. This capability is crucial as healthcare organizations face an ever-expanding attack surface. Additional examples of where AI is being used include medical imaging analysis, predictive analytics for population health management, and virtual health assistants and chatbots. Unique Challenges in Healthcare CybersecurityThe healthcare industry faces unique cybersecurity challenges stemming from its traditionally cautious approach to new technology adoption, which is coupled with the highly sensitive nature of its data. The sector has been slower to embrace cutting-edge technologies compared to other industries. It still grapples with a complex mix of legacy systems, gradual digital transformation and the integration of connected devices (medical, OT and IoT). This cautious, often piecemeal approach to modernization, combined with the critical nature of patient data and care delivery, creates a distinctive and vulnerable cybersecurity landscape. Mehta comments further:Unlike other industries, the healthcare industry is usually not at the forefront of cutting-edge technology because of the nature of the industry, because of patient privacy and data security. There's this wait-and-watch mindset. Healthcare organizations want to see what other industries are doing and how they're adopting AI.So when we think about AI in healthcare, I would say it is very much in its early infancy. And to use the baseball analogy, I would say it's like inning one or two of this journey in the healthcare space. And it again centers back to regulatory requirements as well, including patient privacy and data security and everything related to that.Smart medical devices often lack robust privacy controls and security features, while many medical IoT devices transmit unencrypted data. The mixing of IoT/OT and IT assets on healthcare networks, combined with the prevalence of outdated operating systems on medical devices, creates a complex and vulnerable environment. To that end, Mehta explains how the modern care model is evolving, further expanding the resultant threat landscape:It is no longer confined to the four walls of the hospital, which I'm going back to five, six years ago. That's how you typically receive care. 
You have to go to the hospital, but that care model has shifted dramatically, moving from an acute care setting to an ambulatory care setting or outpatient setting to what we call a hospital-at-home or home setting.In other words, providing a very high level of quality care regardless of where the patient is located. And you have a scenario where that model has transcended the hospital's walls. It opens up the exposure area or the attack surface.The proliferation of internet-connected medical devices and mobile apps handling protected health information (PHI) and personally identifiable information (PII), without proper safeguards, further compounds these risks.Moreover, the healthcare sector's reliance on legacy systems, many of which are no longer supported by manufacturers, introduces persistent vulnerabilities. These outdated systems cannot receive critical security updates, leaving healthcare networks exposed to evolving cyberthreats. This combination of factors rapid technological adoption, inadequate security measures and legacy system vulnerabilities makes healthcare organizations particularly attractive targets for cybercriminals, underscoring the critical need for robust, AI-driven cybersecurity solutions in this sector.Mehta emphasizes the gravity of the situation: "It's no longer a question of if a healthcare organization will be targeted, it's a question of when they will be targeted." This reality underscores the urgent need for robust AI-powered security solutions in healthcare. How AI Is Transforming Healthcare CybersecurityDespite being in its early stages, AI is already making significant contributions to healthcare cybersecurity:Threat Detection AI algorithms can analyze vast amounts of data from multiple sources, identifying patterns and anomalies that may indicate a cyberthreat. This capability is particularly valuable in detecting sophisticated attacks that might evade traditional security measures.Automated Response AI can help automate initial containment actions when a threat is detected, significantly reducing response times. Mehta explains, "AI is really, really good at scaling up a solution to these billions of IoT devices, which is very hard for a human being to do."Predictive Analytics By analyzing historical data and current trends, AI can help predict and prevent potential security breaches before they occur.Data Enrichment AI can rapidly process and correlate data from various sources, including threat intelligence feeds, providing security analysts with enriched, contextual information to make more informed decisions.Anomaly Detection in IoT/OT Devices Given the large number of IoT devices in healthcare settings, AI can monitor device behavior patterns and quickly identify unusual activities that may indicate a compromise.Natural Language Processing for Threat Intelligence AI can analyze unstructured data from various sources to identify emerging threats specific to the healthcare sector.Adaptive Security Policies AI can continuously learn from new data and adjust security policies in real-time, ensuring that defenses evolve alongside new threats.Behavioral Analysis AI can establish baselines of normal user and system behaviors, flagging deviations that might indicate insider threats or compromised accounts.Automated Vulnerability Management AI can prioritize patching and remediation efforts by assessing the criticality of vulnerabilities in the context of the healthcare organization's specific environment and threat landscape. 
Challenges in Implementing AI for Healthcare CybersecurityWhile AI holds great promise, its implementation in healthcare cybersecurity is not without significant challenges. The unique nature of healthcare cybersecurity creates a complex threat landscape for AI adoption:Regulatory Compliance Healthcare organizations must navigate complex regulations, such as HIPAA and GDPR, when implementing AI solutions. Mehta notes, "We're kind of flying the plane and fixing it, if you may, as we're flying that plane," referring to the evolving regulatory landscape around AI in healthcare.Data Quality and Bias AI models are only as good as the data they're trained on. Ensuring high-quality, unbiased data is crucial for effective AI-powered security solutions. In healthcare, where data can be highly variable and context-dependent, this challenge is particularly acute. Biased or incomplete datasets could lead to AI systems that are less effective for certain patient populations or types of healthcare facilities. This is particularly true as healthcare organizations continue to consider the usage of LLMs as part of their clinical and operational workflows.Skills Gap Mehta points out, "Healthcare organizations don't necessarily have the bench for this level of talent. Where cybersecurity is a very specific skill set that you need, if you're trying to layer that with AI, that pool gets even narrower." This shortage of professionals with both healthcare domain knowledge and AI expertise can significantly hinder the implementation and ongoing management of AI-driven security systems.Integration with Legacy Systems Many healthcare organizations rely on legacy systems that may not be compatible with modern AI technologies. Integrating AI solutions with these older systems without disrupting critical healthcare services presents a significant technical challenge.Ethical Considerations The use of AI in healthcare raises unique ethical concerns, particularly around patient privacy and data use.Cost and Resource Allocation Implementing AI solutions can be expensive, requiring significant upfront investment in technology and training. For many healthcare organizations operating on tight budgets, justifying these costs can be challenging.Transparency and Explainability In healthcare, where decisions can have life-or-death consequences, the "black box" nature of some AI algorithms poses a challenge. Ensuring that AI-driven security decisions are transparent and explainable is crucial for building trust and meeting potential regulatory requirements. The Future of AI in Healthcare CybersecurityLooking ahead, Mehta envisions a future where AI plays an increasingly critical role in healthcare cybersecurity. He predicts, "Moving forward, I do see there is going to be further integration and leverage of AI for IoT. More scalable, more automation, more intelligent and faster identification and detection."However, he also emphasizes the importance of human oversight:"For example, when a medical device or system is hacked, not only can we lose sensitive information, but also it can impact the operation, which can mean life or death in the cases of healthcare and critical infrastructure. Therefore, the accuracy of AI is extremely crucial." What Healthcare Needs from AIMehta's insights underscore the need for a balanced approach that leverages AI's capabilities while maintaining human expertise and oversight. 
As we move forward, healthcare organizations must invest in AI-powered security solutions, develop the necessary talent, and stay ahead of evolving regulations to protect sensitive patient data effectively. As Mehta concludes: "Security needs to be automated and real-time in the era of AI. As we face new challenges and zero-day threats, we need to innovate new solutions at a much faster pace. And that's also where the opportunities will come to improve quality of care and access to care."
Detection and Monitoring/Prediction/Process Automation
Computer and Mathematical
null
null
null
null
null
null
news
Jowi Morales
Chinese AI built off open-source code matches American tech in chatbot benchmark tests
"Open-source AI is the foundation of AI development and it looks like China could do the same thing," says Hugging Face CEO Clément Delangue.
https://www.tomshardware.com/tech-industry/artificial-intelligence/chinese-ai-built-off-open-source-code-matches-american-tech-in-chatbot-benchmark-tests
https://cdn.mos.cms.futu…FhLW-1200-80.jpg
2024-07-27T15:52:45Z
As per The New York Times, Chinese firm 01.AI’s Yi-Large Global SOTA LLM, based on open-source technology, scored nearly as well as its American counterparts, like GPT-4o and Llama-3. Another AI tool called Kling, a video generator, is also publicly available in China. It works similarly to Open AI’s Sora, but unlike the latter, which is still limited access, Kling is already widely available to almost all users.The AI boom started in the United States with the release of ChatGPT in late 2022. However, Chinese companies are quickly catching up on the back of open-source AI code, which allows other users to build upon, iterate, and improve existing AI tech. Open-source (or open-standard) is the backbone of many of our technologies — from operating systems like Linux to new processors like RISC-V.Many American companies are at the forefront of AI development and have open-sourced their data; however, concerns about the unethical and criminal use of AI technologies have also given them pause, with researchers discussing risks and potential regulations. However, Chinese companies have been less strict about these issues, with many of them making their tech available to the general public as soon as possible and even sharing the code with developers, programmers, and other institutions.This kind of sharing has turbocharged the progress of AI development in China. Companies that were previously scorned for using open-source code are now leading in AI benchmark tests.These advancements in AI technologies are happening in China despite the stringent measures that the U.S. is placing against its technological ambitions. The bans and embargoes on high-end chips Washington has placed on Beijing meant that Chinese companies have difficulty sourcing Nvidia’s latest, most potent AI processors. While China is hard at work trying to catch up on the hardware side of things, many experts believe it’s still at least a decade away from achieving parity with the United States.It’s evident, though, that Chinese tech firms can catch up on the software side despite the hardware bans. Nevertheless, the White House wants to change that with a bill in the U.S. Congress that would give the American government more control over limiting the export of American AI code. Some parties are trying to limit the development of open-source AI tech to stop Chinese companies from using it, but doing so might do more harm than good for the U.S.Clément Delangue, the CEO of Hugging Face, a firm that lets its users share its open-source AI projects, said, “Open-source AI is the foundation of AI development,” referring to how the collaboration of researchers and organizations in the U.S. allowed it to become the world leader in AI, “and it looks like China could do the same thing.”Get Tom's Hardware's best news and in-depth reviews, straight to your inbox.By submitting your information you agree to the Terms & Conditions and Privacy Policy and are aged 16 or over.Furthermore, Stanford University’s recently released Llama 3-V LLM is built upon open-source code sourced from China. It shows that Chinese AI technologies have matured to the point that American researchers are now basing their work on them. 
Delangue warns that if Washington stifles open-source development in the U.S., American AI developers building their tech off open-source Chinese tech could become the norm.However, despite all the AI advancements China has made based on open-source technology, Professor Yiran Chen of Duke University believes they are still at a disadvantage. “Chinese companies are good at replicating and improving what the U.S. already has,” Chen said. “They are not as good at inventing something completely new that will bypass the U.S. in five to ten years.”Whether this is true or not, we will only see it in a decade. But what happens between now and then could also be dependent on how the U.S. manages the open development and collaboration between companies that led it to become the current leader in AI today.
Unknown
Unknown
null
null
null
null
null
null
news
Harsh Shivam
Apple denies using YouTube videos for training Apple Intelligence: Report
Apple Intelligence: Many big technology companies including Apple, Nvidia and Amazon-backed Anthropic have been reportedly using subtitles of YouTube videos to train their AI models
https://www.business-standard.com/technology/tech-news/apple-denies-using-youtube-videos-for-training-apple-intelligence-report-124071800397_1.html
https://bsmedia.business…9895888-0949.jpg
2024-07-18T06:28:08Z
Many big technology companies, including Apple, Nvidia and Amazon-backed Anthropic, have been reportedly using subtitles of YouTube videos to train their AI models.
Apple has clarified that its artificial intelligence features, which the company collectively calls Apple Intelligence, are not powered by the company's OpenELM AI model. According to a report by 9To5Mac, the Cupertino-based technology giant, in a statement to the media outlet, stated that OpenELM doesn't power any of its AI or machine learning features, including Apple Intelligence.
This comes after a Wired report stated that many technology giants, including Apple, Nvidia and Amazon-backed Anthropic, used material from thousands of YouTube videos, including video subtitles, to train their AI models. The report mentions that Apple used plain text of videos' subtitles along with its translations into different languages to train its OpenELM model. Google prohibits the use of videos posted on YouTube for applications that are independent of the video platform.
In its statement to 9To5Mac, Apple said that it created OpenELM to contribute to the research community and advance open source large language model (LLM) development. According to the company, OpenELM was only created for research purposes rather than for powering AI features on its products and devices.
Apple Intelligence training
Previously, in a research paper published by Apple on June 10, the company said that it does not use its users' private personal data or user interactions for training its AI models. However, the tech giant did say that it uses publicly available data from the web using its web-crawler AppleBot. The company said that web publishers have to opt out if they wish to not allow Apple to use their web content for Apple Intelligence training.
Apple OpenELM: What is it
In April, Apple released its OpenELM AI model on the Hugging Face model library. OpenELM, which stands for "Open-source Efficient Language Models", is a series of four small language models that are capable of running on devices such as phones and PCs. The four models within OpenELM come with 270 million parameters, 450 million parameters, 1.1 billion parameters, and the largest with 3 billion parameters, respectively. These parameters refer to the number of variables an AI model can understand from its training data for decision making. For comparison, Microsoft's Phi-3 model can go up to 3.8 billion parameters. Similarly, Google's open model Gemma, which was launched earlier this year, offers up to 2 billion parameters.
Unknown
Unknown
null
null
null
null
null
null
news
David Marshall
SysAdmin Day 2024: Industry Experts Shine a Light on the IT Backbone
Every year, on the last Friday of July, we celebrate SysAdmin Day, a dedicated moment to recognize the tireless efforts of the IT professionals who keep our digital world humming.
https://vmblog.com:443/archive/2024/07/26/sysadmin-day-2024-industry-experts-shine-a-light-on-the-it-backbone.aspx
null
2024-07-26T12:32:00Z
Every year, on the last Friday of July, we celebrate SysAdmin Day, a dedicated moment to recognize the tireless efforts of the IT professionals who keep our digital world humming. SysAdmins are the unsung heroes, the wizards behind the curtain, who ensure our networks run smoothly, applications function flawlessly, and data remains secure. They are the first responders when disaster strikes, the troubleshooters who untangle complex problems, and the architects who design and maintain the critical infrastructure that underpins our modern lives.In honor of SysAdmin Day 2024, VMblog brings together a panel of industry experts to shed light on the invaluable contributions of SysAdmins. We'll hear from respected leaders across various IT domains, gaining insights into the ever-evolving role of SysAdmins and the immense value they bring to organizations of all sizes. So, please read and share this commentary, and help celebrate these dedicated professionals: the SysAdmins who keep our digital realm secure, efficient, and up and running!++Shailesh Manjrekar, Chief Marketing Officer, CloudFabrixCloudFabrix is in the business of making the lives of Sysadmins - the unsung heroes, much easier. As any Sysadmin can vouch, he needs Observability, AIOps, and Automation so his systems can embark on that journey of an Autonomous Enterprise and it all starts with data. CloudFabrix's Robotic Data Automation Fabric automates his life with Data. CloudFabrix meets him where he is in this journey and takes him through the Descriptive, Predictive, Prescriptive and Cognitive phases. The cherry on the cake is the entire journey now using Macaw GenAI's Conversational queries, where the new programming language is English!++ Donny van der Linde, Pre-Sales Consultant Benelux, Liquit , part of Recast SoftwareEven though every day is Sysadmin appreciation day at Liquit, part of Recast Software, we want to say a special thank you to all those EUC champions on the official day!Sysadmins keep the EUC world of every organization running smoothly, from devices and systems to applications and endpoint management.As part of Recast Software, Liquit is better positioned than ever to provide the vital solutions for migrations, refreshes, and digital transformation projects. Your hybrid cloud and work world will only get more complex, making contextual device and application access from any location more challenging. This means handling workarounds, migrations, integrations, packaging, patching, visibility, and configuration across macOS, Windows, workspaces, VDI, DaaS, and beyond.SysAdmin Appreciation Day is the perfect time to say thank you! Everyone at Liquit and Recast Software is working to support IT teams and SysAdmins every day because we know the vital role you play in keeping every organization efficient and productive.++ Anthony Cusimano, technical director at tech startup Object FirstReflecting on last year,  I predicted the potential of AI to make jobs easier, but also that it would introduce a bevy of new issues specifically that the sysadmin would have to deal with. Now, with AI phishing, deep fakes, and blackmail targeting business and their employees, sysadmins have a whole new slew of concerning threats. These attack vectors are used to compromise employees, manipulating them into granting access to company secrets and privileged information. 
Thats why Im telling anyone who will listen this year: sysadmins must up their Zero Trust game.Zero Trust is a security concept that assumes no user or device within or outside the network is trusted, requiring verification for every access request to secure resources. Yet, these fortifying principles are lacking in the backup and recovery space though most attacks target backup data first. I encourage system and backup administrators to research and understand Zero Trust Data Resilience (ZTDR). If applied correctly, this framework can ensure a safe recovery and avoid ransom payouts no matter where the attack occurs and what it targets. In a never-ending role filled with stress and reactivity, consider implementing Zero Trust across apps and infrastructure to add a bit of simplicity to your data protection strategiesToday, we acknowledge and appreciate all sysadmins! Alongside thanks this year, considering all the advancements in tech and the increase in the number and complexity of attacks, this Sysadmin Day, I recommend we all encourage a little less trust and a little more security across our businesses.++ Jeff Ready, Chief Executive Officer, Board Member, Co-founder of Scale ComputingOn this 25th SysAdmin Appreciation Day, Scale Computing thanks all SysAdmins for tirelessly keeping IT systems running smoothly. As technology evolves and becomes more critical to business operations, your role is more vital than ever. Your dedication to managing complex environments, ensuring uptime, and maintaining security is commendable. We're honored to support you with solutions that simplify your tasks and enhance your capabilities. Thank you for your relentless efforts and commitment. As part of our SysAdmin Day celebration, we are offering a Yeti Crossroads Backpack 22L ($199) for individuals who take a brief discovery meeting and have a custom quote generated. Click here to claim your Yeti Backpack! ++ Kevin Cole, director of product and technical marketing, ZertoThe recent Crowdstrike disruption, which affected thousands of businesses and caused major service outages, is a perfect example of why every organization needs a strong and supported System Administrator team. System administrators are the unsung heroes for ensuring organizations uphold business continuity. The responsibility of keeping companies online is a task that is more daunting and intricate than most think. From navigating IT failures, along with maintaining normal computer and network functions, SysAdmins are the key to keeping your business running smoothly.  Through investing in solutions designed to establish robust infrastructure, such as disaster recovery and cloud mobility, organizations can provide an additional layer of support for SysAdmins. By allocating resources for a premium infrastructure, companies enable their SysAdmin teams to concentrate on the essential aspects of their role: creating and maintaining systems that are responsive and continuously operational. So, on this SysAdmin Day, be sure to show appreciation to these behind-the-scenes but vital team members.++ Simon Townsend, Field CTO, ControlUpImagine spending 70% of your day buried in support work and troubleshooting. For SysAdmins, this is often the reality, and it's one reason the tech industry sees such high turnover. SysAdmins are constantly overwhelmed by tedious tasks, leaving little room for the passion projects and strategic initiatives that truly drive job satisfaction, motivation, and loyalty. 
It's time we celebrate our SysAdmins by freeing them from these mundane chores. By adopting advanced digital employee experience (DEX) solutions with automated remediation and AI capabilities, we can provide the proactive troubleshooting needed to get SysAdmins off the support ticket treadmill, deliver a better employee experience, and ensure that endpoints are patched and free from vulnerabilities. This means issues are resolved before they impact the employee experience, solidifying SysAdmins as the true superheroes they are.++ Angela Heindl-Schober, SVP, Global Marketing, HYCUSystem administrators are the unsung heroes powering our digital world and ensuring that we all are able to do the work we do daily. Their tireless efforts behind the scenes keep our networks running, our data secure, and our businesses operational and resilient. On this SysAdmin Appreciation Day, we salute their expertise, dedication, and round-the-clock commitment to excellence. Today's modern enterprise simply couldn't function without the invaluable work of these skilled professionals.++ Manu Heirbaut, Vice President of Engineering, DatadobiAs someone who has direct exposure to the folks in the backroom, I have seen first-hand the invaluable role that SysAdmins play in the success of any organization. While oftentimes, the SysAdmin remains unseen, quietly ensuring that our systems keep humming, I cannot even count the number of times a SysAdmin has come to the rescue. The truth is, without SysAdmins, most organizations would be up a creek, without a paddle. These unsung heroes work tirelessly behind the scenes -- truly at all hours of the day or night -- to ensure our systems run seamlessly, securely, and without interruption. As John Wooden, the first person ever inducted as a member of the Basketball Hall of Fame as both a player and as a coach, once said, 'Its the little details that are vital. Little things make big things happen.' So on SysAdmin Appreciation Day, or better yet all year round, take a moment to shake the hand of the individual who contributes so greatly to your success, and say thank you.++DeeDee Kato, Vice President, Corporate Marketing, FoxitSysAdmins are truly the backbone of any organization tirelessly working behind the scenes to maintain secure and smooth-running systems. SysAdmins expertise and dedication keep the digital infrastructure humming allowing the entire organization to function safely and efficiently. Without their constant vigilance and problem-solving skills even minor issues could escalate into major disruptions, impacting productivity, security, and business continuity.Of course, it should go without saying, we must provide our SysAdmins with solutions that are the most advanced and proven possible. This is the best I would even venture to say, only   way to ensure that they can maintain a robust and efficient IT infrastructure.But we cannot stop there... SysAdmins must also be empowered with cutting-edge solutions like productivity solutions that embed Generative AI. By adding GenAI into workflows, SysAdmins can boost what employees can do letting them focus on strategic and creative projects that create competitive advantage, and push the organization forward. ++Doug Murray, CEO of AuvikWith remote and hybrid workforces becoming the new, permanent norm, the role of the SysAdmin continues to grow and evolve, with many working twice as hard to maintain user productivity amid the adoption of non-traditional devices and other distributed network components. 
Overseeing everything that exists in IT infrastructure, SysAdmins are no longer only integral to IT but critically important to business productivity as a whole. As AI proliferates the influence of SysAdmins is likely to grow with it. In addition to possessing important technical knowledge of networking fundamentals, security, and troubleshooting abilities, the SysAdmins understanding of the business and how their users operate is critical to ensure that problems are solved quickly and work is executed efficiently. Because of this, it is important to elevate SysAdmins to incorporate them into strategic planning efforts. At the heart, being a SysAdmin is a people-centered role requiring just as many soft skills as technical skills and encompassing much more than general IT administration tasks. Integration and communication between SysAdmins and the rest of their organization is critical, and businesses that prioritize and include SysAdmins in their overall business strategy will have a significant competitive advantage because of the unique and valuable insight they bring to the table. ++Rick Vanover, VP Product Strategy, Veeam SoftwareThis is a very exciting System Administrator Appreciation Day, as there are more opportunities than ever to do better at this still important role. One of the biggest breakthroughs for todays system administrators comes from AI. System administrators around the world have been using Generative AI for more precise information for their technical and supportability questions, as well as the ability to create scripts and automation optimizations. Many system administrators approached scripting in a way of, I would use what I could find online, if they individually do not have scripting and automation experience. Generative AI effectively levels the playing field when it comes for SysAdmin automation. ++Juan Andres Robayo, Principal System Administrator, Leaseweb CANSysAdmins are the backbone of any organization -- that important supporting structure that we never see unless something bad happens. As one of our organization's most appreciated departments, these experts help businesses and their customers day and night to maintain, update, and troubleshoot internal and external networks and services. Without their diligent efforts, businesses could not guarantee reliability and quality for their end-users. SysAdmins need to be highly curious thinkers with the flexibility to quickly adjust to changing conditions. This enables them to effectively manage daily tasks while also acting as crisis managers when technical issues occur-such as cyberattacks. To support SysAdmins in their day-to-day tasks, organizations should invest in resilient infrastructure and provide the necessary tools, disaster recovery, and power outage solutions. Protecting data and ensuring access is vital, but organizations must also minimize downtime and speed up the restoration of operations with redundant solutions.++ Josh Aaron, CEO at Aiden TechnologiesOn SysAdmin Day, it's important to acknowledge and thank all the system administrators for their dedication. These unsung heroes are the backbone of our digital infrastructure, working tirelessly behind the scenes to keep our networks running smoothly and securely.Friday's massive outage due to CrowdStrike, affecting millions of computers and businesses, is a stark reminder of the immense pressure and critical role that SysAdmins and IT professionals play. 
Many have been working around the clock, sacrificing their weekends to ensure that businesses can continue to operate. They are not just IT professionals; they are the guardians of our digital realms, constantly adapting to new challenges and threats. At Aiden, we recognize the immense pressure SysAdmins face in managing complex systems, especially in an era of increased remote work and sophisticated cyber threats. That's why we've dedicated ourselves to developing AI-driven solutions that empower SysAdmins, allowing them to focus on strategic initiatives rather than repetitive tasks. Our hyperautomation technology is designed to enhance IT security and streamline operations, giving SysAdmins the tools they need to excel in their roles. Let's not only celebrate these tech wizards but also commit to supporting them with innovative solutions that make their jobs easier and more impactful. After all, a well-supported SysAdmin team is crucial for any organization's success in our increasingly digital world. Happy SysAdmin Day, and a heartfelt thank you to all the IT and Security professionals who work tirelessly to keep our digital world safe and operational, especially on days like today. ++Will LaSala, Field CTO, OneSpanThis SysAdmin Day, its crucial to highlight the symbiotic relationship between IT security and the broader organization. Its widely known that IT security provides the framework and tools necessary to secure sensitive data a critical responsibility to ensure seamless operations. Still, security must extend beyond system administrators and become a shared responsibility and collaborative effort involving all departments across organizations.While system administrators play a vital role in areas like user access control and enforcing security policies, they often support HR processes, like onboarding and offboarding, as well. However, in working together, IT security and HR departments can collaborate to not only reduce insider threats but enhance security throughout the entire organization from holding regular security awareness training to integrating continuous identity verification throughout employee onboarding processes. In doing so, system administrators are better supported to keep our digital world running smoothly no easy feat. ++Greg Collins, product marketing manager, ProgressSystem Administrators are a first line of defense against threats both internally and externally. If these experts dont have the tools and strategies they need to succeed, organizations are vulnerable to all sorts of malicious threats and may even breach key compliance regulations. The average enterprise accumulates up to four gigabytes of log data daily, and it is critical they have full visibility and control over this data.  Data privacy for personal health information (PHI) and personally identifiable information (PII) is a top concern globally, and even in the United States the SEC has updated their cybersecurity incident reporting requirements asking for more in-depth historical and impact information.  System Administrators need the right resources to report on historical data, identify and diagnose issues across internal IT networks, and determine unauthorized access and changes to permissions in todays advanced technological landscape.  
The best way organizations can celebrate and recognize their System Administrators is to understand and address their evolving needs -- driven by the proliferation of artificial intelligence (AI), expanding network infrastructure, and increasingly aggressive threats -- by enhancing and supporting their network visibility capabilities. ++Sumedh Barde, Head of Product, Simbian We are honored to recognize System Administrators (SysAdmins) around the world, who hold one of the most demanding and important roles in every organization: it is their hard-earned task to keep businesses running. Undoubtedly, it is the SysAdmin's key responsibility to keep a company online, which is incredibly challenging in today's complex environment. Being on the front lines of IT migrations and digital transformation projects large and small, the SysAdmin must also support employees' ability to work from any location, using any device, 24x7. SysAdmins maintain not only the network and computer systems but are also key in helping avoid IT disasters, from ransomware attacks all the way to natural disaster events. Without SysAdmins, it is safe to say businesses would not be able to operate. Congratulations and thank you to all SysAdmins! ##
Process Automation/Digital Assistance/Decision Making
Computer and Mathematical
null
null
null
null
null
null
news
Rashmica Gopinath
Detect and protect sensitive data with Amazon Lex and Amazon CloudWatch Logs
In today’s digital landscape, the protection of personally identifiable information (PII) is not just a regulatory requirement, but a cornerstone of consumer trust and business integrity. Organizations use advanced natural language detection services like Amazon Lex for building conversational interfaces and Amazon CloudWatch for monitoring and analyzing operational data. One risk many organizations face is […]
https://aws.amazon.com/blogs/machine-learning/detect-and-protect-sensitive-data-with-amazon-lex-and-amazon-cloudwatch-logs/
https://d2908q01vomqb2.c…ect-1121x630.jpg
2024-07-23T20:01:37Z
In todays digital landscape, the protection of personally identifiable information (PII) is not just a regulatory requirement, but a cornerstone of consumer trust and business integrity. Organizations use advanced natural language detection services like Amazon Lex for building conversational interfaces and Amazon CloudWatch for monitoring and analyzing operational data.One risk many organizations face is the inadvertent exposure of sensitive data through logs, voice chat transcripts, and metrics. This risk is exacerbated by the increasing sophistication of cyber threats and the stringent penalties associated with data protection violations. Dealing with massive datasets is not just about identifying and categorizing PII. The challenge also lies in implementing robust mechanisms to obfuscate and redact this sensitive data. At the same time, its crucial to make sure these security measures dont undermine the functionality and analytics critical to business operations.This post addresses this pressing pain point, offering prescriptive guidance on safeguarding PII through detection and masking techniques specifically tailored for environments using Amazon Lex and CloudWatch Logs.Solution overviewTo address this critical challenge, our solution uses the slot obfuscation feature in Amazon Lex and the data protection capabilities of CloudWatch Logs, tailored specifically for detecting and protecting PII in logs.In Amazon Lex, slots are used to capture and store user input during a conversation. Slots are placeholders within an intent that represent an action the user wants to perform. For example, in a flight booking bot, slots might include departure city, destination city, and travel dates. Slot obfuscation makes sure any information collected through Amazon Lex conversational interfaces, such as names, addresses, or any other PII entered by users, is obfuscated at the point of capture. This method reduces the risk of sensitive data exposure in chat logs and playbacks.In CloudWatch Logs, data protection and custom identifiers add an additional layer of security by enabling the masking of PII within session attributes, input transcripts, and other sensitive log data that is specific to your organization.This approach minimizes the footprint of sensitive information across these services and helps with compliance with data protection regulations.In the following sections, we demonstrate how to identify and classify your data, locate your sensitive data, and finally monitor and protect it, both in transit and at rest, especially in areas where it may inadvertently appear. The following are the four ways to do this:Amazon Lex Monitor and protect data with Amazon Lex using slot obfuscation and selective conversation log captureCloudWatch Logs Monitor and protect data with CloudWatch Logs using playbacks and log group policiesAmazon S3 Monitor and protect data with Amazon Simple Storage Service (Amazon S3) using bucket security and encryptionService Control Policies Monitor and protect with data governance controls and risk management policies using Service Control Policies (SCPs) to prevent changes to Amazon Lex chatbots and CloudWatch Logs groups, and restrict unmasked data viewing in CloudWatch Logs InsightsIdentify and classify your dataThe first step is to identify and classify the data flowing through your systems. 
This involves understanding the types of information processed and determining their sensitivity level.To determine all the slots in an intent in Amazon Lex, complete the following steps:On the Amazon Lex console, choose Bots in the navigation pane.Choose your preferred bot.In the navigation pane, choose the locale under All Languages and choose Intents.Choose the required intent from the list.In the Slots section, make note of all the slots within the intent.After you identify the slots within the intent, its important to classify them according to their sensitivity level and the potential impact of unauthorized access or disclosure. For example, you may have the following data types:NameAddressPhone numberEmail addressAccount numberEmail address and physical mailing address are often considered a medium classification level. Sensitive data, such as name, account number, and phone number, should be tagged with a high classification level, indicating the need for stringent security measures. These guidelines can help with systematically evaluating data.Locate your data storesAfter you classify the data, the next step is to locate where this data resides or is processed in your systems and applications. For services involving Amazon Lex and CloudWatch, its crucial to identify all data stores and their roles in handling PII.CloudWatch captures logs generated by Amazon Lex, including interaction logs that might contain PII. Regular audits and monitoring of these logs are essential to detect any unauthorized access or anomalies in data handling.Amazon S3 is often used in conjunction with Amazon Lex for storing call recordings or transcripts, which may contain sensitive information. Making sure these storage buckets are properly configured with encryption, access controls, and lifecycle policies are vital to protect the stored data.Organizations can create a robust framework for protection by identifying and classifying data, along with pinpointing the data stores (like CloudWatch and Amazon S3). This framework should include regular audits, access controls, and data encryption to prevent unauthorized access and comply with data protection laws.Monitor and protect data with Amazon LexIn this section, we demonstrate how to protect your data with Amazon Lex using slot obfuscation and selective conversation log capture.Slot obfuscation in Amazon LexSensitive information can appear in the input transcripts of conversation logs. Its essential to implement mechanisms that detect and mask or redact PII in these transcripts before they are stored or logged.In the development of conversational interfaces using Amazon Lex, safeguarding PII is crucial to maintain user privacy and comply with data protection regulations. Slot obfuscation provides a mechanism to automatically obscure PII within conversation logs, making sure sensitive information is not exposed. When configuring an intent within an Amazon Lex bot, developers can mark specific slotsplaceholders for user-provided informationas obfuscated. This setting tells Amazon Lex to replace the actual user input for these slots with a placeholder in the logs. For instance, enabling obfuscation for slots designed to capture sensitive information like account numbers or phone numbers makes sure any matching input is masked in the conversation log. Slot obfuscation allows developers to significantly reduce the risk of inadvertently logging sensitive information, thereby enhancing the privacy and security of the conversational application. 
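The console steps in the next section cover enabling this setting by hand; for teams that manage many bots or locales, the same setting can also be applied programmatically. The following is a minimal sketch, assuming the Lex V2 model-building API through boto3 (the lexv2-models client); the bot, intent, and slot IDs are placeholders, and updates apply to the DRAFT bot version:

# Minimal sketch (assumptions: Lex V2 model-building API via boto3, placeholder IDs)
import boto3

lex_models = boto3.client("lexv2-models")

BOT_ID = "YOUR_BOT_ID"          # placeholder
BOT_VERSION = "DRAFT"           # slot updates are made against the draft version
LOCALE_ID = "en_US"
INTENT_ID = "YOUR_INTENT_ID"    # placeholder
SLOT_ID = "YOUR_SLOT_ID"        # placeholder

# Read the current slot definition so the update preserves its elicitation settings
slot = lex_models.describe_slot(
    botId=BOT_ID,
    botVersion=BOT_VERSION,
    localeId=LOCALE_ID,
    intentId=INTENT_ID,
    slotId=SLOT_ID,
)

# Re-submit the slot with obfuscation enabled; captured values are then
# replaced with a placeholder in conversation logs
lex_models.update_slot(
    botId=BOT_ID,
    botVersion=BOT_VERSION,
    localeId=LOCALE_ID,
    intentId=INTENT_ID,
    slotId=SLOT_ID,
    slotName=slot["slotName"],
    slotTypeId=slot["slotTypeId"],
    valueElicitationSetting=slot["valueElicitationSetting"],
    obfuscationSetting={"obfuscationSettingType": "DefaultObfuscation"},
)

After updating the slot, rebuild the bot so the change takes effect in the alias you test against.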
Its a best practice to identify and mark all slots that could potentially capture PII during the bot design phase to provide comprehensive protection across the conversation flow.To enable obfuscation for a slot from the Amazon Lex console, complete the following steps:On the Amazon Lex console, choose Bots in the navigation pane.Choose your preferred bot.In the navigation pane, choose the locale under All Languages and choose Intents.Choose your preferred intent from the list.In the Slots section, expand the slot details.Choose Advanced options to access additional settings.Select Enable slot obfuscation.Choose Update slot to save the changes.Selective conversation log captureAmazon Lex offers capabilities to select how conversation logs are captured with text and audio data from live conversations by enabling the filtering of certain types of information from the conversation logs. Through selective capture of necessary data, businesses can minimize the risk of exposing private or confidential information. Additionally, this feature can help organizations comply with data privacy regulations, because it gives more control over the data collected and stored. There is a choice between text, audio, or text and audio logs.When selective conversation log capture is enabled for text and audio logs, it disables logging for all intents and slots in the conversation. To generate text and audio logs for particular intents and slots, set the text and audio selective conversation log capture session attributes for those intents and slots to true. When selective conversation log capture is enabled, any slot values in SessionState, Interpretations, and Transcriptions for which logging is not enabled using session attributes will be obfuscated in the generated text log.To enable selective conversation log capture, complete the following steps:On the Amazon Lex console, choose Bots in the navigation pane.Choose your preferred bot.Choose Aliases under Deployment and choose the bots alias.Choose Manage conversation logs.Select Selectively log utterances.For text logs, choose a CloudWatch log group.For audio logs, choose an S3 bucket to store the logs and assign an AWS Key Management Service (AWS KMS) key for added security.Save the changes.Now selective conversation log capture for a slot is activated.Choose Intents in the navigation pane and choose your intent.Under Initial responses, choose Advanced options and expand Set values.For Session attributes, set the following attributes based on the intents and slots for which you want to enable selective conversation log capture. This will capture utterances that contain only a specific slot in the conversation. x-amz-lex:enable-audio-logging:<intent>:<slot> = "true"x-amz-lex:enable-text-logging:<intent>:<slot> = "true"Choose Update options and rebuild the bot.Replace <intent>and <slot> with respective intent and slot names.Monitor and protect data with CloudWatch LogsIn this section, we demonstrate how to protect your data with CloudWatch using playbacks and log group policies.Playbacks in CloudWatch LogsWhen Amazon Lex engages in interactions, delivering prompts or messages from the bot to the customer, theres a potential risk for PII to be inadvertently included in these communications. This risk extends to CloudWatch Logs, where these interactions are recorded for monitoring, debugging, and analysis purposes. 
The playback of prompts or messages designed to confirm or clarify user input can inadvertently expose sensitive information if not properly handled. To mitigate this risk and protect PII within these interactions, a strategic approach is necessary when designing and deploying Amazon Lex bots. The solution lies in carefully structuring how slot values, which may contain PII, are referenced and used in the bot's response messages. Adopting a prescribed format for passing slot values, specifically by encapsulating them within curly braces (for example, {slotName}), allows developers to control how this information is presented back to the user and logged in CloudWatch. This method makes sure that when the bot constructs a message, it refers to the slot by its name rather than its value, thereby preventing any sensitive information from being directly included in the message content. For example, instead of the bot saying, "Is your phone number 123-456-7890?", it would use a generic placeholder, "Is your phone number {PhoneNumber}?", with {PhoneNumber} being a reference to the slot that captured the user's phone number. This approach allows the bot to confirm or clarify information without exposing the actual data. When these interactions are logged in CloudWatch, the logs will only contain the slot name references, not the actual PII. This technique significantly reduces the risk of sensitive information being exposed in logs, enhancing privacy and compliance with data protection regulations. Organizations should make sure all personnel involved in bot design and deployment are trained on these practices to consistently safeguard user information across all interactions.

The following is a sample AWS Lambda function in Python that references the slot value of a phone number provided by the user. SSML tags are used to format the slot value for slow and clear speech output, and a response is returned to confirm the correctness of the captured phone number:

def lambda_handler(event, context):
    # Extract the intent name from the event
    intent_name = event['sessionState']['intent']['name']
    # Extract the slots from the event
    slots = event['sessionState']['intent']['slots']

    # Check if the intent name is 'INTENT_NAME'
    if intent_name == 'INTENT_NAME':
        # Retrieve the phone number from the 'SLOT_NAME' slot
        phone_number = slots['SLOT_NAME']['value']['interpretedValue']

        # Create an SSML-formatted message with the phone number
        msg = f'''<speak>
        Thank you for providing your phone number. Is
        <prosody rate="slow">
        <say-as interpret-as="telephone">{phone_number}</say-as>
        </prosody>
        correct?
        </speak>'''

        # Create a message array
        message_array = [
            {
                'contentType': 'SSML',
                'content': msg
            }
        ]

        # Response with the dialog action, intent state, and the message array
        response = {
            'sessionState': {
                'dialogAction': {
                    'type': 'Close'
                },
                'intent': {
                    'name': intent_name,
                    'state': 'Fulfilled'
                }
            },
            'messages': message_array
        }
    else:
        # Generic response for unhandled intents
        response = {
            'sessionState': {
                'dialogAction': {
                    'type': 'Close'
                },
                'intent': {
                    'name': intent_name,
                    'state': 'Fulfilled'
                }
            },
            'messages': [
                {
                    'contentType': 'PlainText',
                    'content': 'I apologize, but I am unable to assist.'
                }
            ]
        }
    return response

Replace INTENT_NAME and SLOT_NAME with your preferred intent and slot names, respectively.

CloudWatch data protection log group policies for data identifiers

Sensitive data that's ingested by CloudWatch Logs can be safeguarded by using log group data protection policies.
These policies allow to audit and mask sensitive data that appears in log events ingested by the log groups in your account.CloudWatch Logs supports both managed and custom data identifiers.Managed data identifiers offer preconfigured data types to protect financial data, personal health information (PHI), and PII. For some types of managed data identifiers, the detection depends on also finding certain keywords in proximity with the sensitive data.Each managed data identifier is designed to detect a specific type of sensitive data, such as name, email address, account numbers, AWS secret access keys, or passport numbers for a particular country or region. When creating a data protection policy, you can configure it to use these identifiers to analyze logs ingested by the log group, and take actions when they are detected.CloudWatch Logs data protection can detect the categories of sensitive data by using managed data identifiers.To configure managed data identifiers on the CloudWatch console, complete the following steps:On the CloudWatch console, under Logs in the navigation pane, choose Log groups.Select your log group and on the Actions menu, choose Create data protection policy.Under Auditing and masking configuration, for Managed data identifiers, select all the identifiers for which data protection policy should be applied.Choose the data store to apply the policy to and save the changes.Custom data identifiers let you define your own custom regular expressions that can be used in your data protection policy. With custom data identifiers, you can target business-specific PII use cases that managed data identifiers dont provide. For example, you can use custom data identifiers to look for a company-specific account number format.To create a custom data identifier on the CloudWatch console, complete the following steps:On the CloudWatch console, under Logs in the navigation pane, choose Log groups.Select your log group and on the Actions menu, choose Create data protection policy.Under Custom Data Identifier configuration, choose Add custom data identifier.Create your own regex patterns to identify sensitive information that is unique to your organization or specific use case.After you add your data identifier, choose the data store to apply this policy to.Choose Activate data protection.For details about the types of data that can be protected, refer to Types of data that you can protect.Monitor and protect data with Amazon S3In this section, we demonstrate how to protect your data in S3 buckets.Encrypt audio recordings in S3 bucketsPII can often be captured in audio recordings, especially in sectors like customer service, healthcare, and financial services, where sensitive information is frequently exchanged over voice interactions. To comply with domain-specific regulatory requirements, organizations must adopt stringent measures for managing PII in audio files.One approach is to disable the recording feature entirely if it poses too high a risk of non-compliance or if the value of the recordings doesnt justify the potential privacy implications. However, if audio recordings are essential, streaming the audio data in real time using Amazon Kinesis provides a scalable and secure method to capture, process, and analyze audio data. This data can then be exported to a secure and compliant storage solution, such as Amazon S3, which can be configured to meet specific compliance needs including encryption at rest. 
You can use AWS KMS or AWS CloudHSM to manage encryption keys, offering robust mechanisms to encrypt audio files at rest, thereby securing the sensitive information they might contain. Implementing these encryption measures makes sure that even if data breaches occur, the encrypted PII remains inaccessible to unauthorized parties. Configuring these AWS services allows organizations to balance the need for audio data capture with the imperative to protect sensitive information and comply with regulatory standards.

S3 bucket security configurations

You can use an AWS CloudFormation template to configure various security settings for an S3 bucket that stores Amazon Lex data like audio recordings and logs. For more information, see Creating a stack on the AWS CloudFormation console. See the following example code:

AWSTemplateFormatVersion: '2010-09-09'
Description: Create a secure S3 bucket with KMS encryption to store Lex Data
Resources:
  S3Bucket:
    Type: AWS::S3::Bucket
    Properties:
      BucketName: YOUR_LEX_DATA_BUCKET
      AccessControl: Private
      PublicAccessBlockConfiguration:
        BlockPublicAcls: true
        BlockPublicPolicy: true
        IgnorePublicAcls: true
        RestrictPublicBuckets: true
      BucketEncryption:
        ServerSideEncryptionConfiguration:
          - ServerSideEncryptionByDefault:
              SSEAlgorithm: aws:kms
              KMSMasterKeyID: alias/aws/s3
      VersioningConfiguration:
        Status: Enabled
      ObjectLockConfiguration:
        ObjectLockEnabled: Enabled
        Rule:
          DefaultRetention:
            Mode: GOVERNANCE
            Years: 5
      LoggingConfiguration:
        DestinationBucketName: !Ref YOUR_SERVER_ACCESS_LOG_BUCKET
        LogFilePrefix: lex-bucket-logs/

The template defines the following properties:
BucketName - Specifies your bucket. Replace YOUR_LEX_DATA_BUCKET with your preferred bucket name.
AccessControl - Sets the bucket access control to Private, denying public access by default.
PublicAccessBlockConfiguration - Explicitly blocks all public access to the bucket and its objects.
BucketEncryption - Enables server-side encryption using the default KMS encryption key ID, alias/aws/s3, managed by AWS for Amazon S3. You can also create custom KMS keys. For instructions, refer to Creating symmetric encryption KMS keys.
VersioningConfiguration - Enables versioning for the bucket, allowing you to maintain multiple versions of objects.
ObjectLockConfiguration - Enables object lock with a governance mode retention period of 5 years, preventing objects from being deleted or overwritten during that period.
LoggingConfiguration - Enables server access logging for the bucket, directing log files to a separate logging bucket for auditing and analysis purposes. Replace YOUR_SERVER_ACCESS_LOG_BUCKET with your preferred bucket name.

This is just an example; you may need to adjust the configurations based on your specific requirements and security best practices.

Monitor and protect with data governance controls and risk management policies

In this section, we demonstrate how to protect your data using a Service Control Policy (SCP). To create an SCP, see Creating an SCP.

Prevent changes to an Amazon Lex chatbot using an SCP

To prevent changes to an Amazon Lex chatbot using an SCP, create one that denies the specific actions related to modifying or deleting the chatbot.
For example, you could use the following SCP:{ "Version": "2012-10-17", "Statement": [ { "Effect": "Deny", "Action": [ "lex:DeleteBot", "lex:DeleteBotAlias", "lex:DeleteBotChannelAssociation", "lex:DeleteBotVersion", "lex:DeleteIntent", "lex:DeleteSlotType", "lex:DeleteUtterances", "lex:PutBot", "lex:PutBotAlias", "lex:PutIntent", "lex:PutSlotType" ], "Resource": [ "arn:aws:lex:*:YOUR_ACCOUNT_ID:bot:YOUR_BOT_NAME", "arn:aws:lex:*:YOUR_ACCOUNT_ID:intent:YOUR_BOT_NAME:*", "arn:aws:lex:*:YOUR_ACCOUNT_ID:slottype:YOUR_BOT_NAME:*" ], "Condition": { "StringEquals": { "aws:PrincipalArn": "arn:aws:iam::YOUR_ACCOUNT_ID:role/YOUR_IAM_ROLE" } } } ]}The code defines the following:Effect This is set to Deny, which means that the specified actions will be denied.Action This contains a list of actions related to modifying or deleting Amazon Lex bots, bot aliases, intents, and slot types.Resource This lists the Amazon Resource Names (ARNs) for your Amazon Lex bot, intents, and slot types. Replace YOUR_ACCOUNT_ID with your AWS account ID and YOUR_BOT_NAME with the name of your Amazon Lex bot.Condition This makes sure the policy only applies to actions performed by a specific IAM role. Replace YOUR_ACCOUNT_ID with your AWS account ID and YOUR_IAM_ROLE with the name of the AWS Identity and Access Management (IAM) provisioned role you want this policy to apply to.When this SCP is attached to an AWS Organizations organizational unit (OU) or an individual AWS account, it will allow only the specified provisioning role while preventing all other IAM entities (users, roles, or groups) within that OU or account from modifying or deleting the specified Amazon Lex bot, intents, and slot types.This SCP only prevents changes to the Amazon Lex bot and its components. It doesnt restrict other actions, such as invoking the bot or retrieving its configuration. If more actions need to be restricted, you can add them to the Action list in the SCP.Prevent changes to a CloudWatch Logs log group using an SCPTo prevent changes to a CloudWatch Logs log group using an SCP, create one that denies the specific actions related to modifying or deleting the log group. The following is an example SCP that you can use:{ "Version": "2012-10-17", "Statement": [ { "Effect": "Deny", "Action": [ "logs:DeleteLogGroup", "logs:PutRetentionPolicy" ], "Resource": "arn:aws:logs:*:YOUR_ACCOUNT_ID:log-group:/aws/YOUR_LOG_GROUP_NAME*", "Condition": { "StringEquals": { "aws:PrincipalArn": "arn:aws:iam::YOUR_ACCOUNT_ID:role/YOUR_IAM_ROLE" } } } ]}The code defines the following:Effect This is set to Deny, which means that the specified actions will be denied.Action This includes logs:DeleteLogGroup and logs:PutRetentionPolicy actions, which prevent deleting the log group and modifying its retention policy, respectively.Resource This lists the ARN for your CloudWatch Logs log group. Replace YOUR_ACCOUNT_ID with your AWS account ID and YOUR_LOG_GROUP_NAME with the name of your log group.Condition This makes sure the policy only applies to actions performed by a specific IAM role. 
Replace YOUR_ACCOUNT_ID with your AWS account ID and YOUR_IAM_ROLE with the name of the IAM provisioned role you want this policy to apply to.Similar to the preceding chatbot SCP, when this SCP is attached to an Organizations OU or an individual AWS account, it will allow only the specified provisioning role to delete the specified CloudWatch Logs log group or modify its retention policy, while preventing all other IAM entities (users, roles, or groups) within that OU or account from performing these actions.This SCP only prevents changes to the log group itself and its retention policy. It doesnt restrict other actions, such as creating or deleting log streams within the log group or modifying other log group configurations. To restrict additional actions, add it to the Action list in the SCP.Also, this SCP will apply to all log groups that match the specified resource ARN pattern. To target a specific log group, modify the Resource value accordingly.Restrict viewing of unmasked sensitive data in CloudWatch Logs Insights using an SCPWhen you create a data protection policy, by default, any sensitive data that matches the data identifiers youve selected is masked at all egress points, including CloudWatch Logs Insights, metric filters, and subscription filters. Only users who have the logs:Unmask IAM permission can view unmasked data. The following is an SCP you can use:{ "Version": "2012-10-17", "Statement": [ { "Sid": "RestrictUnmasking", "Effect": "Deny", "Action": "logs:Unmask", "Resource": "arn:aws:logs:*:YOUR_ACCOUNT_ID:log-group:YOUR_LOG_GROUP:*", "Condition": { "StringEquals": { "aws:PrincipalArn": "arn:aws:iam::YOUR_ACCOUNT_ID:role/YOUR_IAM_ROLE" } } } ]}It defines the following:Effect This is set to Deny, which means that the specified actions will be denied.Action This includes logs:Unmask, which prevents viewing of masked data.Resource This lists the ARN for your CloudWatch Logs log group. Replace YOUR_ACCOUNT_ID with your AWS account ID and YOUR_LOG_GROUP_NAME with the name of your log group.Condition This makes sure the policy only applies to actions performed by a specific IAM role. Replace YOUR_ACCOUNT_ID with your AWS account ID and YOUR_IAM_ROLE with the name of the IAM provisioned role you want this policy to apply to.Similar to the previous SCPs, when this SCP is attached to an Organizations OU or an individual AWS account, it will allow only the specified provisioning role while preventing all other IAM entities (users, roles, or groups) within that OU or account from unmasking sensitive data from the CloudWatch Logs log group.Similar to the previous log group service control policy, this SCP only prevents changes to the log group itself and its retention policy. It doesnt restrict other actions such as creating or deleting log streams within the log group or modifying other log group configurations. To restrict additional actions, add them to the Action list in the SCP.Also, this SCP will apply to all log groups that match the specified resource ARN pattern. 
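Once an SCP document like the ones above is written, it can be created and attached outside the console as well. The following is a minimal sketch, assuming boto3's organizations client, an SCP saved locally as scp.json, and a placeholder organizational unit ID; SCPs must be managed from the organization's management account with service control policies enabled:

# Minimal sketch (assumptions: AWS Organizations API via boto3, placeholder OU ID and file name)
import boto3

org = boto3.client("organizations")

# Load one of the SCP documents shown earlier (saved locally as scp.json for this sketch)
with open("scp.json") as f:
    scp_document = f.read()

created = org.create_policy(
    Content=scp_document,
    Description="Restrict changes to Lex bots and CloudWatch Logs log groups",
    Name="lex-cloudwatch-guardrails",
    Type="SERVICE_CONTROL_POLICY",
)

# Attach the policy to an organizational unit (or an individual account ID)
org.attach_policy(
    PolicyId=created["Policy"]["PolicySummary"]["Id"],
    TargetId="ou-exam-pleouid",  # placeholder OU or account ID
)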
To target a specific log group, modify the Resource value accordingly.Clean upTo avoid incurring additional charges, clean up your resources:Delete the Amazon Lex bot: On the Amazon Lex console, choose Bots in the navigation pane.Select the bot to delete and on the Action menu, choose Delete.Delete the associated Lambda function: On the Lambda console, choose Functions in the navigation pane.Select the function associated with the bot and on the Action menu, choose Delete.Delete the account-level data protection policy. For instructions, see DeleteAccountPolicy.Delete the CloudFormation log group policy: On the CloudWatch console, under Logs in the navigation pane, choose Log groups.Choose your log group.On the Data protection tab, under Log group policy, choose the Actions menu and choose Delete policy.Delete the S3 bucket that stores the Amazon Lex data: On the Amazon S3 console, choose Buckets in the navigation pane.Select the bucket you want to delete, then choose Delete.To confirm that you want to delete the bucket, enter the bucket name and choose Delete bucket.Delete the CloudFormation stack. For instructions, see Deleting a stack on the AWS CloudFormation console.Delete the SCP. For instructions, see Deleting an SCP.Delete the KMS key. For instructions, see Deleting AWS KMS keys.ConclusionSecuring PII within AWS services like Amazon Lex and CloudWatch requires a comprehensive and proactive approach. By following the steps in this postidentifying and classifying data, locating data stores, monitoring and protecting data in transit and at rest, and implementing SCPs for Amazon Lex and Amazon CloudWatchorganizations can create a robust security framework. This framework not only protects sensitive data, but also complies with regulatory standards and mitigates potential risks associated with data breaches and unauthorized access.Emphasizing the need for regular audits, continuous monitoring, and updating security measures in response to emerging threats and technological advancements is crucial. Adopting these practices allows organizations to safeguard their digital assets, maintain customer trust, and build a reputation for strong data privacy and security in the digital landscape.About the AuthorsRashmica Gopinath is a software development engineer with Amazon Lex. Rashmica is responsible for developing new features, improving the services performance and reliability, and ensuring a seamless experience for customers building conversational applications. Rashmica is dedicated to creating innovative solutions that enhance human-computer interaction. In her free time, she enjoys winding down with the works of Dostoevsky or Kafka.Dipkumar Mehta is a Principal Consultant with the Amazon ProServe Natural Language AI team. He focuses on helping customers design, deploy, and scale end-to-end Conversational AI solutions in production on AWS. He is also passionate about improving customer experience and driving business outcomes by leveraging data. Additionally, Dipkumar has a deep interest in Generative AI, exploring its potential to revolutionize various industries and enhance AI-driven applications.David Myers is a Sr. Technical Account Manager with AWS Enterprise Support . With over 20 years of technical experience observability has been part of his career from the start. 
David loves improving customers observability experiences at Amazon Web Services.Sam Patel is a Security Consultant specializing in safeguarding Generative AI (GenAI), Artificial Intelligence systems, and Large Language Models (LLM) for Fortune 500 companies. Serving as a trusted advisor, he invents and spearheads the development of cutting-edge best practices for secure AI deployment, empowering organizations to leverage transformative AI capabilities while maintaining stringent security and privacy standards.
Content Creation/Detection and Monitoring
Management/Business and Financial Operations
null
null
null
null
null
null
news
Tanay Varshney
Creating Synthetic Data Using Llama 3.1 405B
Synthetic data isn’t about creating new information. It’s about transforming existing information to create different variants. For over a decade, synthetic data has been used to improve model…
https://developer.nvidia.com/blog/creating-synthetic-data-using-llama-3-1-405b/
https://developer-blogs.…nthetic-data.jpg
2024-07-23T15:25:57Z
Synthetic data isnt about creating new information. It’s about transforming existing information to create different variants. For over a decade, synthetic data has been used to improve model accuracy across the boardwhether it is transforming images to improve object detection models, strengthening fraudulent credit card detection, or improving BERT models for QA. Whats new? With the advent of large language models (LLMs), both the motivation for generating synthetic data and the techniques for generating it have been supercharged.Enterprises across industries are generating synthetic data to fine-tune foundation LLMs for various use cases, such as improving risk assessment in finance, optimizing supply chains in retail, improving customer service in telecom, and advancing patient care in healthcare.Today, Meta released Llama 3.1 405B, their most powerful open LLM that can be used for both batch and online inference. It can also serve as a base to do specialized pretraining or fine-tuning for a specific domain. Given the size of the model and the amount of data it was trained on, it’s well-suited for synthetic data generation. In this blog post, we’ll cover a few application cases for synthetic data generation and dive deep into one of them.Lets take a look at a few of the high-level application cases for synthetic data in the generative AI space where you can use the Llama 3.1 405B to get started.  There are broadly two approaches that are considered for generating synthetic data for tuning modelsknowledge distillation and self-improvement. Knowledge distillation is the process of translating the capabilities of a larger model into a smaller model. This isn’t possible by simply training both models on the same dataset as the smaller model may not learn the most accurate representation of the underlying data. In this case, we can use the larger model to solve a task and use that data to make the smaller model imitate the larger one. Self-improvement involves using the same model to criticize its own reasoning and is often used to further hone the models capabilities. Both these approaches can be used to leverage the Llama 405B model to improve smaller LLMs. Lets take a look at a few ways how this can be achieved. Training an LLM involves a three-step process: pretraining, fine-tuning, and alignment. Pretraining: This involves using an extremely large corpus of information to train the model on how the general structure of a language is organized. While for a generic LLM, this is typically done with Internet-scale data, for any domain-specific LLMs, we need to imbue the specifics on that domain (think LLM for geometry, LLM for radiology, and LLM for telco). This is called domain adaptive pretraining (DAPT). Another example of the application of synthetic data in the pretraining stage is the popular Phi-1.5 model, where a large model was used to synthesize data for imbuing logical reasoning at the pretraining stage. Fine-tuning: Once the model is trained for general language structure, the next step is to fine-tune it for following specific instructions. For example, tuning the model to be better at reading comprehension-type extractive questions, improving logical reasoning, achieving better code generation, and function calling fall under this category. Self-Instruct, WizardCoder, Alpaca, and more employ these techniques to create task-specific fine-tuning data. 
Refer to this example for curating domain-specific data to learn more.Alignment: Lastly, we want to ensure that the style and tone of an LLM response align with the user’s expectations, such as sounding conversational, having appropriate verbosity, complexity, coherence, and other user-defined attributes. This can be achieved by using a pipeline that has an instruct model and a reward model. The chat model creates multiple responses and the reward model gives feedback about the quality of the response. This technique falls under the umbrella of reinforcement learning from AI feedback (RLAIF). This notebook will walk you through how the new Llama 405B model together with the NVIDIA 340B Reward model can be used to generate synthetic data for model alignment.Since the application space for synthetic data is vast, lets focus this discussion on LLM-adjacent models and LLM-powered pipelines. Retrieval-augmented generation (RAG) uses both an embedding model to retrieve the relevant information and an LLM to generate the answer. The embedding model generates a mathematical representation for the semantics of text. We can use LLMs to parse through underlying documents and synthesis data for both evaluating and fine-tuning the embedding model. Similar to RAG, any agentic pipeline can be evaluated and its component models fine-tuned. This can be achieved by building simulations with LLM-powered agents.These simulations can also be usedto study behavioral patterns. Furthermore, personas can be induced into LLMs to generate task-specific data at scale. To crystalize the discussion above, lets think through a basic pipeline for one of the use cases discussed abovegenerating evaluation data for retrieval. Follow along with this notebook.The primary challenges for curating data for evaluating a retrieval pipeline are:Diversity: The questions shouldnt focus on a single aspect of information or just have extractive questions.Complexity: Generated questions should require some reasoning or multiple pieces of evidence to answer the question. We’ll focus on diversity, but to explore the complexity anglethe key is to find chunks with overlapping points of information. A couple of approaches to finding overlapping information are calculating Jaccard similarity over sentence-level semantics and leveraging long context models to draw correlations across chunks from the same document.Diversity stems from different perspectives. For example, consider the following passage.The proposed acquisition of GreenTech Inc. by SolarPower Corporation stands as one of the most notable transactions in the renewable energy sector this year. Valued at $3 billion, the deal aims to combine GreenTech’s cutting-edge battery technology with SolarPower’s extensive solar panel manufacturing and distribution network. The anticipated operational synergies are expected to result in a 20% reduction in production costs and a 15% increase in revenue over the next two years. However, the transaction is under intense scrutiny from regulatory bodies due to potential antitrust concerns. The Federal Trade Commission (FTC) has indicated that the merger could potentially create a monopoly in the renewable energy storage market, potentially stifling competition and innovation.SolarPower has committed to maintaining GreenTech’s research and development (R&D) center, which employs over 500 scientists and engineers, as an independent entity to preserve its innovative culture. 
Additionally, all existing employment contracts will be honored, alleviating concerns about potential layoffs. The merger agreement includes a $150 million breakup fee, payable to GreenTech if SolarPower fails to secure the necessary regulatory approvals, thereby mitigating financial risks for GreenTech should the deal fall through.The agreement includes detailed representations and warranties, specifying the accuracy of financial statements, the absence of undisclosed liabilities, and compliance with applicable laws. It also entails a thorough indemnification process to protect both parties against potential breaches of these representations and warranties. SolarPower and GreenTech have agreed to covenants that restrict GreenTech from incurring new debt, issuing additional shares, or significantly altering business operations without SolarPower’s consent prior to the deals closure. These covenants are designed to preserve the value of GreenTech and ensure a smooth transition post-merger. The agreement further outlines a comprehensive due diligence process, including environmental assessments and audits of GreenTechs intellectual property portfolio, to ensure all assets and liabilities are accurately accounted for before the finalization of the transaction.The European Commission is also reviewing the merger to assess its impact on the EU market, particularly regarding competition and market dominance. This evaluation involves submitting detailed filings that include market analyses, competitive impact assessments, and economic justifications for the merger. The review process requires both companies to respond promptly to inquiries and provide comprehensive documentation. Additionally, to secure approval, SolarPower and GreenTech may need to make concessions, such as divesting certain business units or assets, to alleviate concerns about reduced competition. Ensuring compliance with the EU Merger Regulation involves not only addressing competitive effects but also ensuring that the merger aligns with broader EU policies on market fairness and consumer protection.A financial analyst is interested in the financial performance of the two companies before and after the merger. Legal experts may be interested in the legal scrutiny the company has faced from the FTC, EU, and other parties. A journalist would be looking to understand the main points. All these are valid viewpoints and user personas, and since they approach the same information with different points of view, an evaluation pipeline also needs to accommodate the same. So, lets design a pipeline that takes in documents and personas and gives out questions in a tone that the persona will ask them in. Conceptually, this pipeline has three main steps as seen in Figure 1.Step 1: Generate all possible questions, which would be of interest to the personas.Step 2: Filter all the generated questions.Step 3: Induce the persona’s writing style.Before diving into question generation, we need to ingest the document and create chunks out of it. For the rest of this discussion, lets use Figure 1 as the reference chunk of text. User persona is just a description of the user who may be asking the question. Refer to the following examples.Persona 1Joan is a very senior financial analyst and focuses on using econometrics to recommend investment strategies. Joan is used to having a team of analysts who they can ask for information, so they may not be up to date with the specifics so they may ask vague questions. 
However, they are very knowledgeable about the general topic.Persona 2Padma is a seasoned corporate litigator with over 10 years of experience in handling complex legal cases for large corporations. She has a no-nonsense approach and is known for her sharp analytical mind and attention to detail.Persona 3Aaron is an underconfident journalism major and thus doesn’t probe the underlying material too deeply. He is still new to the English language so doesn’t have that much proficiency. He also has a bad habit of sensationalizing things. Once ingested, an LLM extracts points of interest from the given chunk for each of the personas. Multiple personas can have similar points of interest, so we use an embedding model to run semantic deduplication. This maps out different interesting pieces of information in a passage. The other aspect of diversity is the type of questions being asked. We need to ask questions that are extractive, abstractive, comparative, and so on, and not just straight how/what questions. To that end, the next step is to identify the types of questions that are applicable to each point of interest given the information in the passage. Lastly, with the chunk-interest-question type triplet, we generate all possible questions. Directing the questions of generation using personas and types of questions enables developers to steer the questions generated toward the types of questions their users would ask.Sample areas of interest and types of questions:Antitrust regulatory scrutiny: abstractive, diagnosticOperational synergies: extractive, abstractiveInnovation preservation: extractive, abstractive, diagnosticEU merger regulation: abstractive, diagnostic, extractive, aggregativeSample questions:What potential risks or drawbacks might arise from the merger between SolarPower Corporation and GreenTech Inc., and how do the companies plan to mitigate them?How might the European Commission’s review of the merger impact the terms of the agreement, and what concessions might SolarPower and GreenTech need to make to secure approval?What strategic benefits do SolarPower and GreenTech hope to achieve through the merger, and how do they plan to integrate their operations to realize these benefits,What are the three main benefits of the proposed acquisition of GreenTech Inc. by SolarPower Corporation, and how do they relate to the company’s operations and finances?How many regulatory bodies are reviewing the merger, and what are the specific concerns they are addressing in their evaluations?What concessions or divestitures might SolarPower Corporation and GreenTech Inc. need to make in order to secure approval for the merger from the European Commission, and how might these concessions impact the company’s operations and market position?What is the primary concern of the FTC regarding the proposed acquisition of GreenTech Inc. by SolarPower Corporation, and how might this problem impact the renewable energy storage market?Once the questions are generated, the next step is filtering and extracting the most useful subset. The first step is to deduplicate across all the questions that have been generated. We need a deduplication pass, as different points of interest can make use of adjacent points of information and spawn across overlapping questions. Next, we use an LLM as a judge to determine the relevance of the question to the underlying passage. With this, we are trying to ensure that the question is completely answerable by the information present in the passage. 
This is followed up by rewriting all relevant questions to have a conversational tone. Lastly, we have another filter to categorize and filter out questions that may be too general. In the first two steps, we created and curated diverse questions. The final step is to imbue the writing style of the personas with all the questions. Using LLMs, we first formulate the writing styles from the given persona description. Then using these writing styles, the questions are re-written.Writing style samplesPadma’s writing style is characterized by clarity, precision, and a formal tone. She writes in a direct and assertive manner, using simple and concise language to convey complex ideas. Her sentences are well-structured and logically connected, reflecting her analytical mind and attention to detail. She avoids using emotional language, personal opinions, or rhetorical flourishes, instead focusing on presenting facts and arguments in a clear and objective manner. Her writing is free of ambiguity and vagueness, with each point carefully supported by evidence and reasoning. The overall tone is professional and authoritative, commanding respect and attention from the reader. While her writing may not be engaging or persuasive in a creative sense, it is highly effective in conveying her message and achieving her goals in a corporate litigation context.Aaron’s writing is marked by a lack of depth and analysis, often skimming the surface of complex issues. His sentences are short and simple, reflecting his limited proficiency in English. Despite his best efforts, errors in grammar, syntax, and word choice are common. To compensate for his lack of confidence, Aaron often resorts to sensationalism, exaggerating or distorting facts to make them more attention-grabbing. His tone is hesitant and uncertain as if he’s not quite sure of himself. Overall, Aaron’s writing style is more akin to a tabloid journalist than a serious news reporter.At the end of this three-step pipeline, we end up with questions like:In light of the prevailing regulatory framework, what additional policy directives is it likely that the proposed merger will need to conform to secure approval from the relevant authorities?What specific aspects of the SolarPower and GreenTech merger are currently under review by the relevant regulatory authorities?Will GreenTechs brainiacs get the boot if the R&D center stays solo after the big buyout?These questions have implicit ground-truth labels to their specific chunks and can thus be used for evaluating various retrieval pipelines. If you are interested in the granular details or want to learn how to improve and customize this pipeline for your use case, refer to this Jupyter Notebook.Synthetic data generation is a critical workflow for enterprises to fuel their domain-specific generative AI applications. The new Llama 3.1 405B model, when paired with the NVIDIA Nemotron-4 340B reward model, generates synthetic data, enabling enterprises to build more accurate, domain-specific custom models. RAG pipelines are critical for LLMs to generate grounded responses based on up-to-date information, and the accuracy of these responses depends on the quality of the pipeline. The synthetic data generation workflow described above can help evaluate the RAG for enterprises.To get started with Llama 3.1 and NVIDIA Nemotron-4 models, visit ai.nvidia.com.
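To make the three-step pipeline above concrete, the following is a minimal sketch of the first two steps: persona-conditioned question generation and embedding-based deduplication. It assumes an OpenAI-compatible endpoint serving Llama 3.1 405B Instruct plus the sentence-transformers and scikit-learn packages; the base URL, API key, model names, and similarity threshold are placeholders rather than the exact setup used in the reference notebook.

from openai import OpenAI
from sentence_transformers import SentenceTransformer
from sklearn.metrics.pairwise import cosine_similarity

# Placeholder endpoint and model name for an OpenAI-compatible server hosting Llama 3.1 405B Instruct
client = OpenAI(base_url="https://your-endpoint.example.com/v1", api_key="YOUR_API_KEY")
TEACHER_MODEL = "meta/llama-3.1-405b-instruct"

# Small open embedding model, used here only to illustrate semantic deduplication
embedder = SentenceTransformer("all-MiniLM-L6-v2")

def generate_questions(chunk: str, persona: str, n: int = 5) -> list[str]:
    """Step 1: ask the teacher model for questions this persona would ask about the chunk."""
    prompt = (
        f"Persona: {persona}\n\nPassage:\n{chunk}\n\n"
        f"Write {n} questions this persona would ask that are fully answerable "
        f"from the passage. Return one question per line."
    )
    resp = client.chat.completions.create(
        model=TEACHER_MODEL,
        messages=[{"role": "user", "content": prompt}],
        temperature=0.8,
        max_tokens=512,
    )
    lines = resp.choices[0].message.content.splitlines()
    return [line.lstrip("-*0123456789. ").strip() for line in lines if line.strip()]

def dedupe(questions: list[str], threshold: float = 0.85) -> list[str]:
    """Step 2: drop near-duplicate questions using cosine similarity of embeddings."""
    kept, kept_vecs = [], []
    for question, vector in zip(questions, embedder.encode(questions)):
        if all(cosine_similarity([vector], [kv])[0][0] < threshold for kv in kept_vecs):
            kept.append(question)
            kept_vecs.append(vector)
    return kept

The LLM-as-judge relevance filter and the persona style rewrite described above would be additional prompts layered on top of these two helpers.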
Content Creation/Process Automation
Unknown
null
null
null
null
null
null
news
xfastertransformer-gnr added to PyPI
Boost large language model inference performance on CPU platform.
https://pypi.org/project/xfastertransformer-gnr/
https://pypi.org/static/…er.abaf4b19.webp
2024-07-23T01:49:58Z
xFasterTransformer is an exceptionally optimized solution for large language models (LLM) on the X86 platform, which is similar to FasterTransformer on the GPU platform. xFasterTransformer is able to operate in distributed mode across multiple sockets and nodes to support inference on larger models. Additionally, it provides both C++ and Python APIs, spanning from high-level to low-level interfaces, making it easy to adopt and integrate.

Table of Contents

Models overview

Large Language Models (LLMs) develop very fast and are more widely used in many AI scenarios. xFasterTransformer is an optimized solution for LLM inference using the mainstream and popular LLM models on Xeon. xFasterTransformer fully leverages the hardware capabilities of Xeon platforms to achieve high performance and high scalability of LLM inference, both on a single socket and across multiple sockets/multiple nodes.

xFasterTransformer provides a series of APIs, in both C++ and Python, for end users to integrate xFasterTransformer into their own solutions or services directly. Many kinds of example code are also provided to demonstrate the usage. Benchmark code and scripts are provided for users to measure the performance. Web demos for popular LLM models are also provided.

Model support matrix (the table tracks framework support for PyTorch and C++ plus distribution support per model): ChatGLM, ChatGLM2, ChatGLM3, GLM4, Llama, Llama2, Llama3, Baichuan, Baichuan2, QWen, QWen2, SecLLM(YaRN-Llama), Opt, Deepseek-coder, gemma, gemma-1.1, codegemma

DataType support list: FP16, BF16, INT8, W8A8, INT4, NF4, BF16_FP16, BF16_INT8, BF16_W8A8, BF16_INT4, BF16_NF4, W8A8_INT8, W8A8_int4, W8A8_NF4

Documents

xFasterTransformer Documents and Wiki provide the following resources:
- An introduction to xFasterTransformer.
- Comprehensive API references for both high-level and low-level interfaces in C++ and PyTorch.
- Practical API usage examples for xFasterTransformer in both C++ and PyTorch.

Installation

From PyPI:

pip install xfastertransformer

Using Docker:

docker pull intel/xfastertransformer:latest

Run the docker with the following command (assume model files are in the /data/ directory):

docker run -it \
  --name xfastertransformer \
  --privileged \
  --shm-size=16g \
  -v /data/:/data/ \
  -e "http_proxy=$http_proxy" \
  -e "https_proxy=$https_proxy" \
  intel/xfastertransformer:latest

Notice!!!: Please enlarge --shm-size if a bus error occurs while running in multi-rank mode.
The default docker limits the shared memory size to 64MB and our implementation uses many shared memories to achieve better performance.

Built from source

Prepare Environment

Manually:
- PyTorch v2.3 (required when using the PyTorch API, but not needed when using the C++ API):
  pip install torch --index-url https://download.pytorch.org/whl/cpu
  For GPU, xFT needs ABI=1 from torch==2.3.0+cpu.cxx11.abi in the torch whl list, because DPC++ needs ABI=1.
- Install dependent libraries. Please install the libnuma package:
  CentOS: yum install libnuma-devel
  Ubuntu: apt-get install libnuma-dev

How to build

Using CMake:

# Build xFasterTransformer
git clone https://github.com/intel/xFasterTransformer.git xFasterTransformer
cd xFasterTransformer
git checkout <latest-tag>
# Please make sure torch is installed when running the python example
mkdir build && cd build
cmake ..
make -j

Using python setup.py:

# Build xFasterTransformer library and C++ example.
python setup.py build
# Install xFasterTransformer into the pip environment.
# Notice: Run `python setup.py build` before installation!
python setup.py install

xFasterTransformer supports a model format that differs from Huggingface's, but it's compatible with FasterTransformer's format. Download the huggingface format model first. After that, convert the model into xFasterTransformer format by using the model convert module in xfastertransformer. If the output directory is not provided, the converted model will be placed into ${HF_DATASET_DIR}-xft.

python -c 'import xfastertransformer as xft; xft.LlamaConvert().convert("${HF_DATASET_DIR}","${OUTPUT_DIR}")'

PS: Due to potential compatibility issues between the model file and the transformers version, please select the appropriate transformers version.

Supported model convert list: LlamaConvert, YiConvert, GemmaConvert, ChatGLMConvert, ChatGLM2Convert, ChatGLM4Convert, OPTConvert, BaichuanConvert, Baichuan2Convert, QwenConvert, Qwen2Convert, DeepseekConvert

API usage

For more details, please see the API document and examples.

Python API (PyTorch)

First, please install the dependencies.
- Python dependencies:
  pip install -r requirements.txt
  PS: Due to potential compatibility issues between the model file and the transformers version, please select the appropriate transformers version.
- oneCCL (for multiple ranks): Install oneCCL and set up the environment. Please refer to Prepare Environment.

xFasterTransformer's Python API is similar to transformers and also supports transformers's streamer to achieve streaming output.
In the example, we use transformers to encode input prompts to token ids.

import xfastertransformer
from transformers import AutoTokenizer, TextStreamer

# Assume huggingface model dir is `/data/chatglm-6b-hf` and converted model dir is `/data/chatglm-6b-xft`.
MODEL_PATH = "/data/chatglm-6b-xft"
TOKEN_PATH = "/data/chatglm-6b-hf"

INPUT_PROMPT = "Once upon a time, there existed a little girl who liked to have adventures."
tokenizer = AutoTokenizer.from_pretrained(TOKEN_PATH, use_fast=False, padding_side="left", trust_remote_code=True)
streamer = TextStreamer(tokenizer, skip_special_tokens=True, skip_prompt=False)
input_ids = tokenizer(INPUT_PROMPT, return_tensors="pt", padding=False).input_ids
model = xfastertransformer.AutoModel.from_pretrained(MODEL_PATH, dtype="bf16")
generated_ids = model.generate(input_ids, max_length=200, streamer=streamer)

C++ API

SentencePiece can be used to tokenize and detokenize text.

#include <vector>
#include <iostream>
#include "xfastertransformer.h"

// ChatGLM token ids for prompt "Once upon a time, there existed a little girl who liked to have adventures."
std::vector<int> input({3393, 955, 104, 163, 6, 173, 9166, 104, 486, 2511, 172, 7599, 103, 127, 17163, 7, 130001, 130004});

// Assume converted model dir is `/data/chatglm-6b-xft`.
xft::AutoModel model("/data/chatglm-6b-xft", xft::DataType::bf16);

model.config(/*max length*/ 100, /*num beams*/ 1);
model.input(/*input token ids*/ input, /*batch size*/ 1);

while (!model.isDone()) {
    std::vector<int> nextIds = model.generate();
}

std::vector<int> result = model.finalize();
for (auto id : result) {
    std::cout << id << " ";
}
std::cout << std::endl;

How to run

Recommend preloading libiomp5.so to get better performance.
- [Recommended] Run export $(python -c 'import xfastertransformer as xft; print(xft.get_env())') if xfastertransformer's python wheel package is installed.
- The libiomp5.so file will be in the 3rdparty/mkl/lib directory after building xFasterTransformer successfully, if building from source code.

Single rank

xFasterTransformer will automatically check the MPI environment, or you can use the SINGLE_INSTANCE=1 environment variable to forcefully deactivate MPI.

Multi ranks

Command line: Use MPI to run in multi-rank mode; please install oneCCL first.

oneCCL Installation:
- If you have built xfastertransformer from source, oneCCL is installed in 3rdparty during compilation.
  source ./3rdparty/oneccl/build/_install/env/setvars.sh
- [Recommended] Use the provided scripts to build it from source code.
  cd 3rdparty
  sh prepare_oneccl.sh
  source ./oneccl/build/_install/env/setvars.sh
- Install oneCCL by installing the Intel® oneAPI Base Toolkit. (Notice: it is recommended to use versions 2023.x and below.) Then source the environment by:
  source /opt/intel/oneapi/setvars.sh

Here is an example on a local machine:

# or export LD_PRELOAD=libiomp5.so manually
export $(python -c 'import xfastertransformer as xft; print(xft.get_env())')
OMP_NUM_THREADS=48 mpirun \
  -n 1 numactl -N 0 -m 0 ${RUN_WORKLOAD} : \
  -n 1 numactl -N 1 -m 1 ${RUN_WORKLOAD}

Code: For more details, please refer to examples.

Python: model.rank gets the process's rank; model.rank == 0 is the Master. For Slaves, after loading the model, the only thing needed is to call model.generate().
The input and generation configuration will be synced automatically.

  model = xfastertransformer.AutoModel.from_pretrained("/data/chatglm-6b-xft", dtype="bf16")

  # Slave
  while True:
      model.generate()

C++

model.getRank() gives the process's rank; model.getRank() == 0 is the Master.

For Slaves, any value can be passed to model.config() and model.input() since the Master's values will be synced.

  xft::AutoModel model("/data/chatglm-6b-xft", xft::DataType::bf16);

  // Slave
  while (1) {
      model.config();
      std::vector<int> input_ids;
      model.input(/*input token ids*/ input_ids, /*batch size*/ 1);
      while (!model.isDone()) {
          model.generate();
      }
  }

A web demo based on Gradio is provided in the repo. It currently supports the ChatGLM, ChatGLM2 and Llama2 models.

Prepare the model.

Install the dependencies:

  pip install -r examples/web_demo/requirements.txt

PS: Due to potential compatibility issues between the model file and the transformers version, please select the appropriate transformers version.

Run the script corresponding to the model. After the web server has started, open the output URL in the browser to use the demo. Please specify the paths of the model and tokenizer directories and the data type. The transformers tokenizer is used to encode and decode text, so ${TOKEN_PATH} means the Hugging Face model directory. This demo also supports multi-rank runs.

  # Recommend preloading `libiomp5.so` to get better performance.
  # or set LD_PRELOAD=libiomp5.so manually; the `libiomp5.so` file will be in the `3rdparty/mkl/lib` directory after building xFasterTransformer.
  export $(python -c 'import xfastertransformer as xft; print(xft.get_env())')
  python examples/web_demo/ChatGLM.py \
    --dtype=bf16 \
    --token_path=${TOKEN_PATH} \
    --model_path=${MODEL_PATH}

Serving

vLLM

A fork of vLLM has been created to integrate the xFasterTransformer backend, maintaining compatibility with most of the official vLLM features. Refer to this link for more detail.

Install:

  pip install vllm-xft

Notice: Please do not install both vllm-xft and vllm in the same environment. Although the package names are different, they will actually overwrite each other.

OpenAI Compatible Server

Notice: Preloading libiomp5.so is required!

  # Preload libiomp5.so with the following command, or set LD_PRELOAD=libiomp5.so manually
  export $(python -c 'import xfastertransformer as xft; print(xft.get_env())')
  python -m vllm.entrypoints.openai.api_server \
    --model ${MODEL_PATH} \
    --tokenizer ${TOKEN_PATH} \
    --dtype bf16 \
    --kv-cache-dtype fp16 \
    --served-model-name xft \
    --port 8000 \
    --trust-remote-code

For multi-rank mode, please use python -m vllm.entrypoints.slave as the slave and keep the parameters of the slaves aligned with the master.

  # Preload libiomp5.so with the following command, or set LD_PRELOAD=libiomp5.so manually
  export $(python -c 'import xfastertransformer as xft; print(xft.get_env())')
  OMP_NUM_THREADS=48 mpirun \
    -n 1 numactl --all -C 0-47 -m 0 \
      python -m vllm.entrypoints.openai.api_server \
        --model ${MODEL_PATH} \
        --tokenizer ${TOKEN_PATH} \
        --dtype bf16 \
        --kv-cache-dtype fp16 \
        --served-model-name xft \
        --port 8000 \
        --trust-remote-code \
    : -n 1 numactl --all -C 48-95 -m 1 \
      python -m vllm.entrypoints.slave \
        --dtype bf16 \
        --model ${MODEL_PATH} \
        --kv-cache-dtype fp16

FastChat

xFasterTransformer is an official inference backend of FastChat. Please refer to xFasterTransformer in FastChat and FastChat's serving for more details.

MLServer

An example MLServer serving setup is provided; it supports REST and gRPC interfaces and an adaptive batching feature to group inference requests together on the fly.

Benchmark scripts are provided to measure model inference performance quickly.

Prepare the model.

Install the dependencies, including oneCCL and the Python dependencies.

Enter the benchmark folder and run run_benchmark.sh.
Please refer to the Benchmark README for more information.

Notes!!!: The system and CPU configuration may differ. For the best performance, please try to adjust OMP_NUM_THREADS, the data type, and the number of memory nodes (check the memory nodes using numactl -H) according to your test environment.

Support

Accepted Papers

If xFT is useful for your research, please cite:

  @article{he2024distributed,
    title={Distributed Inference Performance Optimization for LLMs on CPUs},
    author={He, Pujiang and Zhou, Shan and Li, Changqing and Huang, Wenhuan and Yu, Weifei and Wang, Duyi and Meng, Chen and Gui, Sheng},
    journal={arXiv preprint arXiv:2407.00029},
    year={2024}
  }

and

  @inproceedings{he2024inference,
    title={Inference Performance Optimization for Large Language Models on CPUs},
    author={He, Pujiang and Zhou, Shan and Huang, Wenhuan and Li, Changqing and Wang, Duyi and Guo, Bin and Meng, Chen and Gui, Sheng and Yu, Weifei and Xie, Yi},
    booktitle={ICML 2024 Workshop on Foundation Models in the Wild}
  }

Q&A

Q: Can xFasterTransformer run on an Intel® Core CPU?
A: No. xFasterTransformer requires support for the AMX and AVX512 instruction sets, which are not available on Intel® Core CPUs.

Q: Can xFasterTransformer run on Windows?
A: There is no native support for Windows, and all compatibility tests are conducted only on Linux, so Linux is recommended.

Q: Why does the program freeze or exit with errors when running in multi-rank mode after installing the latest version of oneCCL through oneAPI?
A: Please try downgrading oneAPI to version 2023.x or below, or use the provided script to install oneCCL from source code.

Q: Why does running the program across two CPU sockets result in much lower performance compared to running on a single CPU socket?
A: Running this way causes the program to perform many unnecessary cross-socket communications, which significantly impacts performance. If cross-socket deployment is needed, consider running in multi-rank mode with one rank on each socket.

Q: The performance is normal when running a single rank, but why is performance very slow and CPU utilization very low when using MPI to run multiple ranks?
A: Programs launched through MPI read OMP_NUM_THREADS=1 and cannot correctly retrieve the appropriate value from the environment. It is necessary to set OMP_NUM_THREADS manually based on the actual situation.

Q: Why do I still encounter errors when converting already supported models?
A: Try downgrading transformers to an appropriate version, such as the version specified in requirements.txt. Different versions of transformers may change the names of certain variables.
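To round out the Serving section above, here is a minimal, hedged sketch of querying the OpenAI-compatible server once it is running. It assumes the server was started as shown earlier (port 8000, --served-model-name xft) and uses the standard openai Python client; the prompt, token limit, and temperature are illustrative placeholders rather than values taken from this document.

  # Hedged sketch: call the vLLM-xft OpenAI-compatible server started above.
  # Assumes --port 8000 and --served-model-name xft; the API key is unused for a local server.
  from openai import OpenAI

  client = OpenAI(base_url="http://localhost:8000/v1", api_key="EMPTY")

  response = client.chat.completions.create(
      model="xft",  # matches --served-model-name
      messages=[{"role": "user", "content": "Once upon a time, there existed a little girl who liked to have adventures."}],
      max_tokens=128,
      temperature=0.7,
  )
  print(response.choices[0].message.content)

Because the server follows the OpenAI wire format, any existing OpenAI-compatible tooling should work the same way once pointed at the local base URL.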
Unknown
Unknown
null
null
null
null
null
null
news
Simon Bisson
Understanding DiskANN, a foundation of the Copilot Runtime
One of the key components of Microsoft’s Copilot Runtime edge AI development platform for Windows is a new vector search technology, DiskANN (Disk Accelerated Nearest Neighbors). Building on a long-running Microsoft Research project, DiskANN is a way of building and managing vector indexes inside your applications. It uses a mix of in-memory and disk storage to map an in-memory quantized vector graph to a high-precision graph held on disk.
https://www.infoworld.com/article/3715630/understanding-diskann-a-foundation-of-the-copilot-runtime.html
https://images.idgesg.ne…bp&quality=85,70
2024-07-04T09:00:00Z
One of the key components of Microsoft's Copilot Runtime edge AI development platform for Windows is a new vector search technology, DiskANN (Disk Accelerated Nearest Neighbors). Building on a long-running Microsoft Research project, DiskANN is a way of building and managing vector indexes inside your applications. It uses a mix of in-memory and disk storage to map an in-memory quantized vector graph to a high-precision graph held on disk.

What is DiskANN?

Although it's not an exact match, you can think of DiskANN as the vector index equivalent of tools like SQLite. Added to your code, it gives you a straightforward way to search across a vector index made up of semantic embeddings from a small language model (SLM) such as the Copilot Runtime's Phi Silica.

It's important to understand that DiskANN is not a database; it's a set of algorithms delivered as a tool for adding vector indexes to other stores that aren't designed to support vector searches. This makes it an ideal companion to other embedded stores, whether relational or a NoSQL key value store.

The requirement for in-memory and disk storage helps explain some of the hardware specifications for Copilot+ PCs, with double the previous Windows base memory requirements as well as larger, faster SSDs. Usefully, there's a lower CPU requirement than for other vector search algorithms, with at-scale implementations in Azure services requiring only 5% of the CPU traditional methods use.

You'll need a separate store for the data that's being indexed. Having separate stores for both your indexes and the source of your embeddings does have its issues. If you're working with personally identifiable information or other regulated data, you can't neglect ensuring that the source data is encrypted. This can add overhead on queries, but interestingly Microsoft is working on software-based secure enclaves that can both encrypt data at rest and in use, reducing the risk of PII leaking or prompts being manipulated by malware.

DiskANN is an implementation of an approximate nearest neighbor search, using a Vamana graph index. It's designed to work with data that changes frequently, which makes it a useful tool for agent-like AI applications that need to index local files or data held in services like Microsoft 365, such as email or Teams chats.

Getting started with diskannpy

A useful quick start comes in the shape of the diskannpy Python implementation. This provides classes for building indexes and for searching. There's the option to use numerical analysis Python libraries such as NumPy to build and work with indexes, tying it into existing data science tools. It also allows you to use Jupyter notebooks in Visual Studio Code to test indexes before building applications around them. Taking a notebook-based approach to prototyping will allow you to develop elements of an SLM-based application separately, passing results between cells.

Start by using either of the two Index Builder classes to build either a hybrid or in-memory vector index from the contents of a NumPy array or a DiskANN format vector file. The diskannpy library contains tools that can build this file from an array, which is a useful way of adding embeddings to an index quickly. Index files are saved to a specified directory, ready for searching. Other features let you update indexes, supporting dynamic operations.

Searching is again a simple class, with a query array containing the search embedding, along with parameters that define the number of neighbors to be returned and the complexity of the search list.
A bigger list will take longer to deliver but will be more accurate. The trade-off between accuracy and latency makes it essential to run experiments before committing to final code. Other options allow you to improve performance by batching up queries. You're able to define the complexity of the index, as well as the type of distance metric used for searches. Larger values for complexity and graph degree are better, but the resulting indexes do take longer to create.

Diskannpy is a useful tool for learning how to use DiskANN. It's likely that as the Copilot Runtime evolves, Microsoft will deliver a set of wrappers that provides a high-level abstraction, much like the one it's delivering for Cosmos DB. There's a hint of how this might work in the initial Copilot Runtime announcement, with reference to a Vector Embeddings API used to build retrieval-augmented generation (RAG)-based applications. This is planned for a future update to the Copilot Runtime.

Why DiskANN?

Exploring the GitHub repository for the project, it's easy to see why Microsoft picked DiskANN to be one of the foundational technologies in the Copilot Runtime, as it's optimized for both SSD and in-memory operations, and it can provide a hybrid approach that indexes a lot of data economically. The initial DiskANN paper from Microsoft Research suggests that a hybrid SSD/RAM index can index five to ten times as many vectors as the equivalent pure in-memory algorithm, able to address about a billion vectors with high search accuracy and with 5ms latency.

In practice, of course, an edge-hosted SLM application isn't likely to need to index that much data, so performance and accuracy should be higher. If you're building a semantic AI application on an SLM, you need to focus on throughput, using a small number of tokens for each operation. If you can keep the search needed to build grounded prompts for a RAG application as fast as possible, you reduce the risk of unhappy users waiting for what might be a simple answer.

By loading an in-memory index at launch, you can simplify searches so that your application only needs to access source data when it's needed to construct a grounded prompt for your SLM. One useful option is the ability to add filters to a search, refining the results and providing more accurate grounding for your application.

We're in the early days of the Copilot Runtime, and some key pieces of the puzzle are still missing. One essential for using DiskANN indexes is tools for encoding your source data as vector embeddings. This is required to build a vector search, either as part of your code or to ship a base set of vector indexes with an application.

DiskANN elsewhere in Microsoft

Outside of the Copilot Runtime, Microsoft is using DiskANN to add fast vector search to Cosmos DB. Other services that use it include Microsoft 365 and Bing. In Cosmos DB it's adding vector search to its NoSQL API, where you are likely to work with large amounts of highly distributed data. Here DiskANN's support for rapidly changing data works alongside Cosmos DB's dynamic scaling, adding a new index to each new partition. Queries can then be passed to all available partition indexes in parallel.

Microsoft Research has been working on tools like DiskANN for some time now, and it's good to see them jump from pure research to product, especially products as widely used as Cosmos DB and Windows.
Having a fast and accurate vector index as part of the Copilot Runtime will reduce the risks associated with generative AI and will keep your indexes on your PC, keeping the source data private and grounding SLMs. Combined with confidential computing techniques in Windows, Microsoft looks like it could be ready to deliver secure, private AI on our own devices.

Copyright © 2024 IDG Communications, Inc.
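To make the nearest-neighbor search described in this article concrete, the following is a minimal NumPy sketch, not DiskANN or diskannpy code, of the exact brute-force lookup over embedding vectors that a vector index like DiskANN approximates. The embedding dimension, corpus size, and random vectors are placeholder assumptions; a real deployment would use diskannpy's index builder and search classes rather than this loop.

  # Conceptual sketch only: exact (brute-force) nearest-neighbor search over embeddings.
  # A DiskANN/Vamana graph index exists so this lookup stays fast at billions of vectors.
  import numpy as np

  rng = np.random.default_rng(0)
  corpus = rng.random((10_000, 384), dtype=np.float32)   # placeholder document embeddings
  query = rng.random(384, dtype=np.float32)              # placeholder query embedding

  distances = np.linalg.norm(corpus - query, axis=1)     # L2 distance to every stored vector
  k = 5
  nearest = np.argsort(distances)[:k]                    # indexes of the k closest documents
  print(nearest, distances[nearest])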
Unknown
Unknown
null
null
null
null
null
null
news
mlx-sharding added to PyPI
A package for MLX model sharding and distributed inference
https://pypi.org/project/mlx-sharding/
https://pypi.org/static/…er.abaf4b19.webp
2024-07-24T12:47:20Z
This project demonstrates how to implement pipeline parallelism for large language models using MLX. It includes tools for sharding a model, serving shards across multiple machines, and generating text using the distributed model. Additionally, it features an OpenAI API-compatible server for easier integration and usage.

Demo Video

To see the distributed inference in action, check out our demo video: Sharding DeepSeek-Coder-V2-Lite-Instruct Demo

Educational Purpose

This repository is designed for educational purposes to illustrate how pipeline parallelism can be implemented in MLX. It provides a basic framework for:
- Sharding a large language model
- Distributing model shards across multiple machines
- Implementing a simple pipeline for text generation
- Serving the model through an OpenAI API-compatible interface

While not optimized for production use, this demo serves as a starting point for understanding and experimenting with pipeline parallelism in machine learning workflows.

Setup and Usage

1. Model Preparation

You have two main options for preparing and using the model:

Option A: Pre-Sharding the Model
If you prefer to pre-shard the model, use sharding_weight.py:

  python sharding_weight.py --model "mlx-community/DeepSeek-Coder-V2-Lite-Instruct-4bit-mlx" --output_dir shard_0 --start_layer 0 --end_layer 14 --total_layers 27
  python sharding_weight.py --model "mlx-community/DeepSeek-Coder-V2-Lite-Instruct-4bit-mlx" --output_dir shard_1 --start_layer 14 --end_layer 27 --total_layers 27
  # Repeat for additional shards as needed

Option B: Dynamic Sharding
You can let the system dynamically load and shard the weights when starting the server. This option doesn't require pre-sharding.

2. Distribute Shards (If Using Option A)

If you've pre-sharded the model, copy the shard directories to their respective machines. Skip this step for Option B.

3. Start the Servers

Start server instances based on your chosen approach:

For Pre-Sharded Model (Option A)
On each machine with a shard, start a server instance. For example:

  python -m shard.main --model mzbac/DeepSeek-Coder-V2-Lite-Instruct-4bit-mlx-shard-1

For Dynamic Sharding (Option B)
Start the server with specific layer ranges:

  python -m shard.main --model "mlx-community/DeepSeek-Coder-V2-Lite-Instruct-4bit-mlx" --start-layer 0 --end-layer 14

Note the IP address and port printed by each server.
4. Generate Text

Using the generate script

For a dynamically sharded setup:

  python generate.py --model "mlx-community/DeepSeek-Coder-V2-Lite-Instruct-4bit-mlx" --start_layer 0 --end_layer 14 --server_address <remote_ip1>:<port1>,<remote_ip2>:<port2> --prompt "Your prompt here" --max_tokens 512

For a pre-sharded setup:

  python generate.py --model mzbac/DeepSeek-Coder-V2-Lite-Instruct-4bit-mlx-shard-0 --server_address <remote_ip1>:<port1>,<remote_ip2>:<port2> --prompt "Your prompt here" --max_tokens 512

Using the OpenAI API-compatible server

Start the server.

For dynamic sharding:

  python -m shard.openai_api --model "mlx-community/DeepSeek-Coder-V2-Lite-Instruct-4bit-mlx" --llm-shard-addresses localhost:50051,<remote_ip1>:<port1>,<remote_ip2>:<port2> --start-layer 0 --end-layer 14

For a pre-sharded model:

  python -m shard.openai_api --model mzbac/DeepSeek-Coder-V2-Lite-Instruct-4bit-mlx-shard-0 --llm-shard-addresses localhost:50051,<remote_ip1>:<port1>,<remote_ip2>:<port2>

Use the API endpoints:
- /v1/completions: Text completion endpoint
- /v1/chat/completions: Chat completion endpoint

Example usage:

  curl localhost:8080/v1/chat/completions \
    -H "Content-Type: application/json" \
    -d '{
      "messages": [{"role": "user", "content": "Say this is a test!"}],
      "temperature": 0.7
    }'

Limitations and Considerations
- Network Dependency: The performance of this pipeline parallelism implementation is heavily dependent on network speed and latency between machines.
- Error Handling: The current implementation has basic error handling. In a production environment, you'd want to implement more robust error handling and recovery mechanisms.
- Security: This demo uses insecure gRPC channels. For any real-world application, implement proper security measures.
- Shard Configuration: Ensure that when using multiple shards, the layer ranges are set correctly to cover the entire model without overlap.

Extending the System

To extend the system for more shards:
- If pre-sharding, create additional shards using sharding_weight.py.
- Set up more server instances, one for each new shard.
- In generate.py or when using the OpenAI API server, include all shard addresses.
- Adjust the layer ranges accordingly when using dynamic sharding.

Requirements
- Python 3.x
- MLX library
- gRPC and related dependencies
- NumPy
- Transformers library
- Sufficient RAM on each machine to load and process its model shard

Acknowledgments
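As a hedged companion to the curl example in the usage section above, the same chat-completions request can be issued from Python. This sketch assumes the OpenAI-API-compatible server is listening on localhost:8080 as in the curl command, and that the response follows the usual OpenAI choices/message layout; adjust the host, port, and fields if your setup differs.

  # Hedged sketch: mirror of the curl example, using the requests library.
  import requests

  resp = requests.post(
      "http://localhost:8080/v1/chat/completions",
      json={
          "messages": [{"role": "user", "content": "Say this is a test!"}],
          "temperature": 0.7,
      },
      timeout=120,
  )
  resp.raise_for_status()
  data = resp.json()
  # Assumes an OpenAI-style response body; adjust if the server returns a different schema.
  print(data["choices"][0]["message"]["content"])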
Unknown
Computer and Mathematical
null
null
null
null
null
null
news
Steve Clarke, Editor
Frequently Asked Questions about using AI in Windows apps
The post Frequently Asked Questions about using AI in Windows apps appeared first on Windows Developer Blog.
https://learn.microsoft.com/en-us/windows/ai/faq
https://learn.microsoft.…-graph-image.png
2024-07-03T14:09:55Z
Integrating AI into your Windows application can be achieved through two primary methods: a local model or a cloud-based model. For the local model option, you have the ability to utilize a pre-existing model or train your own using platforms like TensorFlow or PyTorch, and then incorporate it into your application via OnnxRuntime. The Windows Copilot Runtime offers APIs for various functions, including OCR or utilizing the Phi Silica model. On the other hand, hosting your model on the cloud and accessing it through a REST API allows your application to remain streamlined by delegating resource-intensive tasks to the cloud. See Use Machine Learning models in your Windows app for more information.You can use any programming language you prefer. For instance, C# is widely used for creating Windows client apps. If you require more control over low-level details, C++ is an excellent option. Alternatively, you might consider using Python. You can also use the Windows Subsystem for Linux (WSL) to run Linux-based AI tools on Windows.We recommend using OnnxRuntime.Respecting the privacy and security of user data is essential when developing AI-powered apps. You should follow best practices for data handling, such as encrypting sensitive data, using secure connections, and obtaining user consent before collecting data. You should also be transparent about how you are using data and give users control over their data. Make sure to read Developing Responsible Generative AI Applications and Features on Windows too.System requirements for Windows apps that use AI depend on the complexity of the AI model and the hardware acceleration used. For simple models, a modern CPU may be sufficient, but for more complex models, a GPU or NPU may be required. You should also consider the memory and storage requirements of your app, as well as the network bandwidth required for cloud-based AI services.To optimize AI performance in Windows apps, you should consider using hardware acceleration, such as GPUs or NPUs, to speed up model inference. Windows Copilot+ laptops are optimized for AI workloads and can provide a significant performance boost for AI tasks. See also AI Toolkit for Visual Studio Code overview.Yes, you can use pre-trained AI models in your Windows app. You can download pre-trained models from the internet or use a cloud-based AI service to access pre-trained models. You can then integrate these models into your app using a framework like OnnxRuntime.DirectML is a low-level API for machine learning that provides GPU acceleration for common machine learning tasks across a broad range of supported hardware and drivers, including all DirectX 12-capable GPUs from vendors such as AMD, Intel, NVIDIA, and Qualcomm.Open Network Neural Exchange, or ONNX, is an open standard format for representing ML models. Popular ML model frameworks, such as PyTorch, TensorFlow, SciKit-Learn, Keras, Chainer, MATLAB, etc., can be exported or converted to the standard ONNX format. Once in ONNX format, the model can run on a variety of platforms and devices. ONNX is good for using an ML model in a different format than it was trained on.OnnxRuntime, or ORT, is a unified runtime tool for executing models in different frameworks (PyTorch, TensorFlow, etc) that supports hardware accelerators (device CPUs, GPUs, or NPUs).PyTorch and TensorFlow are used for developing, training, and running deep learning models used in AI applications. 
PyTorch is often used for research, TensorFlow is often used for industry deployment, and ONNX is a standardized model exchange format that bridges the gap, allowing you to switch between frameworks as needed and compatible across platforms.A Neural Processing Unit, or NPU, is a dedicated AI chip designed specifically to perform AI tasks. The focus of an NPU differs from that of a CPU or GPU. A Central Processing Unit, or CPU, is the primary processor in a computer, responsible for executing instructions and general-purpose computations. A Graphics Processing Unit, or GPU, is a specialized processor designed for rendering graphics and optimized for parallel processing. It is capable of rendering complex imagery for video editing and gaming tasks.NPUs are designed to accelerate deep learning algorithms and can remove some of the work from a computer's CPU or GPU, so the device can work more efficiently. NPUs are purpose-built for accelerating neural network tasks. They excel in processing large amounts of data in parallel, making them ideal for common AI tasks like image recognition or natural language processing. As an example, during an image recognition task, the NPU may be responsible for object detection or image acceleration, while the GPU takes responsibility for image rendering.To check the type of CPU, GPU, or NPU on your Windows device and how it's performing, open Task Manager (Ctrl + Alt + Delete), then select the Performance tab and you will be able to see your machine's CPU, Memory, Wi-Fi, GPU, and/or NPU listed, along with information about it's speed, utilization rate, and other data.WinML, or Windows Machine Learning, is a high-level API for deploying hardware-accelerated machine learning (ML) models on Windows devices that enables developers to utilize the capabilities of the device to perform model inference. The focus is on model loading, binding, and evaluation. WinML utilizes the ONNX model format.An LLM is a type of Machine Learning (ML) model known for the ability to achieve general-purpose language generation and understanding. LLMs are artificial neural networks that acquire capabilities by learning statistical relationships from vast amounts of text documents during a computationally intensive self-supervised and semi-supervised training process. LLMs are often used for Text Generation, a form of generative AI that, given some input text, generates words (or tokens) that are most likely to create coherent and contextually relevant sentences in return. There are also Small Language Models (SLMs) that have fewer parameters and more limited capacity, but may be more efficient (requiring less computational resources), cost-effective, and ideal for specific domains.In Machine Learning, model training involves feeding a dataset into a model (an LLM or SLM), allowing it to learn from the data so that the model can make predictions or decisions based on that data, recognizing patterns. It may also involve adjusting the model parameters iteratively to optimize its performance.The process of using a trained machine learning model to make predictions or classifications on new, unseen data is called Inferencing. Once a language model has been trained on a dataset, learning its underlying patterns and relationships, it's ready to apply this knowledge to real-world scenarios. Inference is an AI model's moment of truth, a test of how well it can apply information learned during training to make a prediction or solve a task. 
The process of using an existing model for inference is different from the training phase, which requires the use of training and validation data to develop the model and fine-tune its parameters.Fine-tuning is a crucial step in machine learning where a pre-trained model is adapted to perform a specific task. Instead of training a model from scratch, fine-tuning starts with an existing model (usually trained on a large dataset) and adjusts its parameters using a smaller, task-specific dataset. By fine-tuning, the model learns task-specific features while retaining the general knowledge acquired during pre-training, resulting in improved performance for specific applications.Prompt engineering is a strategic approach used with generative AI to shape the behavior and responses of a language model. It involves thoughtfully crafting input prompts or queries to achieve the desired result from a language model (like GPT-3 or GPT-4). By designing an effective prompt, you can guide an ML model to produce the type of response you want. Techniques include adjusting the wording, specifying context, or using control codes to influence model output.Hardware acceleration refers to the use of specialized computer hardware designed to speed up AI applications beyond what is achievable with general-purpose CPUs. Hardware acceleration enhances the speed, energy efficiency, and overall performance of machine learning tasks, such as training models, making predictions, or offloading computation to dedicated hardware components that excel at parallel processing for deep learning workloads. GPUs and NPUs are both examples of hardware accelerators.The process of creating and using ML models involves three main roles: Data Scientists: Responsible for defining the problem, collecting and analyzing the data, choosing and training the ML algorithm, and evaluating and interpreting the results. They use tools such as Python, R, Jupyter Notebook, TensorFlow, PyTorch, and scikit-learn to perform these tasks. ML Engineers: Responsible for deploying, monitoring, and maintaining the ML models in production environments. They use tools such as Docker, Kubernetes, Azure ML, AWS SageMaker, and Google Cloud AI Platform to ensure the scalability, reliability, and security of the ML models. App Developers: Responsible for integrating the ML models into the app logic, UI, and UX. They use tools such as Windows Copilot Runtime, OnnxRuntime, or REST APIs and process the user input and model output.Each role involves different responsibilities and skills, but collaboration and communication between these roles is required to achieve the best results. Depending on the size and complexity of the project, these roles can be performed by the same person or by different teams.
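Since the answers above repeatedly point to OnnxRuntime for local inference, here is a minimal, hedged sketch of loading and running an ONNX model with its Python API. The model file name, the discovered input name, and the dummy tensor shape are placeholders, since every exported model defines its own inputs and outputs.

  # Hedged sketch: running an ONNX model locally with ONNX Runtime's Python API.
  import numpy as np
  import onnxruntime as ort

  session = ort.InferenceSession("model.onnx", providers=["CPUExecutionProvider"])
  input_name = session.get_inputs()[0].name           # discover the model's input name
  dummy = np.random.rand(1, 3, 224, 224).astype(np.float32)  # placeholder input tensor
  outputs = session.run(None, {input_name: dummy})    # None = return all model outputs
  print(outputs[0].shape)

On supported hardware, swapping the providers list for a GPU or NPU execution provider is how the hardware acceleration discussed above is enabled, without changing the rest of the inference code.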
Content Creation/Digital Assistance
Computer and Mathematical
null
null
null
null
null
null
news
Akash Dutta, David Delima
Meta AI Studio With Support for Generating AI Characters, Digital Avatars on Instagram Rolled Out
Meta AI Studio began rolling out to Instagram users on Monday (July 29). The social media giant first announced the AI avatar creation platform last year during its Connect event. Powered by the recently released Llama 3.1 405B artificial intelligence (AI) model, AI Studio can create mini chatbots called AI characters with a specific knowledge base that can interact with other users.
https://www.gadgets360.com/ai/news/meta-ai-studio-instagram-ai-characters-avatar-creators-rolling-out-6220828
https://i.gadgets360cdn.…722321026057.jpg
2024-07-30T07:18:41Z
Meta AI Studio is now rolling out to users on Instagram. The social media giant announced the AI avatar creation platform last year during its Connect event and it is now available to some users on its photo and video sharing platform. Powered by the recently released Llama 3.1 405B artificial intelligence (AI) model, AI Studio can create mini chatbots dubbed AI characters with a specific knowledge base that can interact with other users. Instagram creators can also use the tool to build AI-powered digital avatars of themselves to interact with those who follow them.In a newsroom post, Meta announced that its AI Studio is now rolling out to users. It can be accessed either through the Instagram app or the AI Studio website. Currently, the feature is limited to those residing in the US. Making the announcement, the company said, AI Studio lets anyone create and discover AI characters and allows creators to build an AI as an extension of themselves to reach more fans.AI character feature on AI StudioPhoto Credit: MetaThere are two distinct features within AI Studio. First is AI characters. This feature will allow any user to create an AI-powered character completer with a name, tagline, and description. The user can add this information, and based on that, AI will generate a mini chatbot that will adhere to those instructions. Once created, the AI character can interact with other users across Instagram, Messenger, WhatsApp, and the web.In functionality, the feature is similar to AI agents. These are mini chatbots created with a specific knowledge base and purpose. They usually only generate responses in that specific domain without losing their conversational nature. OpenAI has similar AI agents it calls GPTs. Google Gemini's AI agents are known as Gemma.However, Instagram isn't the first to introduce this feature in the social media space. Platforms such as Character.ai and Butterflies already allow users to create such AI characters.Another feature in AI Studio is available only to Instagram creators. While Meta did not disclose the eligibility criteria, it said that creators can use the AI platform to create AI avatars of themselves that can respond to direct messages (DMs) and story replies.This automation is aimed at those creators who have a large number of unanswered messages and struggle to connect with those who follow them. The tool will let them engage and respond to all of them.Instagram AI avatars featurePhoto Credit: MetaThe feature was first reported in April. At the time, the company was said to be running initial tests on the feature. It was also said to be extended to the comment section and let AI avatars reply to comments on their posts and Reels.AI avatars will be customisable, and Meta says creators will be able to base these avatars on things such as their Instagram content, topics to avoid and links they want it to share. Creators will also be able to turn auto-replies on and off through the professional dashboard within the Instagram app and choose who the AI replies to. Further, the responses generated using AI will be labelled to help the user know who is responding.
Content Creation/Personalization/Digital Assistance
Arts, Design, Entertainment, Sports, and Media
null
null
null
null
null
null
news
Sophia Melanson Ricciardone, Postdoctoral Fellow, Department of Psychology, Neuroscience & Behaviour, McMaster University
How AI bots spread misinformation online and undermine democratic politics
Bots are increasingly prevalent on social media and are being used to spread political messages.
https://theconversation.com/how-ai-bots-spread-misinformation-online-and-undermine-democratic-politics-234915
https://images.theconver…6&h=668&fit=crop
2024-07-24T18:37:00Z
In our increasingly digitized world, how do we know whether the accounts we interact with online are other humans or bots? (Shutterstock)Consider a typical morning routine: coffee in hand, you peruse Twitter (now rebranded as X) to catch up on the news. Headlines appear among a flurry of tweets on everything from memes about political leaders to cultural Marxism, free speech, making America great again and draining the swamp. Before your day has even begun, a burst of disparate ideas coalesces in your mind in response to the appearance of a single word or catchphrase. It’s a scenario repeated daily, where snippets of information mould themselves onto our views and biases, influencing how we interpret online discourse and those who engage in it.In the heated space of contemporary politics, popularized words and catchphrases wield a lot of influence. Controversial rallying cries like “build the wall” and “Trudeau must go” regularly appear on social media, punctuating debates and discourse with an emotionally palpable fervour.These phrases are more than mere words; they are ideological shorthand that seek to galvanize people and spark outrage online like never before.But, in our increasingly digitized world, how do we know whether the accounts we interact with online are other human beings or bots? And given the powerful influence this kind of rhetoric can have, what impact do these bots have on our decision-making and democratic processes?AI bots Bots are influencing political discourse online with catchy phrases and words. (Shutterstock) My PhD research focused on the rise of “botaganda” — online content circulated by automated accounts, or bots, for electioneering purposes. Bots are automated accounts on social media that can be used to post tweets, like and share content or follow users on social media without needing a person to do it manually. Scholars have highlighted how bots “could be used to covertly exploit weaknesses in [a person’s] character and persuade them to take action against their own best interest.” The advent of artificial intelligence and machine learning has certainly equipped us with several advantages in contemporary life, but it has also made independent political thought much harder to achieve. It is increasing the prevalence of digital misinformation, and demands that we exercise vigilance to ensure we can make informed decisions.Understanding the social psychology that makes us susceptible to catchphrases like “drain the swamp” is integral to combating the impact of misinformation circulated online. Our social brains are susceptible to these kinds of linguistic triggers in three important ways: The fact that humans mimic and synchronize communication styles when interacting;We’re more likely to remember frequently repeated words and;Our brains are more likely to connect unrelated ideas when words or phrases frequently recur together over and over again in the same context. When we unwittingly engage with bots that repeatedly use certain phrases and terms, this subtly reinforces their association and impact.What I foundI conducted statistical analysis on how bot-generated content influenced Canadian Twitter users during the SNC Lavalin scandal from March 14 to April 9, 2019. 
My study found strong correlations between bot-generated and human tweets, suggesting people engaged closely with AI-generated content and stored it in memory for easy retrieval and recall.My analysis shows that bot-circulated tweets shared a high degree of similarity with human-generated tweets. The similarity in the emotional salience of bot-generated and human-generated tweets was significantly pronounced.I first used Spearman’s phi coefficient, a statistical tool, to measure how strongly bot tweets related with human tweets. I then applied linear regression to understand this relationship in more detail and to see if changes in bot tweets affected changes in human tweets.The results show there is a strong correlation between bot and human tweets, and that the content of bot tweets significantly influences linguistic aspects of human generated tweets. In simpler terms, human tweets replicated bot tweets to a high degree.On March 14, 2019, for instance, bot tweets shared 75 per cent similarity with human tweets, which increased to 92 per cent similarity by March 28. The emotional words used in bot tweets were reproduced in human tweets just over 97 per cent of the time on March 14. Though the reproduction of emotional words in human tweets decreased over time, the similarity remained significant. This underscores how closely bot- and human-generated content can mirror each other.Sometimes, bots simply mirror human activity because machine learning and generative AI technology are designed to replicate our behaviours. However, my analysis aims to understand not just how similar bot tweets and human tweets are, but also which one influences the other and in what way. My examination of prevalent words and phrases like “obstruction of justice,” “Trudeau’s scandal” and “Liberal coverup” propagated by bots were replicated at high frequency in human-generated tweets, both in unique and retweeted tweets. Content created by bots on social media can often look a lot like content made by humans. (Shutterstock) Social mimicryThis study’s findings support the idea that we are inclined to mimic language structures of tweets posted by bots, which gains traction among users engaged in a resulting ecology of tweets posted both by peers and computer programs, a phenomenon observed in human-computer interactions. Research underscores our innate tendency to mimic the communication patterns of those around us, particularly those who share our values and beliefs — a phenomenon also evident in our digital interactions. This inclination shapes how we express ourselves online, influencing our language, intonation and even how we make arguments. Popular words and catchphrases that appear at high frequency within online contexts help our brains organize and remember large amounts of information more easily.But how do these phrases achieve such resonance?We know that repeatedly hearing specific words or phrases significantly enhances our ability to recall them more effortlessly, especially when they consistently appear within the same context.In the landscape of our social brains, single words and phrases don’t carry much weight by themselves. What makes them powerful tools for spreading misinformation is their knack for evoking mental connections to familiar ideas, shaping how we understand current political developments. Consider this: when several disparate concepts are cleverly woven into a memorable catchy phrase, they merge those underlying concepts in our minds, making them feel logical and familiar. 
For example: “Make America Great Again.”This fusion is especially potent when these ideas strike a chord with our core values and beliefs, making catchphrases highly effective at moulding public opinion in profound ways.As we navigate this era of digital discourse, awareness of blind spots in our social psychology is our best defence. Understanding how cues or triggers affect us can reduce their influence over time. The more aware we are of bots and how they work, the more able we are to protect ourselves from misleading rhetoric, ensuring our democratic processes remain robust and inclusive.Sophia Melanson Ricciardone received funding from York University and an Ontario Graduate Scholarship to conduct this research.
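For readers who want to see the shape of the analysis described above, here is a small illustrative sketch using SciPy. The daily values are invented placeholders, not the study's data; it simply pairs a Spearman rank correlation with a linear regression in the way the methodology section describes.

  # Illustrative sketch only: hypothetical daily similarity scores between bot- and
  # human-generated tweets, analysed with Spearman correlation and linear regression.
  from scipy import stats

  bot_similarity   = [0.75, 0.80, 0.84, 0.88, 0.90, 0.92]   # made-up daily values
  human_similarity = [0.70, 0.78, 0.81, 0.86, 0.91, 0.92]   # made-up daily values

  rho, p_rho = stats.spearmanr(bot_similarity, human_similarity)   # monotonic association
  reg = stats.linregress(bot_similarity, human_similarity)         # does bot content predict human content?
  print(f"Spearman rho={rho:.2f} (p={p_rho:.3f}), slope={reg.slope:.2f}, R^2={reg.rvalue**2:.2f}")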
Content Creation/Detection and Monitoring/Decision Making
Life, Physical, and Social Science/Education, Training, and Library
null
null
null
null
null
null
news
null
orchard-basket added to PyPI
Basket is the essential toolkit for seamless MaaS integration.
https://pypi.org/project/orchard-basket/
https://pypi.org/static/…er.abaf4b19.webp
2024-07-26T18:49:21Z
Introduction

Basket is the essential toolkit for seamless MaaS integration. It provides a unified interface for interacting with different MaaS services, allowing developers to easily switch between services without changing their code. It is the first free toolkit that allows developers to use LLM model services without registering or paying for them. Learn more about FreeModel.

- Unified interface for interacting with different MaaS services
- Easy to use and integrate with existing projects
- Manages API keys for MaaS and makes switching easy
- Extensible and customizable for local and remote services
- One step to chat with an LLM service without configuration

Refer to the documentation for more.

Install

  pip install orchard-basket

Usage

List all available MaaS:
  basket maas list

Choose the MaaS to use:
  basket maas use FreeModel

List all available models:
  basket model list

Choose the model to use:
  basket model use qwen/qwen-7b-chat

Chat with models:
  basket chat "What is the meaning of life?"

Switch to another MaaS or model:
  basket maas use deepseek
  basket model use deepseek-chat

Contribution

Please help us integrate with more MaaS providers.
Unknown
Unknown
null
null
null
null
null
null
news
ktransformers added to PyPI
KTransformers, pronounced as Quick Transformers, is designed to enhance your Transformers experience with advanced kernel optimizations and placement/parallelism strategies.
https://pypi.org/project/ktransformers/
https://pypi.org/static/…er.abaf4b19.webp
2024-07-29T08:03:22Z
KTransformers is a flexible, Python-centric framework designed with extensibility at its core. By implementing and injecting an optimized module with a single line of code, users gain access to a Transformers-compatible interface, RESTful APIs compliant with OpenAI and Ollama, and even a simplified ChatGPT-like web UI. Our vision for KTransformers is to serve as a flexible platform for experimenting with innovative LLM inference optimizations. Please let us know if you need any other features.

More will come soon. Please let us know which models you are most interested in.

Be aware that you are subject to the corresponding model licenses when using DeepSeek and QWen.

RESTful API and Web UI

Install

[Optional] If you want to run with the website, please compile the website before executing pip install .

Install ktransformers from source:

  pip install -r requirements-local_chat.txt
  pip install . --no-build-isolation

Start without the website:

  ktransformers --model_path deepseek-ai/DeepSeek-V2-Lite-Chat --gguf_path /path/to/DeepSeek-V2-Lite-Chat-GGUF --port 10002

Start with the website:

  ktransformers --model_path deepseek-ai/DeepSeek-V2-Lite-Chat --gguf_path /path/to/DeepSeek-V2-Lite-Chat-GGUF --port 10002 --web True

Or, if you want to start the server with transformers, the model_path should include safetensors:

  ktransformers --type transformers --model_path /mnt/data/model/Qwen2-0.5B-Instruct --port 10002 --web True

Access the website at http://localhost:10002/web/index.html#/chat

More information about the RESTful API server can be found here. You can also find an example of integrating with Tabby here.

Brief Injection Tutorial

At the heart of KTransformers is a user-friendly, template-based injection framework. This allows researchers to easily replace original torch modules with optimized variants. It also simplifies the process of combining multiple optimizations, allowing the exploration of their synergistic effects.

Given that vLLM already serves as a great framework for large-scale deployment optimizations, KTransformers is particularly focused on local deployments that are constrained by limited resources. We pay special attention to heterogeneous computing opportunities, such as GPU/CPU offloading of quantized models. For example, we support the efficient Llamafile and Marlin kernels for CPU and GPU, respectively. More details can be found here.

Example Usage

To utilize the provided kernels, users only need to create a YAML-based injection template and add the call to `optimize_and_load_gguf` before using the Transformers model.

  with torch.device("meta"):
      model = AutoModelForCausalLM.from_config(config, trust_remote_code=True)
  optimize_and_load_gguf(model, optimize_rule_path, gguf_path, config)
  ...
  generated = prefill_and_generate(model, tokenizer, input_tensor.cuda(), max_new_tokens=1000)

In this example, the AutoModel is first initialized on the meta device to avoid occupying any memory resources.
Then, optimize_and_load_gguf iterates through all sub-modules of the model, matches the rules specified in your YAML rule file, and replaces them with advanced modules as specified.

After injection, the original generate interface is available, but we also provide a compatible prefill_and_generate method, which enables further optimizations like CUDAGraph to improve generation speed.

YAML Template

Below is an example of a YAML template for replacing all original Linear modules with Marlin, an advanced 4-bit quantization kernel.

  - match:
      name: "^model\\.layers\\..*$"  # regular expression
      class: torch.nn.Linear  # only match modules matching name and class simultaneously
    replace:
      class: ktransformers.operators.linear.KTransformerLinear  # optimized kernel on quantized data types
      device: "cpu"  # which device to load this module on when initializing
      kwargs:
        generate_device: "cuda"
        generate_linear_type: "QuantizedLinearMarlin"

Each rule in the YAML file has two parts: match and replace. The match part specifies which module should be replaced, and the replace part specifies the module to be injected into the model along with the initialization keywords.

You can find example rule templates for optimizing DeepSeek-V2 and Qwen2-57B-A14, two SOTA MoE models, in the ktransformers/optimize/optimize_rules directory. These templates are used to power the local_chat.py demo.

A detailed description of the injection using DeepSeek-V2 as an example is given here.

Acknowledgment and Contributors

The development of KTransformers is based on the flexible and versatile framework provided by Transformers. We also benefit from advanced kernels such as GGUF/GGML, Llamafile, and Marlin. We are planning to contribute back to the community by upstreaming our modifications.

KTransformers is actively maintained and developed by contributors from the MADSys group at Tsinghua University and members from Approaching.AI. We welcome new contributors to join us in making KTransformers faster and easier to use.
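Circling back to the injection tutorial above, the following is a conceptual sketch, not ktransformers internals, of what a match/replace rule does in principle: walk a model's named sub-modules, match them by name pattern and class, and swap in a replacement module. The toy model, the regex pattern, and the OptimizedLinear stand-in class are all illustrative assumptions.

  # Conceptual sketch only: applying a match/replace rule to a PyTorch model's sub-modules.
  import re
  import torch.nn as nn

  class ToyModel(nn.Module):
      def __init__(self):
          super().__init__()
          self.fc1 = nn.Linear(8, 8)
          self.fc2 = nn.Linear(8, 8)

  class OptimizedLinear(nn.Linear):
      """Stand-in for an optimized replacement kernel (placeholder class)."""

  def apply_rule(model, name_pattern, match_cls, replace_cls):
      # Walk named sub-modules, as the injection framework described above does,
      # and swap every module whose name and class both match the rule.
      for name, module in list(model.named_modules()):
          if name and re.match(name_pattern, name) and type(module) is match_cls:
              parent_name, _, child_name = name.rpartition(".")
              parent = model.get_submodule(parent_name) if parent_name else model
              setattr(parent, child_name, replace_cls(module.in_features, module.out_features))

  model = ToyModel()
  apply_rule(model, r"^fc.*$", nn.Linear, OptimizedLinear)
  print(model)  # both Linear layers are now OptimizedLinear

In ktransformers the same idea is driven by the YAML template and the replacement classes are real optimized kernels, but the matching-and-swapping mechanic is what the rule file expresses.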
Unknown
Computer and Mathematical/Life, Physical, and Social Science
null
null
null
null
null
null
news
Science X
'Open-washing' generative AI: How Meta, Google and others feign openness
The past year has seen a steep rise in generative AI systems that claim to be open. But how open are they really? New research shows there's widespread practice of "open-washing" by companies like Meta and Google: claiming brownie points for openness while evading actual scrutiny.
https://techxplore.com/news/2024-07-generative-ai-meta-google-feign.html
https://scx2.b-cdn.net/g…/2023/openai.jpg
2024-07-03T19:32:03Z
The past year has seen a steep rise in generative AI systems that claim to be open. But how open are they really? New research shows there's widespread practice of "open-washing" by companies like Meta and Google: claiming brownie points for openness while evading actual scrutiny.The question of what counts as open source in generative AI takes on particular importance in light of the EU AI Act that regulates "open source" models differently, creating an urgent need for practical openness assessment.Almost all the major tech corporations claim to provide "open" models, but very few actually do. Andreas Liesenfeld and Mark Dingemanse of Radboud University's Center for Language Studies surveyed 45 text and text-to-image models that bill themselves as open. It provides a stark view of the purported openness of current generative AI.Their study was published recently at the ACM Conference on Fairness, Accountability and Transparency (ACM FAccT 2024), and profiled in a News brief in Nature.Avoiding scrutinyThe researchers found that corporations like Meta, Microsoft and Mistral strategically co-opt terms like "open" and "open source" while in fact shielding their models almost entirely from scientific and regulatory scrutiny. There is frequent use of terms like "open" and "open source" for marketing purposes without actually providing meaningful insight into source code, training data, fine-tuning data or architecture of systems.Building on their earlier work, the researchers put over 45 models to the test, this time also considering text-to-image generators. They find that openness is unevenly distributed and often overclaimed. Instead, they found that smaller players like AllenAI (with OLMo) and BigScience Workshop + HuggingFace (with BloomZ) often go the extra mile to document their systems and open them up to scrutiny.EU AI ActThe recently introduced EU AI Act provides special exemptions for "open source" models, but doesn't offer a clear definition of the term. This creates an incentive for open-washing: if models count as open, model providers face less onerous requirements and less public and scientific scrutiny. Liesenfeld states, "This makes it more important that we have clarity about what constitutes openness when it comes to generative AI. We don't see openness as an all-or-nothing phenomenon, but as composite (consistent of multiple elements) and gradient (it comes in degrees)."Though the EU AI Act creates more urgency, openness has long been recognized to be of key importance for innovation, science, and society. It can also build trust and understanding in AI by demystifying what it's capable of. Dingemanse says, "If a company like OpenAI claims their AI can 'pass the bar exam,' this may or may not be impressive depending on what is in the training data."OpenAI has been notoriously vague about this, probably also to avoid legal exposure, but the sheer magnitude of training data means that ChatGPT and similar next word prediction engines can do most exams in 'open book' mode, making their performance much less impressive."The work helps build a case for meaningful openness in AI and brings to light a growing number of alternatives to ChatGPT. 
It comes a short while after Radboud University's Faculty of Arts has released guidance on generative AI and research integrity, which calls for more critical AI literacy among researchers considering the use of generative AI.More information:Andreas Liesenfeld et al, Rethinking open source generative AI: open washing and the EU AI Act, The 2024 ACM Conference on Fairness, Accountability, and Transparency (2024). DOI: 10.1145/3630106.3659005Elizabeth Gibney, Not all 'open source' AI models are actually open: here's a ranking, Nature (2024). DOI: 10.1038/d41586-024-02012-5Citation: 'Open-washing' generative AI: How Meta, Google and others feign openness (2024, July 3) retrieved 3 July 2024 from https://techxplore.com/news/2024-07-generative-ai-meta-google-feign.htmlThis document is subject to copyright. Apart from any fair dealing for the purpose of private study or research, no part may be reproduced without the written permission. The content is provided for information purposes only.
Unknown
Unknown
null
null
null
null
null
null
news
Ashraf Eassa
NVIDIA NeMo Accelerates LLM Innovation with Hybrid State Space Model Support
Today’s large language models (LLMs) are based on the transformer model architecture introduced in 2017. Since then, rapid advances in AI compute performance have enabled the creation of even larger…
https://developer.nvidia.com/blog/nvidia-nemo-accelerates-llm-innovation-with-hybrid-state-space-model-support/
https://developer-blogs.…-1920x1080-1.jpg
2024-07-17T17:32:10Z
Todays large language models (LLMs) are based on the transformer model architecture introduced in 2017. Since then, rapid advances in AI compute performance have enabled the creation of even larger transformer-based LLMs, dramatically improving their capabilities. Advanced transformer-based LLMs are enabling many exciting applications such as intelligent chatbots, computer code generation, and even chip design.Training cutting-edge LLMs requires an efficient and versatile software stack. NVIDIA NeMo provides an end-to-end platform to build, customize, and deploy LLMs. Integrated deeply into the NeMo framework is Megatron-Core, a PyTorch-based library that provides the essential components and optimizations needed to train LLMs at scale.  As model developers explore new model architectures, the NVIDIA platform continues to expand, enabling their innovations.Today, NVIDIA is announcing that both NeMo and Megatron-Core now support pre-training and fine-tuning of state space models (SSMs), respectively. Additionally, NeMo now supports training models based on the Griffin architecture, described by Google DeepMind. Transformer models excel at capturing long-range dependencies through the now-famous attention mechanism, making them the ideal choice for tasks that require global context understanding. However, the computational complexity of attention scales quadratically with sequence length, leading to large increases in training time and training cost with increasing sequence length. Additionally, during inference, attention requires storing a cache of key-value pairs (known as a KV cache), that grows linearly with sequence length. This leads to a growing memory footprint as sequence length increases. Recently, SSMs have emerged as a compelling model architecture for sequence modeling tasks as they overcome several of the limitations of attention. SSMs are a class of models that have gained popularity in the deep learning community as efficient alternatives to attention-based transformer models for sequence modeling tasks. SSMs feature the following compelling properties: Linear complexity: SSMs are linear in both computational and memory complexity, while attention is quadratic in both. This means they can model long-range dependencies in sequences much more efficiently than attention. High quality and accuracy: Like attention, SSMs look across the tokens of the input sequence, enabling models to focus on the most relevant parts. This results in comparable quality and accuracy to transformer-based models. Efficient inference: SSMs need only store constant-size vectors, rather than a KV cache, making inference more memory efficient, particularly with longer sequence lengths.To illustrate the benefits SSMs provide for longer sequence lengths, the following chart shows the relative speedup of training a layer of Mamba-2 (a state space model variant described later in this post) compared to training a transformer layer as the sequence length increases. As the sequence length increases to 256K, the Mamba-2 layer is 18x faster than the transformer layer. Figure 1. Mamba layer performance relative to transformer layer, with the Mamba advantage growing rapidly as sequence length increasesTransformer: model dimension 4,096, 32 heads. Mamba-2: model dimension 4,096, state dimension 128, 8 groups.Several SSM variants have become popular in the AI community, including Hyena, Mamba-1, and more recently, Mamba-2. 
Mamba-2 stands out as a recent release that achieves very strong accuracy across multiple benchmarks. At the core of Mamba-2 is a new structured state space duality (SSD) layer, which is, in practice, a reformulation of the SSM math used in the Mamba-1 model. This reformulation recasts SSM computations as matrix multiplications, allowing them to make use of the significant matrix multiplication performance of NVIDIA Tensor Cores. Therefore, compared to Mamba-1, Mamba-2 can be trained far more quickly.  Mamba-2 also offers quality and accuracy competitive with transformers on language modeling tasks and can yield even better results when a few attention layers are combined with SSD layers in a hybrid model. However, pure SSMs are not without limitations. For example, they have been shown to struggle in needle-in-a-haystack type scenarios that require precise recall of information in very long sequences. Hybrid models that combine SSMs, SSDs, RNNs, and transformers can leverage the strengths of each model architecture while mitigating their individual weaknesses. In a recent paper, researchers, including members of the NVIDIA Applied Deep Learning Research (ADLR) team, described hybrid Mamba-Transformer models. In these hybrid models, standard transformer layers and novel SSM layers can be interleaved in arbitrary configurations. For example, the 8B hybrid model described in this paper has 56 layers. Four layers are self-attention layers, 24 are Mamba-2 layers, and 28 are multilayer perceptron (MLP) layers. The layers are allocated such that a Mamba-2 layer comes first, followed by the attention layers, with MLP layers distributed evenly throughout the model.  According to the paper, the hybrid 8B Mamba-2-Hybrid model exceeds the 8B Transformer on all 12 standard tasks evaluated by the team. And, the 8B Mamba-2-Hybrid is also predicted to be up to 8x faster when generating tokens at inference time. Beyond the improved ability to perform tasks and significant performance benefits during inference, the Mamba-2-Hybrid model also shows greater compute efficiency. The chart below shows the compute needed to train the 8B Mamba-2-Hybrid model compared to the compute required to train an 8B Transformer model as the sequence length increases. At a sequence length of 2,048 tokens, the compute required for both is roughly similar, with the hybrid model showing a slight advantage. However, as sequence length scales to as many as 32,768 tokens, the compute required for the 8B Transformer model doubles, while only growing by 13% for the hybrid model. As modern language models support sequence lengths of 1M tokens and above, this advantage for the SSM-Transformer-hybrid models will only grow.Model architecture innovation is critical to delivering new levels of intelligence. In addition to world-class support for building transformer-based models, NeMo and Megatron-Core now provide the community with the ability to train SSMs and SSDs, as well as hybrid models that combine their benefits with the strengths of transformer models. With this release of NeMo, the following initial features are provided to enable the community to quickly begin experimenting: Support for SSD models, including Mamba-2.Support for RG-LRU (Griffin architecture.) 
Support for Transformer/SSM hybrid model combinations.Fine-tuning support for Recurrent Gemma (Griffin), pure Mamba-2 models, and the 8B Mamba-2-Hybrid models.Sharding and model parallelism support.In upcoming releases, support for additional sub-quadratic model architectures, additional performance optimizations, and support for FP8 training are planned.
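As a rough picture of how a hybrid stack like the 8B Mamba-2-Hybrid described above can be laid out (56 layers: 4 self-attention, 24 Mamba-2, 28 MLP), here is a small sketch that builds a layer pattern with those counts. The ordering heuristic is my own illustration; the paper only specifies that a Mamba-2 layer comes first and that MLP layers are spread evenly.

```python
def build_hybrid_pattern(n_mamba=24, n_attn=4, n_mlp=28):
    """Sketch of a layer pattern for a Mamba-2/Transformer hybrid.

    Counts match the 8B hybrid described above (56 layers in total).
    The exact ordering is illustrative only, not the paper's recipe.
    """
    # Interleave the sequence-mixing layers: mostly Mamba-2, a few attention.
    step = (n_mamba + n_attn) / n_attn
    attn_slots = {round(step * (i + 1)) - 1 for i in range(n_attn)}
    mixers = ["attention" if i in attn_slots else "mamba2"
              for i in range(n_mamba + n_attn)]
    if mixers[0] != "mamba2":          # the hybrid starts with a Mamba-2 layer
        j = mixers.index("mamba2")
        mixers[0], mixers[j] = mixers[j], mixers[0]

    # Distribute MLP layers evenly among the mixing layers.
    pattern, credit = [], 0.0
    mlp_ratio = n_mlp / len(mixers)
    for layer in mixers:
        pattern.append(layer)
        credit += mlp_ratio
        while credit >= 1.0:
            pattern.append("mlp")
            credit -= 1.0
    return pattern

layers = build_hybrid_pattern()
print(len(layers), layers[:6])  # 56 ['mamba2', 'mlp', 'mamba2', 'mlp', 'mamba2', 'mlp']
```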
Content Creation/Process Automation
Unknown
null
null
null
null
null
null
news
gptdb added to PyPI
GPT-DB is an experimental open-source project that uses localized GPT large models to interact with your data and environment. With this solution, you can be assured that there is no risk of data leakage, and your data is 100% private and secure.
https://pypi.org/project/gptdb/
https://pypi.org/static/…er.abaf4b19.webp
2024-07-14T21:57:34Z
What is GPT-DB? GPT-DB is an open-source, AI-native data app development framework with AWEL (Agentic Workflow Expression Language) and agents. The purpose is to build infrastructure in the field of large models through the development of multiple technical capabilities such as multi-model management (SMMF), Text2SQL effect optimization, RAG framework and optimization, multi-agents framework collaboration, and AWEL (agent workflow orchestration), which makes building large model applications with data simpler and more convenient. In the Data 3.0 era, based on models and databases, enterprises and developers can build their own bespoke applications with less code. Introduction: The core capabilities of the GPT-DB architecture include the following parts. RAG (Retrieval-Augmented Generation): RAG is currently the most practically implemented and urgently needed domain. GPT-DB has already implemented a framework based on RAG, allowing users to build knowledge-based applications using the RAG capabilities of GPT-DB. GBI (Generative Business Intelligence): Generative BI is one of the core capabilities of the GPT-DB project, providing the foundational data intelligence technology to build enterprise report analysis and business insights. Fine-tuning Framework: Model fine-tuning is an indispensable capability for any enterprise to implement in vertical and niche domains. GPT-DB provides a complete fine-tuning framework that integrates seamlessly with the project; in recent fine-tuning efforts, an accuracy of 82.5% has been achieved on the Spider dataset. Data-Driven Multi-Agents Framework: GPT-DB offers a data-driven, self-evolving multi-agents framework, aiming to continuously make decisions and execute based on data. Data Factory: The Data Factory is mainly about cleaning and processing trustworthy knowledge and data in the era of large models. Data Sources: Integrating various data sources to seamlessly connect production business data to the core capabilities of GPT-DB. Submodules: GPT-DB-Hub, a high-performance Text-to-SQL workflow built by applying Supervised Fine-Tuning (SFT) on Large Language Models (LLMs); gptdbs, the official repository containing data apps, AWEL operators, AWEL workflow templates, and agents built upon GPT-DB; and Text2SQL Finetune, with more information about Text2SQL fine-tuning. Install and usage tutorial sections are also provided. Features: At present, several key features showcase the current capabilities. Private Domain Q&A & Data Processing: The GPT-DB project offers a range of functionalities designed to improve knowledge base construction and enable efficient storage and retrieval of both structured and unstructured data. These functionalities include built-in support for uploading multiple file formats, the ability to integrate custom data extraction plug-ins, and unified vector storage and retrieval capabilities for effectively managing large volumes of information. Multi-Data Source & GBI (Generative Business Intelligence): The GPT-DB project facilitates seamless natural language interaction with diverse data sources, including Excel, databases, and data warehouses. It simplifies the process of querying and retrieving information from these sources, empowering users to engage in intuitive conversations and gain insights.
Moreover, GPT-DB supports the generation of analytical reports, providing users with valuable data summaries and interpretations. Multi-Agents & Plugins: It offers support for custom plug-ins to perform various tasks and natively integrates the Auto-GPT plug-in model. The agents protocol adheres to the Agent Protocol standard. Automated Text2SQL Fine-tuning: We've also developed an automated, lightweight fine-tuning framework centred on large language models (LLMs), Text2SQL datasets, LoRA/QLoRA/P-tuning, and other fine-tuning methods. This framework simplifies Text-to-SQL fine-tuning, making it as straightforward as an assembly line process (see GPT-DB-Hub). SMMF (Service-oriented Multi-model Management Framework): We offer extensive model support, including dozens of large language models (LLMs) from both open-source and API agents, such as LLaMA/LLaMA2, Baichuan, ChatGLM, Wenxin, Tongyi, Zhipu, and many more. Privacy and Security: We ensure the privacy and security of data through the implementation of various technologies, including privatized large models and proxy desensitization. The project also documents supported data sources, an AutoDL image, and language switching: in the .env configuration file, modify the LANGUAGE parameter to switch to different languages; the default is English (Chinese: zh, English: en, with other languages to be added later). Contribution guidelines and a contributors wall are available in the repository. Licence: The MIT License (MIT). Citation: If you find GPT-DB useful for your research or development, please cite the following paper: @article{xue2023gptdb, title={GPT-DB: Empowering Database Interactions with Private Large Language Models}, author={Siqiao Xue and Caigao Jiang and Wenhui Shi and Fangyin Cheng and Keting Chen and Hongjun Yang and Zhiping Zhang and Jianshan He and Hongyang Zhang and Ganglin Wei and Wang Zhao and Fan Zhou and Danrui Qi and Hong Yi and Shaodong Liu and Faqiang Chen}, year={2023}, journal={arXiv preprint arXiv:2312.17449}, url={https://arxiv.org/abs/2312.17449}}. Contact Information: We are working on building a community; if you have any ideas for building the community, feel free to contact us.
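Since Text2SQL sits at the center of several of these components (GPT-DB-Hub, the fine-tuning framework, GBI), a minimal, library-agnostic sketch of the underlying pattern may help. The schema, question, and prompt wording below are illustrative assumptions only, not GPT-DB's actual API or prompt templates.

```python
# A hypothetical schema used only for illustration.
SCHEMA = """
CREATE TABLE orders (
    id INTEGER PRIMARY KEY,
    customer_id INTEGER,
    total NUMERIC,
    created_at DATE
);
"""

def text2sql_prompt(question: str, schema: str = SCHEMA) -> str:
    """Build a plain Text2SQL prompt: schema plus question, expect one SQL query."""
    return (
        "You are a SQL assistant. Given the schema below, answer the question "
        "with one valid SQL query and nothing else.\n\n"
        f"Schema:\n{schema}\n"
        f"Question: {question}\n"
        "SQL:"
    )

prompt = text2sql_prompt("What was the total revenue in 2023?")
print(prompt)
# In a framework like GPT-DB, a prompt of this shape would be routed to
# whichever model the multi-model management (SMMF) layer has configured.
```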
Digital Assistance/Process Automation
Unknown
null
null
null
null
null
null
news
Kosta Andreadis
Train and Run Your Own AI with GIGABYTE AI TOP
Train, customize, and fine-tune powerful generative AI models with GIGABYTE's groundbreaking AI TOP utility and AI TOP hardware. Here's the full details! Continue reading at TweakTown >
https://www.tweaktown.com/articles/10797/train-and-run-your-own-ai-with-gigabyte-top/index.html
https://static.tweaktown…yte-top_full.jpg
2024-07-22T21:05:03Z
Like all PC technology breakthroughs, generative AI combines cutting-edge hardware with the right software, tools, and data. In today's AI PC era, many AI models are general-purpose or trained on a specific data set, running on the cloud with powerful hardware buried in a large data center.Conducting AI training locally with security in mind is critical for individuals, groups, or businesses concerned with security and broadcasting sensitive data over a network. For some, it's also a question of where to begin - with the understanding that you'll need some serious hardware.Thankfully, GIGABYTE has created AI TOP - a combination of AI tools and hardware designed to make training and running your powerful AI at home or on a desk a reality with off-the-shelf GIGABYTE hardware and software.GIGABYTE's new AI TOP Utility is a tool built to provide a robust, user-friendly environment for local AI model training and customization for all skill sets. It supports the most common open-source LLMs or large language models and can use custom datasets. It's the ideal solution for businesses, organizations, or even individuals looking to tap into cutting-edge tech.Regarding AI, parameters are the various variables learned through the training process. They are often used as a metric to define the AI model's overall skill, accuracy, and performance. The more parameters, the better, and GIGABYTE AI TOP supports up to 236B-parameter large language models or 236 Billion. Until now, training an AI model of this size would have required tapping into the power of a data center built for generative AI training.The AI TOP Utility pairs with GIGABYTE AI TOP hardware to ensure there is enough GPU VRAM, motherboard, PSU, and SSD power for local AI training and fine-tuning. One of the most impressive aspects of GIGABYTE's new AI TOP Utility and hardware is that you don't need extensive AI programming knowledge, or any, before jumping in.Introducing the AI TOP UtilityTraditional AI training methods can be cumbersome and complicated. GIGABYTE's AI Top Utility for Linux systems has been designed as a single interface and tool that can handle AI training and fine-tuning with real-time hardware monitoring and logging tools for troubleshooting and customization. The AI Top Utility supports over 70 open-source LLM models (via Hugging Face), including Gemma 2, Llama 2, Llama 3, Baichuan 2, Distill-GPT2, and GLM4.AI Top Utility offers a clean and easy-to-follow interface. At its most basic level, you can create a new Experiment by selecting the AI model, defining the Dataset, and choosing whether you want speed, precision, or a balance between the two. You might think precision is the way to go, but results will take longer to generate, and a longer training time is involved.From there, you can drill into various training settings for more advanced options to control how the AI models behave and perform during training. GIGABYTE offers detailed breakdowns of each setting, including guides on creating AI datasets as part of the AI Top Utility manual.You've also got control over what GPU hardware and memory to utilize for training, with the ability to combine VRAM with DRAM and SSD storage for the most memory-intensive training.One of the critical features of the AI TOP Utility is the Dashboard, which offers the current progress of training and stats on system performance. 
From GPU and CPU measurements presented in an easy-to-read odometer fashion to a real-time look at VRAM, DRAM, and SSD Loading, you can get an immediate and clear picture of how your AI TOP system is performing.The Dashboard also provides a real-time look at the recorded logs while tracking how effectively the model learns from the dataset. The information provides a quick overview of how things are progressing or potentially stalling, depending on the configuration and hardware available.GIGABYTE's powerful AI TOP HardwareThe innovative and groundbreaking AI TOP Utility is designed to work specifically with GIGABYTE's powerful AI TOP hardware, starting with the GIGABYTE TRX50 AI TOP Motherboard. GIGABYTE also offers AI TOP GPUs, SSDs, and a PSU - all created with high-end AI performance in mind.As for the TRX50 AI TOP Motherboard, this is the beast you'll need to run powerful AI models with the AI TOP Utility. It comes equipped with 4 x PCIe 5.0 x16 slots to house up to four GPUs. In addition, you've got 8 Channel DDR5, with support for up to 2TB of system memory. And just to make it even more impressive, its Socket sTR5 supports high-end AMD Ryzen Threadripper 7000 and Threadripper PRO 7000 Series CPUs.Storage-wise, it can also house up to 4 PCIe Gen5 SSDs, features USB4 support, a Dual 10 GbE LAN for high-speed networking, and Wi-Fi 7. To ensure you get the best AI performance, the TRX50 AI TOP Motherboard also features a 16+8+4 Phases Digital VRM Solution for power delivery, thermal guards, and the same DIY-friendly scalability and GIGABYTE features like EZ-Plug and EZ-Latch that powers the company's most celebrated motherboards.For those looking to install more than one GPU into their AI TOP-powered PC, GIGABYTE offers a range of AI TOP Graphics Cards, including the GeForce RTX 4070 Ti SUPER AI TOP 16G, Radeon PRO W7900 Dual Slot AI TOP 48G, and Radeon PRO W7800 AI TOP 32G.These are all AI-ready because they feature the company's Turbo Fan cooling, which is designed to maximize airflow for multiple cards and compact dual-slot sizes with metal back plates. Running two or more GPUs like this doubles, triples, or quadruples your VRAM capacity and potential for running more complex AI training. VRAM capacity is extremely important for large-scale AI training.GIGABYTE's AI TOP 100E SSD is available in 1TB or 2TB variants and has been designed for intensive AI workloads where TBW (Terabytes Written), stability, and an expanded memory pool are essential in maintaining accuracy and performance. With a complete AI TOP system, the goal is to leverage GPU VRAM, System RAM, and an AI TOP SSD like this to offload the processing of large datasets to the SSD, which not only saves time but also overall costs.With the sheer amount of data involved in AI training, the AI TOP 100E SSD has a 150 times higher TBW rating than standard SSDs.With a Threadripper CPU, up to four GPUs, eight channels of DDR5 memory, and multiple M.2 SSDs all firing simultaneously, the final piece of the hardware puzzle is the PSU or power supply. GIGABYTE's UD1600PM PG5 AI TOP is a powerful 1600W PSU with two PCIe 16pin to PCIe 16pin connectors with additional 8pin to PCIe 16pin connectors to support up to four PCIe Gen 5.0 graphics cards. 
Each card gets its own dedicated 300W power supply. With a fully modular design and an 80 PLUS Platinum certification, the UD1600PM PG5 AI TOP supports up to 200% power excursions and is described as 'server-grade' and ultra-durable. For more, there's GIGABYTE's AI TOP Tutor: as you can see, GIGABYTE AI TOP is a comprehensive software and hardware solution that allows anyone to set up, train, and run complex AI models locally with complete control over datasets, parameters, and fine-tuning. However, one final piece is the AI TOP Tutor, a generative AI model created by GIGABYTE that can provide consultations, guidance for setting up hardware and software, optimizing hardware and training, and even technical support. Interfacing with AI TOP Tutor is done using natural language and a chat interface. To bring everything full circle, the AI TOP Tutor was trained using the AI TOP Utility and AI TOP Hardware outlined in this article. Head here for more on GIGABYTE AI TOP.
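To see why combining GPU VRAM with system DRAM and SSD offload matters at this scale, here is a rough back-of-the-envelope estimate. The ~16 bytes-per-parameter figure for standard mixed-precision fine-tuning (fp16 weights and gradients plus fp32 optimizer state) is a common rule of thumb and my own assumption, not a GIGABYTE specification, and it ignores activations and memory-saving techniques such as LoRA.

```python
def training_memory_gb(params_billion: float, bytes_per_param: int = 16) -> float:
    """Order-of-magnitude memory estimate for full fine-tuning.

    Assumes ~16 bytes per parameter (fp16 weights + gradients plus fp32
    optimizer state); activations and batch size are ignored.
    """
    return params_billion * 1e9 * bytes_per_param / 1024**3

for size in (7, 70, 236):
    print(f"{size}B params -> ~{training_memory_gb(size):,.0f} GB")
# 7B params -> ~104 GB
# 70B params -> ~1,043 GB
# 236B params -> ~3,517 GB
```

Even four 48 GB workstation GPUs provide only 192 GB of VRAM, which is why spilling the remainder into DDR5 system memory and fast Gen5 SSDs is central to the AI TOP approach for the largest models.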
Content Creation/Process Automation
Computer and Mathematical/Business and Financial Operations
null
null
null
null
null
null
news
Zeid Khater
Synthetic Data: Meet The Unsung Catalyst In AI Acceleration
Synthetic data is not a new phenomenon, but generative AI-based synthetic data is rapidly becoming the unsung hero of AI development. Find out why in this preview of the upcoming Technology & Innovation Summit North America.
https://www.forrester.com/blogs/synthetic-data-meet-the-unsung-catalyst-in-ai-acceleration/
https://go.forrester.com…Image_220314.jpg
2024-07-01T16:50:42Z
Synthetic data is not a new phenomenon. Rules-based synthetic data has been around longer than most people realize. It is commonly used in analytics for data augmentation, conjoint model analysis, and simulation testing. Rules-based methods, however, lack flexibility and struggle with complex data distributions. Assumptions made during rule creation don't always hold universally, and manually defining rules can become impractical as datasets grow. Generative AI models such as generative adversarial networks (GANs) and variational autoencoders (VAEs) are making it easier to generate more realistic synthetic data quickly by learning complex distributions directly from that data and generating much more realistic and higher-quality synthetic data, which can then train better-performing AI models. Forrester defines synthetic data as generated data of any type (e.g., structured, transactional, image, audio) that duplicates, mimics, or extrapolates from the real world but maintains no direct link to it, particularly for scenarios where real-world data is unavailable, unusable, or strictly regulated. GenAI-based synthetic data is becoming the unsung hero of AI development. For example, we have synthetic data to thank for Microsoft's Phi-1 base model, which was trained on a curated "textbook-quality" synthetic dataset, rather than traditional web data exclusively, which appears to have a positive impact on mitigating toxic and biased content generation. These smaller models will continue to play a crucial role in scaling genAI implementation for industry-specific use cases. Synthetic data is also likely to grow in popularity due to its ability to train AI models at a much faster pace by generating large, clean, relevant datasets. NVIDIA claims its NVIDIA Isaac Sim simulation application can help train [computer vision models] 100X faster. Synthetic data providers are emerging to democratize AI training, and their solutions are not limited to computer vision systems. Synthetic data provider Gretel, for example, released the world's largest open-source text-to-SQL synthetic dataset to assist developers in training their models via tabular data. One of the most salient advantages of employing synthetic data for AI model training lies in data privacy. By generating data that is completely disconnected from the original dataset, it becomes impervious to any traceability back to its source. This attribute holds particular significance in sensitive domains such as healthcare, medical research, and financial services, where the utilization of data for AI training is highly regulated and requires strict adherence to privacy laws and regulations. As the field of AI continues its rapid expansion, the demand for training data escalates in tandem, necessitating the establishment of robust regulatory frameworks. Synthetic data emerges as a viable solution, enabling the faster training of models to meet market demands while remaining fully compliant with regulatory constraints. If you're curious to hear more about how to best leverage synthetic data, please join me at Forrester's Technology & Innovation Summit North America in Austin, Texas, September 9th to 12th, 2024. I'll be presenting a session on synthetic data use cases, and there will be a variety of other sessions on related topics, so definitely check out the agenda here.
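As a concrete contrast with the generative approach, here is a toy example of what rules-based synthetic data looks like in practice; the column names, value ranges, and the 5% churn rule are invented purely for illustration.

```python
import random

random.seed(7)

def rules_based_customers(n: int) -> list[dict]:
    """Rules-based synthetic records: hand-written distributions and rules.

    Every assumption (age range, spend formula, the 5% churn rule) is coded
    by a human, which is exactly the rigidity described above: the rules may
    not match real data and become unmanageable as columns multiply.
    """
    rows = []
    for i in range(n):
        age = random.randint(18, 80)
        visits = random.randint(0, 30)
        spend = round(visits * random.uniform(5, 40) + (age - 18) * 0.5, 2)
        churned = visits == 0 or random.random() < 0.05
        rows.append({"id": i, "age": age, "visits": visits,
                     "spend": spend, "churned": churned})
    return rows

print(rules_based_customers(3))
# A GAN or VAE would instead learn these joint distributions directly from
# real (but privacy-sensitive) records and sample new, unlinked rows.
```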
Content Creation/Process Automation
Unknown
null
null
null
null
null
null
news
mymarkup
How to Raise Your Artificial Intelligence: A Conversation with Alison Gopnik and Melanie Mitchell | Los Angeles Review of Books
Julien Crockett interviews Alison Gopnik and Melanie Mitchell about complexity and learning in AI systems, and our roles as caregivers....
https://lareviewofbooks.org/article/how-to-raise-your-artificial-intelligence-a-conversation-with-alison-gopnik-and-melanie-mitchell
https://lareviewofbooks-…tzky-rotated.jpg
2024-07-06T05:47:01Z
This interview is part of The Rules We Live By, a series devoted to asking what it means to be a human living by an ever-evolving set of rules. The series is made up of conversations with those who dictate, think deeply about, and seek to bend or break the rules we live by. ¤A GROWING FEAR and excitement for todays AI systems stem from the assumption that as they improve, somethingsomeone?will emerge: feed large language models (LLMs) enough text and, rather than merely extracting statistical patterns in data, they will become intelligent agents with the ability to understand the world.Alison Gopnik and Melanie Mitchell are skeptical of this assumption. Gopnik, a professor of psychology and philosophy studying childrens learning and development, and Mitchell, a professor of computer science and complexity focusing on conceptual abstraction and analogy-making in AI systems, argue that intelligence is much more complicated than we think. Yes, what todays LLMs can achieve by consuming huge swaths of text is impressiveand has challenged some of our intuitions about intelligencebut before we can attribute to them something like human intelligence, AI systems will need the ability to actively interact with and engage in the world, creating their own mental models about how it works.How might AI systems reach this next level? And what is needed to ensure their safe deployment? In our conversation, Gopnik and Mitchell consider various approaches, including a framework to describe our role in this next phase of AI development: caregiving.¤JULIEN CROCKETT:Lets start with the tension at the heart of AI: we understand and talk about AI systems as if they are both mere tools and intelligent actors that might one day come alive. Alison, youve argued that the currently popular AI systems, LLMs, are neither intelligent nor dumbthat those are the wrong categories by which to understand them. Rather, we should think of them as cultural technologies, like the printing press or the internet. Why is a cultural technology a better framework for understanding LLMs?ALISON GOPNIK: A very common trope is to treat LLMs as if they were intelligent agents going out in the world and doing things. Thats just a category mistake. A much better way of thinking about them is as a technology that allows humans to access information from many other humans and use that information to make decisions. We have been doing this for as long as weve been human. Language itself you could think of as a means that allows this. So are writing and the internet. These are all ways that we get information from other people. Similarly, LLMs give us a very effective way of accessing information from other humans. Rather than go out, explore the world, and draw conclusions, as humans do, LLMs statistically summarize the information humans put onto the web.Its important to note that these cultural technologies have shaped and changed the way our society works. This isnt a debunking along the lines of AI doesnt really matter. In many ways, having a new cultural technology like print has had a much greater impact than having a new agent, like a new person, in the world.MELANIE MITCHELL: We use so many different metaphors to talk about LLMs. They have been called autocomplete on steroids and stochastic parrots. The science fiction writer Ted Chiang called ChatGPT a blurry JPEG of the web. Alison calls them cultural technologies. Others use metaphors that have more to do with agency and understanding. 
Im not sure weve come up with the right metaphor because in some sense, LLMs are all these things. And maybe we have to say to what extent they are each of these things. Theres been such a big debate in the AI community about this, and its interesting that such smart people can so violently disagree on how to think about these systems. But it just reflects something thats been constant in the history of AI: its challenging our notions of what intelligence means.Back in the 1970s, a lot of people were saying, if a computer could play chess at a grandmaster level, that would require general humanlike intelligence. Clearly it didnt. We thought the same thing about translation. Now were being faced with systems that are conversationalists and able to act like agents that understand conversation. Does that require humanlike intelligence? It doesnt seem to. So maybe thats just the next step in how AI pushes peoples intuitions about what intelligence is.AG: There is an implicit intuitive model that everyday people (including very smart people in the tech world) have about how intelligence works: theres this mysterious substance called intelligence, and as you have more of it, you gain power and authority. But thats just not the picture coming out of cognitive science. Rather, theres this very wide array of different kinds of cognitive capacities, many of which trade off against each other. So being really good at one thing actually makes you worse at something else. To echo Melanie, one of the really interesting things were learning about LLMs is that things like grammar, which we might have thought required an independent-model-building kind of intelligence, you can get from extracting statistical patterns in data. LLMs provide a test case for asking, What can you learn just from transmission, just from extracting information from the people around you? And what requires independent exploration and being in the world?What do you think are the limits to what LLMs can achieve just by being trained on language alone? MM: Its always dangerous to say LLMs cant do something, given that language is such a rich representative medium and there is so much data out there. But, for example, theres a new push to use LLMs to help control robots. Im not convinced LLMs are going to be the solution to get a robot that can do physical activities like fold your laundry or anything really that involves both understanding motor action and how the world works at a very basic level.AG: If you look at, for instance, videos of the very best robots people are inventing, youll notice that theres a little number in the corner that says 10x or 20x or something like that, and what that means is that the video has been sped up 20 times to make it look like the robot is doing something intelligent. If you were actually watching the robot in real life, it would just look like it was incredibly painfully slow and awkward and making mistakes all the time. This is representative of whats called Moravecs paradox: things that looked as if they would be really, really hard for AI and require a lot of intelligence, like playing chess, turned out to be relatively easy. And things that look like any two-year-old can do them, like picking up an object and putting it in a pot and stirring it, are actually really hard. 
LLMs have made that paradox more vivid.But going back to the contrast between different kinds of intelligences, one type of intelligence is about what I call transmission: how do I pass information to and extract information from another person? A different kind of intelligence is about truth: Im in the world and the world is changingwhat AI people call a nonstationary environmentso how do I find the truth about something that we havent seen before? Science is the most dramatic example of engaging in this activity, but even very young children are good at this. LLMs are not trained to do this. We talk about how they hallucinate, but hallucination isnt really the right word. To hallucinate would mean that you recognize the difference between the truth and just a bunch of things that people have said. LLMs are not designed to make that distinction.MM: I like to tell people that everything an LLM says is actually a hallucination. Some of the hallucinations just happen to be true because of the statistics of language and the way we use language. But a big part of what makes us intelligent is our ability to reflect on our own state. We have a sense for how confident we are about our own knowledge. This has been a big problem for LLMs. They have no calibration for how confident they are about each statement they make other than some sense of how probable that statement is in terms of the statistics of language. Without some extra ability to ground what theyre saying in the world, they cant really know if something theyre saying is true or false.Melanie, this ties into what youve said is the biggest issue facing LLMs: that they rely on statistical associations rather than concepts. What is a concept and why is it a limitation for LLMs? MM: A concept is a mental model of some aspect of the world that has some grounding in the truth. You might have a concept of something like a unicorn, and while its not true in the real physical world, its actually true in some different kind of fictional world. I know its fictional, but I still have a concept of it, and I can answer questions about it. I think these mental models of the way the world works, which involve things that cause other things, are concepts. And this is something that I dont think LLMs have, or maybe even can develop, on the scale that humans do.AG: Another really important point about what makes mental models different from just statistical patterns is that when you have a mental model, you also have an idea that you can go out and test whether that model is right or not. I can actively go out into the world, do an experiment, and get the right kind of new data to decide whether Im right or wrong. And that, again, is something that two-year-olds do all the time (although we call it getting into everything). Certainly, intelligent nonhuman animals also do this. Its not that, in principle, AI systems cant do it, but its not what LLMs are doing.How can we give AI systems the ability to construct mental models? MM: I think it requires the ability to intervene in the world and do experiments and reason about things counterfactually, like if I had done this, what would have happened? Or if this other thing had happened, how would that affect everything? Animals who have very different bodies from our own do this kind of reasoning, but the particulars of their bodies and their sensory systems matter a lot for how they conceptualize the world. 
The ability to more actively interact with the world and learn, as opposed to passively sitting there waiting for more data to come in, thats going to be really important. In machine learning, people use the word curriculum for how you shape training. Do you just throw all of Wikipedia at it? Or do you let it develop more in the way a child develops?AG: In my research, weve been comparing children and various agents to ask how good each is at constructing its own curriculum. If I give you a task, like a video game level that you cant solve, can you figure out the simpler thing that you should do? For example, heres the simpler game I should play to become an expert and then, eventually, Ill be able to solve a harder level. We found that kids are surprisingly good at coming up with curricula but current AI systems are not.Theres an evolutionary argument that the time when intelligence shows up in evolution is in the Cambrian explosion. Before the explosion, you had lots of organisms like sponges living on the bottom of the ocean, and they had a wonderful life where food wafted over them and they extracted it. But what happens in the Cambrian is you start having organisms with eyes and claws, or what biologists call actuators and sensors. When you get actuators and sensors, you can perceive things and move, and thats a really different niche for an animal. Thats when you start getting neural systems and brains because you need a brain to coordinate action and sensing. And when you get a bunch of these animals together, they start trying to find prey and avoid predators. You get a perceptual system thats connected to the outside world and taking in information about the world, and a motor system thats connected to the outside world and going out and changing the world. This is a foundational kind of structure for which you need to have the kind of truth-seeking intelligence we are talking about.There are some interesting attempts within robotics and AI to use reinforcement learning to try and get systems that are motivated to find truth. Instead of just trying to get rewards like a higher score in a game, these systems are motivated to get information or to try to be more effective in the world. And I think that might be the right route to think about for something that looks like the intelligence that evolved in the Cambrian.How important for this next generation of robots and AI systems is incorporating social traits such as emotions and morality?MM: Intelligence includes the ability to use tools to augment your intelligence, and for us, the main tool we use is other people. We have to have a model of other people in our heads and be able to, from very little evidence, figure out what those people are likely to do, just like we would for physical objects in the real world. This theory of mind and ability to reason about other people is going to be essential for getting robots to work both with humans and with other intelligent robots.AG: Some things that seem very intuitive and emotional, like love or caring for children, are really important parts of our intelligence. Take the famous alignment problem in computer science: How do you make sure that AI has the same goals we do? Humans have had that problem since we evolved, right? We need to get a new generation of humans to have the right kinds of goals. And we know that other humans are going to be in different environments. The niche in which we evolved was a niche where everything was changing. 
What do you do when you know that the environment is going to change but you want to have other members of your species that are reasonably well aligned? Caregiving is one of the things that we do to make that happen. Every time we raise a new generation of children, were faced with this difficulty of here are these intelligences, theyre new, theyre different, theyre in a different environment, what can we do to make sure that they have the right kinds of goals? Caregiving might actually be a really powerful metaphor for thinking about our relationship with AIs as they develop.Alison, this concept of caregiving touches on a recent conversation you had with Ted Chiang about his novella The Lifecycle of Software Objects (2010). What interested you in Chiangs work, and what are the parallels with your own? AG: As always, novelists are better at conveying things than we scientists are. This is a lovely science fiction story about people who try to raise AI agents as if they were children and describes the very complicated dilemmas the human parents of these AIs experience and the ways the AIs try to both follow what their parents do and find paths for themselves. Its actually the best description of human parenting that Ive read. Now, its not like were in the ballpark of raising AIs as if they were humans. But thinking about that possibility gives us a way of understanding what our relationship to artificial systems might be. Often the picture is that theyre either going to be our slaves or our masters, but that doesnt seem like the right way of thinking about it. We often ask, Are they intelligent in the way we are? Theres this kind of competition between us and the AIs. But a more sensible way of thinking about AIs is as a technological complement. Its funny because no one is perturbed by the fact that we all have little pocket calculators that can solve problems instantly. We dont feel threatened by that. What we typically think is, With my calculator, Im just better at math.MM: Often, when these technologies first come out, people worry a lot because they think theyre going to harm us in some way. When calculators first appeared, people didnt want kids using them because they thought that would make kids not learn math. But as we figure out what these technologies are good for, we learn how to use them. That will happen with AI. Its going to be a new kind of technology thats going to augment us in many ways, just like other technologies have, but its not going to supplant us. Its not going to take away all of our jobs, because its just not that kind of thing. It doesnt have the kinds of qualities that are going to replace humans.AG: But we still have to put a lot of work into developing norms and regulations to deal with AI systems. An example I like to give is, imagine that it was 1880 and someone said, all right, we have this thing, electricity, that we know burns things down, and I think what we should do is put it in everybodys houses. That would have seemed like a terribly dangerous idea. And its trueit is a really dangerous thing. And it only works because we have a very elaborate system of regulation. Theres no question that weve had to do that with cultural technologies as well. When print first appeared, it was open season. There was tons of misinformation and libel and problematic things that were printed. We gradually developed ideas like newspapers and editors. I think the same thing is going to be true with AI. 
At the moment, AI is just generating lots of text and pictures in a pretty random way. And if were going to be able to use it effectively, were going to have to develop the kinds of norms and regulations that we developed for other technologies. But saying that its not the robot thats going to come and supplant us is not to say we dont have anything to worry about.Are you surprised by the rate at which were adopting AI tools like ChatGPT?MM: It depends what you mean by adopting. People use ChatGPT, but what do they use it for? I dont have a good sense of the degree to which its being adopted as a big part of peoples jobs, for instance. I think its really useful for some things like coding. But its not perfectyou have to check it.AG: Rather than the dramatic AI narrative about whats just happened with ChatGPT, I think its important to point out that the real revolution, which passed relatively unheralded, was around the year 2000 when everything became digital. Thats the change that were still reckoning with. But because it happened 20 to 30 years ago, its something we take for granted. Theres a wonderful paper in the journal Psychological Science where they asked people how harmful they thought a fictional technology from a particular date was. The researchers very cleverly connected the technologys date to the persons birthday. It turns out that if something was invented more than two years after you were born, youre much more likely to think it was harmful than if it was invented two years before you were born. Suppose I said to you, heres a technology that we know kills millions of people every year, and is an existential threat to our existence on the planet. How do you feel about it? Well, that technology exists; its called the internal combustion engine. No one is out there thinking, as they look at their family car, This thing is so terrifying. Its changed the world so much and we do have to do something about it, but it doesnt have the same kind of emotional impact that thinking about artificial intelligence does.But there are many people familiar with the technology, such as Geoffrey Hinton and Douglas Hofstadter, who are very worried about the direction AI could go in. What are they seeing in the technology that you dont think is right?MM: I wouldnt say that you shouldnt be worried about it. There are a lot of bad things that could happen with these technologies. Were already seeing some bad things happen, like what Cory Doctorow called the enshittification of the internet, where there are tons of crap out there now and its just getting worse. There are also the problems of bias, privacy, and the ridiculous amount of electricity and water that are used in data centers. I think, though, that Hintons and Hofstadters concerns are different. Hinton is worried about the systems getting out of control, becoming superintelligent, manipulating us, and having their own agency. Hofstadter, on the other hand, is more worried about dehumanizationthat the things he values most, like the creation of music and literature, will be taken over by computers.Im less worried about these more speculative risks because I dont see evidence that they are going to happen soon. I wrote a paper called Why AI Is Harder Than We Think, where I explained how people underestimate how hard it is to capture human intelligence in these machines. Theres a lot of stuff making up human intelligence that these critics arent really taking into account as part of intelligence. 
I find it funny that people call Hinton an artificial general intelligence expert because theres no such thing as AGI. No one knows what intelligence is, much less how its going to be captured in computers in the future.AG: I think it is admirable that some of the pioneers in the field are taking responsibility for the technology and trying to be thoughtful about its effects on the world. Its an admirable sort of Robert Oppenheimer impulse. And, as Melanie said, there certainly are bad things that could happen. But I also think its true that the people who are designing the systems have an interest in saying that these systems are really powerful and have something like general intelligence. It is also striking to me that you see much more existential anxiety among people who arent cognitive scientists, who havent studied human or animal intelligence. I think its fair to say that the consensus among people who study human intelligence is that theres a much bigger gap between human and artificial intelligence, and that the real risks we should pay attention to are not the far-off existential risks of AI agents taking over but rather the more mundane risks of misinformation and other bad stuff showing up on the internet.Are either of you worried about the extent to which AI systems are already subtly influencing our decisions, particularly online and through our phones?MM: In some areas of our lives, yes. We trade our agency for convenience. For example, I like to use GPS in my car to navigate. Its very convenient and I trust that its going to work. But it means that I dont get as good a mental map of the places where I go. Thats one of the problems with a lot of these systemsthey make things very convenient for us, which comes at a cost, and we only sort of understand the cost. There are other examples in history where weve given up certain abilities for convenience, such as books, where we dont have to memorize. Having a smartphone means that I can just look stuff up rather than ask people or go to the encyclopedia. Handing off our decision-making to algorithms has hurt us in some ways, and were starting to see the results of that now with the current state of the world.AG: The political scientist Henry Farrell has argued that weve had artificial intelligences before in the form of markets and states. A market is just a big information-processing, decision-making device. So, in a funny way, anytime I see that something costs $4.99 and I pay it, Im giving up a kind of autonomy to the force of the market, right? Im not acting as I would if I had lived in a foraging culture, for example. We have these large-scale information-processing devices, and markets and states and bureaucracies are really good examples of this, where we give up individual decision-making. Legal systems are like that too. Im not deciding whether Im going to cross the street; the traffic light is telling me whether I should cross. And, again, those things have benefits in coordinating people across very large social communities. But they also have downsides too. They can take on a life of their own, independent of all the individual people who are making decisions about them. 
So when we say the country has decided to invade or we say the market has collapsed, thats actually a lot of individual people making decisions, but these superindividual decision-making systems have had a kind of life of their own as long as weve been around as human beings.MM: This reminds me of Nick Bostroms paperclip apocalypse where a superintelligent AI system behaves in a psychopathic way: its given a goal and doesnt care about the consequences of its actions as long as it is able to achieve that goal. Ted Chiang wrote a piece where he argued that we already have entities that act like that now: theyre called corporations and their goal is maximize shareholder value. I think thats why Silicon Valley people often worry about what AI is going to do. Corporations maximizing shareholder value is the metaphor theyre using to think about AI systems.AG: I think Melanie is absolutely right. Its about these metaphors. Often the metaphor for an intelligent system is one that is trying to get the most power and the most resources. So if we had an intelligent AI, thats what it would do. But from an evolutionary point of view, thats not what happens at all. What you see among the more intelligent systems is that theyre more cooperative, they have more social bonds. Thats what comes with having a large brain: they have a longer period of childhood and more people taking care of children. Very often, a better way of thinking about what an intelligent system does is that it tries to maintain homeostasis. It tries to keep things in a stable place where it can survive, rather than trying to get as many resources as it possibly can. Even the little brine shrimp is trying to get enough food to live and avoid predators. Its not thinking, Can I get all of the krill in the entire ocean? That model of an intelligent system doesnt fit with what we know about how intelligent systems work.How has your work on AI changed the way you understand yourself?MM: John McCarthy, one of the pioneers of AI, and one of the people from the 1956 Dartmouth workshop who thought we would make great progress in just a summer, later said, You know, intelligence was harder than we thought. Thats just been the constant progression of my own view. And its made me want much more to talk to people in cognitive science, like Alison. I think that research in AI is going to have to get back to its original focus on the nature of intelligence as opposed to better optimization techniques.AG: As a developmental psychologist, I spend a lot of my time with little kids, and they are infinitely smarter than we think. Working with AI has made me even more impressed with the kinds of things that every two-year-old is doing. It has also made the intelligence of octopuses, brine shrimp, and all the other creatures around us more vivid. On the other hand, I would not have assumed that we could learn as much just from text as we do. That raises the question about how much of what I think is my deep knowledge about the world is really just my being able to parrot the things Ive read or heard other people say. LLMs have raised that issue in a really interesting way.¤Alison Gopnik is a professor of psychology and affiliate professor of philosophy at the University of California, Berkeley, where she has taught since 1988. She received her BA from McGill University and her PhD from Oxford University. She is a world leader in cognitive science, particularly the study of childrens learning and development. 
She is the author of over 100 journal articles and several books, including the best-selling and critically acclaimed popular books The Scientist in the Crib (1999), The Philosophical Baby: What Childrens Minds Tell Us About Love, Truth and the Meaning of Life (2009), and The Gardener and the Carpenter: What the New Science of Child Development Tells Us About the Relationship Between Parents and Children (2016). She is a fellow of the Cognitive Science Society and the American Association for the Advancement of Science and a member of the American Academy of Arts and Sciences.Melanie Mitchell is a professor at the Santa Fe Institute. Her current research focuses on conceptual abstraction and analogy-making in artificial intelligence systems. Melanie is the author or editor of six books and numerous scholarly papers in the fields of artificial intelligence, cognitive science, and complex systems. Her 2009 book Complexity: A Guided Tour won the Phi Beta Kappa Science Book Award, and her 2019 book Artificial Intelligence: A Guide for Thinking Humans was short-listed for the Cosmos Prize for Scientific Writing. Melanie is the recipient of the Senior Scientific Award from the Complex Systems Society, the Distinguished Cognitive Scientist Award from UC Merced, and the Herbert A. Simon Award of the International Conference on Complex Systems.¤Featured image: El Lissitzky. Proun 99, ca. 192325. Gift of Collection Société Anonyme. Yale University Art Gallery (1941.548). CC0, artgallery.yale.edu. Accessed May 25, 2024. Image has been rotated.
Unknown
Education, Training, and Library/Life, Physical, and Social Science
null
null
null
null
null
null
news
Sandeep Mishra
AI capabilities before and after LLM: What is LLM?
This content delves into how LLMs have transformed machine learning and natural language processing, comparing pre-LLM AI advancements with the new benchmarks set by these powerful models.
https://www.c-sharpcorner.com/article/ai-capabilities-before-and-after-llm-what-is-llm/
https://www.c-sharpcorne…sharp-corner.png
2024-07-19T00:00:00Z
Introduction: AI was available even before 30 November 2022, but everything changed after that date. Do you want to know why? Well, the day ChatGPT launched, it added so much value to our work and personal lives that we can hardly imagine a day without AI. But what makes ChatGPT and other AI models so special now, given that, as mentioned, AI was available even before 30 November 2022? The answer is the LLM. What is the AI LLM Model? The AI LLM model, short for Artificial Intelligence Large Language Model, employs natural language processing and machine learning techniques to enhance research and analysis. These models make a difference because they possess the capability to understand context, summarize input, and structure information accordingly. As we understand, information lacks value without context, and these models excel in contextualizing data to enhance its relevance and usefulness. It is an open-source tool developed with the goal of providing professionals with a comprehensive and automated solution to deal with complex tasks. Well-known LLM families include the GPT (Generative Pre-trained Transformer) series (GPT-1, GPT-2, GPT-3, GPT-4); BERT (Bidirectional Encoder Representations from Transformers); RoBERTa (a robustly optimized BERT approach); T5 (Text-To-Text Transfer Transformer); XLNet; Transformer-XL; ALBERT (A Lite BERT); ELECTRA; ERNIE (Enhanced Representation through kNowledge Integration, from Baidu and Tencent); BlenderBot (including BlenderBot 2.0 and 3.0); Megatron-LM; OPT (Open Pre-trained Transformer); and DeBERTa (Decoding-enhanced BERT with disentangled attention). Open-Source Advantage: One of the primary advantages of the AI LLM model is its open-source nature. Open-source software allows transparency, collaboration, and customization. These open-source models can be hosted and controlled by businesses and trained further for their own needs and purposes. Professionals can access, modify, and improve the AI LLM model's source code, enabling them to tailor it to their specific needs.
This collaborative approach fosters innovation in the business community and ensures the tool stays relevant in the evolving business landscape. The following models are open source. BERT (Bidirectional Encoder Representations from Transformers): Developed by Google, BERT is designed to pre-train deep bidirectional representations from unlabeled text by jointly conditioning on both left and right contexts. GPT-2 and GPT-Neo: OpenAI released the smaller versions of GPT-2 as open source, and the GPT-Neo project by EleutherAI is an implementation of a GPT-2-like model that is also open source. Transformer-XL: Introduced by Google Brain, Transformer-XL is designed for understanding longer texts, offering state-of-the-art performance on many NLP benchmarks. XLNet: Developed by Google and Carnegie Mellon University, XLNet is an extension of the Transformer-XL model and outperforms BERT on several NLP tasks. RoBERTa (A Robustly Optimized BERT Pretraining Approach): Developed by Facebook AI, RoBERTa builds on BERT's language masking strategy, optimizing training and data processing for more robust performance. ALBERT (A Lite BERT): Also developed by Google, ALBERT is a streamlined version of BERT that improves model scalability and training speed by reducing parameters. ELECTRA: Developed by Google, ELECTRA is a method for self-supervised language representation learning that can be used to pre-train transformer networks much more efficiently than traditional models. T5 (Text-To-Text Transfer Transformer): Developed by Google, T5 interprets all NLP tasks as a text-to-text problem, using a unified framework to handle different tasks. Fairseq: A sequence modeling toolkit developed by Facebook AI Research that allows researchers and developers to train custom models for translation, summarization, language modeling, and other text generation tasks. Hugging Face Transformers: This library provides thousands of pre-trained models to perform tasks on texts such as classification, information extraction, question answering, summarization, translation, and text generation, in over 100 languages. Its models are based on frameworks like BERT, GPT-2, RoBERTa, T5, and others. Key Features of the AI LLM Model: Natural Language Processing: The AI LLM model leverages natural language processing capabilities to analyze and understand texts. It can interpret complex terms, identify patterns, and extract important information from vast amounts of documents efficiently. Case Law Analysis: By utilizing machine learning algorithms, the AI LLM model can analyze and predict the outcomes of legal cases based on historical data. It can assess the relevance and significance of past judgments, enabling legal professionals to make more informed decisions. Automated Document Review: The AI LLM model simplifies the document review process by automatically summarizing and categorizing documents. Its ability to swiftly identify and extract key information from lengthy contracts, agreements, or statutes saves valuable time and resources for professionals. Research Assistance: The AI LLM model acts as a virtual research assistant, offering suggestions, recommending relevant cases and statutes, and providing comprehensive analysis on specific topics.
This feature streamlines research and equips professionals with the necessary information to prepare strong arguments.Benefits of the AI LLM ModelTime and Cost Efficiency: By automating labor-intensive tasks, the AI LLM model saves significant time and reduces costs associated with research and analysis. Professionals can focus on higher-value activities, such as providing expert advice and crafting strategies.Accuracy and Consistency: The AI LLM model's ability to read, analyze, and interpret texts ensures accuracy and consistency in research. Its algorithms minimize the risk of human error, providing reliable and thorough results.Enhanced Decision Making: Through its advanced data analytics capabilities, the AI LLM model aids professionals in making well-informed decisions based on comprehensive insights. It offers a broader perspective by analyzing a vast range of documents in a fraction of the time it would take manually.ConclusionThe AI LLM model is a powerful open-source tool that harnesses the potential of AI to transform research and analysis. Its natural language processing capabilities, case law analysis, document review automation, and research assistance benefit professionals immensely. By leveraging open-source intelligence, the AI LLM model empowers professionals to adapt, customize, and enhance this innovative tool, keeping it aligned with the evolving needs of the industry. With its time and cost efficiency, accuracy, and improved decision-making capabilities, the AI LLM model is set to revolutionize how professionals approach their work.
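As a small, concrete example of the kind of document summarization described above, the Hugging Face Transformers library mentioned earlier can be used with a few lines of Python. The model checkpoint and the sample contract clause are placeholders of my own choosing; any summarization model from the Hub could be substituted.

```python
from transformers import pipeline

# Load a summarization pipeline; the checkpoint here is an example choice.
summarizer = pipeline("summarization", model="sshleifer/distilbart-cnn-12-6")

contract_clause = (
    "The Supplier shall deliver the Goods to the Buyer's premises no later "
    "than thirty (30) days after receipt of a purchase order. Late delivery "
    "entitles the Buyer to liquidated damages of 0.5% of the order value per "
    "day, capped at 10% of the total order value."
)

# Condense the clause into a short summary for faster document review.
summary = summarizer(contract_clause, max_length=40, min_length=10)
print(summary[0]["summary_text"])
```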
Process Automation/Information Retrieval Or Search/Content Synthesis
Legal
null
null
null
null
null
null
news
PYMNTS
Excel’s AI Makeover: Microsoft’s Billion-Dollar Bet on Smarter Spreadsheets
In the race to infuse artificial intelligence (AI) into every corner of the digital world, Microsoft has set its sights on conquering the final frontier of office productivity: the humble spreadsheet. The tech giant’s latest creation, SpreadsheetLLM, aims to change how businesses crunch numbers and make decisions. By harnessing the power of large language models […]The post Excel’s AI Makeover: Microsoft’s Billion-Dollar Bet on Smarter Spreadsheets appeared first on PYMNTS.com.
https://www.pymnts.com/artificial-intelligence-2/2024/excels-ai-makeover-microsofts-billion-dollar-bet-on-smarter-spreadsheets/
https://www.pymnts.com/w…rosoft-Excel.jpg
2024-07-17T20:19:23Z
In the race to infuse artificial intelligence (AI) into every corner of the digital world, Microsoft has set its sights on conquering the final frontier of office productivity: the humble spreadsheet. The tech giant's latest creation, SpreadsheetLLM, aims to change how businesses crunch numbers and make decisions. By harnessing the power of large language models (LLMs), this AI tool, which is still in the testing stages, could transform Excel from a static grid into a dynamic, question-answering powerhouse, potentially reshaping workflows for millions of users worldwide. "The infinite cell-like nature and references to cells in spreadsheets make it challenging for LLMs, which have been trained using standard linear tokenization techniques, to understand the spreadsheet data model," Rogers Jeffrey Leo John, co-founder and CTO of DataChat, a no-code, generative AI platform, told PYMNTS. Cracking the Spreadsheet Code: At the heart of SpreadsheetLLM lies SheetCompressor, an encoding framework that effectively compresses spreadsheets for use by LLMs. This breakthrough, detailed in a study on the arXiv preprint server, tackles a longstanding challenge in applying AI to spreadsheets. Microsoft's SheetCompressor uses three clever tricks to shrink spreadsheets for AI use. First, it spots and compresses repetitive data. Next, it converts information to a format (JSON) without losing details. Finally, it bundles data together with matching formats. The results are impressive. SheetCompressor cuts down the AI's workload by 96%. This could mean businesses pay just 1/25th of what they would otherwise for AI to crunch their spreadsheet numbers. Of course, AI is already useful for manipulating spreadsheets. Microsoft's Excel Ideas feature uses AI to analyze data and suggest visualizations, charts and pivot tables. Users can simply select a range of data and ask Excel to recommend insights, streamlining the process of identifying trends and patterns. Google Sheets has introduced Smart Fill, which uses AI to detect patterns in data entry and automatically suggests column completions. This feature saves time on repetitive data input tasks and helps maintain consistency across large datasets. Startups like Rows and Causal are building AI-native spreadsheet alternatives. Rows, for instance, allows users to pull data from various sources using natural language queries, while Causal focuses on financial modeling with AI-assisted forecasting. Tiller Money leverages AI to categorize financial transactions automatically in spreadsheets, helping users track expenses and budgets more effectively. The system learns from user corrections to improve accuracy over time. Spreadsheet.com incorporates AI to suggest formulas and functions based on a sheet's data and column headers. It can also generate charts and graphs automatically when users select data ranges. Airtable's AI assistant can help users create new tables, suggest field types, and even write formulas based on natural language descriptions of what the user wants to achieve. Performance Leap: Microsoft's team tested SpreadsheetLLM against proprietary models like GPT-3.5 and GPT-4 and open-source offerings like Llama 2, Llama 3, Phi-3, and Mistral-v2. The results were impressive, with GPT-4 showing a 27% improvement in table detection compared to previous methods.
The researchers have also introduced a chain of spreadsheet (CoS) methodology, further refining the AI's ability to work with spreadsheet data by breaking tasks into manageable steps for the models.

The implications for businesses could be far-reaching. With its massive user base, Excel has long been a cornerstone of Microsoft's Office suite and a crucial tool across industries. The company aims to enhance spreadsheet functionality by integrating AI capabilities, potentially automating complex tasks, and offering new data interpretation methods.

"SpreadsheetLLM has the potential to transform data analysis in spreadsheets by enabling efficient user interactions and more accurate responses to plain English questions on spreadsheet data," John said.

However, the technology is not without limitations. Due to token constraints, the framework doesn't account for visual elements like background colors and borders. The researchers also acknowledge that more work is needed in semantic understanding of cell contents.

While direct integration of SpreadsheetLLM into Microsoft Excel isn't imminent, the research signals a clear direction for future feature enhancements. The potential to dramatically improve data analysis and spreadsheet insight generation could lead to time savings and new data-driven discoveries for businesses.

The technology could have broader implications for how businesses leverage their data assets. "With technologies like SpreadsheetLLM that provide ways of encoding the knowledge present in spreadsheets to LLMs, business users will now be able to leverage GenAI technologies to combine information from their spreadsheets and data warehouses to make more efficient business decisions," John predicted.

For all PYMNTS AI coverage, subscribe to the daily AI Newsletter.

See More In: AI, Airtable, artificial intelligence, arXiv, Causal, chain of spreadsheet, DataChat, Excel, GenAI, generative AI, Microsoft, News, PYMNTS News, Rogers Jeffrey Leo John, Rows, SheetCompressor, spreadsheet, Spreadsheet.com, SpreadsheetLLM, Tiller Money
Content Synthesis/Decision Making/Process Automation
Business and Financial Operations/Computer and Mathematical
null
null
null
null
null
null
news
Anirban Ghoshal
Small is big: Meta bets on AI models for mobile devices
Facebook-parent Meta has been working on developing a new small language model (SLM) compatible with mobile devices with the aim of running on-device applications while mitigating energy consumption during model inferencing tasks, a paper published by company researchers showed.  To set the context, large language models (LLMs) have a lot more parameters. For instance, Mistral-22B has 22 billion parameters while GPT-4 has 1.76 trillion parameters. In contrast, smaller language models have relatively fewer parameters, such as Microsoft’s Phi-3 family of SLMs, which have different versions starting from 3.8 billion parameters.  A parameter helps an LLM decide between different answers it can provide to queries — the more the number of parameters, the more the need for a larger computing infrastructure.However, Meta researchers believe that effective SLMs with less than a billion parameters can be developed and it would unlock the adoption of generative AI across use cases involving mobile devices, which have relatively less compute infrastructure than a server or a rack.The researchers, according to the paper, ran experiments with models, architected differently, having 125 million and 350 million parameters, and found that smaller models prioritizing depth over width enhance model performance.“Contrary to prevailing belief emphasizing the pivotal role of data and parameter quantity in determining model quality, our investigation underscores the significance of model architecture for sub-billion scale LLMs,” the researchers wrote.“Leveraging deep and thin architectures, coupled with embedding sharing and grouped-query attention mechanisms, we establish a strong baseline network denoted as MobileLLM, which attains a remarkable 2.7%/4.3% accuracy boost over preceding 125M/350M state-of-the-art models,” they added.The 125 and 350 million models, dubbed MobileLLM, according to the researchers, were as effective as large language models, such as Llama 2, in handling chat and several API calling tasks, highlighting the capability of small models for common on-device use cases. While MobileLLM is not available across any of Meta’s products for public use, the researchers have made the code and data for the experiment available along with the paper.
https://www.computerworld.com/article/2514867/small-is-big-meta-bets-on-ai-models-for-mobile-devices.html
https://www.computerworl…strip=all&w=1024
2024-07-09T12:27:30Z
However, Meta researchers believe that effective SLMs with less than a billion parameters can be developed and it would unlock the adoption of generative AI across use cases involving mobile devices, which have relatively less compute infrastructure than a server or a rack.The researchers, according to the paper, ran experiments with models, architected differently, having 125 million and 350 million parameters, and found that smaller models prioritizing depth over width enhance model performance.“Contrary to prevailing belief emphasizing the pivotal role of data and parameter quantity in determining model quality, our investigation underscores the significance of model architecture for sub-billion scale LLMs,” the researchers wrote.
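To illustrate the depth-versus-width trade-off the researchers describe, here is a rough, back-of-the-envelope parameter count for two hypothetical decoder-only configurations with a similar sub-billion budget. The formula and the configurations are simplified illustrations, not the actual MobileLLM architectures.

def transformer_params(depth, width, vocab=32000, ffn_mult=4, share_embeddings=True):
    # Rough count for a decoder-only transformer: attention projections
    # (4 * width^2) plus feed-forward matrices (2 * ffn_mult * width^2) per
    # layer, plus token embeddings (shared with the output head or not).
    per_layer = 4 * width**2 + 2 * ffn_mult * width**2
    embeddings = vocab * width * (1 if share_embeddings else 2)
    return depth * per_layer + embeddings

# Hypothetical configurations at roughly the same budget:
wide_shallow = transformer_params(depth=12, width=768)
deep_thin = transformer_params(depth=30, width=512)
print(f"wide/shallow ~{wide_shallow/1e6:.0f}M params, deep/thin ~{deep_thin/1e6:.0f}M params")

Both configurations land near 110 million parameters, which shows how depth can be traded for width at an almost constant budget; the paper's claim is that, at this scale, the deeper and thinner option tends to perform better.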
Content Synthesis/Prediction
Unknown
null
null
null
null
null
null
news
Bill Siwicki
AI’s ability to see and hear patients holds huge promise
AI-enabled computer vision could be a game-changer, says one CEO, who also discusses the risks of generative AI and the promise of automated transcription, which is already proving itself.
https://www.healthcareitnews.com/news/ai-s-ability-see-and-hear-patients-holds-huge-promise
https://www.healthcareit…Singh%201200.jpg
2024-07-25T16:47:07Z
Artificial intelligence is fast proliferating across healthcare, with various applications large and small finding their way into workflows industry-wide.Whether it's helping clinicians during telemedicine visits, transcribing entire conversations between doctors and patients, writing notes for nurses in response to patient portal questions, helping patients triage their problems via chatbots, or any number of other applications, AI is proving itself useful to many stakeholders in healthcare.Narinder Singh has been working with AI for years. He is CEO and cofounder of LookDeep Health, a virtual sitting, virtual nursing and virtual care company. Past roles include working in Accenture's Center for Strategy Technology, a corporate strategy position in the office of the CEO at SAP, cofounder of Appirio, president of Topcoder, and vice president of engineering at webMethods.Healthcare IT News sat down with Singh to discuss how AI can help increase capacity in telemedicine, the risks posed by generative AI for hospitals and health systems, how provider organizations can overcome these risks, and the role AI is playing in scribe technologies.Q. You note that telemedicine of course removes the burden of distance from healthcare interactions. But you say it does not increase the capacity for them. How do you see AI helping here?A. Let me start with some context for why this is a key question, and perhaps the question for the future of hospital care. Every week we speak with hospitals that note patient acuity is rising or they are staff constrained, and most cite both.The U.S. population over 65 grew five times faster than the total population from 2010 to 2020 the fastest rate in more than a hundred years. This is part of a longer-term trend and highlights the rising age and associated acuity of the patients hospitals will care for in the future.At the same time, we have seen repeated projections from worrisome to disastrous for nursing and other roles in the hospital and that is independent of financial pressures that make it nearly impossible to expand staffing levels.We now have had a generation of telemedicine inside the hospital from eICU to tele-consults and now virtual sitting and virtual nursing. At a project level, there have been many successes, but at a macro level, collectively telemedicine in the hospital has had a very limited impact on care, sans one massive exception COVID.During the pandemic, we learned that seamless access via telemedicine creates flexibility that enables a system to adapt. Yet, it did not expand our resource capacity. Tele-capabilities can bridge great distances, but do not change the underlying units of work necessary to provide care.Now, AI can mean many things, but let's start with what relates to telemedicine the ability to expand our observational capacity (rather than how it impacts decision-making). Today, a nurse who covers six patients will be in any given patient's room for one to two hours. Doctors will be in an individual patient's room generally only a few minutes per day.Therefore, the vast majority of the time, a patient is without the watchful eye of a provider. 
This is despite the fact that so much of what is happening with the patient can only be assessed and understood at the bedside.Are they less active; trying to get out of bed; does their breathing seem more labored; did the alarm go off because the sensor slipped off their finger or the breathing tube slipped out of their neck; etc.?One branch of AI, computer vision, can let us have eyes on every patient all the time. This can help allocate the scarcest resource in the hospital the clinical attention of nurses and doctors more appropriately.We have decades of evidence that increasing clinical bandwidth positively impacts patients. Video alone even in rightly compelling areas like virtual nursing will simply repeat the disappointments of the past. With AI, we can better leverage the time and expertise of our most significant constraint.Imagine a world where AI acts as a guardian angel for patients and their caregivers. Identifying potential issues and alerting healthcare professionals before a small problem becomes a big one. This isn't just about efficiency; it's about fundamentally changing the way we deliver care.AI can provide that extra layer of support, ensuring that no patient is left unattended, even for a moment. It's not about replacing human touch but augmenting it, making our healthcare system more responsive, resilient, and, ultimately, more human.Q. You caution there are real risks posed by generative AI for hospitals and health systems. What are they?A. Generative AI can streamline prior authorizations, patient coding, and the intricate interactions between insurance and healthcare providers. However, it could also ignite an epic civil war between them.This productivity could lead to a faster but more complicated landscape of disputes, ultimately requiring more human adjudicators to resolve disputes. Instead of cutting down on admin work, it might actually increase it. Generative AI could infinitely scale the most cynical stereotypes of over usage and aggressive denial of claims.AI tools are making strides in reducing the time doctors spend on paperwork, especially outside the hospital. But in a hospital setting, the complexity of care and the lack of defined "visits" mean these tools aren't as effective yet.We have had years to learn how difficult and specific the development and application of machine learning algorithms are in hospitals. The allure of a magic approach to remove that tedious hard work and its integration into clinical workflows is tempting, but naive."Generative" patterns are relevant to many parts of healthcare operations, but they are not a golden ticket. They do not yet address the need to synthesize defined sets of information and repeatedly draw the same conclusions from them. The predictability of inputs and outputs is crucial to evaluation and certainty in clinical decision making.Q. How can hospitals and health systems overcome these risks posed by generative AI?A. On the first point related to the battles between insurers and providers, I see no immediate resolution. You simply cannot afford to have humans attempting to deal with the volume of AI-generated requests or responses, so participation in this arms race is unavoidable.However, engaging in a way that sets a foundation for evaluating and incorporating generative models into workflows offers leverage for the future. 
Key steps include securing PHI, ensuring checks and balances on outputs, evaluating models within and outside their scope, and not alienating your workforce with premature claims of replacing their roles for a few dollars an hour.These are just the beginning.Already we are seeing insiders like Sequoia and Goldman questioning the hype and benefit of generative AI. We will go through a valley of despair; yet focusing on the pragmatic and not falling in love with the broad proclamation will keep many an innovation team from the cutting block. Hospitals need two antagonistic mindsets.First, experimentation is essential. Generating non-clinical content (emails, communications), evaluating summarization of EHR context, improving language translation, and transcription these all are areas where generative AI can be safely tuned and targeted for improvements. These applications can free up valuable time for healthcare professionals to focus on more critical tasks.Second, hospitals must enforce rigorous evaluation and demand repeatability. For clinical scenarios, you should expect proof of any claims of capability. Even better, have an approach for continuous evaluation of AI capabilities within the solution. Concrete claims must ensure that the same set of inputs produces the same results, maintaining consistency and reliability in clinical decision making.In other industries, technologists, as Norman Vincent Peale once quipped, "shoot for the moon and settle for landing in the stars." In healthcare we have seen the disastrous implications of such strategies setting back industries for a decade or more (Theranos for blood testing, Watson for AI for cancer).You can be pragmatic without being slow the right leaders will drive that balance.Q. You have observed more than a half dozen transcription companies raising more than $30 million in the last few years. Why is this? And what role is AI playing in these scribe technologies?A. There are more than a million doctors in the United States. Their time is incredibly valuable, and a generation of being treated like both experts and entry-level data analysts has driven tremendous burnout.The math is straightforward, and now the technology is more accessible than ever. The narrative that "the time is now" is not a new one, but it may finally be becoming a reality. It's a wonderful use of technological advances.AI is playing a pivotal role in these scribe technologies by drastically improving the accuracy and efficiency of transcriptions. With AI, transcription can be done in real time, with higher accuracy and at a fraction of the cost.The challenge is that in just the last months, AI advances continue their breakneck pace of advancement redefining the starting point of building such solutions. It is clear that transcription solutions are not foundational AI models themselves; rather, they are solutions built on top of foundational AI models.The cost of developing competitive solutions has likely dropped by 95%. Better integration with clinical workflows, exceptional go-to-market models, and innovative derivative solutions remain massively important differentiators. However, the quality of difference between top solutions in the AI aspects of transcription itself will become essentially zero.As a result, in this future, it is only inertia that will prevent prices from dropping dramatically, which should be great for healthcare providers. 
Lower costs will make these advanced transcription solutions accessible to more practices, further reducing the administrative burden on doctors and allowing them to focus more on patient care.

The surge in investment in transcription companies is a testament to the transformative potential of AI in healthcare; the risks are that the commoditization of the category results in desperate over-promising in order to keep up with investor expectations.

Follow Bill's HIT coverage on LinkedIn: Bill Siwicki
Email him: [email protected]
Healthcare IT News is a HIMSS Media publication.
Process Automation/Digital Assistance/Content Synthesis
Healthcare Practitioners and Support/Office and Administrative Support
null
null
null
null
null
null
news
minference added to PyPI
To speed up long-context LLMs' inference, MInference uses approximate and dynamic sparse attention, which reduces pre-filling inference latency by up to 10x on an A100 while maintaining accuracy.
https://pypi.org/project/minference/
https://pypi.org/static/…er.abaf4b19.webp
2024-07-02T16:30:37Z
MInference: Million-Tokens Prompt Inference for Long-context LLMs
| Project Page | Paper | HF Demo |
https://github.com/microsoft/MInference/assets/30883354/52613efc-738f-4081-8367-7123c81d6b19

News: We will present MInference 1.0 at the Microsoft Booth and ES-FoMo at ICML'24. See you in Vienna!

TL;DR
MInference 1.0 leverages the dynamic sparse nature of LLMs' attention, which exhibits some static patterns, to speed up the pre-filling for long-context LLMs. It first determines offline which sparse pattern each head belongs to, then approximates the sparse index online and dynamically computes attention with the optimal custom kernels. This approach achieves up to a 10x speedup for pre-filling on an A100 while maintaining accuracy.

MInference 1.0: Accelerating Pre-filling for Long-Context LLMs via Dynamic Sparse Attention (Under Review, ES-FoMo @ ICML'24)
Huiqiang Jiang, Yucheng Li, Chengruidong Zhang, Qianhui Wu, Xufang Luo, Surin Ahn, Zhenhua Han, Amir H. Abdi, Dongsheng Li, Chin-Yew Lin, Yuqing Yang and Lili Qiu

Overview

Quick Start

Requirements: Torch, FlashAttention-2, Triton == 2.1.0

To get started with MInference, simply install it using pip:

pip install minference

Supported Models
General: MInference supports any decoding LLMs, including LLaMA-style models and Phi models. We have adapted nearly all open-source long-context LLMs available in the market. If your model is not on the supported list, feel free to let us know in the issues, or you can follow the guide to manually generate the sparse heads config. You can get the complete list of supported LLMs by running:

from minference import get_support_models
get_support_models()

Currently, we support the following LLMs:

How to use MInference

For HF (the '+' lines mark the additions to a standard Transformers pipeline):

  from transformers import pipeline
+ from minference import MInference

  pipe = pipeline("text-generation", model=model_name, torch_dtype="auto", device_map="auto")

  # Patch MInference Module
+ minference_patch = MInference("minference", model_name)
+ pipe.model = minference_patch(pipe.model)

  pipe(prompt, max_length=10)

For vLLM:

  from vllm import LLM, SamplingParams
+ from minference import MInference

  llm = LLM(model_name, max_num_seqs=1, enforce_eager=True, max_model_len=128000)

  # Patch MInference Module
+ minference_patch = MInference("vllm", model_name)
+ llm = minference_patch(llm)

  outputs = llm.generate(prompts, sampling_params)

Using only the kernel:

from minference import vertical_slash_sparse_attention, block_sparse_attention, streaming_forward

attn_output = vertical_slash_sparse_attention(q, k, v, vertical_topk, slash)
attn_output = block_sparse_attention(q, k, v, topk)
attn_output = streaming_forward(q, k, v, init_num, local_window_num)

For more details, please refer to our Examples and Experiments.

FAQ
For more insights and answers, visit our FAQ section.

Q1: How to effectively evaluate the impact of dynamic sparse attention on the capabilities of long-context LLMs?
To evaluate long-context LLM capabilities using models like LLaMA-3-8B-Instruct-1M and GLM-4-9B-1M, we tested: 1) context window with RULER, 2) general tasks with InfiniteBench, 3) retrieval tasks with Needle in a Haystack, and 4) language model prediction with PG-19. We found traditional methods perform poorly in retrieval tasks, with difficulty levels as follows: KV retrieval > Needle in a Haystack > Retrieval.Number > Retrieval PassKey. The main challenge is the semantic difference between needles and the haystack. Traditional methods excel when this difference is larger, as in passkey tasks.
KV retrieval requires higher retrieval capabilities since any key can be a target, and multi-needle tasks are even more complex.We will continue to update our results with more models and datasets in future versions.Q2: Does this dynamic sparse attention pattern only exist in long-context LLMs that are not fully trained?Firstly, attention is dynamically sparse, a characteristic inherent to the mechanism. We selected state-of-the-art long-context LLMs, GLM-4-9B-1M and LLaMA-3-8B-Instruct-1M, with effective context windows of 64K and 16K. With MInference, these can be extended to 64K and 32K, respectively. We will continue to adapt our method to other advanced long-context LLMs and update our results, as well as explore the theoretical basis for this dynamic sparse attention pattern.Q3: Does this dynamic sparse attention pattern only exist in Auto-regressive LMs or RoPE based LLMs?Similar vertical and slash line sparse patterns have been discovered in BERT[1] and multi-modal LLMs[2]. Our analysis of T5's attention patterns, shown in the figure, reveals these patterns persist across different heads, even in bidirectional attention.[1] SparseBERT: Rethinking the Importance Analysis in Self-Attention, ICML 2021.[2] LOOK-M: Look-Once Optimization in KV Cache for Efficient Multimodal Long-Context Inference, 2024.Figure 1. The sparse pattern in T5 Encoder.Q4: What is the relationship between MInference, SSM, Linear Attention, and Sparse Attention?All four approaches (MInference, SSM, Linear Attention, and Sparse Attention) efficiently optimize attention complexity in Transformers, each introducing inductive bias differently. The latter three require training from scratch. Recent works like Mamba-2 and Unified Implicit Attention Representation unify SSM and Linear Attention as static sparse attention, with Mamba-2 itself being a block-wise sparse method. While these approaches show potential due to sparse redundancy in attention, static sparse attention may struggle with dynamic semantic associations in complex tasks. In contrast, dynamic sparse attention is better suited for managing these relationships.CitationIf you find MInference useful or relevant to your project and research, please kindly cite our paper:@article{jiang2024minference,title={MInference 1.0: Accelerating Pre-filling for Long-Context LLMs via Dynamic Sparse Attention},author={Jiang, Huiqiang and Li, Yucheng and Zhang, Chengruidong and Wu, Qianhui and Luo, Xufang and Ahn, Surin and Han, Zhenhua and Abdi, Amir H and Li, Dongsheng and Lin, Chin-Yew and Yang, Yuqing and Qiu, Lili},journal={arXiv},year={2024}}ContributingThis project welcomes contributions and suggestions. Most contributions require you to agree to aContributor License Agreement (CLA) declaring that you have the right to, and actually do, grant usthe rights to use your contribution. For details, visit https://cla.opensource.microsoft.com.When you submit a pull request, a CLA bot will automatically determine whether you need to providea CLA and decorate the PR appropriately (e.g., status check, comment). Simply follow the instructionsprovided by the bot. You will only need to do this once across all repos using our CLA.This project has adopted the Microsoft Open Source Code of Conduct.For more information see the Code of Conduct FAQ orcontact [email protected] with any additional questions or comments.TrademarksThis project may contain trademarks or logos for projects, products, or services. 
Authorized use of Microsofttrademarks or logos is subject to and must followMicrosoft's Trademark & Brand Guidelines.Use of Microsoft trademarks or logos in modified versions of this project must not cause confusion or imply Microsoft sponsorship.Any use of third-party trademarks or logos are subject to those third-party's policies.
Unknown
Unknown
null
null
null
null
null
null
news
Kautilya Utkarsh
Fine-Tuning in Machine Learning
Fine-tuning is one of the most useful techniques in machine learning because it reuses a pre-trained model and adapts it to a new task with good accuracy. It also reduces the work of training a new model from scratch.
https://www.c-sharpcorner.com/article/fine-tuning-in-machine-learning/
https://www.c-sharpcorne…sharp-corner.png
2024-07-31T00:00:00Z
Introduction
Machine learning is one of the trending concepts in the modern world. We are training and developing new models day by day, so ensuring and maintaining the accuracy of model responses is the responsibility of developers.

Understanding Fine-tuning
Fine-tuning is a form of transfer learning in which a pre-trained model is reused as the starting point for a model on a new task. In fine-tuning, we make minor adjustments to the parameters of the pre-trained model to tailor it to a specific task. Fine-tuning builds on the knowledge the pre-trained model has already acquired, which makes it more efficient than training from scratch.

Working of Fine-tuning
1. Selection of a Pre-Trained Model: This is the first step of the fine-tuning process, where we select the pre-trained model most relevant to our desired task. People generally prefer GPT and T5 for NLP, and ResNet for computer vision.
2. Replacing the Final Layer: The final layer of the pre-trained model, which was generally trained for a different task, is replaced by a new layer suitable for our new task. If the final layer happens to be relevant to the new task, this step can be skipped.
3. Adjusting the Model: In this step, we continue training the pre-trained model on the new dataset until it becomes fine-tuned.
4. Freezing Early Layers: This step involves freezing the earlier layers to prevent their weights from being updated during training.
5. Training Later Layers: In this step, we train the later layers (i.e., those that capture more task-specific features) on the new dataset.
6. Fine-tuning the Entire Model: In some cases, we fine-tune the entire model for better adaptation to the new task.

Implementation: Fine-Tuning BERT for Text Classification
In this example, we'll fine-tune the BERT model from Hugging Face's Transformers library for a text classification task.

1. Install Required Libraries

pip install transformers torch datasets

2. Import Libraries

import torch
from transformers import BertTokenizer, BertForSequenceClassification, Trainer, TrainingArguments
from datasets import load_dataset

3. Load a Pre-trained Model and Tokenizer

# Load pre-trained BERT model and tokenizer
model_name = 'bert-base-uncased'
tokenizer = BertTokenizer.from_pretrained(model_name)
model = BertForSequenceClassification.from_pretrained(model_name, num_labels=2)

4. Prepare the Dataset
Let's use the datasets library to load a sample dataset for binary classification.

# Load dataset
dataset = load_dataset('imdb')

# Tokenize the dataset
def tokenize_function(examples):
    return tokenizer(examples['text'], padding='max_length', truncation=True)

tokenized_datasets = dataset.map(tokenize_function, batched=True)

# Format the dataset for PyTorch
tokenized_datasets.set_format('torch', columns=['input_ids', 'attention_mask', 'label'])

5. Define Training Arguments

training_args = TrainingArguments(
    output_dir='./results',
    evaluation_strategy='epoch',
    learning_rate=2e-5,
    per_device_train_batch_size=8,
    per_device_eval_batch_size=8,
    num_train_epochs=3,
    weight_decay=0.01,
)

6. Initialize the Trainer

trainer = Trainer(
    model=model,
    args=training_args,
    train_dataset=tokenized_datasets['train'],
    eval_dataset=tokenized_datasets['test'],
)

7. Train the Model

trainer.train()

8. Evaluate the Model

results = trainer.evaluate()
print(results)

Explanation
Loading the Model and Tokenizer: We use BertTokenizer to tokenize text and BertForSequenceClassification for the classification task. The model is initialized with pre-trained weights and modified for binary classification.
Preparing the Dataset: We use the datasets library to load and tokenize the dataset. Tokenization converts text into input IDs and attention masks that the model can process.
Training Arguments: We specify parameters like learning rate, batch size, and number of epochs. These control the training process.
Training and Evaluation: The Trainer class simplifies the training and evaluation process. It handles the training loop and evaluation based on the specified arguments.

Conclusion
Fine-tuning is one of the most useful techniques in machine learning because it reuses a pre-trained model and adapts it to a new task with good accuracy. It also reduces the work of training a new model from scratch.
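As a short follow-up to the tutorial above (not part of the original article), here is how a single prediction could be run with the fine-tuned model; the label mapping 0 = negative, 1 = positive is assumed for the IMDB dataset.

import torch

text = "A surprisingly heartfelt film with great performances."
inputs = tokenizer(text, padding='max_length', truncation=True, return_tensors='pt')
# Move inputs to the same device as the model (the Trainer may have moved it to GPU)
inputs = {k: v.to(model.device) for k, v in inputs.items()}

model.eval()
with torch.no_grad():
    logits = model(**inputs).logits

predicted_class = torch.argmax(logits, dim=-1).item()
print("positive" if predicted_class == 1 else "negative")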
Unknown
Unknown
null
null
null
null
null
null
news
quickrag added to PyPI
A Quick Retrieval-Augmented Generation (RAG) system using transformers.
https://pypi.org/project/quickrag/
https://pypi.org/static/…er.abaf4b19.webp
2024-07-31T05:03:45Z
QuickRAG is a Python library that implements a Retrieval-Augmented Generation (RAG) pipeline for question answering on PDF documents. It combines document processing, embedding generation, and language model inference to provide context-aware answers to user queries.

Features
- PDF processing and text extraction
- Text chunking and embedding generation
- Efficient similarity search for relevant context retrieval
- Integration with Hugging Face Transformers for language model inference
- Support for quantization to optimize memory usage and inference speed

Installation

pip install quickrag

Usage
Here's a basic example of how to use QuickRAG:

from quickrag import QuickRAG

# Initialize QuickRAG
rag = QuickRAG("path/to/your/document.pdf", huggingface_token="YOUR_HUGGINGFACE_TOKEN")

# Process the PDF and create embeddings
rag.process_pdf()
rag.create_embeddings()

# Load the language model
rag.load_llm()

# Ask a question
query = "What are the macronutrients, and what roles do they play in the human body?"
answer = rag.ask(query)
print(f"Query: {query}")
print(f"Answer: {answer}")

Configuration
QuickRAG can be customized with the following parameters:
- pdf_path: Path to the PDF document
- embedding_model_name: Name of the sentence transformer model for embeddings (default: "all-mpnet-base-v2")
- llm_model_name: Name of the language model for answer generation (default: "google/gemma-2b-it")
- use_quantization: Whether to use quantization for the language model (default: True)
- huggingface_token: Your Hugging Face API token

Requirements
Python 3.7+, PyTorch, Transformers, Sentence-Transformers, PyMuPDF, spaCy, NumPy, Pandas

License
This project is licensed under the MIT License - see the LICENSE file for details.

Contributing
Contributions are welcome! Please feel free to submit a Pull Request.

Acknowledgements
- Hugging Face for their Transformers library
- Sentence-Transformers for the embedding models
- PyMuPDF for PDF processing
Information Retrieval Or Search/Content Synthesis
Life, Physical, and Social Science/Education, Training, and Library/Computer and Mathematical
null
null
null
null
null
null
news
disguisedata added to PyPI
A tiny tool for generating synthetic data from the original one
https://pypi.org/project/disguisedata/
https://pypi.org/static/…er.abaf4b19.webp
2024-07-15T16:17:16Z
[!TIP]This project complements the Differential Privacy with AI & ML project available in the repository. To fully grasp the concept, make sure to read both documentations.About the PackageAuthor's WordsWelcome to the first edition of the Disguise Data Tool official documentation. I am Deniz Dahman, Ph.D., the creator of the BireyselValue algorithm and the author of this package. In the following section, you will find a brief introduction to the principal idea of the disguisedata tool, along with a reference to the academic publication on the method and its mathematical foundations. Before proceeding, I would like to inform you that I have conducted this work as an independent scientist without any funding or similar support. I am dedicated to continuing and seeking further improvements on the proposed method at all costs. If you wish to contribute in any way to this work, please find further details in the contributing section.ContributingIf you wish to support the creator of this project, you might want to explore possible ways on:Thank you for your willingness to contribute in any way possible. You can check links below for more information on how to get involved. :view options to subscribe on Dahman's Phi Services Websitesubscribe to this channel Dahman's Phi Servicesyou can support on patreonIf you prefer any other way of contribution, please feel free to contact me directly on contact.Thank youIntroductionHistory and Purpose of Synthetic DataThe concept of synthetic data has roots in scientific modeling and simulations, dating back to the early 20th century. For instance, audio and voice synthesis research began in the 1930s. The development of software synthesizers in the 1970s marked a significant advancement in creating synthetic data. In 1993, the idea of fully synthetic data for privacy-preserving statistical analysis was introduced to the research community. Today, synthetic data is extensively used in various fields, including healthcare, finance, and defense, to train AI models and conduct simulations. More importantly, synthetic data continues to evolve, offering innovative solutions to data scarcity, privacy, and bias challenges in the AI and machine learning landscape.Synthetic data serves multiple purposes. It enhances AI models, safeguards sensitive information, reduces bias, and offers an alternative when real-world data is scarce or unavailable:disguisedata __version__1.0There are numerous tools available to generate synthetic data using various techniques. This is where I introduce the disguisedata tool. This tool helps to disguise data based on a mathematical foundational concept. In particular, it relies on two important indicators in the original dataset:The norm: Initially, it captures the general norm of the dataset, involving every entry in the set. This norm is then used to scale the dataset to a range of values. It is considered the secret key used later to convert the synthetic data into the same scale as the original.The Stat: The second important indicator for the disguisedata method is the statistical distribution of the original dataset, particularly the correlation and independence across all features. These values help to adjust the tweak and level of disguise of the data. Two crucial values are the mean and variance, which indicate the level of disguise.[!IMPORTANT]This tool demonstrates the proposed method solely for educational purposes. 
The author provides no warranty of any kind, including the warranty of design, merchantability, and fitness for a particular purpose.Installation[!TIP]The simulation using disguisedata was conducted on three datasets, which are referenced in the section below.Data AvailabilityBreast Cancer Wisconsin (Diagnostic) Dataset available in [UCI Machine Learning Repository] at https://archive.ics.uci.edu/dataset/17/breast+cancer+wisconsin+diagnostic , reference (Street 1993)The Dry Bean Dataset available in [UCI Machine Learning Repository] at https://doi.org/10.24432/C50S4B , reference (Koklu 2020)TeleCommunications Dataset available in [UCI Machine Learning Repository] at https://www.kaggle.com/datasets/navins7/telecommunicationsInstall disguisedatato install the package all what you have to do:pip install disguisedataYou should then be able to use the package. It might be a good idea to confirm the installation to ensure everything is set up correctly.pip show disguisedataThe result then shall be as:Name: disguisedataVersion: 1.0.0Summary: A tiny tool for generating synthetic data from the original oneHome-page: https://github.com/dahmansphi/disguisedataAuthor: Deniz Dahman's, Ph.D.Author-email: [email protected][!IMPORTANT]Sometimes, the seaborn library isn't installed during setup. If that's the case, you'll need to install it manually.Employ the disguisedata -Conditions[!IMPORTANT]It's mandatory to provide the instance of the class with the NumPy array of the original dataset, which does not include the y feature or the target variable.Detour in the disguisedata package- Build-inOnce your installation is complete and all conditions are met, you may want to explore the built-in functions of the disguisedata package to understand each one. Essentially, you can create an instance from the disguisedata class as follows:from disguisedata import Disguisedatainst = Disguisedata()Now this instance offers you access to the built-in functions that you need. Here is a screenshot:Once you have the disguisedata instance, follow these steps to obtain the new disguised data:Control and Get the data format:The first function you want to employ is feedDs using data = inst.feedDs(ds=ds). This function takes one argument, which is the NumPy dataset, and it controls the conditions and returns a formatted, scaled dataset that is ready for the action of disguise.Explore different types of Disguise:The function explor_effect allows you to explore how the disguised data differs from the original data. It is called using inst.explor_effect(data=data, mu=0.5, div=0.9). This function takes three arguments: the first is the formatted dataset returned from the previous function, the second is the value representing the difference from the original mean, and the third is the amount of deviation. These parameters determine the result of the newly generated disguised data. For a detailed explanation of each parameter's effect and purpose, refer to the academic publication on the method. Here are some outputs from the function::It's important to observe how the screenshot shows the location of the disguised data from the original dataset. The report then illustrates how the values are altered according to the parameter adjustments. 
Additionally, it presents the differences in the mean and standard deviation between the original and disguised data. Once the parameter tuning and the resulting disguised data are acceptable, it is time to execute the disguise_data function.

Generating the Disguised Data
The disguise_data function generates the disguised data. It is implemented as Xnp = inst.disguise_data(data=data, mu=0.5, div=0.9) and takes three arguments: the first is the formatted dataset returned by feedDs, the second is the value representing the difference from the original mean, and the third is the amount of deviation.

Conclusion on Installation and Employment of the Method
It is possible to test the results returned by the proposed method. I used two predictive methods on the original and the disguised dataset to observe the effect on accuracy. The conclusion is that the predictions are almost identical, which implies that the proposed method is effective in generating realistic disguised data that maintains privacy.

Reference
Please follow up on the publication on the website to find the published academic paper.
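To make the scale-perturb-rescale idea described above easier to picture, here is a tiny, hypothetical NumPy sketch. It is illustrative only and is not the package's actual algorithm; the function name, the Gaussian noise model, and the way mu and div are applied are all invented for this example.

import numpy as np

def toy_disguise(X, mu=0.5, div=0.9, seed=None):
    # Illustrative only: scale the data by its overall norm, perturb each
    # feature with noise whose mean offset (mu) and spread (div) control how
    # far the disguised values drift from the originals, then undo the scaling.
    rng = np.random.default_rng(seed)
    norm = np.linalg.norm(X)              # plays the role of the "secret key"
    scaled = X / norm
    noise = rng.normal(loc=mu * scaled.mean(axis=0),
                       scale=div * scaled.std(axis=0),
                       size=scaled.shape)
    return (scaled + noise) * norm

X = np.random.default_rng(0).normal(size=(100, 4))
X_disguised = toy_disguise(X, mu=0.5, div=0.9, seed=42)
print(np.abs(X.mean(axis=0) - X_disguised.mean(axis=0)))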
Content Creation/Process Automation
Life, Physical, and Social Science/Healthcare Practitioners and Support/Business and Financial Operations
null
null
null
null
null
null
news
null
ults added to PyPI
Uncertainty-Guided Likelihood-Tree Search
https://pypi.org/project/ults/
https://pypi.org/static/…er.abaf4b19.webp
2024-07-04T15:45:38Z
Accompanying implementation of the following paper [ArXiv]:@article{grosse2024ults,title={Uncertainty-Guided Optimization On Large Language Model Search Trees},author={Grosse, Julia and Wu, Ruotian and Rashid, Ahmad and Hennig, Philipp and Poupart, Pascal and Kristiadi, Agustinus},journal={arXiv preprint arXiv:TODO!},year={2024}}SetupRequires Python >= 3.9.Install PyTorch with CUDA, version >= 2.0Install this package: pip install git+https://github.com/JuliaGrosse/ults.git@mainUsageSee full example here: examples/generate.py.Quickstart with the Dirichlet prior[!IMPORTANT]ULTS will first check prior_dir directory (default ./ults_priors) for a precomputed prior with your choices ofwidth (vocab size), depth (max tokens to generate), and $\alpha$ (concentration strength).If not exists, then it will compute and cache the prior --- this might take a while!However, this only needs to be done once for each each of the choices above.In the subsequent generation call, the decoding will be very quick.tokenizer = AutoTokenizer.from_pretrained("meta-llama/Llama-2-7b-hf")model = AutoModelForCausalLM.from_pretrained( "meta-llama/Llama-2-7b-hf", torch_dtype=torch.bfloat16)model.eval()text = "Moose is a"model_inputs = tokenizer(text, return_tensors="pt")-output = model.generate(- **model_inputs,- num_beams=5,- max_new_tokens=40,-)-generated_sequence = output.sequences+import ults+output = ults.generate(+ model=model,+ model_inputs=model_inputs,+ max_tokens=40,+)+generated_sequence = output.sequencegenerated_text = tokenizer.decode(generated_sequence[0])Using the Empirical PriorOn top of the default Dirichlet priors (agnostic to the LLM), ULTS can also leverageempirical priors, specific to the LLM at hand.Example precomputed empirical priors, compatible withLlama-2-7b,Mistral-7B-v0.1,and Gemma-7b, are available inexamples/ults_priors.First, gather samples of the LLM's softmax outputs from different time steps.Here we will use the greedy decoding. See examples/sample_llm_outputs.py fora complete exampleRESULT_DIR=f"./ults_priors/llm_output_samples/{DATASET_NAME}_{LLM_NAME}"# Samples of contexts from your datasetcontexts:List[str]foridx,contextinenumerate(contexts):input_ids=tokenizer(sentence[sent_key],return_tensors="pt")["input_ids"]# `n_tokens` is the max. depth of the tree that you want to optimize on# i.e., the max number of tokens you want to generate with ULTSfordinrange(n_tokens):withtorch.no_grad():outputs=torch.softmax(model(input_ids).logits,dim=-1)# Save the last softmax output (this is our empirical sample for depth `d`)outputs=outputs[0,-1,:]torch.save(outputs,f"{RESULT_DIR}/sample_index{idx}_depth{d}.pt")# Continue greedy generationindex=torch.argmax(qualities)model_input=torch.cat([model_input,index.expand(1,1)],dim=1)# Stack them together into a (n_samples*n_tokens, vocab_size) tensorimportglob,randomsample_files=glob.glob(f"{RESULT_DIR}/sample_*.pt")samples=[torch.load(sample)forsampleinsample_files]torch.save(torch.vstack(samples),f'{RESULT_DIR}/all_samples.pt')Then, when specify the prior when calling ULTS. Everything else stays the same as inexamples/generate.py.output = ults.generate( ...+ prior_kind="empirical",+ prior_empirical_llm_samples=torch.load(f'{RESULT_DIR}/all_samples.pt') ...)CaveatsCurrently doesn't support batch generation.Huggingface optimizes the average log-likelihood. It is effectively penalizes shorter sequences. Meanwhile, ULTS optimizes the total log-likelihood, so the behavior differs from Huggingface's. 
There is a plan to support this in ULTS, see #36.

Development
This repo uses pdm as the dependency manager and the build system.
1. Install pdm, see: https://pdm-project.org/en/latest/
2. Run pdm install
All dependencies will then be installed by pdm. Moreover, the current repo will be installed in editable mode.

[!IMPORTANT]
Before pushing your code, ensure that all tests pass and all linting and formatting issues are resolved.
- Run pytest and make sure all tests pass.
- Run make ruff and ensure that all code is formatted correctly and there are no linting issues.
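To make the caveat about objectives in the section above concrete, here is a small, self-contained illustration (not part of the package) of how total log-likelihood (what ULTS optimizes) and length-normalized average log-likelihood (what Hugging Face beam search effectively uses) can rank the same candidates differently. The per-token probabilities are made up.

import math

# Two hypothetical candidate continuations with per-token probabilities.
short_seq = [0.6, 0.5]            # 2 tokens
long_seq = [0.6, 0.5, 0.9, 0.9]   # 4 tokens

def total_logprob(probs):
    return sum(math.log(p) for p in probs)

def average_logprob(probs):
    return total_logprob(probs) / len(probs)

for name, seq in [("short", short_seq), ("long", long_seq)]:
    print(name, round(total_logprob(seq), 3), round(average_logprob(seq), 3))

The short candidate wins under total log-likelihood (about -1.20 vs. -1.42), while the long candidate wins under the length-normalized average (about -0.35 vs. -0.60), which is why the two objectives can produce different outputs.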
Content Synthesis/Decision Making
Unknown
null
null
null
null
null
null
news
llm-engines added to PyPI
A unified inference engine for large language models (LLMs) including open-source models (VLLM, SGLang, Together) and commercial models (OpenAI, Mistral, Claude).
https://pypi.org/project/llm-engines/
https://pypi.org/static/…er.abaf4b19.webp
2024-07-21T19:11:58Z
A unified inference engine for large language models (LLMs) including open-source models (VLLM, SGLang, Together) and commercial models (OpenAI, Mistral, Claude).

The correctness of the inference has been verified by comparing the outputs of the models with different engines when temperature=0.0 and max_tokens=None. For example, the outputs of a single model using 3 engines (VLLM, SGLang, Together) will be the same when temperature=0.0 and max_tokens=None. Try the examples below to see the outputs of different engines.

Installation

pip install git+https://github.com/jdf-prog/LLM-Engines.git
# pip install -e . # for development

Usage

Engines

from llm_engines import get_call_worker_func
call_worker_func = get_call_worker_func(
    model_name="meta-llama/Meta-Llama-3-8B-Instruct",
    num_workers=1,  # number of workers
    num_gpu_per_worker=1,  # tensor parallelism size for each worker
    engine="sglang",  # or "vllm"
    use_cache=False
)
response = call_worker_func(["What is the capital of France?"], temperature=0.0, max_tokens=None)
print(response)

call_worker_func = get_call_worker_func(
    model_name="meta-llama/Llama-3-8b-chat-hf",
    engine="together",
    use_cache=False
)
response = call_worker_func(["What is the capital of France?"], temperature=0.0, max_tokens=None)
print(response)

from llm_engines import get_call_worker_func
call_worker_func = get_call_worker_func(
    model_name="gpt-3.5-turbo",
    num_workers=1,  # number of workers
    num_gpu_per_worker=1,  # tensor parallelism size for each worker
    engine="openai",  # or one of "vllm", "together", "openai", "mistral", "claude"
    use_cache=False
)
response = call_worker_func(["What is the capital of France?"], temperature=0.0, max_tokens=None)
print(response)

from llm_engines import get_call_worker_func
call_worker_func = get_call_worker_func(
    model_name="mistral-large-latest",
    engine="mistral",  # or one of "vllm", "together", "openai", "mistral", "claude"
    use_cache=False
)
response = call_worker_func(["What is the capital of France?"], temperature=0.0, max_tokens=None)
print(response)

from llm_engines import get_call_worker_func
call_worker_func = get_call_worker_func(
    model_name="claude-3-opus-20240229",
    engine="claude",  # or one of "vllm", "together", "openai", "mistral", "claude"
    use_cache=False
)
response = call_worker_func(["What is the capital of France?"], temperature=0.0, max_tokens=None)
print(response)

Multi-turn conversation

from llm_engines import get_call_worker_func
call_worker_func = get_call_worker_func(
    model_name="meta-llama/Meta-Llama-3-8B-Instruct",
    num_workers=1,  # number of workers
    num_gpu_per_worker=1,  # tensor parallelism size for each worker
    engine="sglang",  # or "vllm"
    use_cache=False
)
messages = [
    "Hello",  # user message
    "Hello! It's nice to meet you. Is there something I can help you with, or would you like to chat?",  # previous model response
    "What is the capital of France?"  # user message
]
call_worker_func(messages, temperature=0.0, max_tokens=None)

The messages should be in the format of [user_message, model_response, user_message, model_response, ...].

Cache
All queries and responses are cached in the generation_cache folder; no duplicate queries will be sent to the model. The cache of each model is saved to generation_cache/{model_name}.jsonl.

Example item in the cache:

{"cb0b4aaf80c43c9973aefeda1bd72890": {"input": ["What is the capital of France?"], "output": "The capital of France is Paris."}}

The hash key here is the hash of the concatenated inputs.

Chat template
For each open-source model, we use the default chat template as follows:

prompt = self.tokenizer.apply_chat_template(
    messages,
    add_generation_prompt=add_generation_prompt,
    tokenize=False,
    chat_template=chat_template,
)

There will be errors if the model does not support the chat template.

Launch a separate vllm/sglang model worker

Launch a separate vllm worker:

CUDA_VISIBLE_DEVICES=0 python -m vllm.entrypoints.openai.api_server --model meta-llama/Meta-Llama-3-8B-Instruct --dtype auto --host "127.0.0.1" --port 34200 --tensor-parallel-size 1 --disable-log-requests &
# address: http://127.0.0.1:34200

Launch a separate sglang worker:

CUDA_VISIBLE_DEVICES=1 python -m sglang.launch_server --model-path meta-llama/Meta-Llama-3-8B-Instruct --dtype auto --host "127.0.0.1" --port 34201 --tp-size 1 &
CUDA_VISIBLE_DEVICES=1 python -m sglang.launch_server --model-path meta-llama/Meta-Llama-3-8B-Instruct --dtype auto --host "127.0.0.1" --port 34201 --tp-size 1 --disable-flashinfer &  # disable flashinfer if it's not installed
# address: http://127.0.0.1:34201

from llm_engines import get_call_worker_func
call_worker_func = get_call_worker_func(
    model_name="meta-llama/Meta-Llama-3-8B-Instruct",
    worker_addrs=["http://127.0.0.1:34200", "http://127.0.0.1:34201"],  # many workers can be used, will be load balanced
    engine="sglang",
    use_cache=False
)
response = call_worker_func(["What is the capital of France?"], temperature=0.0, max_tokens=None)
print(response)  # The capital of France is Paris.

Test notes
When setting temperature=0.0 and max_tokens=None and testing long generations:
- VLLM (fp16) can generate the same outputs as Hugging Face Transformers (fp16) generations, but not for bf16.
- Together AI can generate almost the same outputs as vllm (fp16, bf16) generations.
- SGLang's outputs are not consistent with the others.
- VLLM does not support Deepseek-V2 models well for now; refer to the triton issue.
- Note that some weird inputs will cause the models to run inference forever; it's better to set timeout=30 to drop the request after a certain number of seconds.
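Returning to the Cache section above: the README states that the cache key is a hash of the concatenated inputs. The following is a hypothetical sketch of that general idea; the exact serialization and hash function used by llm-engines may differ, so treat the key it produces as illustrative rather than compatible with the library's cache files.

import hashlib
import json

def cache_key(messages):
    # Illustrative cache-key derivation: hash the concatenated inputs.
    concatenated = "".join(messages)
    return hashlib.md5(concatenated.encode("utf-8")).hexdigest()

key = cache_key(["What is the capital of France?"])
entry = {key: {"input": ["What is the capital of France?"],
               "output": "The capital of France is Paris."}}
print(json.dumps(entry, indent=2))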
Unknown
Unknown
null
null
null
null
null
null
news
dialektik added to PyPI
Merge. Synthesize. Create. Dialektik generates new content by fusing ideas from diverse sources, revealing unexpected insights and perspectives.
https://pypi.org/project/dialektik/
https://pypi.org/static/…er.abaf4b19.webp
2024-07-27T06:59:53Z
Merge. Synthesize. Create. Dialektik generates new content by fusing ideas from diverse sources, revealing unexpected insights and perspectives.FeaturesLoads and processes datasets from multiple sourcesSummarizes text into concise bullet pointsSynthesizes bullet points into detailed articlesSupports various AI models for text generationModel-agnostic design allows easy swapping of different LLMsRequirementsRequired: datasets, huggingface_hubOptional: phi-3-vision-mlx(required only if you need to create a new dataset with the provided setup() function for custom dataset processing)InstallationTo install Dialektik with core dependencies only:pip install dialektikTo install Dialektik with all dependencies, including those required for the setup() function:pip install dialektik[setup]Note: Install the full version if you plan to process custom datasets using the setup() function.SetupClone the repository:git clone https://github.com/JosefAlbersç/Dialektik.gitcd DialektikInstall the required dependencies:pip install -r requirements.txtUsageCommand Line InterfaceDialektik can be used from the command line after installation. Here are some example usages:Generate a synthesis with default settings:dialektikSpecify sources:dialektik --source arxivSet the number of bullet points per book and choose a different model:dialektik --per-book 5 --model "your-preferred-model"Run the setup function:dialektik --setupFor a full list of options, use:dialektik --helpAccessing the DatasetImportant Note: The default dataset at 'JosefAlbers/StampyAI-alignment-research-dataset' is currently being prepared (ETA: 18 hours). Please check back later if unavailable.The default dataset is to be publicly available. You do not need to set up any environment variables or run the setup() function to use dialektik with this dataset.Synthesizing contentTo generate a synthesis, simply run:fromdialektikimportsynthesizeoutput=synthesize()You can customize the synthesis process by passing optional parameters:output=synthesize(list_source=['your_source'],per_book=3,api_model="mistralai/Mistral-Nemo-Instruct-2407")(Optional) Using Custom DatasetsIf you want to use your own dataset:Prepare your dataset according to the required format.Modify the PATH_DS variable in the code to point to your dataset.If your dataset is private or requires authentication, set up the following environment variables:HF_WRITE_TOKEN: Hugging Face write token (for pushing datasets)HF_READ_TOKEN: Hugging Face read token (for accessing private datasets)Note: The setup() function provided in the code is a demonstration of how you might process a custom dataset. Different datasets may require different processing steps, so you'll need to adapt this function to your specific needs.Customizing the LLMDialektik is designed to be model-agnostic. To use a different language model:Simply pass the name of your chosen model to the synthesize() function using the api_model parameter.Modify the mistral_api() function or create a new function that interfaces with your chosen LLM.Update the synthesize() function to use your new LLM interface.The default model is "mistralai/Mistral-Nemo-Instruct-2407", but you can easily change this by passing a different api_model parameter to the synthesize() function.LicenseThis project is licensed under the MIT License.Citation
Content Creation/Content Synthesis
Arts, Design, Entertainment, Sports, and Media
null
null
null
null
null
null
news
Erik Pounds
AI, Go Fetch! New NVIDIA NeMo Retriever Microservices Boost LLM Accuracy and Throughput
Generative AI applications have little, or sometimes negative, value without accuracy — and accuracy is rooted in data. To help developers efficiently fetch the best proprietary data to generate knowledgeable responses for their AI applications, NVIDIA today announced four new NVIDIA NeMo Retriever NIM inference microservices. Combined with NVIDIA NIM inference microservices for the LlamaRead Article
https://blogs.nvidia.com/blog/nemo-retriever-microservices/
https://blogs.nvidia.com…nim-featured.jpg
2024-07-23T15:15:16Z
Generative AI applications have little, or sometimes negative, value without accuracy and accuracy is rooted in data.To help developers efficiently fetch the best proprietary data to generate knowledgeable responses for their AI applications, NVIDIA today announced four new NVIDIA NeMo Retriever NIM inference microservices.Combined with NVIDIA NIM inference microservices for the Llama 3.1 model collection, also announced today, NeMo Retriever NIM microservices enable enterprises to scale to agentic AI workflows where AI applications operate accurately with minimal intervention or supervision while delivering the highest accuracy retrieval-augmented generation, or RAG.NeMo Retriever allows organizations to seamlessly connect custom models to diverse business data and deliver highly accurate responses for AI applications using RAG. In essence, the production-ready microservices enable highly accurate information retrieval for building highly accurate AI applications.For example, NeMo Retriever can boost model accuracy and throughput for developers creating AI agents and customer service chatbots, analyzing security vulnerabilities or extracting insights from complex supply chain information.NIM inference microservices enable high-performance, easy-to-use, enterprise-grade inferencing. And with NeMo Retriever NIM microservices, developers can benefit from all of this superpowered by their data.These new NeMo Retriever embedding and reranking NIM microservices are now generally available:NV-EmbedQA-E5-v5, a popular community base embedding model optimized for text question-answering retrievalNV-EmbedQA-Mistral7B-v2, a popular multilingual community base model fine-tuned for text embedding for high-accuracy question answeringSnowflake-Arctic-Embed-L, an optimized community model, andNV-RerankQA-Mistral4B-v3, a popular community base model fine-tuned for text reranking for high-accuracy question answering.They join the collection of NIM microservices easily accessible through the NVIDIA API catalog.Embedding and Reranking ModelsNeMo Retriever NIM microservices comprise two model types embedding and reranking with open and commercial offerings that ensure transparency and reliability.Example RAG pipeline using NVIDIA NIM microservices for Llama 3.1 and NeMo Retriever embedding and reranking NIM microservices for a customer service AI chatbot application.An embedding model transforms diverse data such as text, images, charts and video into numerical vectors, stored in a vector database, while capturing their meaning and nuance. Embedding models are fast and computationally less expensive than traditional large language models, or LLMs.A reranking model ingests data and a query, then scores the data according to its relevance to the query. Such models offer significant accuracy improvements while being computationally complex and slower than embedding models.NeMo Retriever provides the best of both worlds. By casting a wide net of data to be retrieved with an embedding NIM, then using a reranking NIM to trim the results for relevancy, developers tapping NeMo Retriever can build a pipeline that ensures the most helpful, accurate results for their enterprise.With NeMo Retriever, developers get access to state-of-the-art open, commercial models for building text Q&A retrieval pipelines that provide the highest accuracy. 
When compared with alternate models, NeMo Retriever NIM microservices provided 30% fewer inaccurate answers for enterprise question answering.

(Figure: Comparison of NeMo Retriever embedding NIM and embedding plus reranking NIM microservices' performance versus lexical search and an alternative embedder.)

Top Use Cases

From RAG and AI agent solutions to data-driven analytics and more, NeMo Retriever powers a wide range of AI applications. The microservices can be used to build intelligent chatbots that provide accurate, context-aware responses. They can help analyze vast amounts of data to identify security vulnerabilities. They can assist in extracting insights from complex supply chain information. And they can boost AI-enabled retail shopping advisors that offer natural, personalized shopping experiences, among other tasks. NVIDIA AI workflows for these use cases provide an easy, supported starting point for developing generative AI-powered technologies.

Dozens of NVIDIA data platform partners are working with NeMo Retriever NIM microservices to boost their AI models' accuracy and throughput. DataStax has integrated NeMo Retriever embedding NIM microservices in its Astra DB and Hyper-Converged platforms, enabling the company to bring accurate, generative AI-enhanced RAG capabilities to customers with faster time to market. Cohesity will integrate NVIDIA NeMo Retriever microservices with its AI product, Cohesity Gaia, to help customers put their data to work to power insightful, transformative generative AI applications through RAG. Kinetica will use NVIDIA NeMo Retriever to develop LLM agents that can interact with complex networks in natural language to respond more quickly to outages or breaches, turning insights into immediate action. NetApp is collaborating with NVIDIA to connect NeMo Retriever microservices to exabytes of data on its intelligent data infrastructure. Every NetApp ONTAP customer will be able to seamlessly talk to their data to access proprietary business insights without having to compromise the security or privacy of their data. NVIDIA global system integrator partners, including Accenture, Deloitte, Infosys, LTTS, Tata Consultancy Services, Tech Mahindra and Wipro, as well as service delivery partners Data Monsters, EXLService (Ireland) Limited, Latentview, Quantiphi, Slalom, SoftServe and Tredence, are developing services to help enterprises add NeMo Retriever NIM microservices into their AI pipelines.

Use With Other NIM Microservices

NeMo Retriever NIM microservices can be used with NVIDIA Riva NIM microservices, which supercharge speech AI applications across industries, enhancing customer service and enlivening digital humans. New models that will soon be available as Riva NIM microservices include: FastPitch and HiFi-GAN for text-to-speech applications; Megatron for multilingual neural machine translation; and the record-breaking NVIDIA Parakeet family of models for automatic speech recognition.

NVIDIA NIM microservices can be used all together or separately, offering developers a modular approach to building AI applications. In addition, the microservices can be integrated with community models, NVIDIA models or users' custom models in the cloud, on premises or in hybrid environments, providing developers with further flexibility. NVIDIA NIM microservices are available at ai.nvidia.com.
Enterprises can deploy AI applications in production with NIM through the NVIDIA AI Enterprise software platform. NIM microservices can run on customers' preferred accelerated infrastructure, including cloud instances from Amazon Web Services, Google Cloud, Microsoft Azure and Oracle Cloud Infrastructure, as well as NVIDIA-Certified Systems from global server manufacturing partners including Cisco, Dell Technologies, Hewlett Packard Enterprise, Lenovo and Supermicro. NVIDIA Developer Program members will soon be able to access NIM for free for research, development and testing on their preferred infrastructure. Learn more about the latest in generative AI and accelerated computing by joining NVIDIA at SIGGRAPH, the premier computer graphics conference, running July 28-Aug. 1 in Denver. See notice regarding software product information.
Information Retrieval Or Search/Content Synthesis/Process Automation
Unknown
null
null
null
null
null
null
news
Silvia Campanioni, César Veiga, José María Prieto-González, José A. González-Nóvoa, Laura Busto, Carlos Martinez, Miguel Alberte-Woodward, Jesús García de Soto, Jessica Pouso-Diz, María de los Ángeles Fernández Ceballos, Roberto Carlos Agis-Balboa
Explainable machine learning on baseline MRI predicts multiple sclerosis trajectory descriptors
Multiple sclerosis (MS) is a multifaceted neurological condition characterized by challenges in timely diagnosis and personalized patient management. The application of Artificial Intelligence (AI) to MS holds promises for early detection, accurate diagnosis, and predictive modeling. The objectives of this study are: 1) to propose new MS trajectory descriptors that could be employed in Machine Learning (ML) regressors and classifiers to predict patient evolution; 2) to explore the contribution of ML models in discerning MS trajectory descriptors using only baseline Magnetic Resonance Imaging (MRI) studies. This study involved 446 MS patients who had a baseline MRI, at least two measurements of Expanded Disability Status Scale (EDSS), and a 1-year follow-up. Patients were divided into two groups: 1) for model development and 2) for evaluation. Three descriptors: β1, β2, and EDSS(t), were related to baseline MRI parameters using regression and classification XGBoost models. Shapley Additive Explanations (SHAP) analysis enhanced model transparency by identifying influential features. The results of this study demonstrate the potential of AI in predicting MS progression using the proposed patient trajectories and baseline MRI scans, outperforming classic Multiple Linear Regression (MLR) methods. In conclusion, MS trajectory descriptors are crucial; incorporating AI analysis into MRI assessments presents promising opportunities to advance predictive capabilities. SHAP analysis enhances model interpretation, revealing feature importance for clinical decisions.
https://journals.plos.org/plosone/article?id=10.1371/journal.pone.0306999
https://journals.plos.org/plosone/article/figure/image?id=10.1371/journal.pone.0306999.g008&size=inline
2024-07-16T14:00:00Z
Abstract

Multiple sclerosis (MS) is a multifaceted neurological condition characterized by challenges in timely diagnosis and personalized patient management. The application of Artificial Intelligence (AI) to MS holds promises for early detection, accurate diagnosis, and predictive modeling. The objectives of this study are: 1) to propose new MS trajectory descriptors that could be employed in Machine Learning (ML) regressors and classifiers to predict patient evolution; 2) to explore the contribution of ML models in discerning MS trajectory descriptors using only baseline Magnetic Resonance Imaging (MRI) studies. This study involved 446 MS patients who had a baseline MRI, at least two measurements of Expanded Disability Status Scale (EDSS), and a 1-year follow-up. Patients were divided into two groups: 1) for model development and 2) for evaluation. Three descriptors: β1, β2, and EDSS(t), were related to baseline MRI parameters using regression and classification XGBoost models. Shapley Additive Explanations (SHAP) analysis enhanced model transparency by identifying influential features. The results of this study demonstrate the potential of AI in predicting MS progression using the proposed patient trajectories and baseline MRI scans, outperforming classic Multiple Linear Regression (MLR) methods. In conclusion, MS trajectory descriptors are crucial; incorporating AI analysis into MRI assessments presents promising opportunities to advance predictive capabilities. SHAP analysis enhances model interpretation, revealing feature importance for clinical decisions.

Citation: Campanioni S, Veiga C, Prieto-González JM, González-Nóvoa JA, Busto L, Martinez C, et al. (2024) Explainable machine learning on baseline MRI predicts multiple sclerosis trajectory descriptors. PLoS ONE 19(7): e0306999. https://doi.org/10.1371/journal.pone.0306999

Editor: Asokan Govindaraj Vaithinathan, University of Bahrain, BAHRAIN

Received: February 8, 2024; Accepted: June 26, 2024; Published: July 16, 2024

Copyright: © 2024 Campanioni et al. This is an open access article distributed under the terms of the Creative Commons Attribution License, which permits unrestricted use, distribution, and reproduction in any medium, provided the original author and source are credited.

Data Availability: All relevant data are within the manuscript and its Supporting Information files.

Funding: This research was funded by the Ministry of Science and Innovation through the projects PID2022-138936OB-C31 and PID2022-138936OB-C32 (co-funded by the European Regional Development Fund (FEDER), A way to make Europe, UE) given to R.C. Agís-Balboa and C. Veiga, respectively. The funders had no role in study design, data collection and analysis, decision to publish, or preparation of the manuscript.

Competing interests: The authors have declared that no competing interests exist.

1. Introduction

Multiple Sclerosis (MS) is a complex, long-lasting condition affecting the brain and spinal cord, leading to symptoms such as vision disturbances, impaired limb movement, cognitive impairment, etc. [1]. The disease typically progresses from a Relapsing-Remitting (RR) phase to a Secondary-Progressive (SP) phase, resulting in worsening health and irreversible disability [2,3]. The introduction of over fifteen approved disease-modifying treatments offers the potential to delay the onset of the SP phase significantly.
However, the benefits must be carefully balanced against substantial risks, especially with the most potent medications [4]. Magnetic Resonance Imaging (MRI) images offer insights into brain and spinal cord lesions associated with MS, facilitating accurate diagnosis and disease tracking [5,6]. Various techniques and scoring systems, such as the Barkhof [7] and Paty [8] scores, quantitatively evaluate lesions and disease burden in the central nervous system, but they are typically applied at a specific point in time and do not consider the patient's complete medical history. To obtain a comprehensive understanding of MS, it is crucial to consider broader factors, including genetics, microbiota, lifestyle, and geographical settings [9].

The absence of established prognostic markers and reliable risk scores currently complicates the accurate prediction of disease trajectories for individual patients, which is particularly challenging given the availability of treatments that can slow disease progression but come with potential adverse effects [10,11]. Early prognosis of disease trajectories could enable personalized treatment strategies, particularly for higher-risk patients, and Artificial Intelligence (AI) algorithms are gaining momentum in addressing this need in neurology [12-17], to heighten diagnostic precision and refine patient care efficacy. Some studies have used Machine Learning (ML) to explore MS, for both diagnosis and prognosis, offering promising insights [14,16]. While many studies focus on a cross-sectional perspective and identifying static patterns in the data, comprehending how MS changes over time requires a longitudinal perspective. This approach helps connect information across different time points and understand the holistic trajectory of each patient [17].

This paper presents a novel methodology for exploring the correlation between features extracted from baseline MRI and the trajectory of MS in terms of each patient's Expanded Disability Status Scale (EDSS) [18]. To achieve this, three different models are proposed to describe these trajectories, and ML tools such as XGBoost [19] and, subsequently, Shapley Additive Explanations (SHAP) [20], are applied to improve the understanding of these relationships.

2. Materials

2.1 Dataset

In this study, a dataset comprising 478 patients from Galicia (North-Western Spain) was utilized. The exclusion criterion was based on patient follow-up, excluding those with less than 1 year of follow-up and those with fewer than two EDSS evaluations. After applying these selection criteria, a total of 446 records were employed for the analysis, with data collected from 1987 to 2022. Finally, it was ensured that each patient included in the analysis had an associated MRI. Fig 1 shows the cohort selection schema, and S1 Table shows the patient characteristics for the selected dataset. The data utilized, which was accessed for the first time on June 20, 2016, was approved by the Autonomous Committee of Research Ethics of Galicia under the code 2016/307. Informed consent was obtained from all participants prior to any data collection procedures.

To examine our database in detail, it is essential to understand the temporal structure of the data. Fig 2A illustrates how the 446 patients included in the study are distributed, categorized by the duration of their follow-up, measured in years. This representation helps us comprehend how patients are distributed concerning the duration of their follow-up in the database.
On the other hand, in Fig 2B, we provide an alternative way to represent the percentage of our study cohort based on the duration of follow-up in years, offering insights into the length and quality of the collected data. Additionally, it is crucial to account for the distribution of EDSS scores in these MS patients. EDSS is a common clinical scale used to evaluate disability in MS patients, offering valuable insights into their clinical condition. Scores range from 0 (no disability) to 10 (severe disability). To illustrate the behaviour of EDSS within the dataset, Fig 2C displays the distribution of values in each EDSS category, while Fig 2D illustrates the distribution of EDSS determinations per patient. This provides a clearer understanding of the distribution of EDSS scores and the frequency of assessments in the dataset.

Fig 2. Temporal distribution of study cohort and follow-up duration. (A) Distribution of Patients Over Time: Follow-Up Duration in Days. (B) Temporal Distribution of Study Cohort: Percentage of Patients Across Follow-Up Duration in Years. (C) Distribution of the number of values of each EDSS category in the studied cohort. (D) Distribution of the number of determinations per patient in the studied cohort. https://doi.org/10.1371/journal.pone.0306999.g002

2.2 AI tools (boosting and explainability)

2.2.1 XGBoost. Gradient Boosting is a powerful ensemble learning technique widely employed in ML for enhancing predictive models. This method sequentially builds a strong predictive model by combining the outputs of weak learners, usually decision trees. The algorithm minimizes an objective function, represented by Eq 1, where L(θ) is the training loss function measuring the model's performance on the training data and Ω(θ) is the regularization term accounting for the complexity of the model.

XGBoost represents a highly efficient and scalable implementation of gradient boosted decision trees, systematically constructing additive models in a stepwise manner. This process leads to an ensemble of base learners that exhibits superior prediction capabilities compared to individual classifiers. Each weak classifier is assigned a weight based on its prediction accuracy, allowing them to contribute effectively to the final prediction [19,20]. XGBoost, being an advanced implementation, introduces additional regularization measures to control overfitting. The objective function of XGBoost, aimed to be minimized, is given by Eq 2, and the regularization term is described in Eq 3, where y_i is the target value of the i-th instance, ŷ_i is the predicted value at the t-th iteration, f_t(x_i) is the additive decision tree model greedily added to improve performance, and Ω(f_t) is a regularization term penalizing model complexity. N is the set of all samples in leaf m, T is the number of leaf nodes, and γ and λ are parameters of the tree. The score of leaf m is measured by w_m. This regularization procedure aims to compress the weights for many features to zero, facilitating feature selection.

2.2.2 SHAP. Model interpretability poses a significant challenge in the realm of ML algorithms. To address this challenge, SHAP is recognized as a potent and commonly used tool in the realm of explainable AI, serving a pivotal function in elucidating the importance and influence of input features on model predictions [21,22]. The SHAP methodology is based on a unified framework rooted in cooperative game theory, assigning the contribution of each feature to the model's output.
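The equation images from the original article do not survive in this text extraction. For reference, a standard regularized gradient-boosting objective that is consistent with the definitions given in section 2.2.1 can be sketched as follows; the notation is assumed to follow common XGBoost conventions rather than being copied from the paper.

```latex
% Sketch of a regularized XGBoost-style objective (assumed notation, not copied from the paper)
\begin{aligned}
\mathrm{obj}(\theta) &= L(\theta) + \Omega(\theta), \\
\mathrm{obj}^{(t)} &= \sum_{i=1}^{n} l\big(y_i,\; \hat{y}_i^{(t-1)} + f_t(x_i)\big) + \Omega(f_t), \\
\Omega(f_t) &= \gamma T + \tfrac{1}{2}\,\lambda \sum_{m=1}^{T} w_m^{2},
\end{aligned}
```

where w_m is the score of leaf m, T the number of leaves, and γ and λ the tree regularization parameters, matching the terms described above.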
Through a quantitative approach to assess the marginal impact of features, SHAP considers all feature combinations [23]. This facilitates a thorough comprehension of feature interactions and their combined impacts on predictions [24]. The mathematical expression representing SHAP values is given by Eq 4. This holistic perspective provides valuable insights into the inner workings of complex ML models, contributing to transparency and informed decision-making regarding model behavior and feature importance. In Eq 4, f(S) refers to the output of the XGBoost model, which is determined by a specific set of features denoted as S. The complete set of all features is represented by N. The final contributions, denoted as φi, are computed by averaging the contributions across all permutations of a feature set. Subsequently, the features are sequentially incorporated into the set, and their impact is reflected in the model's output change.

3. Methods

To assess the correlation between the features derived from the baseline MRI and the clinical trajectory of each patient (as assessed by EDSS), a new methodology has been developed, which is divided into several stages. The first stage involves cohort selection. The second stage is dedicated to proposing trajectory descriptors based on EDSS assessment. As a result, three descriptors were obtained (detailed in section 3.1), represented as β1, β2, and EDSS(t). Subsequently, we proceed to build AI models using these baseline MRI-derived features, gender, and age of MS onset to predict patient progression. Two models are considered for this purpose: the classical Linear Regressor (LR) [25] and the state-of-the-art XGBoost-based predictor (section 2.2.1). This allows for a dual evaluation: first, to determine if the ML approach (specifically XGBoost) outperforms the classical method (LR); and second, to validate the patient trajectory classification presented in the first section as patient descriptors. The final stage of the method is devoted to understanding, using explainable AI (referred to as SHAP in section 2.2.2), which are the main features of the model that predicts the trajectories. Fig 3 shows the methodology pipeline including all these stages.

3.1 Building the MS trajectory descriptors based on time-dependent EDSS assessment variations

The first proposed trajectory descriptor aims to transform the patient's categorical EDSS assessments into a single numerical variable representing the patient's condition changes over time. This is achieved by combining measurements taken at various time points and normalizing them by the time interval (in days) between measurements. Mathematically, the first trajectory descriptor is based on the initial and final EDSS scores and is represented by Eq 5 (β1 = ΔEDSS / ΔT), where ΔEDSS represents the variation in terms of EDSS between the initial and final measurements, weighted by 0.5 for each category variation, whether increasing or decreasing, and the term ΔT corresponds to the difference (in days) between such measurements. The result, β1, corresponds to the slope of the line connecting those points. This approach reduces the trajectory complexity of each patient to just two points (the beginning and the end). Following the previous logic, a more comprehensive description of this score can be obtained by considering each of the EDSS assessments made for the patient.
Using the same convention as in Eq 1 for ΔT, Eq 6 describes the second trajectory descriptor. The term ΔEDSSi means the variability in EDSS between two measurements, and the term ΔTi corresponds to the number of days between two consecutive determinations. The summation covers EDSS variations from i = 1 to n, where n represents the last recorded change. ΔEDSSi is weighted by 0.5 for each category variation, increasing or decreasing. Consequently, the descriptor measures the time required to induce a change in EDSS values for a patient, treating them as a numerical variable.

The third trajectory descriptor aims to treat EDSS assessments directly as categorical. To consider the moment in time when these assessments are taken using this approach, a map of values at specific time points is created, which is the same for all patients. This is described in Eq 7. To describe patient trajectories using this model, we collect EDSS values at specific time intervals, including the initial assessment at t = 0, and subsequent evaluations at 1 year, 2 years, 5 years, and 10 years. This method enables us to predict the trajectory using classifiers that estimate the EDSS values at these specific time points. If a patient's EDSS value matches that of the previous and subsequent assessments within a three-month interval in their medical history, we assume that it remains unchanged during that period. This assumption is particularly useful in cases where data for these time points are missing, as it provides additional values for the model to predict. Moreover, from a clinical point of view, the occurrence of a non-documented transient change (more often an increase than a decrease) of EDSS between two equal assessments is indeed a possibility in the clinical setting and would qualify as a (subclinical) relapse. Nevertheless, the purpose of this paper is to use baseline MRI features to predict the disability score (i.e., EDSS) at specific time points, not to foresee the annualized relapse rate (ARR). While the behaviour of ARR is a primary endpoint of most clinical trials in relapsing MS in the short and medium term (typically 1-2 years), disability is the most relevant feature in the long run (2-10 years) for either relapsing or progressive MS. These proposed descriptors aim to describe the progression of MS over time based on EDSS measures. Fig 4A depicts the variability of EDSS measures for a subset of the initial 10 patients in the study. Fig 4B showcases the behaviour of the disease progression descriptors concerning the EDSS scores over time.

Fig 4. Description of the behaviour of the EDSS variable in the dataset. (A) Evolution of EDSS for the first 10 patients of the study, where the abscissa axis is time in days and the ordinate, EDSS values. (B) Example of behaviour of possible descriptors of disease progression according to the EDSS score over time. https://doi.org/10.1371/journal.pone.0306999.g004

3.2 AI (regressor and classifier) to predict MS trajectories

Once the data has been curated and pre-processed to enable integration into the AI workflow, two different sets of models are employed. The first one consists of the LR and the XGBoost regressors, used to predict the β1 and β2 trajectories, which represent the time required to change EDSS values.
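Before the second model set is described, the following is a minimal sketch of how the two numerical descriptors defined in Eqs 5 and 6 could be computed from a patient's (day, EDSS) assessments. The 0.5-per-category weighting and the per-interval summation used for β2 are one plausible reading of the definitions above, not code from the study.

```python
# Illustrative sketch of the trajectory descriptors beta1 (Eq 5) and beta2 (Eq 6).
# Assumption: Delta-EDSS is the signed number of 0.5-point EDSS category steps,
# each weighted by 0.5; Delta-T is measured in days. One plausible reading only.

def delta_edss(e_start, e_end, step=0.5, weight=0.5):
    """Weighted EDSS variation between two assessments (assumed reading of Eq 5)."""
    n_steps = round((e_end - e_start) / step)   # signed number of category changes
    return weight * n_steps

def beta1(measurements):
    """Slope between the first and last EDSS assessments; measurements = [(day, edss), ...]."""
    (t0, e0), (tn, en) = measurements[0], measurements[-1]
    return delta_edss(e0, en) / (tn - t0)

def beta2(measurements):
    """Sum of per-interval weighted EDSS changes divided by the days between them (assumed Eq 6)."""
    total = 0.0
    for (t_prev, e_prev), (t_next, e_next) in zip(measurements, measurements[1:]):
        total += delta_edss(e_prev, e_next) / (t_next - t_prev)
    return total

# Example: a hypothetical patient assessed at days 0, 200 and 600 with EDSS 2.0, 2.5 and 3.5
patient = [(0, 2.0), (200, 2.5), (600, 3.5)]
print(beta1(patient), beta2(patient))
```

In this reading, β1 reduces each trajectory to the slope between the first and last assessments, while β2 accumulates the weighted per-interval slopes.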
The second model utilizes Multiclass Logistic Regression (MLR) and the XGBoost classifiers to predict EDSS(t), forecasting the EDSS value at a specific moment (Eq 7), functioning as a classifier.

In the context of hyperparameter optimization, a crucial process for identifying and selecting parameter configurations that produce the best prediction results, the Bayesian approach was employed. Specifically, the Hyperopt Library [21] was utilized to optimize XGBoost (Eqs 2 and 3) leveraging Bayesian optimization. For the analysis, the dataset is initially divided into two parts: one for model fitting and the other for evaluating predictor quality, using an 80/20 ratio. All models are fitted and hyperoptimized using the same set. Additionally, a five-fold cross-validation process is conducted, involving random resampling of the initial dataset split to assess the generalizability of the results. To address class distribution imbalance, stratified cross-validation is employed, ensuring that each fold maintains a representative proportion of the classes present. To evaluate whether the models are overfitting or underfitting, we compare the metrics (AUC-ROC, Sensitivity, Accuracy, Precision) between both the training and testing datasets. The values obtained permit us to conclude that neither overfitting nor underfitting occurs in the process.

4. Results

This section presents the results of three main experiments that explore the relationship between the progression of MS and the baseline MRI of each patient, using the dataset described in section 2.1. In the first experiment, trajectory descriptors were obtained using the different proposed models (section 3.1). The second experiment employed data from the baseline MRI to predict the patient trajectory based on the models obtained in the first experiment. In the third experiment, SHAP was utilized to analyse and understand the key features influencing the predictions of each model. The proposed method and analysis were implemented in Python using multiple libraries, including Scikit-learn, Matplotlib, NumPy, Pandas, Hyperopt, XGBoost, and SHAP.

4.1 Obtaining the trajectories descriptors

The first experiment focuses on obtaining the descriptors for the dataset's patients. β1 values, computed using Eq 5, show an average of 0.02 and a standard deviation of 1.23. For β2, values are extracted using Eq 6, revealing an average of -1.47 and a standard deviation of 12.76. Fig 5A displays the behaviour of β1 and β2 across all patients in our dataset. There is consensus in considering that MS has followed a mild (termed benign by some authors) course when the EDSS score is ≤ 3.0 after a disease duration of at least 10 years [26], whereas aggressive MS might be defined as reaching an EDSS score of 6.0 within 10 years of disease onset [27]. By calculating the β1 values for patients who meet these criteria, we've segmented the dataset into three groups: mild, average, and aggressive trajectories of the disease, as shown in Fig 5B.

Fig 5. Behaviour of descriptors β1 and β2 in the dataset. (A) Results of β1 and β2 for the Entire Patient Cohort. (B) Classification of disease trajectories in β1 analysed patients. In both, at right, a zoom of the area of interest. https://doi.org/10.1371/journal.pone.0306999.g005

Regarding the extraction and quantification of the EDSS(t) assessment for the specific moments described in Eq 7, results of this process are provided in Table 1, in terms of the number of patients in the study that have an EDSS value at each specific time.
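As a minimal sketch of the tuning setup described in section 3.2 above (Bayesian hyperparameter search with Hyperopt over an XGBoost classifier, evaluated with stratified 5-fold cross-validation), the example below uses synthetic data and an assumed search space; the bounds, features, and scoring are placeholders rather than the study's settings.

```python
# Sketch of Bayesian hyperparameter search (Hyperopt/TPE) with stratified 5-fold
# cross-validation for an XGBoost classifier. Placeholder data and search space.
import numpy as np
from hyperopt import fmin, tpe, hp, Trials
from sklearn.datasets import make_classification
from sklearn.model_selection import StratifiedKFold, cross_val_score
from xgboost import XGBClassifier

X, y = make_classification(n_samples=400, n_features=20, n_informative=8, random_state=0)
cv = StratifiedKFold(n_splits=5, shuffle=True, random_state=0)

space = {
    "max_depth": hp.quniform("max_depth", 2, 8, 1),
    "learning_rate": hp.loguniform("learning_rate", np.log(0.01), np.log(0.3)),
    "n_estimators": hp.quniform("n_estimators", 100, 600, 50),
    "reg_lambda": hp.loguniform("reg_lambda", np.log(0.1), np.log(10.0)),
}

def objective(params):
    model = XGBClassifier(
        max_depth=int(params["max_depth"]),
        learning_rate=params["learning_rate"],
        n_estimators=int(params["n_estimators"]),
        reg_lambda=params["reg_lambda"],
        eval_metric="logloss",
    )
    auc = cross_val_score(model, X, y, cv=cv, scoring="roc_auc").mean()
    return -auc  # Hyperopt minimizes, so negate the mean AUC-ROC

best = fmin(fn=objective, space=space, algo=tpe.suggest, max_evals=30, trials=Trials())
print("best hyperparameters:", best)
```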
4.2 Predicting patient trajectory descriptors using baseline MRI

The dataset described in 2.1 was employed to evaluate the presented AI methods to predict the patient trajectories, as described in section 3.2.

4.2.1 Regressor model. To predict the trajectory descriptors β1 and β2 based on the baseline MRI, patient age, and sex, two regression models were employed. In Table 2, a comparison is presented between the classic LR model and the XGBoost model. This comparison includes default hyperparameters and the best-performing Bayesian hyperparameter-tuned model, and it measures performance in terms of Mean Absolute Error (MAE). The results displayed in the table demonstrate the potential for AI methods to significantly reduce prediction errors for both trajectory descriptors when compared to the classical LR method.

Prediction of benignant and aggressive evolutions. Following the prediction of the β1 trajectory descriptor, the results could be categorized according to the criteria described in section 4.1 to predict whether the disease course is classified as benign or aggressive. This method allows for the evaluation of the regressor's ability to differentiate between the clinical categories of disease progression. Table 3 presents the results comparing the criteria forecasted by the regressor with the actual values from patients followed for ten years or more, belonging to the test group.

4.2.2 Classifier model. To make predictions based on the time descriptor EDSS(t), a classifier is needed. Table 4 compares the MLR with optimized XGBoost models using various metrics, including Area Under the Curve Receiver Operating Characteristic (AUC-ROC), Sensitivity, Precision, and Accuracy. The most promising results were achieved with the XGBoost model, with AUC-ROC values ranging from 0.7354 for EDSS(0) to the highest result of 0.9136 obtained for EDSS(1). Fig 6 displays the AUC-ROC curves generated by applying XGBoost to each EDSS(t) timestamp.

Fig 6. ROC curves for XGBoost applied to each analysed EDSS(t) timestamp.

Table 4 presents the metrics derived from the testing dataset. To assess the models' generalization performance, analogous metrics were calculated for the training dataset. These results are provided as supplementary material in S2 Table. A comparison between the metrics presented in Table 4 and those in S2 Table reveals a notable similarity in values for both the training and testing datasets. This congruence suggests that the models suffer from neither overfitting nor underfitting. https://doi.org/10.1371/journal.pone.0306999.g006

4.3 Using SHAP to explain the ML models (regressor and classifier)

After obtaining the model, we employed SHAP (as described in section 2.2.2) to interpret the best-performing regressor identified in Table 2. In this case, we utilized the hyperoptimized version of XGBoost with predictors β1 and β2. Using this technique enables us to identify and rank the importance of features. The analysis reveals that "Age at onset" is the most crucial feature for predictors β1 and β2. Fig 7A and 7C display plots of the ranking of the 20 most important variables. Fig 7B and 7D illustrate the impact of features on the model output for individuals in the validation dataset. The X-axis displays features sorted by the sum of SHAP value magnitudes across all samples, indicating higher importance at the extremes. The Y-axis shows how much each feature affects the model's predictions using SHAP values. The colours, from red to blue, stand for high to low values of these features.
Fig 7. Relevance and SHAP analysis of the 20 most important features of the XGBoost regressor model to predict β1 and β2. (A-B) Relevance and SHAP analysis of the 20 most important clinical variables extracted from the XGBoost regressor to predict β1. (C-D) Relevance and SHAP analysis of the 20 most important clinical variables extracted from the XGBoost regressor to predict β2. https://doi.org/10.1371/journal.pone.0306999.g007

In the section dedicated to the classifier, a similar approach to the regressor was followed, employing the SHAP technique to assess and elucidate the performance of ML models. Fig 8 displays the ranking of the top 20 features that exert the most influence on the model's classification decisions, thereby contributing to a deeper understanding of variable importance and overall model performance.

Fig 8. Relevance and SHAP analysis for the 20 most important clinical variables extracted from the XGBoost classifier to predict EDSS(t). (A) Analysis to predict EDSS(0). (B) Analysis to predict EDSS(1). (C) Analysis to predict EDSS(2). (D) Analysis to predict EDSS(5). (E) Analysis to predict EDSS(10). https://doi.org/10.1371/journal.pone.0306999.g008

In the SHAP analysis of the classifier, we observed variations in the influence of each variable across different classes and time points (as shown in Fig 8). Interestingly, while the age of onset exhibited reduced influence compared to other predictors, the number of lesions greater than nine ("Nb lesions/Brain (> = 9)") detected in the baseline MRI emerged as the most influential variable.

5. Discussion

The objective of this study is not to make the best predictor of the trajectory of MS, but rather to explore the amount of information provided by the baseline MRI for predicting the evolution of MS. We propose a method for creating trajectory descriptors β1 and β2 (section 3) which help us understand how MS patients' EDSS scores change over time. β1 simplifies the description by connecting the first and last EDSS assessments with a straight line (see Fig 4D), but it may lose important information and does not consider variations between measurements. In contrast, β2 considers these variations over time and the changes in EDSS scores between consecutive measurements. Both methods are weighted by the time between measurements. The hyperoptimized XGBoost model showed the lowest MAE, suggesting it is better at predicting patient trajectories. For β1, the MAE is 8.62 percent relative to the standard deviation, while for β2, it is 40.60 percent relative to the standard deviation. This difference can be attributed to the fact that β2 considers intermediate changes in EDSS values, including relapses in patients, which can lead to randomly occurring elevated atypical EDSS values, potentially making predictions for this descriptor more challenging. This method is a useful tool for quickly characterizing disease behaviour over time, but it introduces an error when converting categorical EDSS measurements into numbers, assuming all transitions between categories carry the same weight.

To address the limitations of β1 and β2, we present an alternative method for constructing the EDSS(t) trajectory descriptor. This new approach treats the variable as categorical, focusing on how patients change over time. The descriptor is based on EDSS measurements at five specific time points, with varying patient counts: 446, 400, 377, 352, and 279, respectively.
As shown in Table 1, EDSS values are not evenly distributed across categories, with most samples having values below EDSS = 4. Consequently, we only considered categories with a minimum of ten occurrences at each time point. Table 4 compares the performance of the MLR against optimized XGBoost models using various metrics. Notably, the XGBoost model exhibited the most promising results, demonstrating AUC-ROC values ranging from 0.7354 for EDSS(0) to a peak of 0.9136 for EDSS(1). The predictions for EDSS(0) showed slightly lower performance compared to other time points. There are plausible reasons to think that this was due to an imbalance in the sample distribution at the initial stage, where category "2" represented 28% of all samples, while category "1.5" accounted for only 3%. To handle this imbalance, several actions were taken to mitigate its effects, such as using a validation schema based on a 5-fold cross-validation approach. As this initial imbalance in debut conditions produces suboptimal outcomes of the estimator only at this specific initial time point, exploring techniques to address this intrinsic data imbalance is posed as an open future research direction that could potentially mitigate such issues and improve the overall performance of predictive models for EDSS trajectory descriptors.

Several works have been published in recent years within the same domain, focusing on predicting patient evolution using MRI studies [28-30]. These studies aim to forecast disease progression at various time points according to the EDSS scale. While the prevailing literature reports AUC values ranging from 0.71 to 0.89, our findings span from 0.74 to 0.91, contingent upon the forecasted year for disease trajectory. While the resulting metrics from these works align with ours, comparisons are somewhat heterogeneous due to differences in input variables, different prediction time points, and considerations of the EDSS scale. Moreover, our proposed methodology focuses on forecasting disease progression solely utilizing derived features extracted from baseline MRI scans.

It is interesting to remark that while baseline MRI studies are a good prognostic predictor for MS, as demonstrated by the research community [31,32], the performance disparity between the classifier models' prediction of the initial EDSS level, EDSS(0), and of EDSS(1) at one year and later could be understood as a contribution of clinical variables, such as treatment, genetics and environmental factors, to the clinical evolution and assessment of the patient.

The implementation of explainable AI methods facilitated the discovery of the core factors influencing precise decisions within the ML model. This process renders complex models understandable and accessible, even to those without advanced technical or medical knowledge. In section 4.3, SHAP was utilized to interpret the ML models, both for the regressor and the classifier. This analysis provided essential information about the internal performance of each developed predictor and descriptor, including the classification of feature importance and insights into how the values of each feature impact predictions. For the trajectory descriptors β1 and β2, significant influence was highlighted, particularly related to the age at disease diagnosis (age at onset), as observed in Fig 7. This observation aligns with findings from previous studies [33-35].
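As a minimal sketch of the SHAP workflow described in this section (a TreeExplainer applied to a fitted XGBoost model, with features ranked by mean absolute SHAP value), the example below uses placeholder feature names and synthetic data, not the study's variables.

```python
# Minimal sketch of SHAP-based feature ranking for an XGBoost regressor,
# in the spirit of the workflow described above (placeholder data, not the study's).
import numpy as np
import pandas as pd
import xgboost as xgb
import shap

rng = np.random.default_rng(0)
X = pd.DataFrame({
    "age_at_onset": rng.uniform(18, 55, 300),          # hypothetical feature names
    "n_brain_lesions_ge9": rng.integers(0, 2, 300),
    "lesion_volume": rng.gamma(2.0, 1.5, 300),
})
# Toy target loosely depending on the features, standing in for a trajectory descriptor
y = 0.03 * X["age_at_onset"] + 0.5 * X["n_brain_lesions_ge9"] + rng.normal(0, 0.2, 300)

model = xgb.XGBRegressor(n_estimators=200, max_depth=3, learning_rate=0.1)
model.fit(X, y)

explainer = shap.TreeExplainer(model)
shap_values = explainer.shap_values(X)

# Rank features by mean absolute SHAP value (global importance)
importance = np.abs(shap_values).mean(axis=0)
for name, imp in sorted(zip(X.columns, importance), key=lambda t: -t[1]):
    print(f"{name}: {imp:.3f}")

# shap.summary_plot(shap_values, X)  # beeswarm plot, similar in spirit to Fig 7B/7D
```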
In the analysis of EDSS(t) using the XGBoost-based classifier, Fig 8 illustrates how the influence of each variable changes depending on the class being analysed for each measured time point. It is noteworthy that, in this predictor, the age of onset does not exert as much influence as in the previously analysed predictors. Instead, the most influential variable, and the one with the most significant impact across the classification models, is the number of lesions greater than nine ("Nb lesions/Brain (> = 9)") detected in the baseline MRI. While previous works have not specifically analysed the number of lesions to predict EDSS, there are studies that have examined the prediction of EDSS at 10 years based on brain lesion volume [36], as well as others that have investigated the spatial distribution of lesions [37]. Therefore, brain lesions emerge as a crucial parameter to consider in predicting the progression of MS. The incorporation of the SHAP tool represents a significant advancement towards transparency and understanding in the context of AI and predictive modelling. This allows healthcare professionals to comprehend how the model generates predictions and to make informed decisions.

In the coming years, AI will have a great impact on the clinic when it comes to making clinical decisions, prevention/diagnosis/prognosis, therapeutic efficacy, etc. This work makes intensive use of AI algorithms for producing prognosis and decision tools, intentionally derived exclusively from the baseline MRI, to measure the amount of information available for predicting patient evolution at debut. Enhancing the model's effectiveness could be accomplished by
Content Synthesis/Prediction/Decision Making
Healthcare Practitioners and Support/Life, Physical, and Social Science
null
null
null
null
null
null
news
Bryan Mechell
Litigation Perspective: Strategies for Licensing Software that Leverages Artificial Intelligence
As software products and services increasingly take advantage of the emerging capabilities of artificial intelligence (AI), software developers and companies that license software face evolving legal risks and contractual considerations. Software developers and licensees that fail to negotiate clear software license agreements that account for unique aspects of licensing AI-powered software may find themselves facing unexpected liability or costly software license dispute litigation. When drafting and negotiating software license agreements, parties should carefully consider the legal implications of developing and using software that incorporates AI.
https://ipwatchdog.com/2024/07/12/litigation-perspective-strategies-licensing-software-leverages-artificial-intelligence/id=178802/
https://ipwatchdog.com/w…_725301738_S.jpg
2024-07-12T14:15:55Z
While the evolving capabilities of AI bring increased functionality and features, they also raise important legal considerations for parties negotiating software license agreements.

As software products and services increasingly take advantage of the emerging capabilities of artificial intelligence (AI), software developers and companies that license software face evolving legal risks and contractual considerations. Software developers and licensees that fail to negotiate clear software license agreements that account for unique aspects of licensing AI-powered software may find themselves facing unexpected liability or costly software license dispute litigation. When drafting and negotiating software license agreements, parties should carefully consider the legal implications of developing and using software that incorporates AI.

Software developers and licensees encounter two common types of AI-powered software products implicated in licensing agreements: (1) software products that leverage third-party AI services hosted offsite; and (2) software products that leverage custom-built AI, deployed either in the cloud or within on-premises infrastructure. Each type of AI integration gives rise to important strategic considerations and risks that stem from using AI services, including intellectual property rights, data privacy, security and breach concerns, gatekeeper responsibilities, service performance guarantees, and evolving legal and regulatory landscapes. In this article, we discuss important issues parties should consider when negotiating master service agreements (MSAs), statements of work, and other license agreements that involve AI. This article also offers insights and recommendations to proactively manage risks and negotiate favorable contract terms.

Software Products That Leverage Third-Party AI Services

Software companies rely upon powerful AI services from tech giants like Microsoft, Google, and Amazon to enhance their products with capabilities such as natural language processing (NLP) and predictive analytics. While incorporating third-party AI services can provide compelling features, professionals tasked with managing licensing of these assets should carefully consider potential risks associated with their use.

The following figure illustrates a scenario where a customer licenses software from a Software Vendor that leverages third-party AI functionality; in this example, an OpenAI large language model (LLM) running on Microsoft Azure. In the above example, the Customer has a software license agreement that addresses the scope of the relationship with Software Vendor, as well as a separate agreement with Microsoft that addresses the scope of services provided by Microsoft Azure. On top of that, OpenAI publishes a list of representations and promises describing how it uses (or does not use) client data in connection with the OpenAI LLM. The Software Vendor, Microsoft, and OpenAI all provide some form of functionality relating to the software product licensed by the Customer, which includes handling and processing confidential customer data. This raises important considerations when licensing the software product, including allocation of liability relating to software functionality, responsibility for data privacy and security, and IP rights.

A. Errors or Damages Caused by Third-Party AI

Consider the following hypothetical.
A healthcare software company that sells its products to hospitals and medical service providers uses a third-party AI model, like Google Vertex AI, to analyze medical images for early disease detection. Due to issues with how the AI model was trained, the software misclassifies thousands of X-rays, leading to numerous false positives, unnecessary patient anxiety, follow-up tests, and in a few cases, unneeded invasive procedures. Even if the healthcare software company includes a reliance clause in its license agreement (stating, for example, that the software provider cannot guarantee the accuracy of third-party AI services), a court may still impose a duty on the healthcare software company, as an informed intermediary with specialized knowledge in AI and healthcare, to protect end-users from known risks in AI technology. See Moll v. Intuitive Surgical, Inc., 2014 WL 1389652, at *4 (E.D. La. April 1, 2014) (holding that using a software product like a medical robot does not remove the software user / service provider from the scope of liability). By deciding to integrate a particular AI service, the healthcare software company could be seen as endorsing its capabilities. It is therefore critical that software developers not only include license terms addressing third-party AI functionality, but also carefully consider potential legal risks where special duties may attach.

B. Data Privacy and Security Risks

When a software product or service integrates with a third-party AI service, data flows in multiple directions: (1) from software vendor systems to AI, through their AI models, and back again; and (2) from software vendor systems to client systems, and back. This expanded data journey can increase privacy and security risks.

Looking at Microsoft Azure as an exemplar, Microsoft states that Azure OpenAI maintains strict data privacy and security measures for customer interactions. See Data, privacy, and security for Azure OpenAI Service. Microsoft also represents that customer prompts, completions, embeddings, and training data are kept confidential and are not shared with other customers, OpenAI, or used to improve any models or services. While Azure OpenAI handles prompts, generated content and data, Microsoft states that it does not use this information to automatically enhance models. Customers can fine-tune models with their own data, but these customized models remain exclusively available to the specific customer who created them.

In the hypothetical involving the healthcare software company, imagine if an authentication flaw in a third-party AI's API allowed a hacker group to intercept the data stream, exposing thousands of medical images and associated protected health information (PHI). Claims of HIPAA/GDPR/CCPA violations and potential multi-million-dollar penalties from regulators are on the horizon. Even if an AI provider like Microsoft takes responsibility for the specific vulnerability, the healthcare software provider could still face liability on the basis that the healthcare company has a heightened duty to secure personal data through adequate vetting of third-party partners and end-to-end encryption. See e.g., In re Anthem, Inc. Data Breach Litig., 162 F. Supp. 3d 953, 1010-11 (N.D. Cal. 2016) (finding the plaintiffs could pursue breach of contract claims as third-party beneficiaries because the contract terms established that the defendant could be held to privacy standards above and beyond the standards required under federal law).
If the operative service contract with the software vendor includes a clause representing that the software vendor will follow industry best practices for safeguarding PHI, this could impose further liability on a software vendor in this scenario.

C. IP Considerations

Beyond errors and security risks, software that relies on third-party AI also introduces potential complexities associated with protecting intellectual property rights. For example, poorly worded software license agreements may leave ambiguity over ownership rights to the AI model's inputs and the outputs they generate. Scenarios where copyrighted works are used to train AI LLMs to allegedly create infringing derivative works are already the subject of contentious litigation. See, e.g., Kadrey and Silverman et al. v. Meta Platforms, Inc., 3:23-cv-03417 (N.D. Cal., July 7, 2023) (plaintiffs allege that LLaMA's outputs (or portions of the outputs) are similar enough to the plaintiffs' books to be infringing derivative works). Considering the healthcare software company scenario described earlier, imagine that the licensed software utilizes AI services to generate data visualization charts and dashboards for medical service providers tailored to patient data. The AI provider could potentially exploit the software vendor's proprietary code and the end customer's confidential data to enhance its AI model for competitors of the software vendor and end customer. The AI provider might also assert intellectual property rights over outputs generated by the AI services, even when those outputs are derived using software vendor code and end customer data inputs. This could have a substantial impact on the software provider's leverage in the competitive marketplace, and increases the possibility that confidential customer information is used without permission.

Infringement liability is also an important consideration. If the AI service is found to have infringed third-party IP rights through techniques like training-data scraping, the software vendor could be liable for resulting copyright violations.

AI provider terms and conditions regarding IP rights vary. For example, Anthropic lets its users retain all right, title, and interest, including any intellectual property rights, in the input or the prompts. In addition, Anthropic disclaims rights to customer content and states that customers own all outputs generated, assigning any potential rights in outputs to the customer. However, Anthropic's commitment not to train its models on customer content explicitly mentions only Customer Content from paid Services and is subject to customers' compliance with Anthropic's terms of service. See Anthropic's Terms of Service. Parties leveraging AI need to carefully consider implications relating to IP rights.

Software Products That Leverage Custom-Built AI, Either On-Premises or in the Cloud

Software products that rely on proprietary AI solutions deployed on-premises or in the software provider's cloud can allow for increased flexibility and control over features, as well as greater control over access to confidential data.
At the same time, the party responsible for providing and maintaining the underlying infrastructure that houses the AI services faces heightened risks relating to data governance, system integration, and product/service quality.

The following figure illustrates a scenario where a Customer licenses software that leverages custom-built AI functionality hosted either (1) on premises on Customer IT infrastructure or (2) in the cloud by the software vendor. In the above scenario, the services responsible for providing AI functionality reside either in the Customer's or the Software Vendor's IT infrastructure. The location where the AI services reside is important, as the entity responsible for managing that infrastructure may incur gatekeeping responsibilities tied to the use of the AI service. This gatekeeping duty can carry significant liability risks. The arrangement and location of the AI functionality also raises important questions regarding performance guarantees.

A. Gatekeeper Role

Assume Software Vendor sells expense management software that uses custom-tailored NLP AI hosted on the Software Vendor's cloud to scan invoices and automate payments. The NLP AI model ultimately misinterprets handwritten figures, causing a client to overpay a vendor by $5 million. While the Software Vendor could argue that their NLP API simply passed along raw outputs and it was the Customer's responsibility to scrutinize those outputs before acting on them, a court could find that a decision aid technology vendor has a duty to implement appropriate safeguards and human oversight checkpoints. The fact that the AI services are hosted on the Software Vendor's cloud heightens the risk of this potential outcome.

As another example, assume Software Vendor sells automated hiring and resume screening software that leverages custom-built AI hosted on-premises in the Customer's IT infrastructure. This kind of tool should be designed to prevent illegal discrimination and bias from impacting hiring decisions. See Mobley v. Workday Inc., No. 3:23-cv-00770 (N.D. Cal, Feb. 21, 2023) (EEOC filed suit against human resources software firm Workday alleging that it violated federal anti-bias laws by using AI-powered software to screen out job applicants for racially discriminatory reasons). The Customer in this scenario needs to consider the risks associated with hosting and relying on automated software that leverages AI, which has known issues tied to generating responses that exhibit bias and errors. Customers utilizing such AI solutions should consider dedicated human oversight teams reviewing outputs for compliance with ethical guidelines.

Finally, assume Software Vendor sells software solutions to FinTech companies that use AI to detect financial crimes, payment fraud, and identity theft. The Customer, and potentially the Software Vendor depending on the nature of the license agreement, may have a gatekeeping duty to validate AI outputs and correct false positives that stem from any racial or religious biases before freezing accounts or reporting individuals to authorities.

B. Service Level Agreements (SLAs) and Performance Guarantees

The transient, evolving nature of AI requires a more nuanced approach to uptime guarantees commonly included in service level agreements. Consider, for example, Software Vendor sells AI-powered software that monitors data centers, dynamically detects anomalies, and predicts system failures.
Certain AI systems are susceptible to natural performance degradations over time that occur as real-world data distributions shift, deviating from those on which the static AI model was initially trained. If the Software Vendor provides guarantees for software uptime, commonly included in a service-level agreement, degradations in software performance caused by changes in third-party AI models could violate software uptime promises. In a potential legal dispute over breach of a service level agreement with uptime requirements, a court might conclude that for a product that touts AI as a key selling point over traditional algorithms, the AI-powered product must remain continually tuned and calibrated to maintain a reasonable level of predictive or analytical performance. For traditional software, uptime means computational availability, but for AI solutions, uptime might need to account for the availability of accurate, effective outputs from the AI models themselves.

Lessons Learned from Litigation: Best Practices

Software that leverages AI functionality often handles personal information, financial data, intellectual property, and other sensitive information. This raises important liability considerations for software vendors and companies that license AI-powered software. The following list offers some best practices for parties seeking to proactively manage risks when writing and negotiating software license agreements:

A. For Software Products That Use Third-Party AI

Carefully scrutinize broad "as is" clauses for third-party components, as they may offer less protection than anticipated.
Rigorously test any AI service before integration, and document these efforts.
Negotiate stronger indemnification terms with third-party AI service providers, especially for enterprise clients.
Identify and provide notice of functions that rely on external AI services, and clearly articulate limitations on capabilities.
Clearly articulate IP ownership rights associated with AI-generated content, including ownership of inputs and outputs, as well as rights associated with trained AI models and use across different deployment environments.
Regularly audit third-party AI performance, and provide customers with direct links to the third party's performance metrics and incident reports.
Ensure that any marketing materials accurately describe AI-related capabilities and limitations.
Memorialize procedures for secure data storage, retention periods, and deletion processes.
Ensure the AI system's data practices adhere to data privacy laws like GDPR and CCPA, and update these practices as more jurisdictions put new laws in place.

B. For Software Products That Use Custom-Built AI

Articulate whether AI software is hosted on-premises on Customer IT infrastructure or in the cloud by the software vendor, and detail responsibilities for data protection, security, and performance.
Explicitly outline the scope of any gatekeeping responsibility over AI solutions to comply with legal and ethical requirements.
Establish concrete metrics for reasonable AI performance that align with the parties' expectations as well as known issues with AI performance, such as training data drift.

While the evolving capabilities of AI bring increased functionality and features, they also raise important legal considerations for parties negotiating software license agreements. As software incorporating AI becomes more common, disputes over software license terms are likely to increase.
Software vendors and licensees alike should understand and carefully consider the risks associated with licensing AI software. Those unwilling to embrace this responsibility could face significant business and legal repercussions as the "move fast and break things" ethos collides with the general public's demands for safe, reliable, accountable, and ethical use of AI. Image Source: Deposit Photos. Author: bsd_studio. Image ID: 725301738. Warning & Disclaimer: The pages, articles and comments on IPWatchdog.com do not constitute legal advice, nor do they create any attorney-client relationship. The articles published express the personal opinion and views of the author as of the time of publication and should not be attributed to the author's employer, clients or the sponsors of IPWatchdog.com.
Content Synthesis/Prediction
Legal/Management
null
null
null
null
null
null
news
Andrew Tarantola
Apple is tackling one of the most frustrating aspects with AI today
An Apple research team has developed a new method to ease users into the quirks of upgraded AI models.
https://www.digitaltrends.com/computing/apple-ai-muscle-model-upgrade-user-experience/
https://www.digitaltrend…e=1200%2C630&p=1
2024-07-15T18:19:03Z
As companies like Google, Anthropic, and OpenAI update and upgrade their AI models, the way that those LLMs interact with users is sure to change as well. However, getting used to the new system can become a hassle for users who then have to adjust how they pose their queries in order to get the results they've come to expect. An Apple research team has developed a new method to streamline that upgrade transition while reducing inconsistencies between the two versions by as much as 40%. As part of their study, "MUSCLE: A Model Update Strategy for Compatible LLM Evolution," published July 15, the researchers argue that when upgrading their models, developers tend to focus more on upping the overall performance, rather than making sure that the transition between models is seamless for the user. That includes making sure that negative flips, wherein the new model predicts the incorrect output for a test sample that was correctly predicted by the older model, are kept to a minimum. This is because, the study authors argue, each user has their own quirks, quibbles, and personalized ways of interacting with chatbots. Having to continually adjust and adapt the manner in which they interact with a model can become an exhausting affair, one that is antithetical to Apple's desired user experience. The research team even argues that incorrect predictions by the AI should remain consistent between versions: "There is value in being consistent when both models are incorrect," they wrote. "A user may have developed coping strategies on how to interact with a model when it is incorrect." To address this, the researchers first developed metrics by which to measure the degree of regression between models and then developed a strategy to minimize their occurrence. The result is MUSCLE, a strategy that doesn't require developers to retrain the entire base model and instead relies on the use of training adapters. Adapters are small AI modules that can be integrated at different points along the overall LLM. Developers can then fine-tune these specific modules instead of the entire model. This enables the model as a whole to perform distinct tasks at a fraction of the training cost and with only a small increase in the number of parameters. They're essentially plug-ins for large language models that allow us to fine-tune specific sections of the overall AI instead of the whole thing. The research team upgraded LLMs including Meta's Llama and Microsoft's Phi as part of their study, using specific math queries as samples, and found that negative flips occurred as much as 60% of the time. By incorporating the MUSCLE strategy, the team wasn't able to fully eliminate negative flips, but they did manage to reduce their occurrence by as much as 40% compared to the control.
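To make the "negative flip" idea concrete, here is a minimal sketch of how such a regression metric could be computed between an old and a new model's predictions. The function name and the toy data are illustrative assumptions, not code or numbers from the MUSCLE paper.

def negative_flip_rate(old_preds, new_preds, labels):
    # Fraction of samples the old model classified correctly but the new model gets wrong.
    flips = sum(
        1 for old, new, y in zip(old_preds, new_preds, labels)
        if old == y and new != y
    )
    return flips / len(labels)

labels    = [1, 0, 1, 1, 0, 1]
old_preds = [1, 0, 1, 0, 0, 1]   # hypothetical old model: 5/6 correct
new_preds = [1, 0, 0, 0, 1, 1]   # hypothetical new model: regresses on two samples the old model had right
print(negative_flip_rate(old_preds, new_preds, labels))  # 0.333...

A model-update strategy like the one described above would aim to drive this number down while still improving overall accuracy.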
Unknown
Unknown
null
null
null
null
null
null
news
Ike Nnoli
New NVIDIA Digital Human Technologies Enhance Customer Interactions Across Industries
Generative AI is unlocking new ways for enterprises to engage customers through digital human avatars. At SIGGRAPH, NVIDIA previewed James, an interactive digital human that can connect with people using emotions, humor and more. James is based on a customer-service workflow using NVIDIA ACE, a reference design for creating custom, hyperrealistic, interactive avatars. Users will Read Article
https://blogs.nvidia.com/blog/digital-humans-siggraph-2024/
https://blogs.nvidia.com…v-1200x628-1.jpg
2024-07-29T20:30:41Z
Generative AI is unlocking new ways for enterprises to engage customers through digital human avatars. At SIGGRAPH, NVIDIA previewed James, an interactive digital human that can connect with people using emotions, humor and more. James is based on a customer-service workflow using NVIDIA ACE, a reference design for creating custom, hyperrealistic, interactive avatars. Users will soon be able to talk with James in real time at ai.nvidia.com. NVIDIA also showcased at the computer graphics conference the latest advancements to the NVIDIA Maxine AI platform, including Maxine 3D and Audio2Face-2D for an immersive telepresence experience. Developers can use Maxine and NVIDIA ACE digital human technologies to make customer interactions with digital interfaces more engaging and natural. ACE technologies enable digital human development with AI models for speech and translation, vision, intelligence, lifelike animation and behavior, and realistic appearance. Companies across industries are using Maxine and ACE to deliver immersive virtual customer experiences. Meet James, a Digital Brand Ambassador. Built on top of NVIDIA NIM microservices, James is a virtual assistant that can provide contextually accurate responses. Using retrieval-augmented generation (RAG), James can accurately tell users about the latest NVIDIA technologies. ACE allows developers to use their own data to create domain-specific avatars that can communicate relevant information to customers. James is powered by the latest NVIDIA RTX rendering technologies for advanced, lifelike animations. His natural-sounding voice is powered by ElevenLabs. NVIDIA ACE lets developers customize animation, voice and language when building avatars tailored for different use cases. NVIDIA Maxine Enhances Digital Humans in Telepresence. Maxine, a platform for deploying cutting-edge AI features that enhance the audio and video quality of digital humans, enables the use of real-time, photorealistic 2D and 3D avatars with video-conferencing devices. Maxine 3D converts 2D video portrait inputs into 3D avatars, allowing the integration of highly realistic digital humans in video conferencing and other two-way communication applications. The technology will soon be available in early access. Audio2Face-2D, currently in early access, animates static portraits based on audio input, creating dynamic, speaking digital humans from a single image. Try the technology at ai.nvidia.com. Companies Embracing Digital Human Applications. HTC, Looking Glass, Reply and UneeQ are among the latest companies using NVIDIA ACE and Maxine across a broad range of use cases, including customer service agents, and telepresence experiences in entertainment, retail and hospitality. At SIGGRAPH, digital human technology developer UneeQ is showcasing two new demos. The first spotlights cloud-rendered digital humans powered by NVIDIA GPUs with local, in-browser computer vision for enhanced scalability and privacy, and animated using the Audio2Face-3D NVIDIA NIM microservice.
UneeQ's Synapse technology processes anonymized user data and feeds it to a large language model (LLM) for more accurate, responsive interactions. The second demo runs on a single NVIDIA RTX GPU-powered laptop, featuring an advanced digital human powered by Gemma 7B LLM, RAG and the NVIDIA Audio2Face-3D NIM microservice. Both demos showcase UneeQ's NVIDIA-powered efforts to develop digital humans that can react to users' facial expressions and actions, pushing the boundaries of realism in virtual customer service experiences. HTC Viverse has integrated the Audio2Face-3D NVIDIA NIM microservice into its VIVERSE AI agent for dynamic facial animation and lip sync, allowing for more natural and immersive user interactions. Hologram technology company Looking Glass' Magic Mirror demo at SIGGRAPH uses a simple camera setup and Maxine's advanced 3D AI capabilities to generate a real-time holographic feed of users' faces on its newly launched, group-viewable Looking Glass 16-inch and 32-inch Spatial Displays. Reply is unveiling an enhanced version of Futura, its cutting-edge digital human developed for Costa Crociere's Costa Smeralda cruise ship. Powered by Audio2Face-3D NVIDIA NIM and Riva ASR NIM microservices, Futura's speech-synthesis capabilities tap advanced technologies including GPT-4o, LlamaIndex for RAG and Microsoft Azure text-to-speech services. Futura also incorporates Reply's proprietary affective computing technology, alongside Hume AI and MorphCast, for comprehensive emotion recognition. Built using Unreal Engine 5.4.3 and MetaHuman Creator with NVIDIA ACE-powered facial animation, Futura supports six languages. The intelligent assistant can help plan personalized port visits, suggest tailored itineraries and facilitate tour bookings. In addition, Futura refines recommendations based on guest feedback and uses a specially created knowledge base to provide informative city presentations, enhancing tourist itineraries. Futura aims to enhance customer service and offer immersive interactions in real-world scenarios, leading to streamlined operations and driving business growth. Learn more about NVIDIA ACE and NVIDIA Maxine. Discover how accelerated computing and generative AI are transforming industries and creating new opportunities for innovation by watching NVIDIA founder and CEO Jensen Huang's fireside chats at SIGGRAPH. See notice regarding software product information.
Digital Assistance/Content Synthesis/Personalization
Sales and Related/Office and Administrative Support
null
null
null
null
null
null
news
fzliu
What Happened to Bert and T5?
Article URL: https://www.yitay.net/blog/model-architecture-blogpost-encoders-prefixlm-denoisingComments URL: https://news.ycombinator.com/item?id=41009803Points: 5# Comments: 0
https://www.yitay.net/blog/model-architecture-blogpost-encoders-prefixlm-denoising
http://static1.squarespace.com/static/636628d6a9d980577a9faf5b/t/6696ab3d88dbc84f00375267/1721150271478/?format=1500w
2024-07-19T18:54:26Z
A few months ago I was writing a long tweet-reply to this tweet by @srush_nlp. Then the tweet got deleted because I closed the tab by accident. ¯\_(ツ)_/¯ I promised to write it as a blog post some day. So here it is! This will be the first part of a series of blog posts I plan to write about model architectures in the era of LLMs (I hope). A quick primer (skip connection to the next section if you feel confident): There are mainly three overarching paradigms of model architectures from the past couple of years: encoder-only models (e.g., BERT), encoder-decoder models (e.g., T5) and decoder-only models (e.g., the GPT series). People get confused a lot about this and often have tons of misconceptions about these dichotomies and architectures, so I'm hoping this post will help. The first thing to really understand is that encoder-decoder models are actually still autoregressive models. A decoder in an encoder-decoder model is literally and fundamentally still a causal decoder. Instead of pre-filling a decoder model, some text can be offloaded to an encoder, which is then sent to the decoder via cross-attention. Yes, T5 models are also language models! A variant of this is the Prefix Language Model, or PrefixLM, architecture, which does almost the same thing minus the cross-attention (and some other small details like sharing weights between encoder and decoder and not having an encoder bottleneck). PrefixLMs are also sometimes known as non-causal decoders. In short, encoder-decoders, decoder-only models and PrefixLMs are not that different altogether! In the latest excellent lecture by Hyung Won, he masterfully gave explanations about the relationship between these models. You can check it out here. Good stuff. Meanwhile, encoder-only models such as the OG BERT do denoising differently (i.e., in-place) and, to some extent, rely on classification task heads to do anything useful with the base model after pretraining. The denoising objective was later adopted in models like T5 in an adapted style using the sequence-to-sequence format. To this end, it is worth noting that denoising in T5 is not exactly a new objective function per se (in the machine learning sense) but rather a data transformation across inputs, i.e., you can also train a span corruption objective with a causal decoder, by the way! People always assume encoder-decoder models have to be denoising models, partly because of the overly representative T5 model. However, this is not always true. You can train an encoder-decoder with a regular language modeling task (i.e., CLM). Conversely, you can also train a causal decoder with the span corruption task. As I've said earlier, this is mostly a data transformation. It is also worth noting that, generally speaking, an encoder-decoder of 2N parameters has the same compute cost as a decoder-only model of N parameters, which gives it a different FLOP-to-parameter-count ratio. This is just like model sparsity that is split across inputs and targets. This is nothing new, and I didn't come up with anything new here. It's already in the 2019 T5 paper and was reemphasized in the UL2 paper. For now, glad to get this out of the way. Now onto objectives. On the denoising objective (does it not work? does it not scale? is it too easy?): The denoising objective I'm referring to is any variation of the span corruption task. This is sometimes known as infilling or fill-in-the-blank. There are variations on how to express it (i.e., span length, randomness, sentinel tokens, etc.), but you get the gist.
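To make the "data transformation" point concrete, here is a minimal sketch of span corruption in the T5 style, where masked spans are replaced by sentinels in the input and the masked text is moved to the target. This is an illustration only: the sentinel naming mimics T5's convention, the spans are hand-picked rather than randomly sampled, and it is not the actual T5 preprocessing code. The same (input, target) pair could be fed to an encoder-decoder or simply concatenated into one sequence for a decoder-only model.

def span_corrupt(tokens, spans):
    # Mask each (start, length) span in the input with a sentinel token and
    # move the masked text to the target, prefixed by the same sentinel.
    inputs, targets, cursor = [], [], 0
    for i, (start, length) in enumerate(spans):
        sentinel = f"<extra_id_{i}>"
        inputs += tokens[cursor:start] + [sentinel]
        targets += [sentinel] + tokens[start:start + length]
        cursor = start + length
    inputs += tokens[cursor:]
    return inputs, targets

words = "the quick brown fox jumps over the lazy dog".split()
inp, tgt = span_corrupt(words, spans=[(2, 2), (5, 1)])
print(inp)  # ['the', 'quick', '<extra_id_0>', 'jumps', '<extra_id_1>', 'the', 'lazy', 'dog']
print(tgt)  # ['<extra_id_0>', 'brown', 'fox', '<extra_id_1>', 'over']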
While the denoising objective in BERT style models are mostly in-place, (e.g., classification head on top of mask tokens), the slightly more modern way is to do it T5-style, ala data transformation that can be processed by an encoder-decoder or decoder-only model. In such a data transformation, masked tokens are just moved to the back for the model for prediction.The primary goal of pretraining is to build a useful internal representation that can be aligned for downstream tasks in the most efficient and effective way possible. The better the internal representations, the easier to use these learned representations for anything useful later. The simple next word prediction  causal language modeling objective is known to do this very well and has served as the bread and butter of the LLM revolution. The question now at hand is whether the denoising objective is just as good.From publicly available information, we know that T5-11B works pretty well even after being aligned/SFT-ed (Flan-T5 XXLs MMLU score is 55+, which is more than decent for a model of this scale and of that time). Hence, we can make some conclusion that the transfer process (pretraining -> alignment) of the denoising objective works relatively reasonably well at this scale.My take is that denoising objectives are great but pretty insufficient as a standalone objective. A big drawback is because of a reason which we could call less loss exposure. In denoising objectives, only a small amount of tokens are being masked and gets learned as a result (i.e., taken into account in the loss). Conversely, in regular language modeling, this is close to 100%. This makes for pretty low sample efficiency per FLOP which makes denoising objectives hugely disadvantaged on a flop-basis comparison. Another drawback is that denoising objectives are more unnatural than regular language modeling since it reformats the input/output in a strange way, making them a little awkward for few-shot learning. (Its still possible to massage these models to do reasonably okay on few-shot tasks though). Hence, I believe denoising objectives should pretty much only be used as a complementary objective to regular language modeling.The early days of unification and why xBERTs went extinctThe gradual phasing out of BERT-like models was an interesting phase that not many people talk about these days. It was subtle. This could also explain why we dont see any mega large scale BERT models running around anymore. The reason? It was largely a matter of unification and shift in task/modeling paradigms. BERT-style models are cumbersome, but the real deprecation of BERT models was because people wanted to do all tasks at once, which led to a better way of doing denoising - using autoregressive models.During 2018-2021, there was an implicit paradigm shift of single task finetuning to massively multi-task models. This slowly gravitated us towards the unified SFT models that we see today that are universal and general purpose. It was simply so hard to do this with BERT. I dont think this has anything much to do with denoising at all. People simply found a way to re-express denoising pretraining tasks if they wanted to use such a model (i.e., T5) which made BERT-style models pretty much deprecated at this point because there is a strictly better alternative.To be even more concrete, encoder-decoder and decoder-only models were able to express multiple tasks at once without the need for task specific classification heads. 
For Encoder-Decoders, if the decoder was getting in the way, researchers and engineers also began to find out that yanking out the encoder performed just as competitive as a BERT encoder. Moreover, it also retains the same bidirectional attention benefit that made BERT competitive over GPT models at small (often production) scale. The value of denoising objectivesDenoising pretraining objective also learns to predict the next word in a similar way to regular language modeling. However, different from regular causal language modeling, a data transformation is applied to the sequence such that the model learns to fill in the blanks instead of simply predicting the naturally occuring left-to-right text. Notably, denoising objectives are sometimes also called infilling tasks that are sometimes mashed together into pretraining together with regular language modeling tasks.While the exact configuration and implementation details can vary, modern LLMs today may use a combination of language modeling and infilling in some capacity. It is actually interesting how this mixture of LM + infilling seemed to have been propagated around the same time (e.g., UL2, FIM, GLM, CM3), with many groups bringing their own flavor of this mixture in some way. On a side note, the largest publicly disclosed & reported model trained in this style is likely the PaLM-2 model.It is also worth noting that pretraining task mixtures could be also stacked sequentially and does not necessarily have to be mixed concurrently, i.e., Flan-T5 originally trains on 1T span corruption tokens and switches out to 100B tokens of prefix language modeling objective before flan instruction tuning. To some extent, this qualifies as a mixed denoised/LM objective model. To be clear, prefix language modeling objective (not to be confused with architecture) is simply casual language modeling with a split point randomly determined and sent to the input side (with no loss and non-casual masking). On a sides note, infilling could have originated from the world of code LLMs, where filling in the blank was more of a feature desired by coding applications. Meanwhile, UL2 was motivated more by unifying the class of tasks that denoising objectives and bidirectional LLMs do well on with inherently generative tasks (such as summarization or open-ended generation). An advantage of this autoregressive style of denoising shift to the back is that it allows the model to not only learn longer range dependencies but also implicitly benefit from non-explicit bidirectional attention (since you would already have seen the future in order to fill in the blank).Anecdotal experience is that denoising objectives learn representations that are better at certain classes of tasks, sometimes in a more sample efficient way. In the U-PaLM paper, we showed how a small amount of span corruption up-training changes behavior and emergence on a set of BIG-Bench tasks. On top of that, finetuning models trained with this objective generally result in better supervised fine-tuning models, especially at smaller scale.When it comes to single-task finetuning, you can see the OG PaLM-1 62B model gets defeated by a much smaller T5 model. Bidirectional attention + denoising objective packs a punch at a relatively small scale! 
Im sure many practitioners see this happen these days as well, especially in production.What about bidirectional attention?Bidirectional attention is an interesting inductive bias for language models - one that is commonly conflated with objectives and model backbones. The usefulness of inductive biases changes at different compute regions and could have different effects on scaling curves at different compute regions. That said, it could be true that bidirectional doesnt seem to matter that much at larger scales compared to smaller scales, or have different impacts on different tasks or modalities. For instance, PaliGemma makes use of PrefixLM architectures.As Hyung won also points out in his lecture, PrefixLM models (decoder-only models with bidirectional attention) also have an issue with caching and are an intrinsic drawback with this type of architecture. However, I think there are many ways to work around this flaw which is out of scope for this post.Pros and Cons of Encoder-Decoder architecturesEncoder-decoder architectures actually have some pros vs regular decoder-only models. The first case is where the encoder side is not restricted by a causal mask. To some extent, you can go crazy with the attention layers by doing aggressive pooling or any form of linear attention without worrying about the autoregressive design restriction. This is a good way to offload not so important context to an encoder. You can also make the encoder smaller, which is neat. One example of how encoder-decoder architecture was necessary was in Charformer, where we could go crazy on the encoder and mitigate the speed drawbacks of byte-level models. Encoder-side innovations allow quick wins without worrying about major resigns of the causal mask. Meanwhile, one negative point of the encoder-decoder versus PrefixLM is that inputs and targets have to have fixed allocated budgets. For instance, if the input budget is 1024 tokens, the encoder side has to be padded to this value which causes a lot of potential for wasted compute. Conversely, in PrefixLM, input and targets can just be directly concatenated which mitigates this problem.Relevance to models today and key takeawaysThe ability to reason with inductive biases both at architectural and from a pre-training perspective is a critical aspect of being a competent LLM researcher and practitioner today. Understanding the fundamental nuances helps one to extrapolate and continue to innovate.Here are my key takeaways:Encoder-decoder and decoder-only models are both autoregressive models that have implementation-level differences and pros/cons. They are subtly different inductive biases. Optimal usage really depends on downstream use-case and pretty much application constraints. Meanwhile, for most LLM usage and niche use-cases aside, BERT style encoder models are mostly considered deprecated.Denoising objectives are mostly complementary to CLM. They have just found their way as supporting objectives in pretraining. Training CLM with denoising objectives usually helps in some way. While this happens very frequently in code models (i.e., code infilling), it is not uncommon (though not mandatory) for general purpose models today still pretrain with CLM with some denoising objective.Bidirectional attention helps a lot at smaller scales but is generally optional at larger scales. This is mostly anecdotal. 
I see bidirectional attention as a form of inductive bias, just like many other types of modifications to Transformer models. Finally, to recap: we don't see any scaled-up xBERTs running around because BERT models got deprecated in favor of the more flexible, autoregressive form of denoising used in T5 models. This is largely due to paradigm unification, where people would like to perform any task with a general-purpose model (as opposed to a task-specific model). Meanwhile, autoregressive denoising sometimes gets folded in as a side objective for causal language models. Final words & Acknowledgements: It's really fun to write about model architecture. I hope to write more about LLMs and AI research in general. I thank Hyung Won Chung and Vinh Tran for feedback on this post.
Unknown
Unknown
null
null
null
null
null
null
news
arrakis-mi added to PyPI
A mechanistic interpretability library for nerds.
https://pypi.org/project/arrakis-mi/
https://pypi.org/static/…er.abaf4b19.webp
2024-07-15T10:40:59Z
Interpretability is a relatively new field where everyday something new is happening. Mechanistic Interpretbaility is one of the approach to reverse engineer neural networks and understand what is happening inside these black-box models.Mechanistic Interpretability is a really exciting subfield of alignment, and recently, a lot has been happening in this field - especially at Anthropic. To look at the goal of MI at Anthropic, read this post. The core operation involved in MI is loading a model, looking at it's weights and activations, and doing some operations on them and producing results.I made Arrakis to deeply understand Transformer based models(maybe in future I will try to be model agnostic). The first thought that should come to mind is Why not use Transformer Lens? Neel Nanda has already made significant progress in that. I made Arrakis as I wanted to have a library which can do more than just get the activations - I wanted a more complete library where researchers can do experiments, and track their progress. Think of Arrakis as a complete suite to conduct MI experiments, where I try to get the best of both Transformer Lens and Garcon. More features will be added as I understand how to made this library more useful for the community, and I need feedback for that.Tools and DecomposibilityRegardless of what research project you are working on, if you are not keeping track of things, it gets messy really easily. In a field like MI, where you are constantly looking at all different weights and biases, and there are a lot of moving parts - it gets overwhelming fairly easily. I've experienced this personally, and being someone who is obsessed with reducing experimentation time and get results quickly, I wanted to have a complete suite which makes my workload easy.Arrakis is made so that this doesn't happen. The core principle behind Arrakis is decomposibility. Do all experiments with plug-and-play tools(will be much clear in the walkthrough). This makes experimentation really flexible, and at the same time, Arrakis keeps track of different versions of the experiments by default. Everything in Arrakis is made in this plug and play fashion. I have even incorporated a graphing library(on top of several popular libraries) to make graphing a lot easier.I really want feedback and contributions on this project so that this can be adapted by the community at large.Arrakis WalkthroughLet's understand how to conduct a small experiment in Arrakis. It is easy, reprodcible and a lot easy to implement.Step 1: Install the packageAll the dependencies of the project are maintained through poetry.pipinstallarrakis-miStep 2: Create HookedAutoModelHookedAutoModel offers a convinient way to import models from Huggingface directly(with Hooks). Everything just works out of the box. First, create a HookedConfig for the model you want to support with the required parameters. Then, create a HookedAutoModel from the config. As of now, these models are supported :["gpt2","gpt-neo","gpt-neox","llama","gemma","phi3","qwen2","mistral","stable-lm",]As mentioned, the core idea behind Arrkis is decompsibility, so a HookedAutoModel is a wrapper around Huggingface PreTrainedModel class, with a single plug and play decorator for the forward pass. 
All the model probing happens behind the scenes, and is pre-configured.fromarraki.src.core_arrakis.activation_cacheimport*config=HookedAutoConfig(name="llama",vocab_size=50256,hidden_size=8,intermediate_size=2,num_hidden_layers=4,num_attention_heads=4,num_key_value_heads=4)model=HookedAutoModel(config)Step 3: Set up Interpretability BenchAt it's core, the whole purpose of Arrakis is to conduct MI experiment. After installing, derive from the BaseInterpretabilityBench and instantiate an object(exp in this case). This object provides a lot of function out-of the box based on the "tool" you want to use for the experiment, and have access to the functions that the tool provides. You can also create your own tool(read about that here )fromarrakis.src.core_arrakis.base_benchimportBaseInterpretabilityBenchclassMIExperiment(BaseInterpretabilityBench):def__init__(self,model,save_dir="experiments"):super().__init__(model,save_dir)self.tools.update({"custom":CustomFunction(model)})exp=MIExperiment(model)Apart from access to MI tools, the object also provies you a convinient way to log your experiments. To log your experiments, just decorate the function you are working with @exp.log_experiment, and that is pretty much it. The function creates a local version control on the contents of the function, and stores it locally. You can run many things in parallel, and the version control helps you keep track of it.# Step1: Create a function where you can do operations on the [email protected]_experiment# This is pretty much it. This will log the experiment.defattention_experiment():print("This is a placeholder for the experiment. Use as is.")return4# Step 2: Then, you run the function, get results. This starts the experiment.attention_experiment()# Step 3: Then, we will look at some of the things that logs keep a track ofl=exp.list_versions("attention_experiment")# This gives the hash of the content of the experiment.print("This is the version hash of the experiment: ",l)# Step 4: You can also get the content of the experiment from the saved json.print(exp.get_version("attention_experiment",l[0])['source'])# This gives the content of the experiment.Apart from these tools, there are also @exp.profile_model(to profile how much resources the model is using) and @exp.test_hypothesis(to test hypothesis). Support of more tools will be added as I get more feedback from the community.Step 4: Create you experimentsBy default, Arrakis provides a lot of Anthropic's interpretability experiments(Monosemanticity, Residual Decomposition, Read Write Analysis and a lot more). These are provided as tools, so in your experiments, you can plug and play with them and conduct your experiments. Here's an example of how you can do that.# Making functions for Arrakis to use is pretty easy. Let's look it in action.# Step 1: Create a function where you can do operations on the model. Think of all the tools you might need for it.# Step 2: Use the @exp.use_tools decorator on it, with additional arg of the tool.# Step 3: The extra argument gives you access to the function. 
[email protected]_tools("write_read")# use the `exp.use_tools()` decorator.defread_write_analysis(read_layer_idx,write_layer_idx,src_idx,write_read=None):# pass an additional argument.# Multi-hop attention (write-read)# use the extra argument as a tool.write_heads=write_read.identify_write_heads(read_layer_idx)read_heads=write_read.identify_read_heads(write_layer_idx,dim_idx=src_idx)return{"write_heads":write_heads,"read_heads":read_heads}print(read_write_analysis(0,1,0))# Perfecto!Step 5: Visualize the ResultsGenerating plots is Arrakis is also plu and play, just add the decorator and plots are generated by default. Read more about the graphing docs herefromarrakis.src.graph.base_graphimport*# Step 1: Create a function where you can want to draw plot.# Step2: Use the @exp.plot_results decorator on it(set the plotting lib), with additional arg of the plot spec. Pass input_ids here as well(have to think on this)# Step3: The extra argument gives you access to the fig. Done.exp.set_plotting_lib(MatplotlibWrapper)# Set the plotting [email protected]_results(PlotSpec(plot_type="attention",data_keys="h.1.attn.c_attn"),input_ids=input_ids)# use the `exp.plot_results()` decorator.defattention_heatmap(fig=None):# pass an additional argument.returnfigattention_heatmap()# Done.plt.show()These are three upper level classes in Arrakis. One is the InterpretabilityBench where you conduct experiments, the second is the core_arrakis where I've implemented some common tests for Transformer based model and the third is the Graphing.List of ToolsThere is a lot of what's happening inside the core_arrakis. There are a lot of tools that we can use, which we'll deal with one by one. We'll understand what they do and how to use Arrakis to test them. These tools are supported as of now(please contribute more!)Go to their respective pages and read about what they mean and how to use Arrakis to conduct experiments.Extending ArrakisApart from all of these tool, it is easy to develop tools on your own which you can use for your experiment. These are the steps to do so:Step 1: Make a class which inherits from the BaseInterpretabilityToolfromarrakis.src.core_arrakis.base_interpretimportBaseInterpretabilityToolclassCustomTool(BaseInterpretabilityTool):def__init__(self,model):super().__init__(model)self.model=modeldefcustom_function(self,*args,**kwargs):# do some computationspassdefanother_custom_function(self,*args,**kwargs):# do another calcualtionspassThe attribute model is a wrapper around Huggingface PreTrainedModel with many additional features which makes easier for experimentation purposes. The reference for model is given here. Write your function that utilizes the ActivationCache and get the intermediate activations.Step 2: In the derived class from BaseInterpretabilityBench, add your custom tool in the following manner.fromsrc.bench.base_benchimportBaseInteroretabilityBench# Import the custom tool here.classExperimentBench(BaseInterpretabilityBench):def__init__(self,model,save_dir="experiments"):super().__init__(model,save_dir)self.tools.update({"custom":CustomTool(model)})exp=ExperimentBench(model)# where model is an instance of HookedAutoModelAnd that is pretty much it. Now, in order to use it in a function, just do the following:@exp.use_tools("custom")deftest_custom_function(args,kwargs,custom):# the final argument should be the same name as the tool key. custom.custom_function()custom.another_custom_function()test_custom_function(args,kwargs)Adding your own tool is really easy in Arrakis. 
Read the API reference guide to see how to implement your own functions. Open a PR for tools that are not implemented and I can add it quickly.How to Start?For just starting out, consider going through the files demo.ipynb to get an overview of the library, test_graphs.py and test_new_model.py to test the model and the graphs(run from the command line)
Unknown
Computer and Mathematical
null
null
null
null
null
null
news
danielhanchen
Show HN: Finetune Llama-3.1 2x faster in a Colab
Just added Llama-3.1 support! Unsloth https://github.com/unslothai/unsloth makes finetuning Llama, Mistral, Gemma & Phi 2x faster, and use 50 to 70% less VRAM with no accuracy degradation.There's a custom backprop engine which reduces actual FLOPs, and all kernels are written in OpenAI's Triton language to reduce data movement.Also have an 2x faster inference only notebook in a free Colab as well! https://colab.research.google.com/drive/1T-YBVfnphoVc8E2E854...Comments URL: https://news.ycombinator.com/item?id=41050468Points: 1# Comments: 0
https://colab.research.google.com/drive/1Ys44kVvmeZtnICzWz0xgpRnrIOjZAuxp?usp=sharing
https://colab.research.g…avicon_256px.png
2024-07-23T20:30:20Z
null
Unknown
Unknown
null
null
null
null
null
null
news
ozgrozer
Show HN: AI Renamer (Rename files and images by their content)
A Node.js CLI tool that uses Ollama models (Llama, Gemma, Llava etc.) to intelligently rename files and images in a specified directoryComments URL: https://news.ycombinator.com/item?id=40907877Points: 1# Comments: 1
https://github.com/ozgrozer/ai-renamer
https://opengraph.githubassets.com/7f01d896c4878e97c19c7426371bfa725a42f6e881594436608a607cfb3e1d2f/ozgrozer/ai-renamer
2024-07-08T18:22:30Z
A Node.js CLI tool that uses Ollama models (Llama, Gemma, Phi etc.) to intelligently rename files in a specified directory. Rename images (demo video: ai-renamer-images-1080p.mp4). Rename files (demo video: ai-renamer-1080p.mov). You need to have Ollama and at least one LLM (Llama, Gemma etc.) installed on your system. Run with NPX / Run with NPM: # Install it globally: npm install -g ai-renamer; # Run it: ai-renamer /path. npx ai-renamer --help. Options: -h, --help Show help [boolean]; --version Show version number [boolean]; -c, --set-case Set the case style (e.g. camelCase, pascalCase, snakeCase, kebabCase) [string]; -m, --set-model Set the Ollama model to use (e.g. gemma2, llama3) [string]; -x, --set-chars Set the maximum number of characters in the new filename (e.g. 25) [number]; -l, --set-language Set the output language (e.g. English, Turkish) [string]. To get the model name to use in --set-model. ai-renamer uses the change-case library for case styling. # value: result; camelCase: twoWords; capitalCase: Two Words; constantCase: TWO_WORDS; dotCase: two.words; kebabCase: two-words; noCase: two words; pascalCase: TwoWords; pascalSnakeCase: Two_Words; pathCase: two/words; sentenceCase: Two words; snakeCase: two_words; trainCase: Two-Words. Feel free to contribute. Open a new issue, or make a pull request. GPL-3.0
Content Creation/Process Automation
Unknown
null
null
null
null
null
null
news
Sophia Melanson Ricciardone
How AI bots spread misinformation online and undermine democratic politics
Consider a typical morning routine: coffee in hand, you peruse Twitter (now rebranded as X) to catch up on the news. Headlines appear among a flurry of tweets on everything from memes about political leaders to cultural Marxism, free speech, making America great again and draining the swamp.
https://phys.org/news/2024-07-ai-bots-misinformation-online-undermine.html
https://scx2.b-cdn.net/g…23/twitter-x.jpg
2024-07-25T15:20:01Z
Consider a typical morning routine: coffee in hand, you peruse Twitter (now rebranded as X) to catch up on the news. Headlines appear among a flurry of tweets on everything from memes about political leaders to cultural Marxism, free speech, making America great again and draining the swamp. Before your day has even begun, a burst of disparate ideas coalesces in your mind in response to the appearance of a single word or catchphrase. It's a scenario repeated daily, where snippets of information mold themselves onto our views and biases, influencing how we interpret online discourse and those who engage in it. In the heated space of contemporary politics, popularized words and catchphrases wield a lot of influence. Controversial rallying cries like "build the wall" and "Trudeau must go" regularly appear on social media, punctuating debates and discourse with an emotionally palpable fervor. These phrases are more than mere words; they are ideological shorthand that seeks to galvanize people and spark outrage online like never before. But, in our increasingly digitized world, how do we know whether the accounts we interact with online are other human beings or bots? And given the powerful influence this kind of rhetoric can have, what impact do these bots have on our decision-making and democratic processes? AI bots. My Ph.D. research focused on the rise of "botaganda": online content circulated by automated accounts, or bots, for electioneering purposes. Bots are automated accounts on social media that can be used to post tweets, like and share content or follow users on social media without needing a person to do it manually. Scholars have highlighted how bots "could be used to covertly exploit weaknesses in [a person's] character and persuade them to take action against their own best interest." The advent of artificial intelligence and machine learning has certainly equipped us with several advantages in contemporary life, but it has also made independent political thought much harder to achieve. It is increasing the prevalence of digital misinformation, and demands that we exercise vigilance to ensure we can make informed decisions. Understanding the social psychology that makes us susceptible to catchphrases like "drain the swamp" is integral to combating the impact of misinformation circulated online. Our social brains are susceptible to these kinds of linguistic triggers in three important ways: humans mimic and synchronize communication styles when interacting; we are more likely to remember frequently repeated words; and our brains are more likely to connect unrelated ideas when words or phrases frequently recur together over and over again in the same context. When we unwittingly engage with bots that repeatedly use certain phrases and terms, this subtly reinforces their association and impact. What I found. I conducted statistical analysis on how bot-generated content influenced Canadian Twitter users during the SNC Lavalin scandal from March 14 to April 9, 2019. My study found strong correlations between bot-generated and human tweets, suggesting people engaged closely with AI-generated content and stored it in memory for easy retrieval and recall. My analysis shows that bot-circulated tweets shared a high degree of similarity with human-generated tweets. The similarity in the emotional salience of bot-generated and human-generated tweets was significantly pronounced. I first used Spearman's phi coefficient, a statistical tool, to measure how strongly bot tweets related with human tweets.
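As a rough illustration of what a rank-correlation check between bot and human tweet features might look like, here is a minimal sketch; the data, variable names and library choice are illustrative assumptions, not the study's actual code or values.

from scipy.stats import spearmanr

# Hypothetical daily counts of an emotionally charged catchphrase in bot vs. human tweets
bot_counts   = [12, 18, 25, 31, 40, 38, 45]
human_counts = [10, 20, 22, 35, 41, 36, 50]

rho, p_value = spearmanr(bot_counts, human_counts)
print(f"Spearman rank correlation: {rho:.2f} (p = {p_value:.4f})")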
I then applied linear regression to understand this relationship in more detail and to see if changes in bot tweets affected changes in human tweets.The results show there is a strong correlation between bot and human tweets, and that the content of bot tweets significantly influences linguistic aspects of human-generated tweets. In simpler terms, human tweets replicated bot tweets to a high degree.On March 14, 2019, for instance, bot tweets shared 75 percent similarity with human tweets, which increased to 92 percent similarity by March 28. The emotional words used in bot tweets were reproduced in human tweets just over 97 percent of the time on March 14. Though the reproduction of emotional words in human tweets decreased over time, the similarity remained significant. This underscores how closely bot- and human-generated content can mirror each other.Sometimes, bots simply mirror human activity because machine learning and generative AI technology are designed to replicate our behaviors. However, my analysis aims to understand not just how similar bot tweets and human tweets are, but also which one influences the other and in what way.My examination of prevalent words and phrases like "obstruction of justice," "Trudeau's scandal" and "Liberal coverup" propagated by bots were replicated at high frequency in human-generated tweets, both in unique and retweeted tweets.Social mimicryThis study's findings support the idea that we are inclined to mimic language structures of tweets posted by bots, which gains traction among users engaged in a resulting ecology of tweets posted both by peers and computer programs, a phenomenon observed in human-computer interactions.Research underscores our innate tendency to mimic the communication patterns of those around us, particularly those who share our values and beliefsa phenomenon also evident in our digital interactions. This inclination shapes how we express ourselves online, influencing our language, intonation and even how we make arguments.Popular words and catchphrases that appear at high frequency within online contexts help our brains organize and remember large amounts of information more easily.But how do these phrases achieve such resonance?We know that repeatedly hearing specific words or phrases significantly enhances our ability to recall them more effortlessly, especially when they consistently appear within the same context.In the landscape of our social brains, single words and phrases don't carry much weight by themselves. What makes them powerful tools for spreading misinformation is their knack for evoking mental connections to familiar ideas, shaping how we understand current political developments.Consider this: when several disparate concepts are cleverly woven into a memorable catchy phrase, they merge those underlying concepts in our minds, making them feel logical and familiar. For example: "Make America Great Again."This fusion is especially potent when these ideas strike a chord with our core values and beliefs, making catchphrases highly effective at molding public opinion in profound ways.As we navigate this era of digital discourse, awareness of blind spots in our social psychology is our best defense. Understanding how cues or triggers affect us can reduce their influence over time. 
The more aware we are of bots and how they work, the more able we are to protect ourselves from misleading rhetoric, ensuring our democratic processes remain robust and inclusive.Provided byThe ConversationThis article is republished from The Conversation under a Creative Commons license. Read the original article.Citation:How AI bots spread misinformation online and undermine democratic politics (2024, July 25)retrieved 25 July 2024from https://phys.org/news/2024-07-ai-bots-misinformation-online-undermine.htmlThis document is subject to copyright. Apart from any fair dealing for the purpose of private study or research, no part may be reproduced without the written permission. The content is provided for information purposes only.
Detection and Monitoring/Information Retrieval Or Search
Life, Physical, and Social Science/Education, Training, and Library
null
null
null
null
null
null
news
JosefAlbers
Show HN: Phi-3-MLX – Language and Vision Models for Apple Silicon
Phi-3-MLX is an open-source framework that brings the latest Phi-3 models to Apple Silicon using the MLX framework. It supports both the Phi-3-Mini-128K language model (updated July 2, 2024) and the Phi-3-Vision multimodal model, enabling a wide range of AI applications.Key features:1. Apple Silicon Optimization: Leverages MLX for efficient execution on Apple hardware.2. Flexible Model Usage: - Phi-3-Mini-128K for language tasks - Phi-3-Vision for multimodal capabilities - Seamless switching between language-only and multimodal tasks3. Advanced Generation Techniques: - Batched generation for multiple prompts - Constrained (beam search) decoding for structured outputs4. Customization Options: - Model and cache quantization for resource optimization - (Q)LoRA fine-tuning for task-specific adaptation5. Versatile Agent System: - Multi-turn conversations - Code generation and execution - External API integration (e.g., image generation, text-to-speech)6. Extensible Toolchains: - In-context learning - Retrieval Augmented Generation (RAG) - Multi-agent interactionsThe framework's flexibility unlocks new potential for AI development on Apple Silicon. Some unique aspects include:- Easy switching between language-only and multimodal tasks- Custom toolchains for specialized workflows- Integration with external APIs for extended functionalityPhi-3-MLX aims to provide a user-friendly interface for a wide range of AI tasks, from text generation to visual question answering and beyond.GitHub: https://github.com/JosefAlbers/Phi-3-Vision-MLXDocumentation: https://josefalbers.github.io/Phi-3-Vision-MLX/I would love to hear your thoughts on potential applications for this framework and any suggestions for additional features or integrations.Comments URL: https://news.ycombinator.com/item?id=41002393Points: 1# Comments: 0
https://github.com/JosefAlbers/Phi-3-Vision-MLX
https://opengraph.githubassets.com/b46de8e824710d5d2ae836679e287ce131e37a328d02e394e4fe49508ff8729d/JosefAlbers/Phi-3-Vision-MLX
2024-07-19T06:17:07Z
Phi-3-MLX is a versatile AI framework that leverages both the Phi-3-Vision multimodal model and the recently updated (July 2, 2024) Phi-3-Mini-128K language model, optimized for Apple Silicon using the MLX framework. This project provides an easy-to-use interface for a wide range of AI tasks, from advanced text generation to visual question answering and code execution.Support for the newly updated Phi-3-Mini-128K (language-only) modelIntegration with Phi-3-Vision (multimodal) modelOptimized performance on Apple Silicon using MLXBatched generation for processing multiple promptsFlexible agent system for various AI tasksCustom toolchains for specialized workflowsModel quantization for improved efficiencyLoRA fine-tuning capabilitiesAPI integration for extended functionality (e.g., image generation, text-to-speech)Install and launch Phi-3-MLX from command line:pip install phi-3-vision-mlxphi3vTo instead use the library in a Python script:fromphi_3_vision_mlximportgenerategenerate('What is shown in this image?', 'https://collectionapi.metmuseum.org/api/collection/v1/iiif/344291/725918/main-image')# A list of prompts for batch generationprompts= [ "Explain the key concepts of quantum computing and provide a Rust code example demonstrating quantum superposition.", "Write a poem about the first snowfall of the year.", "Summarize the major events of the French Revolution.", "Describe a bustling alien marketplace on a distant planet with unique goods and creatures.""Implement a basic encryption algorithm in Python.",]# Generate responses using Phi-3-Vision (multimodal model)generate(prompts, max_tokens=100)# Generate responses using Phi-3-Mini-128K (language-only model)generate(prompts, max_tokens=100, blind_model=True)# Model quantizationgenerate("Describe the water cycle.", quantize_model=True)# Cache quantizationgenerate("Explain quantum computing.", quantize_cache=True)The constrain function allows for structured generation, which can be useful for tasks like code generation, function calling, chain-of-thought prompting, or multiple-choice question answering.fromphi_3_vision_mlximportconstrain# Define the promptprompt="Write a Python function to calculate the Fibonacci sequence up to a given number n."# Define constraintsconstraints= [ (100, "\n```python\n"), # Start of code block (100, " return "), # Ensure a return statement (200, "\n```")], # End of code block# Apply constrained decoding using the 'constrain' function from phi_3_vision_mlx.constrain(prompt, constraints)The constrain function can also guide the model to provide reasoning before concluding with an answer. This approach can be especially helpful for multiple-choice questions, such as those in the Massive Multitask Language Understanding (MMLU) benchmark, where the model's thought process is as crucial as its final selection.prompts= [ "A 20-year-old woman presents with menorrhagia for the past several years. She says that her menses have always been heavy, and she has experienced easy bruising for as long as she can remember. Family history is significant for her mother, who had similar problems with bruising easily. The patient's vital signs include: heart rate 98/min, respiratory rate 14/min, temperature 36.1°C (96.9°F), and blood pressure 110/87 mm Hg. Physical examination is unremarkable. Laboratory tests show the following: platelet count 200,000/mm3, PT 12 seconds, and PTT 43 seconds. Which of the following is the most likely cause of this patients symptoms? 
A: Factor V Leiden B: Hemophilia A C: Lupus anticoagulant D: Protein C deficiency E: Von Willebrand disease", "A 25-year-old primigravida presents to her physician for a routine prenatal visit. She is at 34 weeks gestation, as confirmed by an ultrasound examination. She has no complaints, but notes that the new shoes she bought 2 weeks ago do not fit anymore. The course of her pregnancy has been uneventful and she has been compliant with the recommended prenatal care. Her medical history is unremarkable. She has a 15-pound weight gain since the last visit 3 weeks ago. Her vital signs are as follows: blood pressure, 148/90 mm Hg; heart rate, 88/min; respiratory rate, 16/min; and temperature, 36.6 (97.9). The blood pressure on repeat assessment 4 hours later is 151/90 mm Hg. The fetal heart rate is 151/min. The physical examination is significant for 2+ pitting edema of the lower extremity. Which of the following tests o should confirm the probable condition of this patient? A: Bilirubin assessment B: Coagulation studies C: Hematocrit assessment D: Leukocyte count with differential E: 24-hour urine protein"]constrain(prompts, constraints=[(30, ' The correct answer is'), (10, 'X.')], blind_model=True, quantize_model=True)The constraints encourage a structured response that includes the thought process, making the output more informative and transparent:< Generated text for prompt #0 >The most likely cause of this patient's menorrhagia and easy bruising is E: Von Willebrand disease. The correct answer is Von Willebrand disease.< Generated text for prompt #1 >The patient's hypertension, edema, and weight gain are concerning for preeclampsia. The correct answer is E: 24-hour urine protein.The choose function provides a straightforward way to select the best option from a set of choices for a given prompt. This is particularly useful for multiple-choice questions or decision-making scenarios.fromphi_3_vision_mlximportchooseprompt="What is the capital of France? A: London B: Berlin C: Paris D: Madrid E: Rome"result=choose(prompt)print(result) # Output: 'C'# Using with custom choicescustom_prompt="Which color is associated with stopping at traffic lights? R: Red Y: Yellow G: Green"custom_result=choose(custom_prompt, choices='RYG')print(custom_result) # Output: 'R'# Batch processingprompts= [ "What is the largest planet in our solar system? A: Earth B: Mars C: Jupiter D: Saturn", "Which element has the chemical symbol 'O'? 
A: Osmium B: Oxygen C: Gold D: Silver"]batch_results=choose(prompts)print(batch_results) # Output: ['C', 'B']Training a LoRA Adapterfromphi_3_vision_mlximporttrain_loratrain_lora( lora_layers=5, # Number of layers to apply LoRAlora_rank=16, # Rank of the LoRA adaptationepochs=10, # Number of training epochslr=1e-4, # Learning ratewarmup=0.5, # Fraction of steps for learning rate warmupdataset_path="JosefAlbers/akemiH_MedQA_Reason")Generating Text with LoRAgenerate("Describe the potential applications of CRISPR gene editing in medicine.", blind_model=True, quantize_model=True, use_adapter=True)Comparing LoRA Adaptersfromphi_3_vision_mlximporttest_lora# Test model without LoRA adaptertest_lora(adapter_path=None)# Output score: 0.6 (6/10)# Test model with the trained LoRA adapter (using default path)test_lora(adapter_path=True)# Output score: 0.8 (8/10)# Test model with a specific LoRA adapter pathtest_lora(adapter_path="/path/to/your/lora/adapter")fromphi_3_vision_mlximportAgent# Create an instance of the Agentagent=Agent()# First interaction: Analyze an imageagent('Analyze this image and describe the architectural style:', 'https://images.metmuseum.org/CRDImages/rl/original/DP-19531-075.jpg')# Second interaction: Follow-up questionagent('What historical period does this architecture likely belong to?')# End the conversation: This clears the agent's memory and prepares it for a new conversationagent.end()# Ask the agent to generate and execute code to create a plotagent('Plot a Lissajous Curve.')# Ask the agent to modify the generated code and create a new plotagent('Modify the code to plot 3:4 frequency')agent.end()# Request the agent to generate an imageagent('Draw "A perfectly red apple, 32k HDR, studio lighting"')agent.end()# Request the agent to convert text to speechagent('Speak "People say nothing is impossible, but I do nothing every day."')agent.end()fromphi_3_vision_mlximportadd_text# Define the toolchain as a stringtoolchain=""" prompt = add_text(prompt) responses = generate(prompt, images) """# Create an Agent instance with the custom toolchainagent=Agent(toolchain, early_stop=100)# Run the agentagent('How to inspect API endpoints? @https://raw.githubusercontent.com/gradio-app/gradio/main/guides/08_gradio-clients-and-lite/01_getting-started-with-the-python-client.md')fromphi_3_vision_mlximportVDBimportdatasets# Simulate user inputuser_input='Comparison of Sortino Ratio for Bitcoin and Ethereum.'# Create a custom RAG tooldefrag(prompt, repo_id="JosefAlbers/sharegpt_python_mlx", n_topk=1): ds=datasets.load_dataset(repo_id, split='train') vdb=VDB(ds) context=vdb(prompt, n_topk)[0][0] returnf'{context}\n<|end|>\n<|user|>\nPlot: {prompt}'# Define the toolchaintoolchain_plot=""" prompt = rag(prompt) responses = generate(prompt, images) files = execute(responses, step) """# Create an Agent instance with the RAG toolchainagent=Agent(toolchain_plot, False)# Run the agent with the user input_, images=agent(user_input)# Continued from Example 2 aboveagent_writer=Agent(early_stop=100)agent_writer(f'Write a stock analysis report on: {user_input}', images)fromphi_3_vision_mlximportbenchmarkbenchmark()TaskVanilla ModelQuantized ModelQuantized CacheLoRA AdapterText Generation8.46 tps51.69 tps6.94 tps8.58 tpsImage Captioning7.72 tps33.10 tps1.75 tps7.11 tpsBatched Generation103.47 tps182.83 tps38.72 tps101.02 tps(On M1 Max 64GB)API references and additional information are available at:https://josefalbers.github.io/Phi-3-Vision-MLX/This project is licensed under the MIT License.
Content Creation/Content Synthesis/Prediction/Process Automation
Unknown
null
null
null
null
null
null
news
Tobias Mann
A friendly guide to containerization for AI work
Save the headaches, ship your dependencies Hands on  One of the biggest headaches associated with AI workloads is wrangling all of the drivers, runtimes, libraries, and other dependencies they need to run.…
https://www.theregister.com/2024/07/07/containerize_ai_apps/
https://regmedia.co.uk/2…ntainer_ship.jpg
2024-07-07T15:37:14Z
Hands on One of the biggest headaches associated with AI workloads is wrangling all of the drivers, runtimes, libraries, and other dependencies they need to run.This is especially true for hardware-accelerated tasks where if you've got the wrong version of CUDA, ROCm, or PyTorch there's a good chance you'll be left scratching your head while staring at an error.If that weren't bad enough, some AI projects and apps may have conflicting dependencies, while different operating systems may not support the packages you need. However, by containerizing these environments we can avoid a lot of this mess by building images that have been configured specifically to a task and - perhaps more importantly - be deployed in a consistent and repeatable manner each time.And because the containers are largely isolated from one another, you can usually have apps running with conflicting software stacks. For example you can have two containers, one with CUDA 11 and the other with 12, running at the same time.This is one of the reasons that chipmakers often make containerized versions of their accelerated-computing software libraries available to users since it offers a consistent starting point for development.In this tutorial we'll be looking at a variety of ways that containerization can be used to assist in the development and/or deployment of AI workloads whether they be CPU or GPU accelerated.This guide assumes that you:Are running on Ubuntu 24.04 LTS (Other distros should work, but your mileage may vary).Have the latest release of Docker Engine installed and a basic understanding of the container runtime.Are running Nvidia's proprietary drivers, if applicable. While there are a number of container environments and runtimes, we'll be looking specifically at Docker for its simplicity and broad compatibility. Having said that, many of the concepts shown here will apply to other containerization runtimes such as Podman, although the execution may be a little different.Unlike virtual machines, you can pass your GPU through to as many containers as you like, and so long as you don't exceed the available vRAM you shouldn't have an issue.For those with Intel or AMD GPUs, the process couldn't be simpler and simply involves passing the right flags when spinning up our container.For example, let's say we want to make your Intel GPU available to an Ubuntu 22.04 container. You'd append --device /dev/dri to the docker run command. Assuming you're on a bare metal system with an Intel GPU, you'd run something like:docker run -it --rm --device /dev/dri ubuntu:22.04Meanwhile, for AMD GPUs you'd append --device /dev/kfddocker run -it --rm --device /dev/kfd --device /dev/dri ubuntu:22.04Note: Depending on your system you'll probably need to run this command with elevated privileges using sudo docker run or in some cases doas docker run.If you happen to be running one of Team Green's cards, you'll need to install the Nvidia Container Toolkit before you can expose it to your Docker containers.To get started, we'll add the software repository for the toolkit to our sources list and refresh Apt.
(You can see Nvidia's docs for instructions on installing on RHEL and SUSE-based distros here.)curl -fsSL https://nvidia.github.io/libnvidia-container/gpgkey | sudo gpg --dearmor -o /usr/share/keyrings/nvidia-container-toolkit-keyring.gpg \ && curl -s -L https://nvidia.github.io/libnvidia-container/stable/deb/nvidia-container-toolkit.list | \ sed 's#deb https://#deb [signed-by=/usr/share/keyrings/nvidia-container-toolkit-keyring.gpg] https://#g' | \ sudo tee /etc/apt/sources.list.d/nvidia-container-toolkit.listsudo apt updateNow we can install the container runtime and configure Docker to use it.sudo apt install -y nvidia-container-toolkitWith the container toolkit installed, we just need to tell Docker to use the Nvidia runtime by editing the /etc/docker/daemon.json file. To do this, we can simply execute the following:sudo nvidia-ctk runtime configure --runtime=dockerThe last step is to restart the docker daemon and test that everything is working by launching a container with the --gpus=all flag.sudo systemctl restart dockerdocker run -it --rm --gpus=all ubuntu:22.04Note: If you have multiple GPUs you can specify which ones to expose by using the gpus=1 or gpus '"device=1,3,4"' flags.Inside the container, you can then run nvidia-smi and you should see something similar appear on your screen.+-----------------------------------------------------------------------------------------+| NVIDIA-SMI 550.90.07 Driver Version: 550.90.07 CUDA Version: 12.4 ||-----------------------------------------+------------------------+----------------------+| GPU Name Persistence-M | Bus-Id Disp.A | Volatile Uncorr. ECC || Fan Temp Perf Pwr:Usage/Cap | Memory-Usage | GPU-Util Compute M. || | | MIG M. ||=========================================+========================+======================|| 0 NVIDIA RTX 6000 Ada Gene... Off | 00000000:06:10.0 Off | Off || 30% 29C P8 9W / 300W | 8045MiB / 49140MiB | 0% Default || | | N/A |+-----------------------------------------+------------------------+----------------------++-----------------------------------------------------------------------------------------+| Processes: || GPU GI CI PID Type Process name GPU Memory || ID ID Usage ||=========================================================================================|| 0 N/A N/A 941 C python3 7506MiB || 0 N/A N/A 40598 C /usr/local/bin/python3 528MiB |+-----------------------------------------------------------------------------------------+One of the most useful applications of Docker containers when working with AI software libraries and models is as a development environment. This is because you can spin up as many containers as you need and tear them down when you're done without worrying about borking your system.Now, you can just spin up a base image of your distro of choice, expose our GPU to it, and start installing CUDA, ROCm, PyTorch, or Tensorflow. 
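Once you've installed a framework such as PyTorch inside one of these containers, a quick way to confirm that it can actually see the GPU is a short check like the one below. This is a minimal sketch and assumes a CUDA (or ROCm) build of PyTorch is already installed in the container; it isn't part of the original walkthrough.

import torch

# True if the framework can see a GPU from inside the container
print(torch.cuda.is_available())
if torch.cuda.is_available():
    # Reports the device name exposed by the driver (ROCm builds of PyTorch
    # surface AMD GPUs through the same torch.cuda interface)
    print(torch.cuda.get_device_name(0))

If this prints False even though nvidia-smi works, the usual suspects are a CPU-only PyTorch wheel or a missing --gpus/--device flag on the docker run command.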
For example, to create a basic GPU-accelerated Ubuntu container you'd run the following (remember to change the --gpus or --device flag appropriately) to create and then access the container.docker run -itd --gpus=all -p 8081:80 -v ~/denv:/home/denv --name GPUtainer ubuntu:22.04docker exec -it GPUtainer /bin/bashThis will create a new Ubuntu 22.04 container named GPUtainer that:Has access to your Nvidia GPUExposes port 80 on the container as port 8081 on your hostMounts /home/denv in container as a denv folder in your host's home directory for easy file transferContinues running after you exitWhile building up a container from scratch with CUDA, ROCm, or OpenVINO can be useful at times, it's also rather tedious and time consuming, especially when there are prebuilt images out there that'll do most of the work for you.For example, if we want to get a basic CUDA 12.5 environment up and running we can use an nvidia/cuda image as a starting point. To test it run:docker run -it --gpus=all -p 8081:80 -v ~/denv:/home/denv --name CUDAtainer nvidia/cuda:12.5.0-devel-ubuntu22.04Or, if you've got an AMD card, we can use one of the ROCm images like this rocm/dev-ubuntu-22.04 one.docker run -it --device /dev/kfd --device /dev/dri -p 8081 -v ~/denv:/home/denv --name ROCmtainer rocm/dev-ubuntu-22.04Meanwhile, owners of Intel GPUs should be able to create a similar environment using this OpenVINO image.docker run -it --device /dev/dri:/dev/dri -p 8081 -v ~/denv:/home/denv --name Vinotainer openvino/ubuntu22_runtime:latestBy design, Docker containers are largely ephemeral in nature, which means that changes to them won't be preserved if, for example, you were to delete the container or update the image. However, we can save any changes by committing them to a new image.To commit changes made to the CUDA dev environment we created in the last step we'd run the following to create a new image called "cudaimage".docker commit CUDAtainer cudaimageWe could then spin up a new container based on it by running:docker run -itd --gpus=all -p 8082:80 -v ~/denv:/home/denv --name CUDAtainer2 cudaimageConverting existing containers into reproducible images can be helpful for creating checkpoints and testing out changes. But, if you plan to share your images, it's generally best practice to show your work in the form of a dockerfile.This file is essentially just a list of instructions that typically tells Docker how to turn an existing image into a custom one. As with much of this tutorial, if you're at all familiar with Docker or the docker build command most of this should be self explanatory.For those new to generating Docker images, we'll go through a simple example using this AI weather app we kludged together in Python.
It uses Microsoft's Phi3-instruct LLM to generate a human-readable report from stats gathered from Open Weather Map every 15 minutes in the tone of a TV weather personality.<prclass="wrap_text">import json import time from typing import Dict, Anyimport requests import torch from transformers import pipeline, BitsAndBytesConfig# Constants ZIP_CODE = YOUR_ZIP_CODE API_KEY = "YOUR_OPEN_WEATHER_MAP_API_KEY" # Replace with your OpenWeatherMap API key WEATHER_URL = f"http://api.openweathermap.org/data/2.5/weather?zip={ZIP_CODE}&appid={API_KEY}" UPDATE_INTERVAL = 900 # seconds# Initialize the text generation pipeline quantization_config = BitsAndBytesConfig(load_in_4bit=True, bnb_4bit_compute_dtype=torch.bfloat16) pipe = pipeline("text-generation", "microsoft/Phi-3-mini-4k-instruct", device_map="auto", model_kwargs={"quantization_config": quantization_config})def kelvin_to_fahrenheit(kelvin: float) -> float: """Convert Kelvin to Fahrenheit.""" return (kelvin - 273.15) * 9/5 + 32def get_weather_data() -> Dict[str, Any]: """Fetch weather data from OpenWeatherMap API.""" response = requests.get(WEATHER_URL) response.raise_for_status() return response.json()def format_weather_report(weather_data: Dict[str, Any]) -> str: """Format weather data into a report string.""" main_weather = weather_data['main'] location = weather_data['name'] conditions = weather_data['weather'][0]['description'] temperature = kelvin_to_fahrenheit(main_weather['temp']) humidity = main_weather['humidity'] wind_speed = weather_data['wind']['speed']return (f"The time is: {time.strftime('%H:%M')}, " f"location: {location}, " f"Conditions: {conditions}, " f"Temperature: {temperature:.2f}°F, " f"Humidity: {humidity}%, " f"Wind Speed: {wind_speed} m/s")def generate_weather_report(weather_report: str) -> str: """Generate a weather report using the text generation pipeline.""" chat = [ {"role": "assistant", "content": "You are a friendly weather reporter that takes weather data and turns it into short reports. Keep these short, to the point, and in the tone of a TV weather man or woman. Be sure to inject some humor into each report too. Only use units that are standard in the United States. Always begin every report with 'in (location) the time is'"}, {"role": "user", "content": f"Today's weather data is {weather_report}"} ] response = pipe(chat, max_new_tokens=512) return response[0]['generated_text'][-1]['content']def main(): """Main function to run the weather reporting loop.""" try: while True: try: weather_data = get_weather_data() weather_report = format_weather_report(weather_data) generated_report = generate_weather_report(weather_report) print(generated_report) except requests.RequestException as e: print(f"Error fetching weather data: {e}") except Exception as e: print(f"An unexpected error occurred: {e}")time.sleep(UPDATE_INTERVAL) except KeyboardInterrupt: print("\nWeather reporting stopped.")if __name__ == "__main__": main()Note: If you are following along, be sure to set your zip code and Open Weather Map API key appropriately.If you're curious, the app works by passing the weather data and instructions to the LLM via the Transformers pipeline module, which you can learn more about here.On its own, the app is already fairly portable with minimal dependencies. However, it still relies on the CUDA runtime being installed correctly, something we can make easier to manage by containerizing the app.To start, in a new directory create an empty dockerfile alongside the weather_app.py Python script above. 
Inside the dockerfile we'll define which base image we want to start with, as well as the working directory we'd like to use.FROM nvidia/cuda:12.5.0-devel-ubuntu22.04WORKDIR /ai_weatherBelow this, we'll tell the Dockerfile to copy the weather_app.py script to the working directory.ADD weather_app.py /ai_weather/From here, we simply need to tell it what commands it should RUN to set up the container and install any dependencies. In this case, we just need a few Python modules, as well as the latest release of PyTorch for our GPU.RUN apt updateRUN apt upgrade -yRUN apt install python3 python3-pip -yRUN pip3 install --pre torch torchvision torchaudio --index-url https://download.pytorch.org/whl/nightly/cu124RUN pip3 install requests accelerate transformersRUN pip3 install bitsandbytes>=0.39.0 -qFinally, we'll set the CMD to the command or executable we want the container to run when it's first started. With that, our dockerfile is complete and should look like this:FROM nvidia/cuda:12.5.0-devel-ubuntu22.04WORKDIR /ai_weatherADD weather_app.py /ai_weather/RUN apt updateRUN apt upgrade -yRUN apt install python3 python3-pip -yRUN pip3 install --pre torch torchvision torchaudio --index-url https://download.pytorch.org/whl/nightly/cu124RUN pip3 install requests accelerate transformersRUN pip3 install bitsandbytes>=0.39.0 -qCMD ["/bin/bash", "-c", "python3 weather_app.py"]Now all we have to do is convert the dockerfile into a new image by running the following, and then sit back and wait.docker build -t aiweather .After a few minutes, the image should be complete and we can use it to spin up our container in interactive mode. Note: Remove the --rm bit if you don't want the container to destroy itself when stopped.docker run -it --rm --gpus=all aiweatherAfter a few seconds the container will launch, download Phi3 from Hugging Face, quantize it to 4-bits precision, and present our first weather report. "In Aurora, the time is 2:28 PM, and it's a hot one! We've got scattered clouds playing hide and seek, but don't let that fool you. It's a scorcher at 91.69°F, and the air's as dry as a bone with just 20% humidity. The wind's blowing at a brisk 6.26 m/s, so you might want to hold onto your hats! Stay cool, Aurora!"Naturally, this is an intentionally simple example, but hopefully it illustrates how containerization can be used to make running AI apps easier to build and deploy. We recommend taking a look at Docker's documentation here, if you need anything more intricate.Like any other app, containerizing your AI projects has a number of advantages beyond just making them more reproducible and easier to deploy at scale, it also allows models to be shipped alongside optimized configurations for specific use cases or hardware configurations.This is the idea behind Nvidia Inference Microservices NIMs for short which we looked at back at GTC this spring. These NIMs are really just containers built by Nvidia with specific versions of software such as CUDA, Triton Inference Server, or TensorRT LLM that have been tuned to achieve the best possible performance on their hardware.And since they're built by Nvidia, every time the GPU giant releases an update to one of its services that unlocks new features or higher performance on new or existing hardware, users will be able to take advantage of these improvements simply by pulling down a new NIM image. 
Or that's the idea anyway.Over the next couple of weeks, Nvidia is expected to make its NIMs available for free via its developer program for research and testing purposes. But before you get too excited, if you want to deploy them in production you're still going to need an AI Enterprise license which will set you back $4,500/year per GPU or $1/hour per GPU in the cloud.We plan to take a closer look at Nvidia's NIMs in the near future. But, if an AI enterprise license isn't in your budget, there's nothing stopping you from building your own optimized images, as we've shown in this tutorial. ®Editor's Note: Nvidia provided The Register with an RTX 6000 Ada Generation graphics card to support this story and others like it. Nvidia had no input as to the contents of this article.
Content Creation
Unknown
null
null
null
null
null
null
news
Dom Couldwell
Generative AI and Data: Using Knowledge Graphs and RAG Together
Generative AI has huge potential, but it also faces problems. If generative AI creates information that is not factually accurate in response to a user request – resulting in so-called hallucinations – it can have a big impact on users. Relying on large language model (LLM) training data on its own is not enough to prevent […]The post Generative AI and Data: Using Knowledge Graphs and RAG Together appeared first on DATAVERSITY.
https://www.dataversity.net/generative-ai-and-data-using-knowledge-graphs-and-rag-together/
https://d3an9kf42ylj3p.c…well_600x448.png
2024-07-23T07:25:00Z
Generative AI has huge potential, but it also faces problems. If generative AI creates information that is not factually accurate in response to a user request  resulting in so-called hallucinations it can have a big impact on users. Relying on large language model (LLM) training data on its own is not enough to prevent hallucinations. According to the Vectara Hallucination Leaderboard, GPT 4 Turbo has a hallucination rate of 2.5%, followed by Snowflake Arctic at 2.6% and Intel Neural Chat 7B at 2.8%.To deal with this potential issue and improve results, retrieval augmented generation (RAG) allows users to leverage their company data through vector searches. However, RAG is not perfect either. When companies have documents that often reference each other or if the same data is repeated across different documents, it can reduce the effectiveness of the purely vector-search-based approach.The issue here is that RAG focuses on information similar to the question prompt in order to return results. This makes it harder to answer questions that involve multiple topics or that require multiple hops, as vector search finds results matching the prompt but cannot jump to other linked results. As an example, say that you have a product catalog with files on each product. Some of those products may be very similar, with minor differences in terms of size or additional functionality depending on which version you look at. When a customer asks about a product, you would want your LLM to respond with the right information around the category and around any specific product features too. You would not want your LLM to recommend one product that doesnt have the right features when another in the same line does. Product documentation may also reference other information, e.g., by having a link in the document which means the chunk returned may not offer the end user the full picture.To overcome the potential problem around including the right level of detail, we can combine RAG with knowledge graphs, so that we can point to more specific files with the right data for a response. A knowledge graph represents distinct entities as nodes within the graph and then edges indicate relationships between the specific entities. For instance, a knowledge graph can provide connections between nodes to represent conditions and facts that might otherwise be confusing to the LLM because they might otherwise seem similar.When used for RAG, entities relevant to the question are extracted, and then the knowledge sub-graph containing those entities and the information about them is retrieved. This approach allows you to extract multiple facts from a single source that are associated with a variety of entities within the knowledge graph. It also means you can retrieve just the relevant facts from a given source rather than the whole chunk, which might include irrelevant information.Alongside this, it means that you can deal with the problem of having multiple sources that include some of the same information. In a knowledge graph, each of these sources would produce the same node or edge. Rather than treating each of these sources as a distinct fact and then retrieving multiple copies of the same data, that repeated data will be treated as one node or edge and thus retrieved only once. 
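To make the idea above concrete, here is a minimal Python sketch of entity-based sub-graph retrieval. Everything in it is illustrative rather than drawn from the article: the triples, the extract_entities placeholder, and the hop count are assumptions, and a production system would typically use an LLM or a named-entity recognizer for the extraction step.

# Hypothetical knowledge graph stored as (subject, relation, object) triples.
# Because it is a set, the same fact extracted from several documents
# collapses into a single edge.
EDGES = {
    ("WidgetPro", "is_a", "widget"),
    ("WidgetPro", "has_feature", "waterproofing"),
    ("WidgetLite", "is_a", "widget"),
    ("WidgetLite", "lacks_feature", "waterproofing"),
}

def extract_entities(question, known_entities):
    # Placeholder: match known entity names appearing in the question.
    # In practice this would be an LLM call or an NER model.
    return {e for e in known_entities if e.lower() in question.lower()}

def retrieve_subgraph(question, edges, hops=2):
    entities = extract_entities(
        question, {s for s, _, _ in edges} | {o for _, _, o in edges}
    )
    facts = set()
    for _ in range(hops):
        new = {(s, r, o) for (s, r, o) in edges if s in entities or o in entities}
        facts |= new
        entities |= {s for s, _, _ in new} | {o for _, _, o in new}
    return facts

# Only the facts touching the question's entities are passed to the LLM as context.
context = "\n".join(f"{s} {r} {o}" for s, r, o in retrieve_subgraph("Is WidgetPro waterproof?", EDGES))

Each retrieved fact appears once in the context no matter how many source documents mentioned it, which is the deduplication behaviour described above.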
In practice, this means that you can then either retrieve a wider variety of facts to include in the response, or allow your search to focus only on facts that appear in multiple sources.Knowledge graphs also make it easier to find related information that is relevant for a request, even when it might be two or three steps away from the initial search. In a conventional RAG approach, you would have to carry out multiple rounds of querying to get the same level of response, which is more expensive from a computation standpoint and potentially more expensive in terms of cost too.To create and use a knowledge graph as part of your overall generative AI system, you have several options. For instance, you may want to import an existing set of data that you know is accurate already. Alternatively, you can create your own knowledge graph from your data directly, which can be beneficial when you want to curate your information and check that it is accurate. However, this can be time-intensive and difficult to keep updated when you have a large amount of data, or when you want to add new information quickly.One interesting approach you can use is to employ your LLM to extract information from your content and summarize the data. This automated approach can make it easier to manage information at scale, while still providing you with that up to date knowledge graph that you need. As an example, you can use LangChain and LLMGraphTransformer to take a set of existing unstructured data, apply a structure, and then organize that data. You can then use prompt engineering and knowledge engineering to improve the automated extraction process into a relevant knowledge graph.Once you create the knowledge graph, you will have to store it so it can be accessed and used for requests. At this point, you have two options to use a dedicated graph database to store the whole graph, or add the knowledge graph to your existing database. While it may seem intuitive to use a graph database to store your knowledge graph, it isnt actually necessary. Running on a full graph database is worthwhile if you are planning to run full graph queries using the likes of Gremlin or Cypher. However, graph databases are designed for more complex queries searching for paths with specific sequences of properties, i.e., graph analytics. That overhead is simply overkill for retrieving sub-knowledge graph results in these circumstances, and it opens the door for a host of other problems, such as queries that go off the rails in terms of performance.Retrieving the sub-knowledge graph around a few nodes is a simple graph traversal, so you may not need the full capabilities of a dedicated graph database. When traversals are often only to a depth of two or three hops, any additional information is not likely to be relevant to the specific vector search query in any case. This means that your requests will normally be expressed as a few rounds of simple queries (one for each step) or a SQL join. In effect, the simpler you can keep your queries, the better the quality of the results that you can then provide to your LLM.Adopting these simpler, coarse grained knowledge graphs eliminates the need for a separate graph database and makes it easier to use knowledge graphs with RAG. It also makes the operational side for your data easier, as you can carry out transactional writes to both the graph and other data in the same place. This should have a side benefit of making it easier to scale up the amount of data that you have for querying too. 
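The LLM-driven extraction route mentioned above can be sketched in a few lines with LangChain's LLMGraphTransformer. Treat this as a rough sketch rather than a reference implementation: it assumes recent versions of the langchain-experimental, langchain-openai, and langchain-core packages (class locations and arguments have shifted between releases), an OpenAI-compatible chat model, and illustrative document text.

from langchain_core.documents import Document
from langchain_experimental.graph_transformers import LLMGraphTransformer
from langchain_openai import ChatOpenAI

# The LLM performs the entity/relationship extraction; temperature 0 keeps it deterministic.
llm = ChatOpenAI(model="gpt-4o", temperature=0)
transformer = LLMGraphTransformer(llm=llm)

docs = [Document(page_content="The WidgetPro is a waterproof widget made by Acme, based in Berlin.")]
graph_documents = transformer.convert_to_graph_documents(docs)

# Each GraphDocument carries the extracted nodes and relationships,
# which can then be written to whichever store you already run.
print(graph_documents[0].nodes)
print(graph_documents[0].relationships)

From here, prompt engineering on the transformer, for example restricting the allowed node and relationship types, is how you steer the automated extraction toward a knowledge graph that is actually useful for retrieval.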
For projects where you have a lot of data that you want to make available for generative AI, RAG is the natural choice. However, you may need to combine RAG with other techniques to improve your accuracy in responses. Using knowledge graphs with RAG enables you to get over the issue of having multiple similar documents or content assets. By looking at how you can combine these data techniques, you can deliver better results for your users while not having to implement and manage multiple different data platforms.
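To close the loop on the storage point above, that retrieving a sub-knowledge graph only needs a few simple queries or a SQL join rather than a dedicated graph engine, here is a minimal, self-contained sketch using SQLite. The table layout and sample triples are illustrative assumptions, not taken from the article.

import sqlite3

conn = sqlite3.connect(":memory:")
conn.execute("CREATE TABLE edges (src TEXT, rel TEXT, dst TEXT)")
conn.executemany(
    "INSERT INTO edges VALUES (?, ?, ?)",
    [
        ("WidgetPro", "is_a", "widget"),
        ("widget", "made_by", "Acme"),
        ("Acme", "based_in", "Berlin"),
    ],
)

# The two-hop neighbourhood of a seed entity, expressed as a single self-join.
rows = conn.execute(
    """
    SELECT e1.src, e1.rel, e1.dst, e2.rel, e2.dst
    FROM edges AS e1
    JOIN edges AS e2 ON e2.src = e1.dst
    WHERE e1.src = ?
    """,
    ("WidgetPro",),
).fetchall()

for src, rel1, mid, rel2, dst in rows:
    print(f"{src} -{rel1}-> {mid} -{rel2}-> {dst}")
# WidgetPro -is_a-> widget -made_by-> Acme

The same pattern works as two or three rounds of simple key lookups in a document or wide-column store, which is why the coarse-grained knowledge graphs described above can live alongside your existing RAG data.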
Content Synthesis/Decision Making/Detection and Monitoring
Unknown
null
null
null
null
null
null
news
gaocegege
Should AI workloads run on Kubernetes?
Article URL: https://blog.skypilot.co/ai-on-kubernetes/Comments URL: https://news.ycombinator.com/item?id=41066340Points: 4# Comments: 1
https://blog.skypilot.co/ai-on-kubernetes/
https://blog.skypilot.co…nner_minimal.png
2024-07-25T08:47:04Z
Kubernetes is the de-facto standard for deploying applications and microservices. However, AI workloads are different. Developing AI models is an interactive and resource-intensive process, requiring a fundamentally different approach to the deployment and management of resources.In this blog, we:Discuss the strengths and weaknesses of Kubernetes for AI workloads.Introduce SkyPilot to run AI easily and cost-effectively on Kubernetes and beyond.Provide a step-by-step guide to run the entire AI lifecycle – from development to training to serving – on your Kubernetes cluster with SkyPilot.Kubernetes was not built for AI#Kubernetes is an excellent option for hosting general purpose microservices and offers extensive cluster management features for administrators.However, AI workloads have unique requirements that Kubernetes was not designed for.AI development requires interactivity#Building and deploying AI requires a fundamentally different process than deploying microservices. Developing models and cleaning data are iterative processes that require frequent changes to code with rapid iteration cycles.This is in contrast to the fire-and-forget deployment of microservices, where a single deployment can run for long durations without any intervention.AI is resource hungry#Not only does AI require beefy GPUs for training, keeping costs in check also requires handling diverse resource types that may be spread across geographies and providers. No wonder OpenAI operates infrastructure across multiple regions and clouds.On the other hand, Kubernetes is designed for use in a single tightly networked cluster. The performance of etcd (the underlying data store for Kubernetes) degrades when run across regions.As a result, your resources get locked-in to a single region, limiting availability and increasing costs.AI has strict scheduling requirements#AI training has strict scheduling requirements. Large scale distributed training runs require gang scheduling, where all resources must be allocated at the same time to make progress.Kubernetes does not support gang scheduling out of the box, making it difficult to run distributed training jobs efficiently.Kubernetes has a steep learning curve#The Kubernetes Learning Curve. Source: r/kubernetesAI engineers need to work closely with the infrastructure to develop and deploy models. However, Kubernetes is not known for being friendly to AI engineers and data scientists. Yet they are forced to learn Kubernetes and all the complexities that come with it – containerization, managing pods, services and more.As a result, Kubernetes has been the subject of many discussions, complaints and even memes.AI engineers should not struggle with infrastructure. Instead, they should focus on their core strengths – wrangling data, developing models and evaluating them.But&mldr; Kubernetes also has many strengths#It’s not all doom and gloom. Kubernetes provides an excellent set of features that can be very helpful for AI workloads – autoscaling, fault recovery, resource management, and production readiness.However, leveraging these features for AI on Kubernetes requires a deep understanding of the Kubernetes ecosystem and a dedicated team to develop and maintain highly specialized tooling.SkyPilot: AI on Kubernetes and beyond#SkyPilot is a framework to run AI on any infrastructure. 
For Kubernetes users, it builds on the core strengths of Kubernetes while providing not only a simpler interface, but also a more cost-effective and available infrastructure layer that goes beyond a single Kubernetes cluster.Here are some key benefits of using SkyPilot on your Kubernetes cluster:Simpler developer experience & built-in interactivity#SkyPilot abstracts away the complexities of Kubernetes and provides a simple interface to run AI workloads.SkyPilot provides a seamless experience for interactive development with SSH access and integrations with VSCode and Jupyter notebooks. Developers can focus on building models and not worry about the underlying infrastructure.Example: iterative model development#Interactive workflows benefit from faster iteration with SkyPilot. For example, a common workflow for AI engineers is to iteratively develop and train models by tweaking code and hyperparameters by observing the training runs.With Kubernetes, a single iteration is a multi-step process involving building a Docker image, pushing it to a registry, updating the Kubernetes YAML and then deploying it.With SkyPilot, a single sky launch takes care of everything. Behind the scenes, SkyPilot provisions pods, installs all required dependencies, executes the job, returns logs and provides SSH access to debug.Iterative Development with SkyPilot vs Kubernetes. With Kubernetes, every change requires tedious updates to docker images and multiple steps to update the training run. With SkyPilot, all you need is sky launch.Example: serving models#A common task after training models is serving.Consider serving Gemma with vLLM:Serving Gemma with vLLM on SkyPilot vs Kubernetes. SkyPilot's YAML spec is 3x shorter and easier to read for AI engineers.Intelligent orchestration to maximize team velocity#When running in a multi-tenant Kubernetes cluster, SkyPilot intelligently schedules GPUs across users. Each user gets their own isolated environment, ensuring that their workloads do not interfere with others while maximizing resource utilization.No more manual scheduling of GPUs – SkyPilot takes care of it for you.Intelligent orchestration in multi-tenant environments with SkyPilot. Manual scheduling relies on slow and inefficient coordination between users over slack and spreadsheets. With SkyPilot, each user requests resources from SkyPilot which allocates GPUs efficiently to maximize utilization.Ran out of GPUs? SkyPilot finds more at the lowest cost#GPUs are in short supply. SkyPilot is designed to maximize availability of your AI workloads by finding GPUs across your Kubernetes cluster, clouds and regions. It can automatically recover from GPU failures, spot instance preemptions and other failures.If your on-prem cluster is running out of resources, SkyPilot can burst to the cloud and find availability wherever it exists to ensure your workloads are not blocked.SkyPilot's failover mechanism finds you GPUs wherever they are available. If all GPUs on your Kubernetes cluster are in use, SkyPilot can burst your job to the clouds, finding GPU capacity at the lowest cost.When you use the clouds, every sky launch invokes SkyPilot’s optimizer which finds the most cost-effective resources across all your infra. 
SkyPilot also supports spot instances on the cloud and can mix spot and on-demand instances to reduce costs by upto 6x while ensuring availability.== Optimizer ==Estimated cost: $0.0 / hourConsidered resources (1 node):---------------------------------------------------------------------------------------------------------- CLOUD INSTANCE vCPUs Mem(GB) ACCELERATORS REGION/ZONE COST ($) CHOSEN---------------------------------------------------------------------------------------------------------- Kubernetes 2CPU--8GB--1T4 2 8 T4:1 kubernetes 0.00 Azure Standard_NC4as_T4_v3 4 28 T4:1 eastus 0.53 AWS g4dn.xlarge 4 16 T4:1 us-east-1 0.53 GCP n1-highmem-4 4 26 T4:1 us-central1-a 0.59----------------------------------------------------------------------------------------------------------Launching a new cluster 'dev'. Proceed? [Y/n]:SkyPilot's optimizer shows you all resources available to you across all your infra and picks the most cost-effective choice to run your workload.Unified interface for all your infra#SkyPilot provides a unified interface to run AI workloads across on-prem, cloud and hybrid environments. The same YAML specification works across 12+ cloud providers and even your on-prem Kubernetes cluster. Once your jobs are running, sky status gives a unified view of all your resources across clouds.$ sky statusClustersNAME LAUNCHED RESOURCES STATUS AUTOSTOP COMMANDsky-serve-controller-2ea485ea 1 hr ago 1x Kubernetes(4CPU--4GB, ports=['30001-30020']... UP 10m (down) sky serve up -n llama2 ll...sky-jobs-controller-2ea485ea 1 hr ago 1x AWS(4CPU--4GB) UP 10m (down) sky jobs launch -c bert ...Managed jobsIn progress tasks: 1 RUNNINGID TASK NAME RESOURCES SUBMITTED TOT. DURATION JOB DURATION #RECOVERIES STATUS1 - bert 2x[A100:1] 3 mins ago 3m 26s 2m 18s 0 RUNNINGServicesNAME VERSION UPTIME STATUS REPLICAS ENDPOINTllama2 1 34m 44s READY 3/3 35.225.61.44:30001Service ReplicasSERVICE_NAME ID VERSION ENDPOINT LAUNCHED RESOURCES STATUS REGIONllama2 1 1 http://34.173.84.219:8888 1 hr ago 1x Kubernetes({'T4': 1}) READY kubernetesllama2 2 1 http://35.199.51.206:8888 1 hr ago 1x GCP([Spot]{'T4': 1}) READY us-east4llama2 3 1 http://34.31.108.35:8888 1 hr ago 1x Kubernetes({'T4': 1}) READY kubernetes* To see detailed service status: sky serve status -a* 1 cluster has auto{stop,down} scheduled. Refresh statuses with: sky status --refreshSkyPilot provides a single pane of glass to manage all your workloads — development, training and serving — across all your infrastructure.SkyPilot runs like any other application on Kubernetes. Behind the scenes, it creates pods to provide compute resources to jobs and uses Kubernetes services or ingresses to expose them when required. It can also integrate with other Kubernetes controllers such as Kueue.This means all your existing Kubernetes tooling for monitoring, logging and alerting can be used with SkyPilot.Your existing tooling, such as the Kubernetes dashboard, can be used to monitor pods and other Kubernetes resources created by SkyPilot.Guide: Running AI on Kubernetes with SkyPilot#Let’s dive deeper into how to run the entire AI lifecycle – from development to training to serving – on Kubernetes with SkyPilot.Getting started#To get started, install the latest version of SkyPilot along with Kubernetes dependencies:pip install skypilot-nightly[kubernetes]Next, we need to connect a Kubernetes cluster to SkyPilot.If you already have a Kubernetes cluster, all you need is a valid kubeconfig file. 
Make sure your credentials are set up in ~/.kube/config and you can access your Kubernetes cluster. You can test this by running kubectl get nodes.If you do not have a Kubernetes cluster, run sky local up. This will to set up a local Kubernetes cluster for development and testing purposes.Run sky check to verify your cluster and SkyPilot are set up correctly:# Checks if your Kubernetes credentials are set up correctly sky check kubernetesYou should see Kubernetes under the list of enabled clouds. If not, SkyPilot will show the reason why it is not enabled and suggest corrective steps. Refer to our docs for more details on how to set up Kubernetes.For more advanced setups, SkyPilot can also be configured to use a custom namespace and service account.We are now ready to launch your first SkyPilot cluster!Connect to a GPU pod with SSH and VSCode#Initial phases of AI development require extensive interactive development on a GPU. SkyPilot lets you create “clusters”, which are a collection of pods on Kubernetes. Lets spin up a GPU enabled SkyPilot development cluster and connect to it with SSH and VSCode.First, let’s see what GPUs are available on the cluster with sky show-gpus --cloud kubernetes:$ sky show-gpus --cloud kubernetesKubernetes GPUsGPU QTY_PER_NODE TOTAL_GPUS TOTAL_FREE_GPUST4 1 4 4V100 1, 2 4 4Listing GPU availability on your Kubernetes cluster with SkyPilot. SkyPilot shows the GPU type, total capacity and currently available quantities.To launch a cluster with a GPU for development, use sky launch:# Launch a cluster named 'dev' with 1 NVIDIA T4 GPU. If you do not have a GPU, remove the --gpus flag.sky launch -c dev --gpus T4:1SkyPilot will run its optimizer to find the lowest cost and show you the cheapest option for running your development cluster:== Optimizer ==Estimated cost: $0.0 / hourConsidered resources (1 node):---------------------------------------------------------------------------------------------------------- CLOUD INSTANCE vCPUs Mem(GB) ACCELERATORS REGION/ZONE COST ($) CHOSEN---------------------------------------------------------------------------------------------------------- Kubernetes 2CPU--8GB--1T4 2 8 T4:1 kubernetes 0.00 Azure Standard_NC4as_T4_v3 4 28 T4:1 eastus 0.53 AWS g4dn.xlarge 4 16 T4:1 us-east-1 0.53 GCP n1-highmem-4 4 26 T4:1 us-central1-a 0.59----------------------------------------------------------------------------------------------------------Launching a new cluster 'dev'. Proceed? [Y/n]:Your SkyPilot cluster will be launched as a pod in your Kubernetes cluster. SkyPilot will take care of launching the pod, installing dependencies, setting up SSH and more.Once it is provisioned, you can connect to it over SSH or VSCode.Connecting over SSH#SkyPilot automatically configures your ssh config to add an alias for the dev cluster. Accessing your cluster is as simple as running ssh <cluster>:Connecting with VSCode#Another common use case for interactive development is to connect your local VSCode to a remote cluster and directly edit code that lives on the cluster. This is supported by simply connecting VSCode to the cluster with the cluster name.Click on the top bar, type: > remote-ssh, and select Remote-SSH: Connect Current Window to Host...Select the cluster name (e.g., dev) from the list of hosts.Connecting VSCode to a remote development cluster.Running Jupyter notebooks#You can also launch jupyter on the cluster to get GPU-enabled notebooks. 
Connect to the machine and forward the port used by jupyter notebook:ssh -L 8888:localhost:8888 devInside the cluster, you can run the following commands to start a Jupyter session:pip install jupyterjupyter notebookIn your local browser, you should now be able to access localhost:8888 and use GPUs in your notebook:Running GPU powered Jupyter Notebook on a development cluster.Distributed training with SkyPilot#Once you have developed your model, you can train your model at scale with SkyPilot’smanaged jobs feature.Let’s define our training job. As an example, we will be using torch distributed data parallel (DDP) to train a BERT question-answering model on the SQuAD dataset on 2 A100 GPUs spread across two nodes:envs:WANDB_API_KEY: # TODO:Fill with your own wandb token, or use --env to pass.resources:accelerators:A100:1# Run on two nodes with 1 GPU eachnum_nodes:2setup:| git clone https://github.com/huggingface/transformers.git -b v4.30.1 cd transformers pip install -e . cd examples/pytorch/question-answering/ pip install -r requirements.txt torch==1.12.1+cu113 --extra-index-url https://download.pytorch.org/whl/cu113 pip install wandbrun:| cd transformers/examples/pytorch/question-answering/ NUM_NODES=`echo "$SKYPILOT_NODE_IPS" | wc -l` HOST_ADDR=`echo "$SKYPILOT_NODE_IPS" | head -n1` torchrun \ --nnodes=$NUM_NODES \ --nproc_per_node=$SKYPILOT_NUM_GPUS_PER_NODE \ --master_port=12375 \ --master_addr=$HOST_ADDR \ --node_rank=${SKYPILOT_NODE_RANK} \ run_qa.py \ --model_name_or_path bert-base-uncased \ --dataset_name squad \ --do_train \ --do_eval \ --per_device_train_batch_size 12 \ --learning_rate 3e-5 \ --num_train_epochs 50 \ --max_seq_length 384 \ --doc_stride 128 \ --report_to wandb \ --run_name $SKYPILOT_TASK_ID \ --output_dir ~/checkpoints \ --save_total_limit 10 \ --save_steps 1000You can also configure your checkpoints to be logged to a persistent volume or to a cloud bucket by changing the --output_dir flag.To launch this job, save the above YAML to a file (e.g., bert.yaml) and run:sky jobs launch -n bert bert.yamlSkyPilot will provision a controller pod that will orchestrate the training job. The controller will create two pods spread across two nodes, each requesting one Nvidia A100 GPU. It will also ensure gang scheduling so that all resources are allocated at the same time.After provisioning, SkyPilot runtime will install the required dependencies and execute the training script. Additionally, it will automatically recover from any failures, including GPU errors and NCCL timeouts.You can monitor job status and view logs with sky jobs queue and sky jobs logs:$ sky jobs queueFetching managed job statuses...Managed jobsIn progress tasks: 1 RUNNINGID TASK NAME RESOURCES SUBMITTED TOT. DURATION JOB DURATION #RECOVERIES STATUS1 - bert 2x[A100:1] 3 mins ago 3m 26s 2m 18s 0 RUNNINGIf you are using Weights and Biases (W&B) for logging, you can view the training progress on your W&B dashboard too:Monitoring training progress for SkyPilot jobs on the W&B dashboard.If required, you can cancel jobs with sky jobs cancel. 
SkyPilot will terminate the podsand clean up any resources used by the job.Serving GenAI on Kubernetes with SkyServe#Finally, after training the model, you can serve it with SkyPilot’s SkyServe library.Bring any serving framework: vLLM, TGI, FastAPI, &mldr; etc.SkyServe provides a single endpoint for your service, backed by replicas across all your infra to ensure high availability and lowest costs.SkyServe manages autoscaling and load balancing and allows custom load balancing policies.For example, to serve Google’s open source Gemma model through an OpenAI compatible endpoint, we can run vLLM on Kubernetes with this SkyPilot YAML:envs:MODEL_NAME:google/gemma-2b-itHF_TOKEN: # TODO:Fill with your own huggingface token, or use --env to pass.resources:image_id:docker:vllm/vllm-openai:latestaccelerators:T4:1ports:8000service:readiness_probe:path:/v1/chat/completionspost_data:model:$MODEL_NAMEmessages:- role:usercontent:Hello! What is your name?max_tokens:1replicas:3run:| conda deactivate python3 -c "import huggingface_hub; huggingface_hub.login('${HF_TOKEN}')" python3 -m vllm.entrypoints.openai.api_server --model $MODEL_NAME --host 0.0.0.0 --dtype halfThe service section specifies the readiness probe to check if the model is ready to serve and the number of replicas to run. You can specify autoscaling policies, configure rolling updates and more. Refer to the docs for more details.Save the above snippet as vllm.yaml, make sure HF_TOKEN is filled in and make sure you have access to the model.Launch the service with sky serve up:sky serve up -n vllm vllm.yamlSkyPilot will launch a controller that will act as the load balancer and manage the service replicas. This load balancer will provide one unified endpoint, while behind the scenes it will provision 3 pods across your Kubernetes cluster, each requesting 1 T4 GPU and using the vLLM container image for fast startup. Once the readiness probe passes on the replicas, the unified endpoint will load balance requests across the replicas.To see the status of your service, run sky serve status:$ sky serve statusServicesNAME VERSION UPTIME STATUS REPLICAS ENDPOINTvllm 1 3m 53s READY 3/3 34.44.26.104:30001Service ReplicasSERVICE_NAME ID VERSION ENDPOINT LAUNCHED RESOURCES STATUS REGIONvllm 1 1 http://34.30.184.120:8000 5 mins ago 1x Kubernetes({'T4': 1}) READY kubernetesvllm 2 1 http://34.27.200.138:8000 5 mins ago 1x Kubernetes({'T4': 1}) READY kubernetesvllm 3 1 http://34.70.146.169:8000 3 mins ago 1x Kubernetes({'T4': 1}) READY kubernetesSkyServe provides a unified endpoint for your service, backed by replicas provisioned across your infra.SkyServe exposes a unified endpoint address which your applications can now connect to and get completions from the Gemma model. Behind this endpoint, SkyPilot will manage the service replicas, autoscale them based on load and ensure high availability.For example, let’s use curl on the endpoint to get completions from the Gemma model:$ENDPOINT=$(sky serve status --endpoint vllm)$ curl http://$ENDPOINT/v1/chat/completions \ -H "Content-Type: application/json" \ -d '{ "model": "google/gemma-2b-it", "messages": [ { "role": "user", "content": "Hello! What is your name?" } ], "max_tokens": 25 }'Using curl to get completions from the Gemma model served by SkyServe.Click to see output.{"id":"cmpl-79dc510b6e484352b74b056f6dc36028","object":"chat.completion","created":1719526198,"model":"google/gemma-2b-it","choices":[{"index":0,"message":{"role":"assistant","content":"Hello! 
My name is Gemma, and I'm here to assist you with any questions or tasks you may have. ","tool_calls":[]},"logprobs":null,"finish_reason":"length","stop_reason":null}],"usage":{"prompt_tokens":16,"total_tokens":41,"completion_tokens":25}}Scaling and updating services#You can also manually scale and update a service. For example, to scale the vLLM service to 5 replicas, update the replicas field in the YAML to 5 and run:$ sky serve update -n vllm vllm.yamlSkyPilot will automatically scale the service to 5 replicas. If the Kubernetes cluster runs outof resources, SkyPilot will burst to the cloud to ensure the service remains available, even under high load.$ sky serve statusServicesNAME VERSION UPTIME STATUS REPLICAS ENDPOINTvllm 1 18m 48s READY 5/5 34.44.26.104:30001Service ReplicasSERVICE_NAME ID VERSION ENDPOINT LAUNCHED RESOURCES STATUS REGIONvllm 1 1 http://34.30.184.120:8000 20 mins ago 1x Kubernetes({'T4': 1}) READY kubernetesvllm 2 1 http://34.27.200.138:8000 20 mins ago 1x Kubernetes({'T4': 1}) READY kubernetesvllm 3 1 http://34.70.146.169:8000 18 mins ago 1x Kubernetes({'T4': 1}) READY kubernetesvllm 4 1 http://3.182.116.201:8000 3 mins ago 1x GCP({'T4': 1}) READY us-central1vllm 5 1 http://3.182.101.130:8000 2 mins ago 1x GCP({'T4': 1}) READY us-central1Scaling the vLLM service to 5 replicas. Since the Kubernetes cluster was out of resources, SkyPilot launched replicas on GCP and seamlessly exposes them through the common endpoint.Conclusion#Kubernetes was designed for microservices and running AI workloads on it can be challenging.SkyPilot builds on the strengths of Kubernetes to run the complete AI lifecycle through a unified interface, while providing the ability to burst beyond the Kubernetes cluster to the cloud for additional capacity.In doing so, SkyPilot guarantees high availability and low costs for your AI workloads.Learn more:To receive latest updates, please star and watch the project’s GitHub repo, follow @skypilot_org, or join the SkyPilot community Slack.
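As a footnote to the serving example above: because vLLM exposes an OpenAI-compatible API, the same SkyServe endpoint can be queried from Python instead of curl. This is a minimal sketch; the endpoint address is a placeholder for whatever sky serve status --endpoint vllm prints, and it assumes the openai Python package (v1 or later) is installed.

from openai import OpenAI

ENDPOINT = "34.44.26.104:30001"  # placeholder; use the value from `sky serve status --endpoint vllm`
client = OpenAI(base_url=f"http://{ENDPOINT}/v1", api_key="EMPTY")  # vLLM does not check the key by default

resp = client.chat.completions.create(
    model="google/gemma-2b-it",
    messages=[{"role": "user", "content": "Hello! What is your name?"}],
    max_tokens=25,
)
print(resp.choices[0].message.content)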
Unknown
Computer and Mathematical
null
null
null
null
null
null
news
Mercy A. Kuo
China’s Bid to Lead the World in AI
Insights from Huey-Meei Chang.
https://thediplomat.com/2024/07/chinas-bid-to-lead-the-world-in-ai/
https://thediplomat.com/…07-05-150838.jpg
2024-07-05T15:05:00Z
The Diplomat author Mercy Kuo regularly engages subject-matter experts, policy practitioners, and strategic thinkers across the globe for their diverse insights into U.S. Asia policy. This conversation with Huey-Meei Chang, Senior China Science and Technology Specialist at Georgetown University's Center for Security and Emerging Technology (CSET) and co-editor with William C. Hannas of Chinese Power and Artificial Intelligence: Perspectives and Challenges (Routledge, 2023), is the 422nd in The Trans-Pacific View Insight Series. Evaluate the viability of China's plan to lead the world in artificial intelligence by 2030. It is plausible, even probable. The usual predictors are talent, computing strength, and data availability. Let's look at them in reverse order.China is said to have an edge on data, needed to feed the LLMs [large language models] that many believe will dominate AI through the end of the decade. Chinese data quality is a problem, but the data can be cleaned. In any event, the entire world is running out of training data, so call this even.Computing strength turns on the availability of high-end chips, where China lags. Efforts to restrict China's access to graphic processing units (GPUs) have succeeded temporarily, but China has workarounds, such as substituting quantity, third-party purchases, writing efficient algorithms, outsourcing the training, investing in R&D, and, hypothetically, stealing. If the fate of a reborn Huawei is illustrative, we may erase our advantage. But for now, the U.S. has an edge.For AI talent, China is the winner as measured by published papers, conference presentations, number of graduates, university ranking, and number of patents. More than a third of U.S. AI researchers hail from China, which has its own slippery dynamic. In sum, the race to lead in AI is a draw with the momentum in China's favor.Examine the core elements of China's 2017 New Generation AI Development Plan. The PRC has a history of publicizing its plans and the 2017 document does not disappoint. While aspirational, these ambitions guide funding and should be taken seriously.All the strengths and weaknesses of industrial policy are on display, including support for AI research and its needs in education and talent. Six broad areas get attention: fostering innovation, infusing AI into the economy, integration into society, military-civil fusion, building a safe AI infrastructure, and planning for next-generation AI, i.e., general AI and associated megaprojects.The terms "brain" and "brain-inspired" appear in multiple places, reflecting China's view of its importance in AI research. Also, references to foreign dependencies (building links, going global, attracting foreign support via Thousand Talents and other venues) occupy much of the narrative. Three other points are worrisome: state-backing for world AI leadership, a commitment mostly absent in the U.S.; the call to merge human and artificial intelligence, which carries its own special risks; and China's goal of achieving a first mover advantage, which in an AI context might be irreversible.Identify China's three research areas leading to advanced general intelligence. We omit the convoluted discussion about what these terms mean separately and together. Roughly, advanced general AI is human-level AI (artificial general intelligence) or AI that exceeds human levels at most tasks (artificial superintelligence).There are multiple (theoretical) paths to AGI/ASI, and we won't pretend to know everything happening inside China.
However, three areas show potential based on statements by credible practitioners, the quality of their work, and visible investment and infrastructure.The first is China's work in symbolic and sub-symbolic AI to replicate cognition. Literature surveys of Chinese research show most of the hard problems of brain emulation (planning, continual learning, creativity, intuition, sensemaking) are being studied, efforts that are brain-inspired in a derivative sense.The second approach is making mathematical models of the physical processes that produce these elements of mind (in lay terms, brain modeling). Pu Muming's Mesoscopic Connectome Project and the HUST [Huazhong University of Science and Technology] – Suzhou Institute for Brainmatics are examples.Finally, a major part of China's BCI [brain-computer interface] research aims at cognitive enhancement of healthy persons. Several such institutes acknowledge AGI as a goal, while BCI pioneer Gao Xiaorong (Tsinghua) sees BCI as a link to superintelligence.Describe China's generative approaches to advanced AI. We emphasized China's alternative paths to advanced AI at the risk of slighting its significant mainstream research in machine learning and its generative offspring: large language models. LLMs have taken the world by storm due to their success as chatbots, at writing code, designing proteins, doing translation, and the popularity of ChatGPT and other such products that process language at the Turing Test level. China has been quick to follow. Some 100 LLM models were released in 2023 alone, with the count now at around 300. Originality is a moot point (Kai-Fu Lee's 01.AI release is modeled on Meta's open-source architecture); what matters is that the performance is roughly comparable. Problems involving tokenization, data quality, and GPUs are not dealbreakers.Whether LLMs are a path to general AI is hotly debated in China, perhaps more so than in the West. Some top AI scientists (Zeng Yi, Chinese Academy of Sciences Institute of Automation, and Zhu Songchun, Beijing Institute for General Artificial Intelligence) are working on brain-inspired small data models, which at some point may marry up with LLMs, whereupon genuine AGI becomes a real possibility.Assess China's emerging advanced AI potential vis-à-vis U.S. AI advancements. China is on a par with or ahead of the U.S. on many predictors of AI performance and has the will, wherewithal, savvy, and support to push ahead. Notions about free markets, globalization, and the role of political freedom in innovation will have little impact on what happens.The one area, besides chips, where China acknowledges a deficit is basic science, which if true is offset by China's ability to tap foreign sources, which is nearly impossible to control. Western universities and technology companies for the most part understand the need to protect their IP but the problem is huge and solutions conflict with our tradition of openness. The PRC's ability to exploit these vulnerabilities dates from 1956, when then-Premier Zhou Enlai instructed China's S&T managers to build an intelligence apparatus for foreign transfers. That enterprise is still blossoming.Another challenge for the West, beyond shedding its hubris, is its attachment to the one-horse paradigm of LLMs, which may end up at a dead end. China lacks this encumbrance.
This conversation with Huey-Meei Chang Senior China Science and Technology Specialist at Georgetown University’s Center for Security and Emerging Technology (CSET) and co-editor with William C. Hannas of Chinese Power and Artificial Intelligence: Perspectives and Challenges (Routledge, 2023) is the 422nd in The Trans-Pacific View Insight Series. Evaluate the viability of Chinas plan to lead the world in artificial intelligence by 2030. It is plausible, even probable. The usual predictors are talent, computing strength, and data availability. Lets look at them in reverse order.China is said to have an edge on data, needed to feed the LLMs [large language models] that many believe will dominate AI through the end of the decade. Chinese data quality is a problem, but the data can be cleaned. In any event, the entire world is running out of training data, so call this even.Computing strength turns on the availability of high-end chips, where China lags. Efforts to restrict Chinas access to graphic processing units (GPUs) have succeeded temporarily, but China has workarounds, such as substituting quantity, third-party purchases, writing efficient algorithms, outsourcing the training, investing in R&D, and, hypothetically, stealing. If the fate of a reborn Huawei is illustrative, we may erase our advantage. But for now, the U.S. has an edge.For AI talent, China is the winner as measured by published papers, conference presentations, number of graduates, university ranking, and number of patents. More than a third of U.S. AI researchers hail from China, which has its own slippery dynamic. In sum, the race to lead in AI is a draw with the momentum in Chinas favor.Examine the core elements of Chinas 2017 New Generation AI Development Plan. The PRC has a history of publicizing its plans and the 2017 document does not disappoint. While aspirational, these ambitions guide funding and should be taken seriously.All the strengths and weaknesses of industrial policy are on display, including support for AI research and its needs in education and talent. Six broad areas get attention: fostering innovation, infusing AI into the economy, integration into society, military-civil fusion, building a safe AI infrastructure, and planning for next-generation AI, i.e., general AI and associated megaprojects.The terms brain and brain-inspired appear in multiple places, reflecting Chinas view of its importance in AI research. Also, references to foreign dependencies building links, going global (), attracting foreign support via Thousand Talents and other venues occupy much of the narrative. Three other points are worrisome: state-backing for world AI leadership, a commitment mostly absent in the U.S.; the call to merge () human and artificial intelligence, which carries its own special risks; and Chinas goal of achieving a first mover advantage () which in an AI context might be irreversible.Identify Chinas three research areas leading to advanced general intelligence. We omit the convoluted discussion about what these terms mean separately and together. Roughly, advanced general AI is human-level AI (artificial general intelligence) or AI that exceeds human levels at most tasks (artificial superintelligence).There are multiple (theoretical) paths to AGI/ASI, and we wont pretend to know everything happening inside China. 
However, three areas show potential based on statements by credible practitioners, the quality of their work, and visible investment and infrastructure. The first is China's work in symbolic and sub-symbolic AI to replicate cognition. Literature surveys of Chinese research show most of the hard problems of brain emulation (planning, continual learning, creativity, intuition, sensemaking) are being studied, efforts that are brain-inspired in a derivative sense. The second approach is making mathematical models of the physical processes that produce these elements of mind (in lay terms, brain modeling). Pu Muming's Mesoscopic Connectome Project and the HUST [Huazhong University of Science and Technology] – Suzhou Institute for Brainmatics are examples. Finally, a major part of China's BCI [brain-computer interface] research aims at cognitive enhancement of healthy persons. Several such institutes acknowledge AGI as a goal, while BCI pioneer Gao Xiaorong (Tsinghua) sees BCI as a link to superintelligence. Describe China's generative approaches to advanced AI. We emphasized China's alternative paths to advanced AI at the risk of slighting its significant mainstream research in machine learning and its generative offspring: large language models. LLMs have taken the world by storm due to their success as chatbots, at writing code, designing proteins, doing translation, and the popularity of ChatGPT and other such products that process language at the Turing Test level. China has been quick to follow. Some 100 LLMs were released in 2023 alone, with the count now at around 300. Originality is a moot point (Kai-Fu Lee's 01.AI release is modeled on Meta's open-source architecture); what matters is that the performance is roughly comparable. Problems involving tokenization, data quality, and GPUs are not dealbreakers. Whether LLMs are a path to general AI is hotly debated in China, perhaps more so than in the West. Some top AI scientists (Zeng Yi, Chinese Academy of Sciences Institute of Automation, and Zhu Songchun, Beijing Institute for General Artificial Intelligence) are working on brain-inspired small data models, which at some point may marry up with LLMs, whereupon genuine AGI becomes a real possibility. Assess China's emerging advanced AI potential vis-à-vis U.S. AI advancements. China is on a par with or ahead of the U.S. on many predictors of AI performance and has the will, wherewithal, savvy, and support to push ahead. Notions about free markets, globalization, and the role of political freedom in innovation will have little impact on what happens. The one area, besides chips, where China acknowledges a deficit is basic science, which if true is offset by China's ability to tap foreign sources, which is nearly impossible to control. Western universities and technology companies for the most part understand the need to protect their IP, but the problem is huge and solutions conflict with our tradition of openness. The PRC's ability to exploit these vulnerabilities dates from 1956, when then-Premier Zhou Enlai instructed China's S&T managers to build an intelligence apparatus for foreign transfers. That enterprise is still blossoming. Another challenge for the West, beyond shedding its hubris, is its attachment to the one-horse paradigm of LLMs, which may end up at a dead end. China lacks this encumbrance.
Content Creation/Content Synthesis/Decision Making/Information Retrieval Or Search
Unknown
null
null
null
null
null
null
news
Alyssa Hughes
Abstracts: July 29, 2024
LongRoPE is making it possible to extend language model context windows, including for the Microsoft Phi-3 family of SMLs, while maintaining performance. Learn about the work, featured at #icml2024, with podcast guest and Senior Researcher Li Lyna Zhang.
https://www.microsoft.com/en-us/research/podcast/abstracts-july-29-2024/
https://www.microsoft.co…_FB_1200x627.jpg
2024-07-29T16:40:18Z
LI LYNA ZHANG: Thank you for having me. HUIZINGA: So let's start with a brief overview of your paper. Tell us about the issue your research addresses and why it matters. ZHANG: OK, so this paper is about how to effectively extend the context window of large language models beyond 2 million tokens. Why this is important? Because enabling longer input contexts can improve LLM capabilities. Right now, some LLMs can only handle a limited context window of 4K tokens, which is about 10 pages in a book. With our method, we can push LLM context window to over 2 million tokens. That means you can put all seven Harry Potter books to the LLM and ask any question about this story! Another important thing is that our method is super efficient. It requires minimal changes to the LLM architectures, and most existing optimizations can be reused. Therefore, our method can be easily applied in real production. HUIZINGA: So it sounds like what you're working on is improving the memory span of artificial intelligence or large language models. So what's already been done in this field, and what unique contributions does your work bring? ZHANG: Well, there has been a lot of work in building long-context LLMs. For example, pretraining with an efficient model architecture, using RAG (retrieval-augmented generation), and extending the context window with RoPE positional interpolation. Our approach uses the last technique. Let me briefly explain it. RoPE stands for rotary positional embedding, which encodes token position information for transformer models. When we pretrain an LLM, we set a context window size, and all token positions have a predefined range of RoPE values. Extending for a longer context window introduces new token positions that can be out of this predefined range, thus leading to out-of-distribution issues and making fine-tuning difficult. RoPE positional interpolation solves this by downscaling positional embeddings to fit within the pretrained range. However, positional embeddings like RoPE exhibit non-uniform information entropy in transformer models. Existing approaches do not effectively handle these non-uniformities during RoPE interpolation, leading to information loss and limiting the context window size. Our method addresses this challenge; therefore, it can achieve the longest context window size. HUIZINGA: OK, so, Li, how would you describe the methodology you used for this work, and how did you go about conducting the research? ZHANG: OK. So our method is to interpolate the RoPE positional embedding. It has three main steps. First, we introduce an efficient evolution search algorithm to perform non-uniform RoPE positional interpolation. Second, we propose progressive context window extension strategy. It begins by searching for a 256K length on the pretrained LLM and fine-tuning it at this length. Then, based on the fine-tuned 256K LLM, we did a second search for new RoPE interpolations to achieve 2048K context window size. Finally, since long-context LLMs will drop performance at its original context window, we readjusted the non-uniform positional interpolation at a 4K length to recover the short-context-window performance. HUIZINGA: Let's talk about findings. Tell us how things worked out for you and what you found as a result of your experiments. ZHANG: Yeah. Our study verified two important non-uniformities in LLM context window extension.
We identified that lower RoPE dimensions and initial token positions require less interpolation because they contain crucial and high-frequency information. Higher RoPE dimensions require more interpolation because these are sparse and low-frequency information. HUIZINGA: So work in the lab is always interesting, but deployment in real-world settings is often another story. If everything is successful, Li, who benefits most from your LongRoPE research? ZHANG: Well, our work significantly improves LLM's capabilities to handle long context in real-world applications, such as long-context retrieval, code debugging, and even multi-modality LLM applications. Moreover, our method achieves this with minimal modifications to the RoPE positional embedding. Therefore, it can be widely applied to production. We have integrated LongRoPE into Microsoft Phi-3 128K family, which are the first long-context LLMs in its class. Before LongRoPE, Phi models have only 2K context window. HUIZINGA: So who is your primary user? ZHANG: I think any users who want to use the long-context LLMs, they can be our audience. HUIZINGA: So it's a wide audience. ZHANG: Yeah, it's a wide audience. HUIZINGA: It's about now that I always ask the golden nugget question. If you wanted to leave our listeners with one key takeaway from this research, what would it be? ZHANG: Well, if there's one key takeaway from our work, it must be our key findings that non-uniformities in rotary positional embedding are crucial for LLM context window extension. And if you want to build a high-quality long-context LLM, LongRoPE is all you need to know! HUIZINGA: Talk about what's left to do in this field in terms of open questions and outstanding challenges. What's next on your research agenda, Li? ZHANG: So far, there are still a couple of big questions in this field. First, it's challenging to achieve both strong long and short capabilities at the same time. Although we have managed to recover some of the short performance for long-context LLM, it has not recovered 100 percent. We are trying different approaches to close these gaps. Second, we want to figure out how we can use these long-context LLMs to solve more challenging tasks, and then we can push this model to work harder and smarter for us. [MUSIC] HUIZINGA: Well, Li Lyna Zhang, thanks for joining us today, and to our listeners, thanks for tuning in. If you want to read this paper, you can find a link at aka.ms/abstracts, or you can find it on arXiv. See you next time on Abstracts! [MUSIC FADES]
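For readers who want to see the core idea in code, below is a minimal sketch of RoPE positional interpolation, the baseline technique that LongRoPE's non-uniform, per-dimension rescaling refines. The function names, the head dimension of 64, and the single uniform scale factor are illustrative assumptions for this sketch; they are not taken from the LongRoPE or Phi-3 implementations.

```python
import numpy as np

def rope_angles(positions, head_dim=64, base=10000.0, scale=1.0):
    """Rotary embedding angles for each (position, channel-pair).

    `scale` > 1 performs positional interpolation: token positions are
    downscaled so an extended context still falls inside the angle range
    seen during pretraining. LongRoPE instead searches for a separate
    rescale factor per dimension; here one uniform factor stands in.
    """
    half = head_dim // 2
    inv_freq = 1.0 / (base ** (np.arange(half) / half))    # per-pair frequencies
    pos = np.asarray(positions, dtype=np.float64) / scale  # downscale positions
    return np.outer(pos, inv_freq)                         # (num_positions, half)

def apply_rope(x, angles):
    """Rotate channel pairs of `x` (seq_len, head_dim) by `angles`."""
    half = x.shape[-1] // 2
    x1, x2 = x[..., :half], x[..., half:]
    cos, sin = np.cos(angles), np.sin(angles)
    return np.concatenate([x1 * cos - x2 * sin, x1 * sin + x2 * cos], axis=-1)

# Pretrained window of 4K tokens, hypothetical target window of 2M tokens:
pretrained_ctx, target_ctx = 4 * 1024, 2 * 1024 * 1024
scale = target_ctx / pretrained_ctx      # uniform interpolation factor (512x)

q = np.random.randn(8, 64)               # 8 query vectors, head_dim 64
q_rot = apply_rope(q, rope_angles(range(8), scale=scale))
print(q_rot.shape)                       # (8, 64)
```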
Unknown
Unknown
null
null
null
null
null
null
news
Sara Ejaz, Raheel Baig, Zeeshan Ashraf, Mrim M. Alnfiai, Mona Mohammed Alnahari, Reemiah Muneer Alotaibi
A deep learning framework for the early detection of multi-retinal diseases
Retinal images play a pivotal contribution to the diagnosis of various ocular conditions by ophthalmologists. Extensive research was conducted to enable early detection and timely treatment using deep learning algorithms for retinal fundus images. Quick diagnosis and treatment planning can be facilitated by deep learning models’ ability to process images rapidly and deliver outcomes instantly. Our research aims to provide a non-invasive method for early detection and timely eye disease treatment using a Convolutional Neural Network (CNN). We used a dataset Retinal Fundus Multi-disease Image Dataset (RFMiD), which contains various categories of fundus images representing different eye diseases, including Media Haze (MH), Optic Disc Cupping (ODC), Diabetic Retinopathy (DR), and healthy images (WNL). Several pre-processing techniques were applied to improve the model’s performance, such as data augmentation, cropping, resizing, dataset splitting, converting images to arrays, and one-hot encoding. CNNs have extracted extract pertinent features from the input color fundus images. These extracted features are employed to make predictive diagnostic decisions. In this article three CNN models were used to perform experiments. The model’s performance is assessed utilizing statistical metrics such as accuracy, F1 score, recall, and precision. Based on the results, the developed framework demonstrates promising performance with accuracy rates of up to 89.81% for validation and 88.72% for testing using 12-layer CNN after Data Augmentation. The accuracy rate obtained from 20-layer CNN is 90.34% for validation and 89.59% for testing with Augmented data. The accuracy obtained from 20-layer CNN is greater but this model shows overfitting. These accuracy rates suggested that the deep learning model has learned to distinguish between different eye disease categories and healthy images effectively. This study’s contribution lies in providing a reliable and efficient diagnostic system for the simultaneous detection of multiple eye diseases through the analysis of color fundus images.
https://journals.plos.org/plosone/article?id=10.1371/journal.pone.0307317
https://journals.plos.org/plosone/article/figure/image?id=10.1371/journal.pone.0307317.g015&size=inline
2024-07-25T14:00:00Z
AbstractRetinal images play a pivotal contribution to the diagnosis of various ocular conditions by ophthalmologists. Extensive research was conducted to enable early detection and timely treatment using deep learning algorithms for retinal fundus images. Quick diagnosis and treatment planning can be facilitated by deep learning models ability to process images rapidly and deliver outcomes instantly. Our research aims to provide a non-invasive method for early detection and timely eye disease treatment using a Convolutional Neural Network (CNN). We used a dataset Retinal Fundus Multi-disease Image Dataset (RFMiD), which contains various categories of fundus images representing different eye diseases, including Media Haze (MH), Optic Disc Cupping (ODC), Diabetic Retinopathy (DR), and healthy images (WNL). Several pre-processing techniques were applied to improve the models performance, such as data augmentation, cropping, resizing, dataset splitting, converting images to arrays, and one-hot encoding. CNNs have extracted extract pertinent features from the input color fundus images. These extracted features are employed to make predictive diagnostic decisions. In this article three CNN models were used to perform experiments. The models performance is assessed utilizing statistical metrics such as accuracy, F1 score, recall, and precision. Based on the results, the developed framework demonstrates promising performance with accuracy rates of up to 89.81% for validation and 88.72% for testing using 12-layer CNN after Data Augmentation. The accuracy rate obtained from 20-layer CNN is 90.34% for validation and 89.59% for testing with Augmented data. The accuracy obtained from 20-layer CNN is greater but this model shows overfitting. These accuracy rates suggested that the deep learning model has learned to distinguish between different eye disease categories and healthy images effectively. This studys contribution lies in providing a reliable and efficient diagnostic system for the simultaneous detection of multiple eye diseases through the analysis of color fundus images.Citation: Ejaz S, Baig R, Ashraf Z, Alnfiai MM, Alnahari MM, Alotaibi RM (2024) A deep learning framework for the early detection of multi-retinal diseases. PLoS ONE 19(7): e0307317.https://doi.org/10.1371/journal.pone.0307317Editor: Muhammad Mateen, Soochow University, CHINAReceived: May 15, 2024; Accepted: July 2, 2024; Published: July 25, 2024Copyright: © 2024 Ejaz et al. This is an open access article distributed under the terms of the Creative Commons Attribution License, which permits unrestricted use, distribution, and reproduction in any medium, provided the original author and source are credited.Data Availability: Data is available and can be provided without any restrictions. Dataset that has been used in this research is available via Kaggle at the following URL: https://www.kaggle.com/datasets/andrewmvd/retinal-disease-classification.Funding: This research was funded by Taif University, Saudi Arabia, Project No. (TU-DSPP-2024-41).Competing interests: The authors have declared that no competing interests exist.1 IntroductionThe retina is a delicate layer located on the internal aspect of the human ocular organ. The major cause that people lose their eyesight or blurriness is due to age and some retinal diseases. Early detection of these diseases and proper diagnosis may prevent permanent vision loss. 
With appropriate treatment and consistent monitoring, it is feasible to decelerate or hinder additional deterioration of vision, particularly when the condition is identified during its initial phases [1]. Some causes of damage to the retina are old age, trauma, and light damage. Other conditions, like diabetes, hypertension, and high cholesterol, may also affect the retina. Diabetic Retinopathy (DR), Macular Degeneration, Retinal Vein Occlusion (RVO), and Hypertensive Retinopathy cause damage to retinal vessels. Glaucoma is present when the optic nerve is damaged. Macular holes can occur as we get older; the effect of a macular hole is a blurred, unclear image. Retinal diseases, such as DR, Age-Related Macular Degeneration (ARMD), and glaucoma, are major contributors to blindness on a global scale. Timely identification and precise recovery from these conditions are essential for prompt treatment and the prevention of vision loss. However, identifying and classifying retinal diseases accurately and efficiently can be challenging for human specialists due to the complexity and variety of retinal images. Therefore, the development of an automated retinal disease classification system using deep learning or neural network models can significantly enhance the precision and speed of detection and treatment. Glaucoma is a group of ocular conditions leading to harm to the optic nerve. Internationally, the primary factors contributing to vision impairment include [2]: ARMD, cataract, DR, glaucoma, and uncorrected refractive errors. To detect retinal disease, various medical tests like fundus photography, Optical Coherence Tomography (OCT), and fluorescein angiography are performed. A retinal camera, also referred to as a fundus camera, is a specialized instrument that integrates a low-power microscope with a built-in camera. Its purpose is to capture detailed photographs of the eye's internal structures, such as the retinal layers, vascular network, optic nerve head, macular region, and posterior segment. By utilizing this technology, healthcare professionals can obtain high-resolution images that aid in the recovery and ongoing monitoring of different ocular disorders [3]. OCT does not provide direct visualization of blood in the retina [4], so it may not be the optimal imaging modality for documenting or measuring diseases involving bleeding in the retina. As OCT primarily relies on measuring reflected light to create detailed cross-sectional images of the retina, it may not accurately capture the presence or extent of blood. In cases where bleeding or hemorrhage is suspected, fundus photography can be more effective in documenting the condition. Fundus photography captures a high-resolution image of the posterior eye, capturing the retina and vascular network. Non-invasive methods for early detection and cure of retinal diseases are essential to intercept or control vision loss. Fundus images, captured using monocular cameras, provide a non-invasive and cost-effective technique for large-scale screening of fundus diseases. Fundus image-based eye diagnosis relies on various biomarkers, including the optic cup, optic disc, blood vessels, fovea, macula, and specific lesions like hard exudates, hemorrhages, and microaneurysms used in DR diagnosis. Diabetes patients constitute a significant portion of the population with eye-related issues. DR, the most common diabetic eye condition, often lacks early symptoms but poses a significant risk of blindness and is among the top four causes of blindness.
Early detection of DR is crucial for successful treatment and to avoid poor visual outcomes. Media Haze (MH) is a key indicator of cataracts, a widespread eye disease. Detecting MH in its early stages is essential for early healthcare to reduce the risk of sight deprivation associated with cataracts. ARMD, linked to aging, affects central vision, leading to visual impairment. Optic Disc Cupping (ODC) is frequently associated with glaucoma and other eye conditions, resulting from reduced ocular nerve blood circulation or increased pressure in the fundus. Timely treatment is often lacking, causing rapid vision decline and severe impairment. Fig 1 illustrates the structure of the human eye. 1.1 Research objective. The following are the research objectives of the suggested approach: Create a Deep Learning (DL) model designed for the multi-class classification of retinal images. Achieve high accuracy in the automated detection of common eye disorders, including DR, MH, and ODC. Assess the model's performance on massive and wide-ranging datasets to ensure generalizability and reliability. Investigate the potential integration of the developed model into existing healthcare systems for seamless adoption by eye care professionals. Explore the model's contribution to early disease detection, with a focus on improving patient outcomes and minimizing vision loss. Evaluate the scalability and efficiency of the proposed solution for widespread use, particularly in regions with limited access to healthcare resources. Examine the interpretability of the deep learning model to enhance trust and understanding among healthcare practitioners. 1.2 Research contribution. Here are the key contributions of the suggested methodology: Identifying retinal diseases like DR, MH, and ODC at an initial phase is crucial to avert irreversible vision impairment. In the field of biomedical research, extensive evidence supports the superiority of deep convolutional networks that have undergone pre-training on massive datasets, compared to deep models trained from scratch. The experiments utilize the publicly accessible RFMiD and RFMiD 2.0 datasets. To mitigate the consequences of limited datasets, data augmentation techniques are applied. Distinct experiments are conducted using both augmented and original datasets to compare performance. The rest of the paper is organized as follows. Section 2 presents an overview of existing literature. In Section 3, we present materials and methodology. Section 4 presents results. Section 5 demonstrates comparisons between the results. Finally, Section 6 concludes the paper. 2 Overview of existing literature. In the medical field, computer-assisted diagnosis is used for the diagnosis of diseases at their initial stages and to avoid permanent loss. Disease classifications are used to classify diseases in many medical fields. The issue faced by ophthalmologists for computer-aided diagnosis is the limited number of datasets. In 2021, with the vision loss rate reaching 2.2 million [5], researchers discovered that over 7 million individuals worldwide are currently experiencing irreversible vision impairment, with more than 1 million of them being Americans affected by total blindness [6]. Pachade et al. published a dataset, RFMiD, with 3200 fundus images that contain 45 conditions of retinal disease [7]. RFMiD is the only dataset that includes a large number of diseases that appear in a clinical setting. Almustafa et al.
use the STARE [8] dataset to classify 14 ophthalmological defects using the algorithms ResNet-50, EfficientNet, InceptionV2, a 3-layer CNN, and Visual Geometry Group (VGG). They concluded that EfficientNet gives the best accuracy at 98.43% [9]. Choudhary et al. use the dataset [10] to classify three retinal diseases and normal images of the retina. The model comprises 19 CNN layers and obtained an accuracy of 99.17% with 0.99 sensitivity and 0.995 specificity [11]. Sengar et al. extract multi-class images from the multi-label dataset RFMiD [7]. They classify the diseases DR, MH, and ODC, and normal images. To increase the extent of the dataset, they formed a data transformation technique and compared the results of the proposed EyeDeep-Net algorithm with other algorithms: VGG-16, VGG-19, AlexNet, Inception-v4, ResNet-50, and Vision Transformer. The obtained accuracy for validation is 82.13% and for testing 76.04% [12]. Pan et al. proposed a model in which they classify macular degeneration, tessellated, and normal retina. Their aim is early recognition and treatment of retinal diseases. They used fundus images collected from hospitals in China and applied the deep learning models Inception V3 and ResNet-50. After adjusting hyperparameters and fine-tuning them according to their classifier, they attained an accuracy rate of 93.81% from ResNet-50 and 91.76% when utilizing Inception V3 [13]. Kumar & Singh collect data from different datasets, namely Messidor-2 [14], EyePACS [15], ARIA, and STARE [8], and classify it into 10 groups. They classify different stages of diabetic retinopathy and normal fundus images. The proposed methodology consists of pre-processing, a match filter approach, and post-processing steps for segmentation and classification. The model generates results based on accuracy, precision, recall, and F1-score measures that are 99.71%, 98.63%, 98.25%, and 99.22%, respectively [16]. [17] used a DL approach to capture the features and Machine Learning (ML) algorithms to classify glaucoma. The experiments are performed on the DRISTHI-GS [18] and ORIGA [19] datasets using 101 images and obtain a maximum training accuracy of 1.000. Pandey et al. aimed to classify multiple retinal diseases. They classify glaucoma, AMD, DR, and healthy retinal images. They used the DiaretDB [20], Drishti-GS [18], DRIVE [21], HRF [22], IDRiD, Kaggle-39 [23], Kaggle-DR, ODIR [24], MESSDIDOR [25], ORIGA-light [19], REFUGE [26], and STARE [8] datasets. The InceptionV3 model of CNN is used, and the ImageNet dataset is used for pretraining the initial weights. They classify three diseases (DR, glaucoma, AMD) and one class for healthy images [27]. The author [28] suggests a multi-disease framework that comprises a combination of neural architectures in an ensemble configuration. First, they perform preprocessing steps by normalizing, image enhancement, and resizing. Then the framework detects the presence of diseases in the fundus image and performs multi-class classification. For disease risk detection, the convolutional neural networks DenseNet201 and EfficientNetB4 were used. For disease classification, ResNet105 is added. RFMiD [7] is utilized for training and validation. The ODIR [19] dataset is applied in the testing phase. They classify 27 diseases. Ho et al. use RFMiD [7] data that contain fundus images. They selected five CNN architectures trained to anticipate the existence of disease and classify 28 abnormalities [29]. Abbas et al. also perform multi-class classification.
He conducted tests on the 27 primary classes within the RFMiD dataset. He scored an area under curve (AUC) of 0.973. Their model selection is lighter. They use EfficientNetB4 and EfficientNetV2S for classification [30].[31] performed augmentation techniques because their dataset contains only 69 images depicting vascular diseases, along with 55 healthy images. They use 10 epochs to train the multilayer deep CNN. With 10 epochs accuracy is 88.4%.[32] introduces a compact convolutional neural network for automatic DR detection using four retinal image datasets. Utilizing 12-fold cross-validation, our model achieved high accuracy: 79.96% on the Diabetic Retinopathy Detection dataset, 94.75% on Messidor-2, 96.74% on IDRiD, and 89.10% on RFMiD, demonstrating its effectiveness across various datasets and enhancing ophthalmic diagnostics.The author [33] proposed different models to classify vein occlusion disease and healthy class. For healthy images, the specificity is 100% and sensitivity, F1 score, and an accuracy 95%, 97%, and 97% respectively. They also compare specificity sensitivity F1 score and accuracy on ResNet18, ResNet18+SE, ResNet18+CBAM, and ResNet18+CA algorithms. [34] also used pre-trained models for retinal disease classification.3 Materials and methodsIn this article, we proposed a DL Technique for identifying retinal disorders through fundus images. Data was gathered from two datasets RFMiD [7] and RFMiD 2.0 [35]. The images in these datasets were single as well as multi-labeled. We separated single-label diseases and selected the diseases with more images in the dataset. We selected four classes. After acquiring the dataset we performed pre-processing steps which are shown in Fig 2. In preprocessing, we employed data augmentation to expand and balance the dataset, crop the unwanted area then resize the images to the same size because the images in the dataset were in different sizes. We partitioned the dataset into training and testing subsets. We converted Images in an array to reduce the computing time and perform one hot encoder. Further, we implemented three CNN models to classify three retinal diseases and one healthy class. Firstly, the model was trained with the original dataset. To increase model performance and reduce overfitting, experiments were performed again to measure the results after data augmentation. The statistical results for augmented data were mentioned in the context of accuracy, specificity, sensitivity, precision, recall, F1 Score, and support. Graphically results are shown in terms of accuracy, loss, and confusion matrix.3.1 Data gatheringThis articles data was collected from public repositories, RFMiD [7] and RFMiD 2.0 [35]. The problem of detecting multiple eye diseases simultaneously was simplified by transforming it into a multi-class classification problem. Each image was assigned to a single disease class rather than having multiple labels. Unique images that exclusively belong to a single disease class were considered to ensure effective training of the neural networks. While recognizing that a retinal image could potentially exhibit multiple diseases, the decision to adopt a multi-class classification approach was driven by the need for simplicity, model training efficiency, dataset balance, label quality, and specific diagnostic goals. This approach ensures that the neural networks are effectively trained and evaluated, providing reliable and interpretable results that are immediately applicable in clinical settings. 
By focusing on unique images in each class, the dataset was appropriately balanced, allowing for accurate training and evaluation of the neural networks. For the final dataset preparation, we have chosen a total of four classes. Among these classes, one represented the normal (WNL) category, while the remaining three classes were related to different diseases. These diseases include DR, MH, and ODC, as shown in Fig 3. By including these specific classes in the dataset, we aimed to capture a range of conditions related to eye health and provide a comprehensive representation of both healthy and diseased states. Table 1 shows the overall quantity of single-labeled images in both datasets. 3.2 Pre-processing. Pre-processing is the process of improving and enhancing image quality and visualization. This was likely one of the pivotal factors influencing the success and accuracy of the subsequent stage in the proposed method. Medical images might contain additional content, a problem that could cause poor image visualization. Poor-quality images can lead to unsatisfactory results. In the pre-processing stage, we performed data augmentation, cropping, resizing, dataset splitting, conversion of images to arrays, and one-hot encoding to improve model efficiency. 3.2.1 Data augmentation. To improve the dataset and enhance the model's capacity for handling images from different perspectives, image augmentation techniques were employed, as the authors of [12, 36, 37] did. These techniques significantly augmented the dataset size and helped capture the diverse variations of fundus images encountered in real-world conditions. The selection of augmentation methods was based on the understanding that fundus images can exhibit various transformations. The selected augmentation methods included various geometric transformations, such as rotations of 15°, 30°, and 45°, and horizontal flips, as [12] applied for fundus images. By applying these augmentation techniques, the dataset was enriched with variations of the original sample image. This augmentation process expands the dataset's diversity and enables the model to learn from a broader range of image variations, leading to improved performance and robustness. Fig 4 offers a visual depiction of the image variations of DR, i.e., horizontal flip and rotations of 15°, 30°, and 45°, respectively, obtained after applying the augmentation techniques to the original sample image. Table 2 represents the data for all classes before and after augmentation. 3.2.2 Crop. Cropping for feature extraction is a common technique used in image processing and computer vision tasks. By cropping, we reduce the amount of data that needs to be processed. This can significantly speed up the feature extraction process, especially when dealing with large images or datasets. 3.2.3 Resize. Resizing images is an important preprocessing step in computer vision, particularly in deep learning. One of the reasons for resizing images is to accelerate the training process. When working with larger input images, DL models need to process a larger number of pixels, which significantly increases the complexity of computation and training duration. By decreasing the size of images, the number of pixels that the model needs to learn from is reduced. This reduction in input size leads to a decrease in computational requirements, resulting in faster training. Training on smaller images allowed for quicker iterations and experimentation, making the development process more efficient [38].
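As a rough illustration of the augmentation, cropping, and resizing steps described above, the following sketch applies the same geometric transformations (a horizontal flip and rotations of 15°, 30°, and 45°) to fundus images and resizes them to a fixed input size. It is a sketch of the general technique under assumed folder paths and a simple center crop, not the authors' actual pipeline.

```python
from pathlib import Path
from PIL import Image, ImageOps

SRC = Path("rfmid/train/DR")        # hypothetical folder holding one class
DST = Path("rfmid/train_aug/DR")    # hypothetical output folder
DST.mkdir(parents=True, exist_ok=True)

TARGET_SIZE = (224, 224)            # fixed input size (the paper resizes to 224 x 224 x 3)
ANGLES = (15, 30, 45)               # rotations reported for fundus images

def center_crop(img, fraction=0.9):
    """Crop away a border fraction (a stand-in for removing unwanted area)."""
    w, h = img.size
    cw, ch = int(w * fraction), int(h * fraction)
    left, top = (w - cw) // 2, (h - ch) // 2
    return img.crop((left, top, left + cw, top + ch))

for path in SRC.glob("*.png"):
    img = center_crop(Image.open(path).convert("RGB"))
    variants = {"orig": img, "hflip": ImageOps.mirror(img)}
    for a in ANGLES:
        variants[f"rot{a}"] = img.rotate(a, expand=False, fillcolor=(0, 0, 0))
    for tag, variant in variants.items():
        variant.resize(TARGET_SIZE).save(DST / f"{path.stem}_{tag}.png")
```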
Images in datasets are in different dimensions such as 2144 x 1424 x 3, 4288 x 2848 x3, and 512 x 512 x 3. We resized the image to 224 x 224 x 3 to reduce computational requirements and allow for quicker iterations and experimentation, making the development process more efficient.3.2.4 Split dataset.In ML and data analysis, distributing the data into training and testing categories is a common practice. The main reason for this is to analyze performance metrics and model the generalization capability of an ML model. By splitting the dataset into training and test samples, we can ensure that the model undergoes training and evaluation in a robust and unbiased manner, enabling us to make informed decisions about its performance and generalization capabilities. In this experiment, 70% of the dataset was partitioned for training, 20% for testing, and 10% for validation.3.2.5 Image in array.Converting an image into an array is a common practice in image processing and computer vision tasks. This conversion allows images to be manipulated, analyzed, and processed using mathematical and algorithmic techniques. Many computer vision algorithms involve extracting features such as edges, corners, or textures from images. This process is more straightforward when the image is represented as an array.3.2.6 One-hot encoder.One-hot encoding is a widely practiced approach in DL to represent categorical variables as binary vectors. This method transforms categorical data into a numerical format, facilitating its processing by machine learning algorithms, including deep learning models.3.3 Proposed deep learning architectureThree deep learning architectures were proposed in this article and the results were examined with the original dataset as well as with the augmented dataset. The selection of CNN architectures with 12, 14, and 20 layers was a strategic decision to explore the trade-offs between model complexity, feature extraction capabilities, and computational efficiency. The 12-layer CNN was highlighted as the proposed methodology due to its high accuracy, balanced training time, and reduced risk of overfitting. The 14-layer CNN, while offering deeper feature extraction, did not outperform the 12-layer model. The 20-layer CNN, despite achieving high accuracy, showed signs of overfitting, indicating that a more complex model is not necessarily better for this specific task.3.3.1 Deep CNN-1 architecture.Classification is a critical step in distinguishing between diseased and healthy retinal images. For image classification, we use different CNN layers. The sequence of the layers is given in Table 3. Convolutional layers are fundamental components of CNNs because they are designed to exploit the spatial structure of data, capture local patterns, share parameters to reduce redundancy and learn hierarchical representations. These properties make CNNs highly effective for tasks involving visual data, such as image classification.3.3.2 Feature extraction.Feature extraction stands as a pivotal element within the model. A dedicated CNN model was trained for this purpose. The employed CNN model is constructed with a series of convolutional layers, including 2D convolutional layers, batch normalization layers, and 2D max pooling, along with dropout and dense layers. The introduction of filters facilitates the transfer of the dataset through each convolutional layer. Each convolutional layer extracts relevant information before the final max pooling. 
Finally, feature extraction is done through fully connected layers. The convolutional operation, denoted as (*), is a mathematical process that takes two functions (f, g) as inputs and yields a third function denoted as (f*g). In the context of image processing, convolution is carried out using a kernel, which is a small matrix typically of size k x k. The kernel size should be odd, since an odd number ensures better symmetry around the center and minimizes the possibility of aliasing. The kernel is applied by sliding it over an image's pixels, generating feature maps. In a CNN, multiple filters are utilized in every convolutional layer to extract high-level features. If the input dimensions of a fundus image are (p x q), and n kernels with a window size of k x k are employed, the resulting image dimensions will be n x ((p - k + 1) x (q - k + 1)). The network creates meaningful feature representations from the data by capturing various aspects of the input image. The given model architecture consists of several layers, including Convolutional, MaxPooling2D, and Dense layers, as shown in Fig 5. The output shape of each layer indicates the dimensions of the feature maps generated at each layer. The input shape of the images is specified and the images are expected to have three color channels (RGB). In the convolutional layers, the initial convolutional layer (Layer-1) incorporates 32 filters sized 3x3 and employs the ReLU activation function. It takes the input image and applies 32 different filters to extract various features from the image. The second convolutional layer (Layer-2) is equipped with 64 filters sized 3x3 and employs the ReLU activation function, extracting more complex features from the input. Subsequently, the third convolutional layer (Layer-3) integrates 128 filters of size 3x3, utilizing the ReLU activation function to acquire even more abstract features from the preceding layers. The fourth convolutional layer (Layer-4) incorporates 256 filters of size 3x3 and applies the ReLU activation function, further enhancing the feature extraction process. Following each convolutional layer, a max pooling layer is added, featuring a 2x2 pool size. This layer downsamples the output of the preceding convolutional layer by selecting the maximum value within each 2x2 region, aiding in reducing spatial dimensions while retaining essential features. After the final max pooling layer, a flattening layer is introduced to convert the 2D output into a 1D vector, getting the data ready for the fully connected stages. The flattened output is then connected to a dense layer (Dense) with 128 units and the ReLU function. This layer performs a linear transformation on the input data, and the ReLU activation introduces non-linearity. To mitigate overfitting, a dropout layer is added with a dropout rate of 0.5. By dropping out some input elements, the network does not become overly dependent on specific features. Finally, the output layer has one unit per class and uses the softmax activation. This produces probabilities for each class, determining the likelihood of the input image belonging to different classes. The model setup of the experiment is given in Table 4. The dataset contains nonlinearity, so the hidden layers in the CNN use the ReLU function. The final output layer utilizes the Softmax function. ReLU is a fast and efficient nonlinear activation function that outperforms alternatives like Sigmoid and Tanh, leading to quicker convergence.
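The layer sequence described above (four 3x3 convolution blocks with 32, 64, 128, and 256 filters, each followed by 2x2 max pooling, then a flatten layer, a 128-unit dense layer with ReLU, 0.5 dropout, and a softmax output over the four classes) maps naturally onto a Keras Sequential model. The sketch below is assembled from that description and is only an approximation of the paper's CNN-1: the batch normalization placement is not specified in the text and is omitted here, and the Adam optimizer is an assumption rather than a detail taken from Table 4.

```python
from tensorflow.keras import layers, models

NUM_CLASSES = 4  # DR, MH, ODC, WNL

def build_cnn(input_shape=(224, 224, 3), num_classes=NUM_CLASSES):
    """Approximate 12-layer CNN following the description in Section 3.3."""
    model = models.Sequential([
        layers.Input(shape=input_shape),
        layers.Conv2D(32, (3, 3), activation="relu"),   # Layer-1
        layers.MaxPooling2D((2, 2)),
        layers.Conv2D(64, (3, 3), activation="relu"),   # Layer-2
        layers.MaxPooling2D((2, 2)),
        layers.Conv2D(128, (3, 3), activation="relu"),  # Layer-3
        layers.MaxPooling2D((2, 2)),
        layers.Conv2D(256, (3, 3), activation="relu"),  # Layer-4
        layers.MaxPooling2D((2, 2)),
        layers.Flatten(),
        layers.Dense(128, activation="relu"),
        layers.Dropout(0.5),
        layers.Dense(num_classes, activation="softmax"),
    ])
    model.compile(optimizer="adam",                      # assumed optimizer
                  loss="categorical_crossentropy",       # CCE with one-hot labels
                  metrics=["accuracy"])
    return model

model = build_cnn()
model.summary()
```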
ReLU squashes negative activations in the feature map, enhancing accuracy and reducing training time, according to Eq 1: ReLU(x) = max(0, x). Softmax normalizes the network's output into probability scores. This enables the prediction of fundus image outcomes across four distinct classes: DR, MH, ODC, and WNL. Categorical Cross-Entropy (CCE) stands as one of the most prevalent loss functions employed in multi-class classification. It is used when the classes are mutually exclusive, meaning each input can belong to only one class. The predicted class probabilities are passed through a softmax activation, and the cross-entropy between the predicted probabilities and the ground truth labels is computed. The CCE loss is calculated as the negative log-likelihood of the true class probabilities given the predicted probabilities, as given in Eq 2: CCE = -Σi ti log(yi), where yi represents the predicted probability for class i (the output of the softmax activation function) from the model, and ti represents the one-hot encoded target label for class i; ti is 1 if the true class is i and 0 otherwise. 4 Results. We have conducted experiments to evaluate the proposed CNN model classification methodology, considering both qualitative and quantitative aspects. Our evaluation involved testing the proposed method using the data we collected. 4.1 Dataset. We compiled a dataset comprising approximately 1908 images. We organized our dataset into four distinct classes, namely DR, MH, ODC, and Normal (WNL). At the outset, the dataset includes 334 images depicting MH, 471 images depicting DR, 172 images depicting ODC, and 931 images of WNL, as shown in Table 1. We implemented data augmentation on the dataset to address the problem of overfitting. Moreover, we encountered a significant class imbalance issue where the WNL class had a substantially higher number of images compared to the other classes. This created a challenge as it could potentially introduce biases in the results. To tackle this problem, we implemented data augmentation techniques to balance the classes. We got 2367 images of MH, 2261 images of DR, 2354 images of ODC, and 2360 images of WNL, as shown in Table 2. Fig 6a shows the data distribution before augmentation and Fig 6b after augmentation. For classification, we split the dataset 70:20:10 into training, testing, and validation sets. This implies that 70% of randomly selected images were employed during the training phase, while 20% were set aside for testing and 10% was used for validation. 4.2 Experimental framework. In this study, trials were carried out on a 64-bit iteration of the Windows 10 operating system using Python. The system employed an Intel Core i5 7th Generation CPU, possessed 8 GB of RAM, and featured a storage capacity of 237 GB. 4.3 Findings for feature extraction utilizing CNNs. In this section, feature extraction results are given in both statistical and graphical form. In statistical form, accuracy, specificity, sensitivity, precision, recall, F1 score, and support are given using the formulas in Eqs 3-8. 4.3.1 Results of feature extraction using CNN-1. This section will delve into the results of feature extraction using CNN. The experiments employed the deep CNN base architecture model with training and testing data.
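The metrics referenced as Eqs 3-8 (accuracy, specificity, sensitivity/recall, precision, F1 score, and support) reduce to standard confusion-matrix definitions. The sketch below computes them per class in a one-vs-rest fashion; it reflects those standard formulas, not the paper's own evaluation code, and the toy labels are invented purely for illustration.

```python
import numpy as np
from sklearn.metrics import confusion_matrix

def per_class_metrics(y_true, y_pred, labels):
    """Standard one-vs-rest metrics from a multi-class confusion matrix."""
    cm = confusion_matrix(y_true, y_pred, labels=labels)
    results = {}
    for i, label in enumerate(labels):
        tp = cm[i, i]
        fn = cm[i, :].sum() - tp
        fp = cm[:, i].sum() - tp
        tn = cm.sum() - tp - fn - fp
        precision = tp / (tp + fp) if tp + fp else 0.0
        recall = tp / (tp + fn) if tp + fn else 0.0            # sensitivity
        specificity = tn / (tn + fp) if tn + fp else 0.0
        f1 = (2 * precision * recall / (precision + recall)
              if precision + recall else 0.0)
        accuracy = (tp + tn) / cm.sum()
        results[label] = dict(accuracy=accuracy, precision=precision,
                              recall=recall, specificity=specificity,
                              f1=f1, support=int(cm[i, :].sum()))
    return results

y_true = ["DR", "MH", "ODC", "WNL", "DR", "ODC"]   # toy ground truth
y_pred = ["DR", "MH", "WNL", "WNL", "DR", "ODC"]   # toy predictions
print(per_class_metrics(y_true, y_pred, labels=["DR", "MH", "ODC", "WNL"]))
```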
The accuracy and loss charts for the suggested CNN model without data augmentation are presented in Fig 7a and 7b, respectively. The charts show that the model starts with a training accuracy of zero and gradually improves with increasing epochs. The accuracy graph for CNN-1 without data augmentation shows overfitting in the model; to reduce this, we performed data augmentation. Table 5 presents the statistical results of feature extraction from the CNN-1 model without employing data augmentation. In the preliminary experiment without data augmentation, the proposed model demonstrated accuracies of 83.94%, 90.39%, 90.39%, and 80.07% for DR, MH, ODC, and WNL, respectively. Fig 8a illustrates the training and validation accuracy and Fig 8b presents the loss for the CNN-1 model when utilizing augmented data. Conversely, in the second trial involving data augmentation, the proposed framework achieved accuracy rates of 91.94%, 93.17%, 94.60%, and 92.43% for DR, MH, ODC, and WNL, respectively, as shown in Table 6. The experimental results indicate that the proposed architecture, when coupled with data augmentation, has achieved the highest accuracy. The confusion matrix for the CNN model is illustrated in Fig 9a in the absence of data augmentation, and Fig 9b depicting results for
Detection and Monitoring/Prediction
Healthcare Practitioners and Support
null
null
null
null
null
null
news
Mahmudul Hasan, Tanver Ahmed, Md. Rashedul Islam, Md. Palash Uddin
Leveraging textual information for social media news categorization and sentiment analysis
The rise of social media has changed how people view connections. Machine Learning (ML)-based sentiment analysis and news categorization help understand emotions and access news. However, most studies focus on complex models requiring heavy resources and slowing inference times, making deployment difficult in resource-limited environments. In this paper, we process both structured and unstructured data, determining the polarity of text using the TextBlob scheme to determine the sentiment of news headlines. We propose a Stochastic Gradient Descent (SGD)-based Ridge classifier (RC) for blending SGDR with an advanced string processing technique to effectively classify news articles. Additionally, we explore existing supervised and unsupervised ML algorithms to gauge the effectiveness of our SGDR classifier. The scalability and generalization capability of SGD and L2 regularization techniques in RCs to handle overfitting and balance bias and variance provide the proposed SGDR with better classification capability. Experimental results highlight that our string processing pipeline significantly boosts the performance of all ML models. Notably, our ensemble SGDR classifier surpasses all state-of-the-art ML algorithms, achieving an impressive 98.12% accuracy. McNemar’s significance tests reveal that our SGDR classifier achieves a 1% significance level improvement over K-Nearest Neighbor, Decision Tree, and AdaBoost and a 5% significance level improvement over other algorithms. These findings underscore the superior proficiency of linear models in news categorization compared to tree-based and nonlinear counterparts. This study contributes valuable insights into the efficacy of the proposed methodology, elucidating its potential for news categorization and sentiment analysis.
https://journals.plos.org/plosone/article?id=10.1371/journal.pone.0307027
https://journals.plos.org/plosone/article/figure/image?id=10.1371/journal.pone.0307027.g011&size=inline
2024-07-15T14:00:00Z
AbstractThe rise of social media has changed how people view connections. Machine Learning (ML)-based sentiment analysis and news categorization help understand emotions and access news. However, most studies focus on complex models requiring heavy resources and slowing inference times, making deployment difficult in resource-limited environments. In this paper, we process both structured and unstructured data, determining the polarity of text using the TextBlob scheme to determine the sentiment of news headlines. We propose a Stochastic Gradient Descent (SGD)-based Ridge classifier (RC) for blending SGDR with an advanced string processing technique to effectively classify news articles. Additionally, we explore existing supervised and unsupervised ML algorithms to gauge the effectiveness of our SGDR classifier. The scalability and generalization capability of SGD and L2 regularization techniques in RCs to handle overfitting and balance bias and variance provide the proposed SGDR with better classification capability. Experimental results highlight that our string processing pipeline significantly boosts the performance of all ML models. Notably, our ensemble SGDR classifier surpasses all state-of-the-art ML algorithms, achieving an impressive 98.12% accuracy. McNemars significance tests reveal that our SGDR classifier achieves a 1% significance level improvement over K-Nearest Neighbor, Decision Tree, and AdaBoost and a 5% significance level improvement over other algorithms. These findings underscore the superior proficiency of linear models in news categorization compared to tree-based and nonlinear counterparts. This study contributes valuable insights into the efficacy of the proposed methodology, elucidating its potential for news categorization and sentiment analysis.Citation: Hasan M, Ahmed T, Islam MR, Uddin MP (2024) Leveraging textual information for social media news categorization and sentiment analysis. PLoS ONE 19(7): e0307027.https://doi.org/10.1371/journal.pone.0307027Editor: Junaid Rashid, Sejong University, KOREA, REPUBLIC OFReceived: December 23, 2023; Accepted: June 27, 2024; Published: July 15, 2024Copyright: © 2024 Hasan et al. This is an open access article distributed under the terms of the Creative Commons Attribution License, which permits unrestricted use, distribution, and reproduction in any medium, provided the original author and source are credited.Data Availability: The data supporting the findings of this study are publicly available at https://github.com/Mahmudul-Hasan-Moon/News_Categorization.Funding: The author(s) received no specific funding for this work.Competing interests: Authors have no conflict of interest to declare.IntroductionThere has been significant and rapid advancement in information technology, leading to the emergence of social media as a dominant phenomenon. Social media provides an online platform for user-to-user interactions, including messaging, photo sharing, blog commenting, and status updates. Individuals extensively use prominent social networking sites such as Facebook, Instagram, and Twitter to express their opinions, making social media a valuable data source for sentiment analysis and text mining [1]. News dissemination through various social media platforms profoundly impacts our news consumption habits [2]. News organizations facilitate audience engagement by incorporating Facebook share and Twitter retweet buttons on their web pages, encouraging the natural human behavior of news sharing [3]. 
As digital technology has expedited sharing processes, social medias widespread usage underscores the need to understand social media activities in society and the nation. The use of social media has surged in recent years, with Facebook having a global user base of 2.79 billion [4]. The platforms widespread adoption allows users to freely express their viewpoints on diverse topics, generating vast amounts of valuable data [5].News categorization and sentiment analysis from social media are motivated by several factors. Firstly, the vast amount of data generated on social media platforms necessitates efficient methods to sift through and categorize news articles and user-generated content. This is crucial for identifying relevant information amidst the noise, facilitating timely access to news updates, and enhancing information dissemination. Secondly, understanding public sentiment toward news events, products, or brands provides valuable insights for businesses, policymakers, and researchers. Sentiment analysis aids in gauging public opinion, detecting emerging trends, and guiding decision-making processes. Moreover, social media news categorization and sentiment analysis enhance the user experience by personalizing content delivery and recommending relevant news articles or products based on user preferences and sentiments. These tasks play a vital role in information management, communication, and decision-making in the digital age.In recent years, there has been a notable trend in news categorization and sentiment analysis towards utilizing Machine Learning (ML) and Deep Learning (DL) techniques. ML algorithms have traditionally been employed for these tasks due to their effective handling of structured data. However, DL models have become powerful tools for extracting intricate patterns and representations from unstructured textual data. The utilization of both ML and DL in news categorization and sentiment analysis has shown good results, showcasing enhanced accuracy and scalability, thus driving further research and application in this domain. Due to the simplicity, interpretability, and efficiency in handling smaller datasets, ML is often preferred over DL in such real applications. ML models require less computational resources, making them more practical for real-world applications. Additionally, ML algorithms can offer comparable performance to DL models in certain scenarios with less complexity.In this study, we focus on sentiment analysis of news texts shared on social media, assuming that individuals primarily share news related to their beliefs and interests. The proposed system aims to classify the news shared on social media to perform sentiment analysis. By categorizing news, we gain insights into the emotions of those who share it. Categorizing news into different topics, such as educational, political, entertainment, and technological news, has piqued users interest. However, the process of news classification can be labor-intensive. It involves extracting key features from news titles and using them to automatically classify news articles into predefined categories based on training data [6]. Sentiment analysis also has emerged as a popular field with numerous applications. Given these factors, there is a growing need to categorize news stories based on their tone and conduct mood analysis. 
For news categorization and sentiment analysis, in this paper we formulate some research questions as (i) Do sentiment analysis models identify the positive, negative, and neutral sentiments often seen on social media?, (ii) How can news articles and social media data be effectively categorized using ML algorithms?, and (iii) How do various preprocessing techniques, such as tokenization, stemming, and stop-word removal, affect the performance of ensemble models in news categorization tasks?To get the answers to the questions, we propose an ML-based news categorization system employing ML algorithms, and we design a blending ensemble classifier and propose a string preprocessing pipeline to process the unstructured data. The proposed methodology can classify both labeled and unlabeled data that have valuable insights into the realm of text categorization. Employing K-fold cross-validation, the suggested approach enables businesses to collect user feedback, which, when combined with sentiment analysis, can provide valuable insights to understand customer interests and tailor offerings accordingly. Organizations can benefit from focusing on the most widely read news through effective news categorization. This study addresses the significant influence of social media in the context of sentiment analysis and news categorization. The key contributions of this study are summarized as follows.To perform sentiment analysis on social media text data, we process both structured and unstructured data to determine the polarity of individual sentences.We propose a string preprocessing technique that includes lowercase conversion, punctuation and stop-word removal, stemming, and lemmatization to make the dataset more ML-centric to get better classification performances.For classifying unlabelled news, we employ unsupervised ML algorithms capable of creating accurate labels for the unlabelled news, show a comparative analysis of supervised and unsupervised ML algorithms for news categorization, and find the best groups of algorithms that are more robust to news categorization.For enhanced news categorization, we develop an ensemble classifier blending Stochastic Gradient Descent (SGD) and Ridge classifier, termed SGDR, that utilizes proposed string processing techniques and outperforms benchmark ML classifiers.To verify the superiority of the proposed SGDR classifier, we employ the McNemar significant test to find the level of significance of the proposed method.The remaining part of this paper is organized as follows: The materials and methods section provides a detailed explanation of the proposed system, working procedures, and its structural components. The result and analysis section discusses the proposed system analysiss environmental configuration and experimental results. Finally, the conclusion section presents the concluding remarks and suggestions for future work to expand and further develop the proposed approach.Related worksConsequently, numerous ML-based models have been developed to recommend social media trends, often called sentiment analysis [7]. An insightful study of Neural Networks (NN), Support Vector Machine (SVM), Decision Tree (DT), Random Forest (RF), and Naive Bayes (NB) is done for the classification of news based on their accuracy. The experimental results have led to the conclusion that NB gives better results with an accuracy of 96.8% while the DT, NN, SVM, and RF provide accuracy of 83.2%,96.4%, 96.4%, and 94.1% respectively [6]. 
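To make the listed contributions concrete, here is a compact sketch of the overall flow: TextBlob polarity for coarse sentiment labels, a simple lowercase/punctuation/stop-word/stemming pipeline, TF-IDF features, and a blend of an SGD classifier with a Ridge classifier. The paper's blending scheme is approximated here with scikit-learn's StackingClassifier, and the toy headlines, category labels, and helper names are invented for illustration; TextBlob may require its corpora to be downloaded on first use.

```python
import re
import string
from nltk.stem import PorterStemmer
from sklearn.ensemble import StackingClassifier
from sklearn.feature_extraction.text import TfidfVectorizer, ENGLISH_STOP_WORDS
from sklearn.linear_model import LogisticRegression, RidgeClassifier, SGDClassifier
from sklearn.pipeline import make_pipeline
from textblob import TextBlob

stemmer = PorterStemmer()

def preprocess(text):
    """Lowercase, strip punctuation, drop stop words, and stem each token."""
    text = text.lower().translate(str.maketrans("", "", string.punctuation))
    return " ".join(stemmer.stem(t) for t in re.findall(r"[a-z]+", text)
                    if t not in ENGLISH_STOP_WORDS)

def headline_sentiment(text):
    """Map TextBlob polarity to a coarse positive/neutral/negative label."""
    p = TextBlob(text).sentiment.polarity
    return "positive" if p > 0 else "negative" if p < 0 else "neutral"

# Toy corpus (two headlines per category); real experiments use labeled news data.
headlines = ["Government passes new education funding bill",
             "Senators clash over the proposed budget deal",
             "Star midfielder injured ahead of the cup final",
             "Home team clinches the championship in overtime",
             "Tech giant unveils faster, cheaper chips",
             "New smartphone camera impresses early reviewers",
             "Markets tumble after disappointing earnings",
             "Retailer reports record quarterly profits"]
labels = ["politics", "politics", "sports", "sports",
          "technology", "technology", "business", "business"]

# Blend an SGD classifier with a Ridge classifier via stacking.
ensemble = StackingClassifier(
    estimators=[("sgd", SGDClassifier(loss="hinge", random_state=0)),
                ("ridge", RidgeClassifier())],
    final_estimator=LogisticRegression(max_iter=1000),
    cv=2)  # tiny cv only because the toy corpus is tiny

clf = make_pipeline(TfidfVectorizer(preprocessor=preprocess), ensemble)
clf.fit(headlines, labels)
print(clf.predict(["New stadium announced for the championship game"]))
print([headline_sentiment(h) for h in headlines])
```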
An effective sentiment analysis on 36,500 comments on Facebook, 36,500 tweets, and 36,500 online votes related to iPhone shows accuracy waves from 79% to 87% for Artificial Neural Networks without using extra memory space for storing the intermediary data [8]. A hybrid technique combining RF and NB for three Twitter datasets (100, 250, 500 tweets) related to keywords- Amazon, Hachette along with six types of pre-processing techniques provides an accuracy of 95.62%, 92.39%, and 94.19% respectively [9]. The Bayesian version of the Multinomial NB classifier provides better and similar performance than the Multinomial NB classifier for text classification [10]. The SVM classifier provides good performance of 84.87% accuracy on mobile phone brands reviews on Amazon, categorizing the sentiments into joy, surprise, trust, disgust, sadness, fear, anger, and anticipation [11]. An empiric analysis of 200 research papers with an explanation of Deep Belief Networks, GRU, LSTM, RecNNs, RNNS, and CNNs, and their working mechanisms is presented in [12] concluding that LSTM performs better than the other DL-based techniques. An analysis of three ML-based algorithms, KNN, RF, and Logistic Regression (LR) for the BBC news text dataset proves that LR, along with the TF-IDF vector, is the best among the mentioned algorithms by obtaining 97% [13]. Time consumption has been reduced for big data analysis using canopy as a prepossessing technique for the k-means Clustering algorithm. Sabah et. al., [14] proposed and evaluated this technique on the Dental healthcare insurance news dataset based on the Hadoop Distributed File System (HDFS). The method can shrink the execution time from 68.657 to 60.75 seconds. Based on TextRank, a semantic clustering news keyword extraction technique named Semantic Clustering TextRank (SCTR) is evaluated in a Chinese news library. The experimental results give a maximum precision gain of 71%, FM 75%, and recall of 92%. A clustering method for categorizing news that combines Grid search based on Canopy with K-means clustering (KMGC-Search) provides scores 96%, 92%, 94%, 94%, and 95% for ARI, HS, CS, VM, and FM scores, respectively [15]. Sentiment analysis involves determining emotions and feelings through the analysis of text-based information. By scrutinizing peoples comments, likes, and shares on social media, sentiment analysis helps infer their overall sentiment towards a particular event or topic. It provides valuable insights into peoples opinions across various domains, including medical, social, and political realms [2, 16].An insightful investigation has been done in using nine different classifiers BART-large, ELectra-small, XLM-RoB-ERTa-base, ALBERT-base-v2, DistilBERT, RoBERTa-large, RoBERTa-base, BERT-large, and BERT-base for three different datasets- COVID-19 Fake News Data, extremist-non-extremist dataset. For each of the three datasets, Qasim et al. [17] individually displayed and compared the classification performance of the nine methods. The investigation concludes best-performed models- RoBERTa-base for the COVID-19 Fake News Dataset, Bart-large for the COVID-19 English Tweets Dataset, and both BERT-base and BERT-large for the Extremist-Non-Extremist Dataset. Khosa et al. [18] proposed a hybrid model consisting of RF and SoftMax regression for news categorization results in 98.1% of accuracy for the BBC news dataset and 100% for the business news category. To identify false reports on COVID-19, the research project by Wani et al. 
[19] explores 8 ML techniques (SVM, DT, LR, RF, KNN, AdaBoost, NB, and Neural Networks) and 5 DL classifiers (GRU, CNN, LSTM, BiLSTM, and RNN), together with data preprocessing techniques. Among the ML classifiers, KNN, multilayer perceptron, and RF, jointly with the DL techniques CNN and BiLSTM, are the most efficient for detecting false news, attaining an accuracy of 97%. A thorough investigation of text classification using ML, with a focus on news article classification, was conducted by Daud et al. [20] on the Reuters news dataset. A hyperparameter-optimized SVM was suggested in this study to classify news items into the appropriate categories, improving accuracy by 20.814%. The optimized hyperparameters were determined based on the outcomes of SVM for different combinations of the parameters. The classification results are compared with the optimization results of other ML techniques like NB, KNN, LR, RF, and SGD. Rigorous research was conducted to identify fake reviews in [21]. A dataset of 68,000 review comments from the Google Play store and 512 end-user responses from the University of Science and Technology Bannu, Pakistan, was considered for the investigation. The investigated ML and DL classifiers provide a promising performance of 96% average accuracy, whereas the end-users' accuracy was 44%. Khan et al. [22] provide a unique method that uses a news sequential evolution model (NSEM) based on distributed representations to identify and analyze hot-topic trends in streaming text data; it applies DL techniques such as word2vec models and Long Short-Term Memory (LSTM) to identify and visualize patterns in text data streams more accurately than current approaches. Shafqat et al. [23] explored different ML algorithms like RF, DT, Kstar, Bayes Net, and NB for opinion mining of politics and inflation using the Roman Urdu dataset from Kaggle. The observation concludes that RT is the best classifier among those investigated, and in terms of execution time, NB exhibits superior performance. A novel Genetic Algorithm (GA)-based approach for detecting false news on social networks is introduced in [24], alongside an investigation of different ML algorithms: LR, RF, NB, and SVM. Experiments were conducted on three datasets: the Kaggle fake news dataset, the LIAR dataset, and the FJP dataset. The suggested GA-based method outperforms the ML techniques by a small margin. Marwat et al. [25] suggested a SentiDeceptive technique that automatically divides end-user reviews into negative, positive, and neutral feelings to retrieve deceptive end-user rating information. It also recognizes online product evaluations based on crowd-user comments on social media. Their exploration included the following steps: preprocessing the dataset, resampling the dataset, extracting features using TF-IDF and BOW from textual data, and implementing K-Fold and Shuffle Split validation. The results indicate that the best classifier for the suggested method is the Linear SVC classifier using a mix of BOW and SMOTE. In the realm of data classification, [26] proposed a novel variant of the Bat Algorithm (BA) called the Improved Bat Algorithm (IBA). This approach refines the standard BA by enhancing its exploitation capabilities and mitigating the risk of being trapped in local minima. Another study [27] employed quasi-random sequences for population initialization rather than a random distribution to further improve convergence and population diversity.
This study reviews various initialization methods used in Particle Swarm Optimization (PSO) that are based on quasi-random sequences (e.g., Halton, Torus) and pseudo-random sequence strategies (e.g., LCG, MCG). Genetic algorithm-based techniques have recently been adopted in Femtocell communication systems due to their inherent advantages. In a research study, the authors [28] presented a genetic algorithm-based technique designed to optimize coverage, power, and bit error rate, thereby enhancing Femtocell performance. Simulation results indicate that the genetic algorithm-based optimization technique effectively achieves superior performance in a Femtocell environment.Existing methods have shortcomings compared to traditional ML models for news classification. DL requires vast amounts of labeled data and computational resources, making it resource-intensive and often impractical for small datasets. Additionally, those models lack interpretability, making it difficult to understand their decision-making process. Furthermore, existing methods are susceptible to overfitting, especially in scenarios with limited data or noisy datasets. Therefore, traditional ML models may offer more practicality, transparency, and generalizability for news classification tasks in certain contexts.To show the performance of different research and compare it with our proposed method, we create Table 1 that shows the related dataset, methods and obtained individual research results. Our result is also included here to show the superiority of our proposed methodology.Materials and methodsSentiment analysis and news classification represent two prominent domains where ML algorithms are frequently employed. However, the classification accuracy achieved by ML-based systems exhibits substantial variability. While one system might yield superior accuracy, another could demonstrate inferior results. To address this issue, an approach has been introduced to streamline and simplify the processes of sentiment analysis and news classification. A succinct overview of this proposed approach is presented in Fig 1. The system has been fueled by data derived from the BBC-text dataset [29], encompassing 2225 news articles spanning five categories: sport, business, politics, entertainment, and tech. The Facebook API was leveraged to amass URLs for newspapers and corresponding news titles. This method, grounded in the BBC-text dataset, was subsequently applied to bdnews24 for sentiment analysis. The ensuing step involved text mining, where TF-IDF was employed to filter out extraneous words and demarcate semistructured from unstructured text. The text mining process facilitated news components classification and comprehensive mood analysis. For news classification, a suite of ML algorithms including SVM, K-Nearest Neighbours (KNN), DT, Adaptive Boosting (AB), Multinomial Naïve Bayes (MNB), LR, SGD, Ridge Classifier (RC), Non-Negative Matrix Factorization (NMF), and K Means Clustering (KMC) was harnessed. The performance of our proposed ensemble ML model exceeded that of prevailing ML models. In essence, this proposed methodology has ushered in a novel news article category and emerged as a source of news sentiment analysis.Dataset descriptionIn this study, the BBC-text dataset [29] served as the foundation for news categorization. Comprising a collection of 2225 news articles, the dataset encompasses five distinct news categories: sport, business, politics, tech, and entertainment, as depicted in Fig 2. 
Within this framework, sports-related news articles have been allocated to the sport category, while news articles focusing on various organizations or services have been designated as belonging to the business category. Similarly, news articles centered around political matters have been assigned to the politics category, whereas those featuring themes of joy, cheer, gladness, pleasure, and entertainment find their place within the entertainment category. News about science and technology have been fittingly grouped under the tech category. Notably, an examination of the article distribution reveals a balanced dataset, indicating a lack of pronounced imbalances.Technical descriptionsText mining.Text analytics and text mining are closely related. Text mining is the practice of extracting valuable information from text [30]. Text mining automatically extracts information from different documents that enclose structured textual content facts from the unstructured textual content. Term Frequency-Inverse Document Frequency (TF-IDF) has been used for text mining.An information retrieval statistical measure method called TF-IDF assesses how relevant a term is to a document within a group of documents [31]. It is widely used in automated natural language processing and automated text analysis. To calculate the TF-IDF of any document, the first step is to tokenize the sentence to get the frequency of each word and the total quantity of words in the sentence as a whole [32]. The Term Frequency (TF) of a word is the ratio of the frequency of the word and the total quantity of words in the sentence.(1)On the other hand, Inverse Document Frequency (IDF) is the ratio of the total number of sentences in a document and the number of sentences containing a particular word. So, IDF can be defined as,(2)Sentiment analysis.Sentiment analysis involves detecting and analyzing user opinions, attitudes, and emotions present in the text, categorizing them as positive, negative, or neutral [33]. Within Natural Language Processing, this task holds significant prominence [34]. Sentiment analysis is frequently applied to textual data or records to aid organizations, companies, and businesses in gauging the impact of their brand and products on consumers, ultimately fostering a deeper comprehension of customer preferences [35]. Numerous methods exist for assessing text sentiment [36]. The proposed system employs the TextBlob Python library to conduct sentiment analysis on news articles.TextBlob.TextBlob is one of the most robust Python libraries for processing textual data. It offers a user-friendly API that facilitates engagement with various common Natural Language Processing (NLP) tasks, including text classification, language translation, sentiment analysis, noun phrase extraction, part-of-speech tagging, and more [37]. Built on the foundation of the Natural Language Toolkit (NLTK), TextBlob provides an accessible interface to the capabilities of the NLTK library. Notably, TextBlob has been harnessed for sentiment analysis, enabling the computation of sentiment based on numeric values representing polarity and subjectivity. These numeric values indicate the extent to which a text is positive, negative, or neutral in sentiment. A polarity value of zero signifies a neutral sentiment, while values greater than zero denote positivity, and values less than zero indicate negativity [38]. The distinction between objectivity and subjectivity in a sentence is consequential. 
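Equations (1) and (2) above define TF and IDF as plain ratios. A small illustrative sketch computing them exactly as described (note that common library implementations, e.g. scikit-learn's TfidfVectorizer, additionally apply a logarithm and smoothing to the IDF term, so their values differ):

```python
# Sketch of TF and IDF exactly as Eqs (1) and (2) define them: plain ratios,
# no logarithm or smoothing. `doc` is a document given as tokenized sentences.
def term_frequency(word, sentence_tokens):
    return sentence_tokens.count(word) / len(sentence_tokens)

def inverse_document_frequency(word, sentences):
    containing = sum(1 for s in sentences if word in s)
    return len(sentences) / max(containing, 1)  # guard against division by zero

doc = [["stocks", "rally", "after", "earnings"], ["the", "team", "wins", "the", "title"]]
print(term_frequency("stocks", doc[0]) * inverse_document_frequency("stocks", doc))
```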
Objectivity refers to factual information, while subjectivity encompasses opinions, feelings, or judgments. Subjectivity values range from 0 to 1, gauging the mix of factual content and personal opinion in textual data. Higher subjectivity values signify a greater presence of personal opinion. In sentiment analysis, the operational procedure of the proposed system is outlined in Fig 3. Initially, the model is trained using predefined data. Incoming input is then assessed to ascertain whether it conforms to a structured format. If unstructured, TF-IDF eliminates stop words and generates patterned data. Subsequently, polarity and subjectivity are computed using TextBlob. The polarity is then examined, determining whether it is greater than zero, less than zero, or equal to zero. If polarity surpasses zero, the output reflects a positive sentiment; if it falls below zero, the output signifies negativity. Otherwise, a neutral sentiment is conveyed.News classification.Text categorization, also known as text tagging or labeling, entails organizing sequential text into distinct categories. Employing NLP, a text classifier can automatically assess text, assign predefined tags, or categorize it based on its content [39, 40]. Text classification serves various purposes, including sentiment analysis, language identification, product or content tagging, spam detection, and more [41].ML and Ensemble ML algorithms have been employed in the proposed system for news classification. The procedure for news classification is illustrated in Fig 4. The model is initially trained using pre-existing data. The input dataset is assessed to determine whether the text is structured. TF-IDF transforms the data into a structured pattern if the text lacks structure. Following the analysis of patterned data, the proposed system individually employs algorithms to classify news articles. The classifiers performance is evaluated through a classification report encompassing accuracy, precision, recall, and F1 score, providing a comprehensive assessment of the classifiers effectiveness.Algorithm 1 String ProcessingInput: String (S)Output: Process String1: procedureString Processing(S)2: Convert s to lowercase(S)3: Remove punctuations (s)4: Remove stop-words (s)5: Stemming (s)6: Lemmatisation (s)7: Process String s8: end procedureString processing.We preprocess the string to enhance its suitability for training ML models. Following the classifiers outcome, we subject the dataset to the string processing technique again. Subsequently, we reapply the classifier to the processed dataset, yielding heightened accuracy across all classifiers. The string processing procedure is outlined in Algorithm 1.Description of the ML classifiersIn the current research, we employ supervised and unsupervised ML algorithms over DL and Transfer Learning (TL) for several reasons. DL is also getting popular in different domains [42]. ML is chosen due to its adeptness at generalizing with limited labeled data, rendering it ideal for tasks with smaller datasets like BBC news categorization that we use in this research. Moreover, ML proves more pragmatic in resource-constrained environments than DL, which demands substantial computational resources. ML is sufficient for straightforward classification tasks characterized by well-defined features, whereas DL excels in discerning complex hierarchical patterns or processing unstructured data like images, audio, or text. 
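Algorithm 1 above maps directly onto standard NLTK utilities. A minimal sketch is given below, under the assumption that NLTK's English stop-word list, Porter stemmer, and WordNet lemmatizer stand in for the paper's unspecified implementations:

```python
# Sketch of the string-processing pipeline in Algorithm 1 using NLTK.
# Assumes the NLTK resources 'punkt', 'stopwords', and 'wordnet' are downloaded.
import string
from nltk.corpus import stopwords
from nltk.tokenize import word_tokenize
from nltk.stem import PorterStemmer, WordNetLemmatizer

_stop = set(stopwords.words("english"))
_stemmer = PorterStemmer()
_lemmatizer = WordNetLemmatizer()

def process_string(s: str) -> str:
    s = s.lower()                                                # step 2: lowercase
    s = s.translate(str.maketrans("", "", string.punctuation))   # step 3: punctuation
    tokens = [t for t in word_tokenize(s) if t not in _stop]     # step 4: stop-words
    tokens = [_stemmer.stem(t) for t in tokens]                  # step 5: stemming
    tokens = [_lemmatizer.lemmatize(t) for t in tokens]          # step 6: lemmatisation
    return " ".join(tokens)

print(process_string("The markets rallied sharply, surprising analysts!"))
```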
Given the absence of pre-trained models and the significant computational resources and expertise required for TL fine-tuning, ML emerges as the preferred choice. The study's focus on tabular data classification benefits from ML's efficacy in achieving satisfactory performance with simpler models, making it the pragmatic choice for the research endeavor. Details of the ML algorithms are discussed below. This research uses the supervised ML algorithms SVC, KNN, DT, AB, MNB, SGD, RC, and LR, and the unsupervised ML algorithms NMF and KMC, for news categorization. During classification, SVC creates a hyperplane to separate the classes [43, 44]. Beforehand, it converts each training sample into a feature vector representing the different classes and minimizes an optimization function associated with the hyperplane's parameters while maximizing the separation between them. It reduces errors by penalizing misclassified features without overly compromising classification accuracy. However, the effectiveness of SVC may decrease in the presence of highly noisy data [45]. As with SVC, the KNN algorithm works based on the distance between data points, especially in clustering and classifying data according to similarity [46]. KNN groups similar data points by comparing their characteristics using class labels and feature vectors [13, 47]. In text categorization, texts are represented as spatial vectors, and the algorithm calculates similarity scores between the training texts and a new text sample. Based on these similarities, the K most similar neighbors are selected to determine the class of the new sample. The process involves transforming incoming and training texts into feature vectors, comparing these vectors to assess similarity, and then choosing the K nearest neighbors to classify the incoming text. As with the distance-based SVC and KNN, the tree-based DT relies on a set of if-then-else decision rules to learn patterns from data and approximate even complex functions such as sine curves [48]. DT constructs a model represented as a tree structure, using decision nodes and leaf nodes [49]. Decision nodes are employed to make pivotal decisions, which are then branched into various routes. Conversely, leaf nodes correspond to the final decision outputs [50]. As a boosting architecture, the AB classifier applies multiple classifiers to the same training dataset, improving its performance iteratively [51]. This approach combines these classifiers into a powerful final model, assigning weights to each. From the perspective of probability-based classification, the MNB classifier determines document labels by first calculating each class's prior probability, influenced by the class distribution in the training set. It then refines these probabilities based on the document's words [52, 53]. Classification is achieved using the Maximum a posteriori (MAP) rule, choosing the class with the highest combined probability. Parameter estimation during training uses a smoothing technique to counteract zero probabilities [10, 54], adjusting word and class occurrence counts to ensure reliable estimation under sparse data conditions. Similar to the probability-based MNB algorithm, LR computes the probability of an outcome variable to establish a correlation between a dependent (target) variable and one or more independent variables [55, 56]. The output of the target variable is represented through binary values, specifically 0 and 1. LR employs a sigmoid function for outcome prediction, yielding a value from 0 to 1.
When the sigmoid function's output is 0.5 or greater, it is interpreted as 1; conversely, an output value less than 0.5 is interpreted as 0. From the optimization and regularization perspective, SGD minimizes an objective function, such as a loss function that calculates the discrepancy between the expected and actual values [57]. It is a variation of the gradient descent algorithm. The main characteristic of SGD is that, rather than utilizing the complete dataset, it updates the model's parameters at each iteration by calculating the gradient of the loss function from either a single sample or a small batch of samples. On the other hand, the Ridge Classifier includes a penalty term in the cost function to prevent overfitting [58]. It is an extension of the linear SVM classifier that introduces a regularization component called Ridge Regression or L2 regularization. In this study, we formulate it to tackle multiclass classification tasks using techniques like one-vs-one and one-vs-rest strategies. Additionally, we use two unsupervised techniques, namely NMF and KMC. NMF is designed to decompose high-dimensional vectors into lower dimensions while retaining non-negativity in both the lower-dimensional vectors and their coefficients. Unlike traditional matrix factorization methods, NMF factors a non-negative matrix into two non-negative matrices whose product approximates the original matrix.
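The exact blending scheme behind the proposed SGDR classifier is not spelled out in this excerpt; one plausible realization, sketched below with scikit-learn, combines an SGD classifier and a Ridge classifier over TF-IDF features by hard voting (RidgeClassifier exposes no predict_proba, so soft voting is not directly available). Treat this as an illustrative stand-in rather than the authors' implementation:

```python
# Illustrative stand-in for the SGDR blend (not the authors' implementation):
# TF-IDF features feeding a hard-voting ensemble of SGDClassifier and RidgeClassifier.
from sklearn.pipeline import make_pipeline
from sklearn.feature_extraction.text import TfidfVectorizer
from sklearn.linear_model import SGDClassifier, RidgeClassifier
from sklearn.ensemble import VotingClassifier
from sklearn.model_selection import cross_val_score

sgdr = make_pipeline(
    TfidfVectorizer(stop_words="english"),
    VotingClassifier(
        estimators=[
            ("sgd", SGDClassifier(loss="hinge", random_state=0)),
            ("ridge", RidgeClassifier()),
        ],
        voting="hard",  # RidgeClassifier has no predict_proba, so soft voting is out
    ),
)

# texts and labels are placeholders for the BBC-text articles and their categories:
# scores = cross_val_score(sgdr, texts, labels, cv=5)
```

With only two voters, ties fall back to label order, which is one reason a learned meta-classifier trained on held-out predictions (stacking or blending proper) is often preferred in practice.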
Decision Making/Detection and Monitoring/Content Synthesis
Unknown
null
null
null
null
null
null
news
Shuhui Luo, Peilan Liu, Xulun Ye
Bayesian graph convolutional network with partial observations
As a widely studied model in the machine learning and data processing society, graph convolutional network reveals its advantage in non-grid data processing. However, existing graph convolutional networks generally assume that the node features can be fully observed. This may violate the fact that many real applications come with only the pairwise relationships and the corresponding node features are unavailable. In this paper, a novel graph convolutional network model based on Bayesian framework is proposed to handle the graph node classification task without relying on node features. First, we equip the graph node with the pseudo-features generated from the stochastic process. Then, a hidden space structure preservation term is proposed and embedded into the generation process to maintain the independent and identically distributed property between the training and testing dataset. Although the model inference is challenging, we derive an efficient training and predication algorithm using variational inference. Experiments on different datasets demonstrate the proposed graph convolutional networks can significantly outperform traditional methods, achieving an average performance improvement of 9%.
https://journals.plos.org/plosone/article?id=10.1371/journal.pone.0307146
https://journals.plos.org/plosone/article/figure/image?id=10.1371/journal.pone.0307146.g011&size=inline
2024-07-18T14:00:00Z
In this section, we introduce our method, called Bayesian Graph Convolutional Network (BGCN), for graph node classification without features. Subsequently, we derive the corresponding training and prediction algorithm based on variational inference. Main notations and descriptions are summarized in Table 1. 4.1 Bayesian graph convolutional network. Given the graph (A, F, Y), where A, F, and Y denote the pairwise relationships, the node features, and the training labels, respectively. Here, M denotes the number of training samples in the graph (A, F) (M < N), and K is the number of classes. We consider the problem in which F is not available. To handle this problem, our idea is to equip the input with pseudo features. One straightforward pseudo feature is a constant value. However, this leads to the problem that the input is unable to distinguish between different samples. Another idea is that the pseudo features are generated from random distributions. Although random pseudo features can identify the difference, features from the training set and the testing set then come from different distributions (the non-independent and non-identically distributed issue, Fig 2). To tackle this problem in our model, we use the graph to constrain the pseudo-feature generation process, which requires that the pseudo features be generated consistently with the given graph. Note that pseudo features can be used to handle nodes without features; when the node features are available, we concatenate them with the generated pseudo features. Our BGCN generation process is: (1) For the pseudo features xi, xj, draw xi, xj ∼ N(x | 0, I). (2) To maintain the structure of xi, xj with the graph relationship A, draw li,j ∼ N(l | Ai,j|xi − xj|², ·). (3) For the labels of the pseudo features yi, yj, draw yi, yj ∼ N(y | GCN(x, W), ·). Here N(x | 0, I) is the Gaussian distribution with the constant parameters 0 and I, Ai,j is the corresponding element of A, and GCN(x, W) denotes the Graph Convolutional Network with parameters W. Note that, when maintaining the structure in the hidden space, we set li,j = 0, which means that the generated pseudo features xi and xj are forced to be consistent with the graph structure. The Probabilistic Graphical Model (PGM) is shown in Fig 3, where the pseudo features are generated from the Gaussian distribution and constrained by the graph, and the labels are generated from the pseudo features. Our model alters the discriminative GCN model into a generative model. Fig 2. Feature generation problem. The figure illustrates the pseudo features generated by two different distributions: a random distribution and a graph-constrained distribution. Pseudo features generated by the random distribution fail to preserve class relations (same class in different spaces). However, pseudo features generated with graph constraints successfully maintain the class relationships during the generation process (same class in similar spaces). https://doi.org/10.1371/journal.pone.0307146.g002 Fig 3. Dependency between random parameters in our model. Probabilistic graphical model of BGCN. Specifically, consider the graph associated with the model, denoted as G. In this graph, blue nodes represent observations, while gray nodes correspond to partial observation labels. Notably, from the figure, we observe that the observed label Y is generated from the pseudo feature xi. https://doi.org/10.1371/journal.pone.0307146.g003 4.2 Variational inference. In the previous section, we constructed the corresponding Bayesian graph convolutional network model.
In this section, we derive the corresponding learning and predication algorithm. Following the variational inference framework [63], we derive the Evidence Lower BOund (ELBO):(2)where p(Y, A, |0, I, w) denotes the joint distribution of observations Y and A, p(X, Y, A) is the joint distribution of the hidden variables. q(X) represents the variational posterior distribution of the pseudo feature distribution. Note that, since label Y is generated from the pseudo feature with the GCN network, q(X) cannot be derived following the standard variational framework. In our model, we adopt a strategy used in the Variational Auto-Encoder (VAE) [55] network, in which the hidden variable X can be form as another neural network with parameter , .We now extend the ELBO, and derive the following loss function:(3)where (xi) and u(xi) are the output of the , in which the first half of forms as the mean and the last half forms as the covariance. Note that, integrating over the neural network has no analytical solution. Thus, we employ a sampling method to calculate this term.(4)where is sampled from the distribution ; for the parameter , we set it as 2. Optimizing Eq 4 can be done using the standard back-propagation algorithm if the feature is available. However, this violates our assumption and also leads to a trivial solution. A simpler method is that we take the derivative with respect to the parameter (xn) and u(xn) directly. But taking derivative w.r.t. a neural network is challenging, and optimizing through a neural network is inefficient. Below, we show, by using some simple constraints and auxiliary variables Aui = u(xi), Asi = (xi), our method can achieve the efficient solution. From the ELBO, we know that loss function to optimize u(xi)i is:(5)We set some constraints to the variable u(ti), that is AuAuT = I:Rearranging the above equations, we have:(6)Where U is defined same as Au. To solve this problem, we form the Lagrangian function, set and set d as the Lagrangian multipliers of the first constraint and relax AuAuT = UUT, Then by taking the derivative w.r.t. Aud:Where L is the graph Laplacian of A. Aud is dth row of Au. u is the Lagrangian multipliers of the second constraint. The equation above can be solved by employing eigenvalue decomposition. For Usn, we have:(7)After achieving the initialized u(xi)i, we exploit the standard back propagation algorithm to further optimize . We summarize the BGCN training and predicating algorithm in algorithm 1. Flowchart of the proposed method is summarized in Fig 4. The Full optimization procedure is summarized in Fig 5.Fig 4. Flowchat of the proposed algorithm.Figure (A) demonstrates the flowchart of the proposed method without node features. Figure (B) is the flowchart of the proposed method with node features.https://doi.org/10.1371/journal.pone.0307146.g004Fig 5. BGCN optimization framework.In figure (a), we take the node feature as the input and use a neural network to infer the posterior distribution. However, since we sample from a neural network variational posterior distribution with input F, the entire algorithm cannot be optimized. In figure (b), instead of optimizing the network with feature F, we apply eigenvalue decomposition and an updating rule to achieve the mean and covariance of the output using only the graph A. 
When applying the full features F, we concatenate the variational posterior parameter with the given features. https://doi.org/10.1371/journal.pone.0307146.g005 Algorithm 1 Training and prediction algorithm for BGCN with fully observed features. Require: Labels for the training dataset, a given graph A, and the corresponding features F. Ensure: Labels for the prediction dataset. Training procedure. 1: Compute the parameters of q(X) using Eqs (6) and (7). 2: Sample the parameters from the variational posterior on the training dataset. 3: Normalize the parameter u(xi). 4: If the observed feature F is available, concatenate the normalized parameter u(xi) with the observed feature F; else, use the parameter u(xi) as the feature. 5: Use the loss of Eq (4) to train the GCN model. Prediction procedure. 1: Sample from the variational posterior on the given prediction dataset. 2: Normalize the parameter u(xi). 3: If the observed feature F is available, concatenate the normalized parameter u(xi) with the observed feature F; else, use the parameter u(xi) as the feature. 4: Use Eq (4) to obtain the GCN output. Note that, when our model is applied to a graph dataset with node features, the derivation shows that concatenating the original features with the pseudo features is equivalent to concatenating them with the posterior parameter u(xi). Regarding the computational cost, the main cost comes from two branches: (1) the eigenvalue decomposition, which adds O(N³), where N stands for the number of graph nodes; and (2) the GCN model. Suppose that, in the GCN model (with L layers and K iterations), each node has ml-dimensional features. Then, the computational cost is .
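The pseudo-feature initialization described in Algorithm 1 (an eigenvalue decomposition driven by the graph Laplacian, followed by normalization of u(xi)) can be sketched in NumPy as below. The number of pseudo-feature dimensions d and the choice to skip the trivial constant eigenvector are assumptions for illustration, not values taken from the paper:

```python
# Sketch of the pseudo-feature initialization in Algorithm 1: eigendecompose the
# graph Laplacian of A, keep d non-trivial eigenvectors as the means u(xi), normalize.
import numpy as np

def pseudo_features(A: np.ndarray, d: int = 16) -> np.ndarray:
    L = np.diag(A.sum(axis=1)) - A            # (unnormalized) graph Laplacian
    _, eigvecs = np.linalg.eigh(L)            # columns sorted by ascending eigenvalue
    U = eigvecs[:, 1:d + 1]                   # skip the constant eigenvector (assumed)
    norms = np.linalg.norm(U, axis=1, keepdims=True)
    return U / np.clip(norms, 1e-12, None)    # row-normalized u(xi), one row per node
```

The resulting rows can then be used directly as node features for a GCN, or concatenated with observed features F when F is available, as the algorithm's step 4 describes.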
Prediction/Content Synthesis
Unknown
null
null
null
null
null
null
news
frankthepickle
Show HN: CLI app for document summary, search and natural language queries
This is a CLI app that I wrote - it will download and summarise all the issues in a github repository using a local LLM or OpenAI, index the results using vector embeddings, and then allow you to search and ask questions about the specs.I've provided windows and linux binaries, plus source to build and tweak yourself - you can use it with a local install of MarqoDB.I work in a company with a knowledge in over 8500 github issues software specs, I wrote this to allow me to easily search them and ask direct questions about the functionality.Things like searching for login issues, and getting a summary of the top results, and asking for patterns. Or finding an issue with a particular feature, and asking for that exact implementation to check the original intent of a feature.I've given an example of the usage with an open source repository, and it should be easy to swap out the document store or experiment with different ways of querying if you're familiar with c# - I've designed it to experiment with ways of getting information out of a large sets of documents, it's been fun to play with.Comments URL: https://news.ycombinator.com/item?id=40965894Points: 2# Comments: 0
https://github.com/dylandhall/local-embeddings
https://opengraph.githubassets.com/85cd14c0ab529f63baf0feadab0dcea955022cffcbfa3360c4490070b033f305/dylandhall/local-embeddings
2024-07-15T07:25:47Z
Use the dotnet CLI to restore and build, or download the binaries tagged to the latest release.windows - 64 bit windowslinux - 64 bit linux, requires dotnet runtimes installedlinux-self-contained - 64 bit linux, includes all libraries so doesn't need anything installedFill out your LLM API details in the api-settings.json - I've included mine because I'm using local settings so there aren't any private tokens. You can update the API keys as you see fit, it should work fine with an OpenAi endpoint.Copy the github-settings-example.json to github-settings.json and fill out the settings in there, or just run the application and it'll prompt you for your github settings.The document downloader is an injected service so it should be very easy to swap for almost any type of document library.The conversation and summary managers also are seperated out, as well as the prompt strings - this is designed to test different techniques and be easily extended.Default prompts are included in prompts.json, edit them as you see fit to get the best results for the type of documents you're summarising. Delete the file to restore the default prompts.For the included database provider, use Marqo https://github.com/marqo-ai/marqo - I'm just using the local docker image, I haven't added any authentication. If you fill out the DbApiKey it should work with their cloud environment.You can also add another implementation of IVectorDb and inject that, although currently it assumes the database handles creating the vectors.The console app should work just fine on linux - use docker for linux for marqo, and lmstudio (https://lmstudio.ai/) for linux if using a local LLM.To install docker on windows:winget install docker.dockerdesktopYou may have to restart.To se up marqo via command prompt (from the github repo):docker rm -f marqodocker pull marqoai/marqo:latestdocker run --name marqo -it -p 8882:8882 marqoai/marqo:latestInstall LM Studio https://github.com/lmstudio-ai or use the OpenAi endpoint with an API key, or another compatible endpoint. In LM studio just download the desired model, I found https://huggingface.co/legraphista/Gemma-2-9B-It-SPPO-Iter3-IMat-GGUF worked really well, and that's the model in the api-settings.json by default. Download that if you have ~12gb of video RAM, otherwise perhaps use an OpenAi API key.Make sure you start the local server in LM Studio if that's what you're using.Run with --refresh to download, summarise, store and index all your github issues. I've made the issue downloader fairly modular however, so really you should be able to use this for any document store.You can then search and ask questions about them. 
It's remarkably good.Currently all documents are downloaded and summarised in a folder on your hard drive and then uploaded - I could just use the database but then swapping to a different database store would mean summarising all the documents again or connecting to both databases concurrently.I've tried to break the application apart as much as possible so testing different connections and prompts is easy, I'm also planning to make it possible to select different LLMs for different purposes at a later stage.Demo using the public semantic-kernel repository from microsoft (chosen because it has around 2700 issues, which is enough to search but not enough to melt my computer summarising!)Run with --refresh to refresh issues, summarise and index, add --reindex to reindex existing summariesLoaded GithubSettings configuration from C:\software\LocalEmbeddings\github-settings.jsonLoaded Prompts configuration from C:\software\LocalEmbeddings\prompts.jsonLoaded ApiSettings configuration from C:\software\LocalEmbeddings\api-settings.jsonIndex initialized successfully.Marqo server status:Backend: Memory 1.34%, Storage 8.68%Current Marqo indexes:app-issues-summarisedsemantic-kernel-issuesCurrent index: semantic-kernel-issuesDocuments: 2765, Vectors: 11149Search the issue database or hit enter to close:chroma database bugSearching, please wait..## Top matches: ##1: 1289: Chroma memory resets when store is initialized2: 2700: Support full API in Chroma client3: 403: Support for Chroma embedding database4: 3046: Python: Cannot connect to remote Chroma DB5: 2049: Chroma .NET MemoryRecordMetadata field 'is_reference' is boolean but is saved as number in ChromaDB6: 2050: Python: Update Chroma Connector for Python SK from 0.3.29 -> 0.4.07: 1535: KeyError: 'is_reference' running search on Chroma8: 2078: .Net: minRelevanceScore is unusable for Chroma's default distance function* Press N to view the next page* Press S to display a summary of these issuesAny other key to to start a new searchSummarising, please wait..Here's a summary of the GitHub issues, highlighting their relation to each other and the "chroma database bug" searchtheme:* Chroma Memory Reset on Store Initialization: Bug in semantic_kernel where ChromaMemoryStore fails to load persisted memories during initialization, leading to data loss upon restarts.* Full Chroma API Support in Chroma Client: Feature request for the Chroma client library to implement missing functionalities like embedding creation and metadata-based search, enhancing its capabilities.* Request: Microsoft Library Support for Chroma: Feature request to integrate Chroma embedding database into a Microsoft LLM usage library, enabling seamless integration of external knowledge within applications.* Chroma Memory Store Connection Issues: Bug in semantic_kernel where outdated connection methods used with ChromaMemoryStore cause failures when connecting to recent versions of chromadb.* Chroma .NET MemoryRecordMetadata 'is_reference' Type Mismatch: Bug in Chroma .NET library's SQLite backend causing the boolean field is_reference to be stored as an integer, leading to deserialization errors during retrieval.* Chroma Connector Python Update Issue: Major update to the Python connector (v0.4.0) involving simplified client setup, data store migration away from DuckDB/ClickHouse, and a new migration tool. 
Requires user action to update data schema.* KeyError in Chroma Memory Store Search: Issue where ChromaMemoryStore raises a KeyError when searching external Chroma DBs (created outside Semantic Kernel) due to missing "is_reference" metadata in document records.* .Net: minRelevanceScore unusable for Chroma's default distance function: Bug in .Net connector where minRelevanceScore parameter doesn't function correctly when using Chroma's default "l2" distance function, effectively disabling its filtering functionality.Relation to "chroma database bug" search:All these issues relate to bugs or desired improvements within the Chroma database ecosystem, directly impacting itsfunctionality, stability, and usability across different libraries and programming languages (Python, .NET). Theycollectively highlight common areas needing attention and potential fixes within the Chroma project.* Press Q to ask a question about the summary* Press N to ask a question about the summary in a new conversationAny other key to continue looking through the search results## Top matches: ##1: 1289: Chroma memory resets when store is initialized2: 2700: Support full API in Chroma client3: 403: Support for Chroma embedding database4: 3046: Python: Cannot connect to remote Chroma DB5: 2049: Chroma .NET MemoryRecordMetadata field 'is_reference' is boolean but is saved as number in ChromaDB6: 2050: Python: Update Chroma Connector for Python SK from 0.3.29 -> 0.4.07: 1535: KeyError: 'is_reference' running search on Chroma8: 2078: .Net: minRelevanceScore is unusable for Chroma's default distance function* Press N to view the next page* Press S to display a summary of these issuesAny other key to to start a new search## .Net: minRelevanceScore is unusable for Chroma's default distance function #### Summary of GitHub Issue: .Net: minRelevanceScore unusable for Chroma's default distance function ##Affected Parts:* Microsoft Semantic Kernel .NET library* Connectors/Connectors.Memory.Chroma/ChromaMemoryStore.cs file specificallyBug Description:The minRelevanceScore parameter provided in the .Net Chroma connector implementation for searching text embeddings iscurrently unusable when using Chroma's default distance function, "l2". This is because:1. The current implementation assumes similarity scores fall between 0 and 1, which is not true for "l2" distance (which produces distances in the thousands).2. Consequently, setting minRelevanceScore above 0 with "l2" results in no matches being returned, effectively rendering the parameter useless.Workaround:The user currently bypasses this issue by directly using QueryEmbeddingsAsync from the ChromaClient instead of relyingon the .Net connector's search functionality.This bug essentially prevents utilizing the minRelevanceScore filtering feature when employing Chroma's defaultdistance function, limiting its usefulness in fine-tuning search results based on relevance.Location: https://github.com/microsoft/semantic-kernel/issues/2078* Press Q to ask a question about the current issue* Press N to ask a question about the current issue in a new conversation* Press R to search for related issues* Press C to continue searching issuesAsk a question about this issue, enter to return:what is the cause of the bugQuerying, please wait..The bug stems from the ChromaMemoryStore.cs implementation in the .Net connector. 
It assumes similarity scores fallbetween 0 and 1, but the default "l2" distance function returns distances in the thousands, making theminRelevanceScore parameter unusable.Ask another question or enter to return:how is the user working around the issueQuerying, please wait..The user is working around the issue by directly using the QueryEmbeddingsAsync method from the ChromaClient insteadof relying on the .Net connector's SearchAsync method.Ask another question or enter to return:You've used git update-index --assume-unchanged prompts.json so changes won't get committed. Use git update-index --no-assume-unchanged prompts.json if you want to change the example for others, especially if you add additional prompts!
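For readers who want to query the same local Marqo instance outside the CLI app, a minimal sketch with the official marqo Python client is shown below. The index and field names are purely illustrative (not the ones the app creates), and the client API has shifted slightly between Marqo versions, so treat this as a starting point rather than a verified recipe:

```python
# Sketch: querying a local Marqo instance directly with the official Python client.
# Index and field names are illustrative; exact client arguments vary by Marqo version.
import marqo

mq = marqo.Client(url="http://localhost:8882")   # same port as the docker run above
mq.create_index("demo-issues")                   # errors if the index already exists
mq.index("demo-issues").add_documents(
    [{"title": "Chroma memory resets", "summary": "Persisted memories are lost on init."}],
    tensor_fields=["summary"],
)
results = mq.index("demo-issues").search("chroma database bug", limit=5)
for hit in results["hits"]:
    print(hit["title"], hit["_score"])
```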
Content Synthesis/Decision Making
Unknown
null
null
null
null
null
null
news
Thehemashow
What Can AI Do For Healthcare Advertising and marketing In 2024? - Y2Fear
This submit was sponsored by CallRail. The opinions expressed on this article are the sponsor’s personal. Synthetic intelligence (AI) has enormous potential for healthcare practices. It could possibly help with analysis and therapy, in addition to administrative and advertising and marketing duties. But, many practices are nonetheless cautious of utilizing AI, particularly concerning advertising and […]
https://y2fear.com/what-can-ai-do-for-healthcare-advertising-and-marketing-in-2024/
https://www.searchengine…featured-388.jpg
2024-07-26T18:48:00Z
This submit was sponsored by CallRail. The opinions expressed on this article are the sponsors personal.Synthetic intelligence (AI) has enormous potential for healthcare practices. It could possibly help with analysis and therapy, in addition to administrative and advertising and marketing duties. But, many practices are nonetheless cautious of utilizing AI, particularly concerning advertising and marketing.The fact is that AI is right here to remain, and lots of healthcare practices are starting to make use of the know-how. In keeping with one current examine, 89% of healthcare professionals surveyed stated that they have been a minimum of evaluating AI merchandise, experimenting with them, or had carried out AI.That will help you decide whether or not utilizing AI is true in your healthcare follow, lets check out a number of the execs and cons of utilizing AI whereas advertising and marketing.The Professionals And Cons Of AI For Healthcare PracticesHealthcare practices that select to implement AI in protected and acceptable methods to assist them with their advertising and marketing and affected person expertise efforts can reap many advantages, together with extra leads, conversions, and glad sufferers. Actually, 41% of healthcare organizations say their advertising and marketing group already makes use of AI.Sufferers additionally anticipate healthcare practices to start to implement AI in a variety of methods. In a single dentistry examine, sufferers general confirmed a optimistic angle towards utilizing AI. So, whats holding your follow again from including new instruments and discovering new use instances for AI? Lets check out widespread issues.Con #1: Information Safety And Privateness IssuesLets get one of many largest issues with AI and healthcare out of the best way first. Healthcare practices should observe all privateness and safety laws associated to sufferers protected well being info (PHI) to take care of HIPAA compliance.So, issues over whether or not AI can be utilized in a method that doesnt intervene with HIPAA compliance are legitimate. As well as, there are additionally issues in regards to the open-source nature of fashionable GenAI fashions, which implies delicate follow information is likely to be uncovered to rivals and even hackers.Professional #1: AI Can Assist You Get Extra Worth From Your Information SecurelyWhereas there are legitimate issues about how AI algorithms make choices and information privateness issues, AI can be used to counterpoint information that can assist you obtain your advertising and marketing objectives whereas nonetheless conserving it protected.With acceptable guardrails and omission procedures in place, you may apply AI to realize insights from information that issues to you with out placing delicate information in danger.For instance, our CallRail Labs group helps entrepreneurs take away their blind spots through the use of AI to research and detect essential context clues that show you how to qualify which calls are your finest leads so you may observe up promptly.On the identical time, we all know how necessary it’s for healthcare firms to maintain PHI safe, which is why we combine with healthcare privateness platforms like Freshpaint. 
It could possibly show you how to bridge the hole between affected person privateness and digital advertising and marketing.As well as, our AI-powered Healthcare Plan mechanically redacts delicate patient-protected well being info from name transcripts, enforces compulsory log-outs to forestall PHI from changing into public, offers full audit path logging, and even options distinctive logins and credentials for each consumer, which helps remove the potential for PHI to be by accident uncovered to workers who dont want entry to that info.Con #2: AI Is ImpersonalHaving an excellent affected person expertise is necessary to virtually all sufferers, and in line with one survey, 52% of sufferers stated a key a part of an excellent affected person expertise is being handled with respect. Nearly as many (46%) stated they need to be addressed as an individual. Given these issues, handing over content material creation or buyer interactions to AI can really feel daunting. Whereas an AI-powered chatbot is likely to be extra environment friendly than a human in a name heart, you additionally dont need sufferers to really feel such as youve delegated customer support to a robotic. Belief is the important thing to constructing affected person relationships.Professional #2: AI Can Enhance The Affected person ExpertiseWorries over AI making affected person interactions really feel impersonal are cheap, however identical to some other sort of instrument, its how you utilize AI that issues. There are methods to deploy AI that may truly improve the affected person expertise and, by doing so, give your healthcare follow a bonus over your rivals.The reply isnt in offloading buyer interplay to chatbots. However AI might help you analyze buyer interactions to make customer support extra environment friendly and useful.With CallRails AI-powered Premium Dialog Intelligence, which transcribes, summarizes, and analyzes every name, you may rapidly assess your sufferers wants and issues and reply appropriately with a human contact. As an example, Premium Dialog Intelligence can establish and extract widespread key phrases and matters from name transcripts. This information reveals recurring themes, reminiscent of steadily requested questions, widespread complaints, and fashionable companies. A healthcare follow might then use these insights to tailor their advertising and marketing campaigns to handle essentially the most urgent affected person issues.Con #3: AI Appears Too Sophisticated To UseLets face it: new know-how is dangerous, and for healthcare practices particularly, threat is frightening. With AI, a number of the threat comes from its perceived complexity. Figuring out the best use instances in your follow, choosing the best instruments, coaching your workers, and altering workflows can all really feel fairly daunting. Figuring this out takes money and time. And, if there arent clear use instances and ROI hooked up, the long-term advantages is probably not definitely worth the short-term impression on enterprise.Professional #3: AI Can Save Time And CashUtilizing a pc or a spreadsheet for the primary time most likely additionally felt difficult and on the entrance finish, took a while to be taught. Nevertheless, you already know that utilizing these instruments, in comparison with pen, paper, and calculators, has saved an unlimited period of time, making the upfront funding clearly price it. 
In comparison with many applied sciences, AI instruments are sometimes intuitive and solely require you to be taught a number of easy issues like writing prompts, refining prompts, reviewing stories, and so forth. Even when it takes a while to be taught new AI instruments, the time financial savings will likely be price it when you do.To get the best return on funding, concentrate on AI options that deal with time-intensive duties to release time for innovation. With the best use instances and instruments, AI might help remedy complexity with out including complexity. For instance, with Premium Dialog Intelligence, our clients spend 60% much less time analyzing calls every week, they usuallyre utilizing that point to coach workers higher, improve their productiveness, and enhance the affected person expertise.Con #4: AI Advertising and marketing Can Harm Your ModelMany healthcare practices are excited to make use of GenAI instruments to speed up inventive advertising and marketing efforts, like social media picture creation and article writing. However customers are much less excited. Actually, customers usually tend to say that the use of AI makes them distrusting (40%), fairly than trusting (19%), of a model. In a market the place belief is a very powerful issue for sufferers when selecting healthcare suppliers, there may be warning and hesitancy round utilizing GenAI for advertising and marketing.Professional #4: AI Helps Make Your Advertising and marketing HigherWhereas off-brand AI photographs shared on social media will be unhealthy model advertising and marketing, there are numerous methods AI can elevate your advertising and marketing efforts with out impacting the model notion. From uncovering insights to bettering your advertising and marketing campaigns and maximizing the worth of every advertising and marketing greenback spent to growing lead conversion charges and lowering affected person churn, AI might help you sort out these issues sooner and higher than ever.At CallRail, were utilizing AI to sort out complicated challenges like multi-conversation insights. CallRail can provide entrepreneurs immediate entry to a 3-6 sentence abstract for every name, common name sentiment, notable traits behind optimistic and damaging interactions, and a abstract of generally requested questions. Such evaluation would take hours and hours in your advertising and marketing group to do manually, however with AI, you’ve gotten name insights at your fingertips to assist drive messaging and key phrase choices that may enhance your advertising and marketing attribution and the affected person expertise.Con #5: Adapting AI Instruments May Trigger DisruptionAs a contemporary healthcare follow, your tech stack is the engine that runs your enterprise. When onboarding any new know-how, there are at all times issues about how properly it should combine with present know-how and instruments you utilize and whether or not it helps HIPAA compliance. There may be concern about how AI instruments can match into your present workflows with out inflicting disruption.Professional #5: AI Helps Folks Do Their Jobs HigherPairing the best AI instrument for roles with repetitive duties could be a win in your workers and your follow. 
For instance, maintaining with healthcare traits is necessary for entrepreneurs to enhance messaging and campaigns.An AI-powered instrument that analyzes conversations and offers name highlights might help healthcare entrepreneurs establish key phrase and Google Advert alternatives to allow them to concentrate on implementing essentially the most profitable advertising and marketing technique fairly than listening to hours of name recordings. As well as, CallRails new AI-powered Convert Help helps healthcare entrepreneurs present a greater affected person expertise. With AI-generated name teaching, entrepreneurs can establish what went properly and what to enhance after each dialog.Whats extra, with an answer like CallRail, which presents a Healthcare Plan and can signal a enterprise affiliate settlement (BAA), you’re assured that we’ll adjust to HIPAA controls inside our service choices to make sure that your name monitoring doesnt expose you to potential fines or litigation. Furthermore, we additionally combine with different advertising and marketing instruments, like Google Advertisements, GA4, and extra, making it simple to combine our resolution into your present applied sciences and workflows.Let CallRail Present You The Professionals Of AIIf you happen tore nonetheless frightened about utilizing AI in your healthcare follow, begin with a trusted resolution like CallRail that has confirmed ROI for AI-powered instruments and a dedication to accountable AI growth. You’ll be able to speak to CallRails consultants or check the product out for your self with a 14-day free trial.Picture CreditFeatured Picture: Picture by CallRail. Used with permission.
Decision Making/Information Retrieval Or Search/Content Synthesis
Business and Financial Operations/Healthcare Practitioners and Support
null
null
null
null
null
null
news
hsfzxjy
Show HN: Learnware – Sharing, Exploring and Reusing ML Models for Developers
Based on the learnware paradigm, the learnware package supports the entire process including the submission, usability testing, organization, identification, deployment, and reuse of learnwares. Si...
https://github.com/Learnware-LAMDA/Learnware
https://opengraph.githubassets.com/c4137585fd75bd9cb84459e529a719a87f1e6caa14fbf2fb0d785c29c0d708b0/Learnware-LAMDA/Learnware
2024-07-08T13:52:34Z
The learnware paradigm was proposed by Professor Zhi-Hua Zhou in 2016 [1, 2]. In the learnware paradigm, developers worldwide can share models with the learnware dock system, which effectively searches for and reuses learnware(s) to help users solve machine learning tasks efficiently without starting from scratch. The learnware package provides a fundamental implementation of the central concepts and procedures within the learnware paradigm. Its well-structured design ensures high scalability and facilitates the seamless integration of additional features and techniques in the future. In addition, the learnware package serves as the engine for the Beimingwu System and can be effectively employed for conducting experiments related to learnware. [1] Zhi-Hua Zhou. Learnware: on the future of machine learning. Frontiers of Computer Science, 2016, 10(4): 589-590. [2] Zhi-Hua Zhou. Machine Learning: Development and Future. Communications of CCF, 2017, vol.13, no.1 (2016 CNCC keynote). A learnware consists of a high-performance machine learning model and specifications that characterize the model, i.e., "Learnware = Model + Specification". These specifications, encompassing both semantic and statistical aspects, detail the model's functionality and statistical information, making it easier for future users to identify and reuse these models. The above diagram illustrates the learnware paradigm, which consists of two distinct stages. Submitting Stage: developers voluntarily submit various learnwares to the learnware dock system, and the system conducts quality checks and further organization of these learnwares. Deploying Stage: when users submit task requirements, the learnware dock system automatically selects whether to recommend a single learnware or a combination of multiple learnwares and provides efficient deployment methods. Whether it's a single learnware or a combination of multiple learnwares, the system offers convenient learnware reuse interfaces. The architecture is designed based on guidelines including decoupling, autonomy, reusability, and scalability. The above diagram illustrates the framework from the perspectives of both modules and workflows. At the workflow level, the learnware package consists of the Submitting Stage and the Deploying Stage. Submitting Stage: developers submit learnwares to the learnware market, which conducts usability checks and further organization of these learnwares. Deploying Stage: the learnware market recommends learnwares according to users' task requirements and provides efficient reuse and deployment methods. At the module level, the learnware package is a platform that consists of the Learnware, Market, Specification, Model, Reuse, and Interface modules. Learnware: the specific learnware, consisting of the specification module and the user model module. Market: designed for learnware organization, identification, and usability testing. Specification: generates and stores statistical and semantic information of a learnware, which can be used for learnware search and reuse. Model: includes the base model and the model container, which provide unified interfaces and automatically create isolated runtime environments. Reuse: includes the data-free reuser, data-dependent reuser, and aligner, which can deploy and reuse learnware for user tasks. Interface: the interface for network communication with the Beimingwu backend. Learnware is currently hosted on PyPI.
You can easily install learnware by following these steps. In the learnware package, besides the base classes, many core functionalities such as "learnware specification generation" and "learnware deployment" rely on the torch library. Users have the option to manually install torch, or they can directly use the following command to install the learnware package:
pip install learnware[full]
Note: due to the potential complexity of the user's local environment, installing learnware[full] does not guarantee that torch will successfully invoke CUDA in the user's local setting.
In the learnware package, each learnware is encapsulated in a zip package, which should contain at least the following four files: learnware.yaml (the learnware configuration file); __init__.py (methods for using the model); stat.json (the statistical specification of the learnware; its filename can be customized and recorded in learnware.yaml); and environment.yaml or requirements.txt (specifies the environment for the model). To facilitate the construction of a learnware, we provide a Learnware Template that users can use as a basis for building their own learnware. We've also detailed the format of the learnware zip package in Learnware Preparation.
Users can start a learnware workflow according to the following steps. You can initialize a basic Learnware Market named "demo" using the code snippet below:
from learnware.market import instantiate_learnware_market
# instantiate a demo market
demo_market = instantiate_learnware_market(market_id="demo", name="easy", rebuild=True)
Before uploading your learnware to the Learnware Market, you'll need to create a semantic specification, semantic_spec. This involves selecting or inputting values for semantic tags to describe the features of your task and model. For instance, the following code illustrates the semantic specification for a Scikit-learn type model. This model is tailored for education scenarios and performs classification tasks on tabular data:
from learnware.specification import generate_semantic_spec
semantic_spec = generate_semantic_spec(
    name="demo_learnware",
    data_type="Table",
    task_type="Classification",
    library_type="Scikit-learn",
    scenarios="Education",
    license="MIT",
)
After preparing the semantic specification, you can insert your learnware into the learnware market using a single line of code:
demo_market.add_learnware(zip_path, semantic_spec)
Here, zip_path is the file path of your learnware zip package. To identify learnwares that align with your task's purpose, you'll need to provide a semantic specification, user_semantic, that outlines your task's characteristics. The Learnware Market will then perform an initial search based on user_semantic, which filters learnwares by considering the semantic information of your task.
# construct user_info, which includes a semantic specification
user_info = BaseUserInfo(id="user", semantic_spec=semantic_spec)
# search_learnware: performs semantic specification search when user_info doesn't include a statistical specification
search_result = demo_market.search_learnware(user_info)
single_result = search_result.get_single_results()
# single_result: the list of Tuple[Score, Learnware] returned by the semantic specification search
print(single_result)
If you generate and provide a statistical specification file rkme.json, the Learnware Market will conduct learnware identification based on statistical information and return more targeted models.
Using the API we provide, you can easily generate this statistical specification locally. For example, the code below executes a learnware search when using Reduced Kernel Mean Embedding (RKME) as the statistical specification:
import os
import learnware.specification as specification
user_spec = specification.RKMETableSpecification()
# unzip_path: directory for the unzipped learnware zipfile
user_spec.load(os.path.join(unzip_path, "rkme.json"))
user_info = BaseUserInfo(
    semantic_spec=user_semantic,
    stat_info={"RKMETableSpecification": user_spec},
)
search_result = demo_market.search_learnware(user_info)
single_result = search_result.get_single_results()
multiple_result = search_result.get_multiple_results()
# search_item.score: based on MMD distances, sorted in descending order
# search_item.learnware.id: ids of learnwares, sorted by score in descending order
for search_item in single_result:
    print(f"score: {search_item.score}, learnware_id: {search_item.learnware.id}")
# mixture_item.learnwares: collection of learnwares whose combined use is beneficial
# mixture_item.score: score assigned to the combined set of learnwares in `mixture_item.learnwares`
for mixture_item in multiple_result:
    print(f"mixture_score: {mixture_item.score}\n")
    mixture_id = " ".join([learnware.id for learnware in mixture_item.learnwares])
    print(f"mixture_learnware: {mixture_id}\n")
We offer two data-free methods, JobSelectorReuser and AveragingReuser, for reusing a given list of learnwares. Please substitute test_x in the code snippet below with your own testing data:
from learnware.reuse import JobSelectorReuser, AveragingReuser
# use the job selector reuser to reuse the searched learnwares to make predictions
reuse_job_selector = JobSelectorReuser(learnware_list=mixture_item.learnwares)
job_selector_predict_y = reuse_job_selector.predict(user_data=test_x)
# use the averaging ensemble reuser to reuse the searched learnwares to make predictions
reuse_ensemble = AveragingReuser(learnware_list=mixture_item.learnwares)
ensemble_predict_y = reuse_ensemble.predict(user_data=test_x)
We also provide two data-dependent methods, EnsemblePruningReuser and FeatureAugmentReuser, for when the user has a small amount of labeled data for refining a given list of learnwares. Here's an example of adapting multiple returned learnwares with labeled data to solve classification tasks:
from learnware.reuse import EnsemblePruningReuser, FeatureAugmentReuser
# Use the ensemble pruning reuser to reuse the searched learnwares to make predictions
# (train_x, train_y) is the small amount of labeled data
# `mode` has two options: "classification" and "regression"
reuse_ensemble = EnsemblePruningReuser(learnware_list=mixture_item.learnwares, mode="classification")
reuse_ensemble.fit(train_x, train_y)
ensemble_pruning_predict_y = reuse_ensemble.predict(user_data=test_x)
# Use the feature augment reuser to reuse the searched learnwares to make predictions
reuse_feature_augment = FeatureAugmentReuser(learnware_list=mixture_item.learnwares, mode="classification")
reuse_feature_augment.fit(train_x, train_y)
feature_augment_predict_y = reuse_feature_augment.predict(user_data=test_x)
The learnware package also offers automated workflow examples, including preparing learnwares, uploading and deleting learnwares from the market, and searching for learnwares using both semantic and statistical specifications.
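Pulling the fragments above together, here is a minimal end-to-end sketch of the documented workflow: instantiate a market, describe and submit a learnware, search, and reuse. The zip path and test data are placeholders, the import location of BaseUserInfo is assumed (the snippets above use it unqualified), and exact signatures may differ across learnware releases, so treat this as an illustration rather than a verified script:

```python
from learnware.market import instantiate_learnware_market, BaseUserInfo  # BaseUserInfo import path assumed
from learnware.specification import generate_semantic_spec
from learnware.reuse import AveragingReuser

# 1. Instantiate a local "easy" market, as in the snippet above.
demo_market = instantiate_learnware_market(market_id="demo", name="easy", rebuild=True)

# 2. Describe the model with a semantic specification and submit the zip package.
semantic_spec = generate_semantic_spec(
    name="demo_learnware",
    data_type="Table",
    task_type="Classification",
    library_type="Scikit-learn",
    scenarios="Education",
    license="MIT",
)
zip_path = "path/to/learnware.zip"  # placeholder path to your learnware zip package
demo_market.add_learnware(zip_path, semantic_spec)

# 3. Search the market with the same semantic description; a statistical search
#    would additionally pass stat_info={"RKMETableSpecification": user_spec}.
user_info = BaseUserInfo(id="user", semantic_spec=semantic_spec)
search_result = demo_market.search_learnware(user_info)
single_result = search_result.get_single_results()

# 4. Reuse the top-ranked learnwares with a data-free averaging ensemble.
learnware_list = [item.learnware for item in single_result[:3]]
reuser = AveragingReuser(learnware_list=learnware_list)
# predictions = reuser.predict(user_data=test_x)  # test_x: the user's own test data
```

The data-dependent reusers (EnsemblePruningReuser and FeatureAugmentReuser) slot into step 4 in the same way, with an extra fit(train_x, train_y) call on the user's small labeled set.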
To experience the basic workflow of the learnware package, users can run test/test_workflow/test_workflow.py. We build various types of experimental scenarios and conduct an extensive empirical study to evaluate the baseline algorithms for specification generation, learnware identification, and reuse on tabular, image, and text data. For all experiments, we used a single Linux server; all processors were used for training and evaluation. System: Ubuntu 20.04.4 LTS; GPU: Nvidia Tesla V100S; CPU: Intel(R) Xeon(R) Gold 6240R. On various tabular datasets, we initially evaluate the performance of identifying and reusing learnwares from the learnware market that share the same feature space as the user's tasks. Additionally, since tabular tasks often come from heterogeneous feature spaces, we also assess the identification and reuse of learnwares from different feature spaces. Our study utilizes three public datasets in the field of sales forecasting: Predict Future Sales (PFS), M5 Forecasting (M5), and Corporacion. To enrich the data, we apply diverse feature engineering methods to these datasets. Then we divide each dataset by store and further split the data for each store into training and test sets. A LightGBM is trained on each Corporacion and PFS training set, while the test sets and the M5 datasets are reserved to construct user tasks. This results in an experimental market consisting of 265 learnwares, encompassing five types of feature spaces and two types of label spaces. All these learnwares have been uploaded to the Beimingwu system. The most basic way to reuse a learnware is the Top-1 reuser, which directly uses the single learnware chosen by the RKME specification. Besides, we implement two data-free reusers and two data-dependent reusers that work on single or multiple helpful learnwares identified from the market. When users have no labeled data, the JobSelector reuser selects different learnwares for different samples by training a job selector classifier, while the AverageEnsemble reuser uses an ensemble method to make predictions. In cases where users possess both test data and limited labeled training data, the EnsemblePruning reuser selectively ensembles a subset of learnwares to choose the ones that are most suitable for the user's task, and the FeatureAugment reuser regards each received learnware as a feature augmentor, taking its output as a new feature and then building a simple model on the augmented feature set. JobSelector and FeatureAugment are only effective for tabular data, while the others are also useful for text and image data. In the homogeneous cases, the 53 stores within the PFS dataset function as 53 individual users. Each store utilizes its own test data as user data and applies the same feature engineering approach used in the learnware market. These users could subsequently search for homogeneous learnwares within the market that possessed the same feature spaces as their tasks. We conduct a comparison among different baseline algorithms when the users have no labeled data or limited amounts of labeled data. The average losses over all users are illustrated in the table below.
It shows that the unlabeled methods are much better than randomly choosing and deploying one learnware from the market. Setting / MSE: Mean in Market (Single): 0.897; Best in Market (Single): 0.756; Top-1 Reuse (Single): 0.830; Job Selector Reuse (Multiple): 0.848; Average Ensemble Reuse (Multiple): 0.816. The figure below showcases the results for different amounts of labeled data provided by the user; for each user, we conducted multiple experiments repeatedly and calculated the mean and standard deviation of the losses; the average losses over all users are illustrated in the figure. It illustrates that when users have limited training data, identifying and reusing single or multiple learnwares yields superior performance compared to the user's self-trained models. Based on the similarity of tasks between the market's learnwares and the users, the heterogeneous cases can be further categorized into different-feature-engineering scenarios and different-task scenarios. We consider the 41 stores within the PFS dataset as users, generating their user data using a unique feature engineering approach that differs from the methods employed by the learnwares in the market. As a result, while some learnwares in the market are also designed for the PFS dataset, the feature spaces do not align exactly. In this experimental setup, we examine various data-free reusers. The results in the following table indicate that even when users lack labeled data, the market exhibits strong performance, particularly with the AverageEnsemble method that reuses multiple learnwares. Setting / MSE: Mean in Market (Single): 1.149; Best in Market (Single): 1.038; Top-1 Reuse (Single): 1.075; Average Ensemble Reuse (Multiple): 1.064. We employ three distinct feature engineering methods on all ten stores from the M5 dataset, resulting in a total of 30 users. Although the overall task of sales forecasting aligns with the tasks addressed by the learnwares in the market, there are no learnwares specifically designed to satisfy the M5 sales forecasting requirements. In the following figure, we present the loss curves for the user's self-trained model and several learnware reuse methods. It is evident that heterogeneous learnwares prove beneficial with a limited amount of the user's labeled data, facilitating better alignment with the user's specific task. Second, we assess our algorithms on image datasets. It is worth noting that images of different sizes can be standardized through resizing, eliminating the need to consider heterogeneous feature cases. We choose the famous image classification dataset CIFAR-10, which consists of 60,000 32x32 color images in 10 classes. A total of 50 learnwares are uploaded: each learnware contains a convolutional neural network trained on an unbalanced subset that includes 12,000 samples from four categories with a sampling ratio of 0.4:0.4:0.1:0.1. A total of 100 user tasks are tested, and each user task consists of 3,000 samples of CIFAR-10 drawn from six categories with a sampling ratio of 0.3:0.3:0.1:0.1:0.1:0.1. We assess the average performance of various methods using 1 - Accuracy as the loss metric. The following table and figure show that when users face a scarcity of labeled data, or possess only a limited amount of it (fewer than 2,000 instances), leveraging the learnware market can yield good performance. Setting / Error: Mean in Market (Single): 0.655; Best in Market (Single): 0.304; Top-1 Reuse (Single): 0.406; Job Selector Reuse (Multiple): 0.406; Average Ensemble Reuse (Multiple): 0.310. Finally, we evaluate our algorithms on text datasets.
Text data naturally exhibit feature heterogeneity, but this issue can be addressed by applying a sentence embedding extractor. We conduct experiments on the well-known text classification dataset 20-newsgroup, which consists of approximately 20,000 newsgroup documents partitioned across 20 different newsgroups. Similar to the image experiments, a total of 50 learnwares are uploaded. Each learnware is trained on a subset that includes only half of the samples from three superclasses, and the model in it is a tf-idf feature extractor combined with a naive Bayes classifier. We define 10 user tasks, and each of them encompasses two superclasses. The results are depicted in the following table and figure. Similarly, even when no labeled data is provided, the performance achieved through learnware identification and reuse can match that of the best learnware in the market. Additionally, utilizing the learnware market allows for a reduction of approximately 2,000 samples compared to training models from scratch. Setting / Accuracy: Mean in Market (Single): 0.507; Best in Market (Single): 0.859; Top-1 Reuse (Single): 0.846; Job Selector Reuse (Multiple): 0.845; Average Ensemble Reuse (Multiple): 0.862. If you use our project in your research or work, we kindly request that you cite the following papers:
@article{zhou2024learnware, title = {Learnware: Small models do big}, author = {Zhou, Zhi-Hua and Tan, Zhi-Hao}, journal = {Science China Information Sciences}, volume = {67}, number = {1}, pages = {112102}, year = {2024}}
@article{tan2024beimingwu, title = {Beimingwu: A learnware dock system}, author = {Tan, Zhi-Hao and Liu, Jian-Dong and Bi, Xiao-Dong and Tan, Peng and Zheng, Qin-Cheng and Liu, Hai-Tian and Xie, Yi and Zou, Xiao-Chuan and Yu, Yang and Zhou, Zhi-Hua}, journal = {arXiv preprint arXiv:2401.14427}, year = {2024}}
The learnware package is still in its early stages and may contain bugs and issues. We highly value and encourage contributions from the community, and we sincerely invite everyone to research and improve learnware algorithms and to develop and explore within the learnware paradigm. For detailed development guidelines, please consult our Developer Guide. We kindly request that contributors adhere to the provided commit format and pre-commit configuration when participating in the project. Your valuable contributions are greatly appreciated. The learnware repository is developed and maintained by the LAMDA Beimingwu R&D (Research and Development) Team. To learn more about our team, please visit the Team Overview.
Process Automation/Content Creation
Unknown
null
null
null
null
null
null
news
PR Wire
Sobot Celebrates 2nd Anniversary by Revealing 4 Emerging Trends That Distinguish It from Major Competitors
Sobot, an all-in-one contact center solution provider established in Singapore, is celebrating its second anniversary and revealing four emerging trends that ...
https://en.antaranews.com/news/317334/sobot-celebrates-2nd-anniversary-by-revealing-4-emerging-trends-that-distinguish-it-from-major-competitors
https://cdn.antaranews.c…/07/01/sobot.jpg
2024-07-01T07:47:33Z
Sobot Co-founder and CEO, Xu Yi, commented: "The customer service industry, evolving from call centers to cloud-based and SaaS-based contact centers, is rapidly advancing. Technological progress and increased demands for better services from users are driving customer contact centers to change and upgrade to be more open, integrated, smarter and more convenient. Our initial strategic vision for industry development has been confirmed, and over the past two years, Sobot has responded to these market demands. Now, between 60% and 70% of Sobot's new customers are those upgrading from the services of our competitors. Moving forward, we will continue to iterate and adapt to the needs of these new customers."Sobot reveals four emerging trends that set it apart from competitors in the development of its contact center businessSeamless integration of data and omnichannel access creates more opportunities for better customer management and user experience. With the Company's approach to an integrated omnichannel customer experience, the issues associated with the disjointed and inefficient management of e-commerce customer service and after-sales services are effectively resolved, regardless of which communication channels - WhatsApp, Meta, Instagram, Telegram, etc. - the message is conveyed from. In the case of a major fresh food e-commerce client, an open API integrates the call center and ticketing management system, with one point of access, efficiently solving customer communication and management needs in multiple scenarios and saving 35% of the gap in waiting time from separated systems deployed earlier.Artificial intelligence (AI) agent products show obvious advantages, with both AI directly solving customers' issues and through the use of AI-assisting agents. It can solve problems on two levels, showcasing the powerful capabilities of its AI technology. Natural language processing (NLP) and large language model (LLM) technologies from Sobot enrich the AI customer service experience. In the case of Renogy, customer queries are answered over Livechat, and through Sobot's NLP and LLM behind the AI Agent, it can quickly identify the best answer to various customer questions, increasing the customer experience with higher and quicker knowledge learning efficiency.Enhancing the marketing with more ways to reach out to customers and interactive options will strengthen the connections with them, boosting their involvement and loyalty to our business. In the case of L Gaming, Sobot caters to customer service needs by empowering active marketing contact, and realizing two-way interaction through its platform. Starting with CX SaaS, it added functions to enhance marketing, customer contact, and interaction, strengthening customer relations and increasing user engagement. For instance, Sobot's platform integrates with WhatsApp, improving customer service efficiency during various interaction stages.Offering comprehensive consulting, pre-sales solutions, delivery, and customer success services helps enterprise clients maximize the platform's benefits and achieve business success. Unlike competitors, Sobot provides specific industry and best practice guidance, enhancing application scope and effectiveness. 
By offering customer service consulting and business support, Sobot ensures rapid, high-quality service integration, significantly speeding up the onboarding process.Embracing the increasing functions and values that a contact solution provider can bring to its enterprise customers, Sobot's customer-first DNA provides users with a more comprehensive, integrated, agile, and intelligent service solution to help enterprises improve customer satisfaction and business efficiency with its omnichannel model from day one.Please visit the https://www.sobot.io/ for more information.About SobotSobot is an all-in-one contact center solution provider, offering both customer service and marketing software. Its product range includes AI Chatbot, Live Chat, Voice, Ticketing, and WhatsApp Business API. Sobot commits to delivering an omnichannel and intelligent solution, ensuring seamless customer interactions across various channels. With established partnerships with industry giants like AWS, Alibaba Cloud, and Meta, the company has won increasing business opportunities with notable clients, such as Samsung, OPPO, DFS, Philips, J&T, Air Liquide, OPay, Mico, and Lilith Games. Sobot is looking forward to empowering businesses to achieve better customer experiences and higher satisfaction levels through providing cutting-edge solutions to drive success for its clients globally.Source: Sobot
Digital Assistance/Content Synthesis
Management/Sales and Related
null
null
null
null
null
null
news
Sean Michael Kerner
Google Cloud Run embraces Nvidia GPUs for serverless AI inference
No longer do organizations need to pay for long running servers for AI inference as Google Cloud Run previews support for Nvidia L4 GPUs.
https://venturebeat.com/ai/google-cloud-run-embraces-nvidia-gpus-for-serverless-ai-inference/
https://venturebeat.com/…w=1200&strip=all
2024-08-21T15:00:00Z
There are a number of different costs associated with running AI; one of the most fundamental is providing the GPU power needed for inference. To date, organizations that need to provide AI inference have had to run long-running cloud instances or provision hardware on-premises. Today, Google Cloud is previewing a new approach, and it's one that could reshape the landscape of AI application deployment. The Google Cloud Run serverless offering is now integrating Nvidia L4 GPUs, effectively enabling organizations to run serverless inference. The promise of serverless is that a service only runs when needed and users only pay for what is used. That's in contrast to a typical cloud instance, which runs as a persistent service and is always available. A serverless service, in this case a GPU for inference, only fires up and is used when needed. The serverless inference can be deployed as an Nvidia NIM, as well as with other frameworks such as vLLM, PyTorch and Ollama. The addition of Nvidia L4 GPUs is currently in preview. "As customers increasingly adopt AI, they are seeking to run AI workloads like inference on platforms they are familiar with and start up on," Sagar Randive, Product Manager, Google Cloud Serverless, told VentureBeat. "Cloud Run users prefer the efficiency and flexibility of the platform and have been asking for Google to add GPU support." Cloud Run, Google's fully managed serverless platform, has been popular with developers thanks to its ability to simplify container deployment and management. However, the escalating demands of AI workloads, particularly those requiring real-time processing, have highlighted the need for more robust computational resources. The integration of GPU support opens up a wide array of use cases for Cloud Run developers, including: real-time inference with lightweight open models such as Gemma 2B/7B or Llama 3 (8B), enabling the creation of responsive custom chatbots and on-the-fly document summarization tools; serving custom fine-tuned generative AI models, including brand-specific image generation applications that can scale based on demand; and accelerating compute-intensive services like image recognition, video transcoding, and 3D rendering, with the ability to scale to zero when not in use. A common concern with serverless is performance. After all, if a service is not always running, there is often a performance hit just to get the service running from a so-called cold start. Google Cloud is aiming to allay any such performance fears, citing some impressive metrics for the new GPU-enabled Cloud Run instances. According to Google, cold start times range from 11 to 35 seconds for various models, including Gemma 2B, Gemma 2 9B, Llama 2 7B/13B, and Llama 3.1 8B, showcasing the platform's responsiveness. Each Cloud Run instance can be equipped with one Nvidia L4 GPU with up to 24GB of vRAM, providing a solid level of resources for many common AI inference tasks. Google Cloud is also aiming to be model agnostic in terms of what can run, though it is hedging its bets somewhat. "We do not restrict any LLMs, users can run any models they want," Randive said.
"However for best performance, it is recommended that they run models under 13B parameters." A key promise of serverless is better utilization of hardware, which is supposed to also translate to lower costs. Whether it is actually cheaper for an organization to provision AI inference as a serverless service or as a long-running server is a somewhat nuanced question. "This depends on the application and the traffic pattern expected," Randive said. "We will be updating our pricing calculator to reflect the new GPU prices with Cloud Run, at which point customers will be able to compare their total cost of operations on various platforms."
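To make the serverless inference idea concrete from the client side, here is a minimal sketch of calling a Cloud Run service that fronts one of the frameworks mentioned above (for example, vLLM or Ollama exposing an OpenAI-style chat route). The service URL, route, and model name are placeholders rather than anything Google documents here; the point is that the caller looks the same whether the GPU-backed instance was already warm or just scaled up from zero:

```python
import os
import requests

# Placeholder URL of a Cloud Run service running an OpenAI-compatible server
# (e.g. vLLM or Ollama); replace with your own deployment's URL.
SERVICE_URL = os.environ.get("CLOUD_RUN_URL", "https://my-inference-service-xyz-uc.a.run.app")

payload = {
    "model": "gemma-2b-it",  # assumed name; depends on what the container actually serves
    "messages": [
        {"role": "user", "content": "Summarize serverless GPU inference in one sentence."}
    ],
    "max_tokens": 128,
}

# A cold start can add roughly 11 to 35 seconds before the first response
# (per the figures above), so allow a generous timeout after idle periods.
response = requests.post(f"{SERVICE_URL}/v1/chat/completions", json=payload, timeout=120)
response.raise_for_status()
print(response.json()["choices"][0]["message"]["content"])
```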
Unknown
Computer and Mathematical
null
null
null
null
null
null
news
Ruby Pai
LLMs know more than what they say
... and how that provides winning evals
https://arjunbansal.substack.com/p/llms-know-more-than-what-they-say
https://substackcdn.com/…8a_4179x2892.png
2024-08-19T16:53:09Z
Log10 provides the best starting point for evals with an effective balance of eval accuracy and sample efficiency. Our latent space approaches boost hallucination detection accuracy with tens of examples of human feedback (bottom), and work easily with new base models, surpassing the performance of fine tuned models without the need to fine tune. For domain specific applications, our approach provides a sample efficient way to improve accuracy at the beginning of app development (bottom). See Rapidly Customizable Hallucination Detection for details. Good AI application evaluations separate the companies that are successful with GenAI from those that aren't. Deploying AI apps merely based on vibes can cause serious financial and reputational damage. More structured approaches to evaluations have also yielded mixed results in practice: while metric-, tool-, and model-based evals are fast and inexpensive, most mission-critical applications still rely on slow and costly human review as the gold standard for accuracy. In this post, we share some of our research on how to improve evaluation accuracy in cost-effective and sample efficient ways, unlocking the ability to rapidly generalize to new use cases and domains. Improving accuracy for automated, model-based evaluations has to date involved the standard set of techniques for improving the accuracy of language models. Prior to this post, additional model training has unsurprisingly resulted in the most accuracy gains, at the expense of more annotated data. At Log10, one of our research thrusts involves applying insights from LLM interpretability research to improving model-based evaluations. In this post, we describe a novel application of latent space techniques to GenAI application evals with the following advantages. Rapidly Customizable: 20x more sample efficient than fine tuning, with comparable accuracy. Easy to Update: works with different base models, easily taking advantage of base model improvements without the need to fine tune again. Configurable: for hallucination detection and other binary-type evaluations, easily control the recall versus precision tradeoff to meet use-case requirements. Supports Numeric Scoring: generalizes to custom numerical scoring evaluation tasks without the need for prompt engineering. Evals Inspired by Interpretability Research. This year has seen exciting developments in interpretability research [6, 8]. Research over the past few years has established the existence of semantically meaningful linear features in the transformer residual stream [1]. Linear features have been extracted via sparse autoencoders and used to steer model outputs, in both the research literature and, this year, production models [6, 8]. Through these techniques, we briefly had Golden Gate Claude, a demo version of Claude that was fixated on and envisioned itself as the Golden Gate Bridge. Steering model outputs with extracted features has been both a way of showing that those features are semantically meaningful and a glimpse towards how better control over LLM behavior might be achieved. Another established way of finding interpretable linear directions is via creating contrast pairs [2, 3, 10, 9]. Steering vectors are linear directions found via contrasting model inputs. These are contrasting prompts and completions, and the steering vector is computed as the difference vector in activation space. Typically, best results are found by taking the mean over many pairs for the same concept. Sometimes this method goes by the name activation engineering.
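The contrast-pair construction just described (average the activation difference over many pairs to get a direction, then project new activations onto it) is straightforward to sketch. The snippet below is a generic NumPy illustration of that idea as it is later applied to evaluation in this post, not Log10's implementation; the array shapes, the mean-difference estimator, and the tunable threshold are all assumptions:

```python
import numpy as np

def readout_direction(pos_acts: np.ndarray, neg_acts: np.ndarray) -> np.ndarray:
    """Mean-difference direction from contrast pairs.

    pos_acts / neg_acts: (n_pairs, d) arrays of residual-stream activations for
    the two sides of each contrast pair (e.g. hallucinated vs. faithful answers
    to the same prompts). Both the shapes and the layer choice are assumptions.
    """
    direction = pos_acts.mean(axis=0) - neg_acts.mean(axis=0)
    return direction / np.linalg.norm(direction)

def latent_space_readout(acts: np.ndarray, direction: np.ndarray, threshold: float = 0.0) -> np.ndarray:
    """Project evaluation-time activations onto the direction and threshold them.

    Lowering `threshold` flags more examples (higher recall, lower precision);
    raising it does the opposite. This is the tunable knob discussed below.
    """
    scores = acts @ direction  # (n_examples,) projection scores
    return scores > threshold

# Usage sketch: a few dozen human-labeled examples are enough to pick a threshold,
# for instance by maximizing F1 on that small labeled set.
# direction = readout_direction(pos_acts, neg_acts)
# flags = latent_space_readout(eval_acts, direction, threshold=0.1)
```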
The focus of activation engineering has been on steering the behavior of models. However, it is also possible to boost evaluation accuracy via latent space techniques, for both hallucination detection and numeric grading of custom evaluation criteria.We apply the concept of meaningful linear directions, computed from automatically generated contrast pairs, to the task of evaluation. Our key insight is that evaluations can also be derived from projecting evaluation model activations onto meaningful linear directions. We call this approach latent space readout (LSR)2. Moreover, we find that LSR can boost evaluation accuracy over using the same LLM as a Judge in standard approaches via prompting, and even over frontier models for certain evaluation types. We show example results on open hallucination detection benchmarks and custom evaluation criteria grading below.Rapidly Customizable Hallucination DetectionDespite the impressive capabilities of LLMs, hallucinations continue to be an issue, especially for customer facing applications or in critical domains. Recently the HaluBench benchmark [11] was proposed for RAG question answering hallucination detection. Each example in the benchmark contains a passage, a question about some context in the passage and an answer. Each example also has a label as to whether the answer contains an intrinsic hallucination, meaning the answer included information contrary to that in the context passage. The benchmark contains 6 data sources. Below we show results on a couple data sources, Halueval and PubMedQA, to illustrate the advantages of LSR.Latent space readout boosts hallucination detection accuracy, yielding results comparable to fine tuning. Consider as an example the Halueval benchmark (10,000 samples from the HaluBench benchmark). Applying LSR to Llama-3-8B-Instruct improves F1 score and accuracy over the prediction in the model completion (i.e. standard LLM-as-Judge). The F1 score is the harmonic mean of recall and precision. Recall and precision are more appropriate metrics than accuracy for hallucination detection performancemore on this later. Lynx [11] is a recently published model that was created by fine tuning Llama-3-Instruct on 3200 samples from HaluBench data sources, i.e. similar passage-question-answer hallucination detection examples (but not including Halueval). We see that LSR can approach the accuracy of fine tuning the same base model (84.7 vs 85.2), with comparable F1 scores of 84.6 versus 84.9.When stronger base models are published, LSR can take advantage of the underlying base model improvement and outperform models fine tuned from previous base model versions. Llama-3.1-8B-Instruct is more accurate as a judge than its predecessor Llama-3-8B-Instruct. Applying LSR to Llama-3.1-8B-Instruct also improves F1 score and accuracy over base model prediction, and surpasses Lynx. LSR can typically be configured with as few as 30-50 examples of human feedback. Incorporating the improved base model is a lightweight process compared to fine tuning again from Llama-3.1-8B-Instruct. Applying LSR to the fine tuned model however, does not always provide a performance boost3. Critical applications require higher recall; accuracy is an incomplete metric. For hallucination detection, recall and precision are the relevant metrics to characterize evaluation performance. LSR provides an easily tunable knob for setting recall and precision to a configuration thats appropriate for the application. 
For critical applications, a higher recall (hallucination detection rate) is desirable, even if that involves a tradeoff of lower precision (more false alarms), so long as the false alarm rate is tolerable. With LSR, the detection threshold can be set for a higher hallucination detection rate, such as in the last row of the table belowa knob that would otherwise have to be laboriously (and less effectively) tuned with prompt engineering. This is similar to the promise of more fine grained control in model steering.LSR is not gated on collecting 100s-1000s of human feedback examples. LSR works when the amount of human feedback is small (~10s of examples), with performance comparable to fine tuning on the target evaluation task. Consider the PubMedQA test set within HaluBench, and the Lynx model as an example of a model fine tuned on this specific dataset. Lynx was fine tuned with 3200 samples across four different data sources, including PubMedQA (2400 train samples plus 800 validation). Without additional details, we assume the training set was equally distributed across the four sources. In the table below, we see that, as expected, fine tuning increases accuracy and F1. However,  LSR with the base Llama-3-8B-Instruct model, configured using 40 samples, has comparable recall to Lynx8, with lower precision. Visualized below, this suggests a Pareto front of evaluation methods when it comes to sample efficiency and performance. For evaluations in a specialized domain, when a small number of examples have been collected (before fine tuning is feasible), a latent space approach provides an effective starting point.Beyond Detection: Numeric ScoringSetting up an automated evaluations suite is a multi-staged process, starting from assertions based tests [15] to model based evaluations. There are many types of model based evaluations to consider. Beyond binary pass/fail evaluations such as hallucination detection, numeric scoring of custom evaluation criteria can be useful for focusing developer attention on the worst offending examples, monitoring, and direct use in RLAIF (sidestepping the need to train a reward model on comparisons).LLM-as-Judge4 struggles with numeric scoring. We have found that latent space approaches are able to fit human feedback on numeric scoring rubrics in a very sample efficient manner.As an example, consider the CNN/DailyMail news summarization dataset, which contains human feedback for different summary evaluation criteria including summary coverage. We choose coverage as an example because of the subjectivity involved in what constitutes a good coverage summary, which could vary from application to application. Application or domain-specific criteria which involve this kind of subjectivity are where custom evaluation criteria fit into the overall suite of evaluations.Using few-shot learning with Mistral-7b-v0.2-Instruct produces the result on the left, whereas LSR from a 7B model based on Mistral-7b-v0.2-Instruct model produces the result on the right. This LSR model predicting summary coverage has a correlation of 0.7 with human scores on a held out test set (shown below). Given noise in test set labels, we believe this is a fairly effective model, fit with 12 labeled samples.Note that for some custom evaluation criteria, we have found that it is necessary to do a small amount of fine tuning to enable accurate readouts. We do this sample efficiently using synthetic data. 
We observe that the lightly fine tuned model is not able to produce accurate numeric feedback (also tending to provide constant scores across a range of inputs), but LSR using the model is. We hypothesize this is because, for custom evaluations, certain concepts are not represented in latent features in the base model. Finally, we compare LSR using a 7B parameter model with a frontier model. Below we show gpt-4o predictions from both the model completion as well as from calculating an expected value score from logprobs. Our latent space method improves over both gpt-4o results, while being considerably faster and cheaper. Your App, Your Eval. The tendency when it comes to automated evals has been to focus on models. After all, they are model-based. However, models alone, without easy-to-use and easy-to-integrate interfaces, situated as part of a system, are not solutions. Even ChatGPT, commonly thought of as a single model, likely consists of a Compound AI system [16] as part of the overall solution that end users interact with. Similarly, there are many components involved in the development of a full suite of evals, starting from assertions-based tests [15] and extending to model-based evals. The latter should be customized to the needs of the GenAI application, whether for chatbot personality or industry-specific considerations. Log10 supports the developer throughout this journey. Sample efficiency is key to productionizing custom evals. Log10's platform allows you to define custom evaluation rubrics and provide subject matter expert feedback. We turn this feedback into more accurate evaluation starting with tens of feedback samples, powered by research such as what we described here. Get started with Log10 AutoFeedback today. Log10 is investing in expanding our suite of evaluation datasets. To explore partnerships, contact: [email protected]. Get started with AutoFeedback today! A non-comprehensive list of references: Anthropic Transformer Circuits; Activation Addition: Steering Language Models Without Optimization; In-Context Vectors; Simple probes can catch sleeper agents; Discovering Latent Knowledge in Language Models Without Supervision; https://www.anthropic.com/news/mapping-mind-language-model; https://www.anthropic.com/news/golden-gate-claude; https://openai.com/index/extracting-concepts-from-gpt-4/; Steering Llama2 via Contrastive Activation Addition; Bottleneck T5 Text Autoencoder; Lynx: An Open Source Hallucination Evaluation Model; Luna: An Evaluation Foundation Model to Catch Language Model Hallucinations with High Accuracy and Low Cost; Prometheus 2: An Open Source Language Model Specialized in Evaluating Other Language Models; https://blog.langchain.dev/aligning-llm-as-a-judge-with-human-preferences/; Pytest is All You Need; The Shift from Models to Compound AI Systems.
Detection and Monitoring/Content Synthesis
Management/Business and Financial Operations
null
null
null
null
null
null
news
Derek Holt, Forbes Councils Member, https://www.forbes.com/sites/forbesbusinesscouncil/people/derekholt/
Navigating The Risks Of Copilots In The New Software Development Security Landscape
In the span of little more than a year, copilots have emerged in the software development domain as a potentially revolutionary tool with near-endless potential.
https://www.forbes.com/sites/forbesbusinesscouncil/2024/08/05/navigating-the-risks-of-copilots-in-the-new-software-development-security-landscape/
https://imageio.forbes.c…=1600&fit=bounds
2024-08-05T14:15:00Z
Derek Holt is CEO of Digital.ai.gettyWhile AI has been both a concept and under development for many decades, over the past two years it has jumped from a concept to a set of services that we can all try, experiment with and use in our day-to-day workflows. The recent rise of AI assistants, also known as copilots, has become the new primary interaction model. A novice user can describe what they want to do in conversational language, and the copilot is supposed to make it happen.In the span of little more than a year, copilots have emerged in the software development domain as a potentially revolutionary tool with near-endless potential to shape the way software is planned, built, tested and delivered. However, security concerns around copilot adoption are high, particularly in enterprise environments.In this article, I will lean into some of the top security risks and highlight approaches to avoid trouble later.Copilots can exacerbate several security challenges we grapple with in software development.Copilots' ability to draw from a vast corpus of code repositories makes them a powerful ally for programmers. However, adoption is not without risk. The responsibility to write and deliver secure software encompasses disciplines such as well-formed/secure code, avoidance of risky third-party dependencies and ensuring data privacy, encryption, code obfuscation and anti-tampering techniques. These security dimensions have long challenged "human" generated code, and early analysis indicates that code generated by copilots presents an even greater risk. In fact, recent surveys have shown sharp increases in security vulnerabilities in AI-generated code.There are three key categories that must be understood and addressed: code vulnerabilities, dependency risks and data privacy concerns.1. Code VulnerabilitiesAI models are trained on data sets that are fed to them. In the world of software development copilots, this tends to be large repositories of code. Unfortunately, the training data (in this case the code) often includes security vulnerabilitiesand those vulnerabilities will be ingested by the AI as they are. Given copilots simply pattern match at scale and thus do not understand coding semantics, they can often repeat (at scale) vulnerabilities found in the training data.Studies have demonstrated that a model trained with errors is likely to increase the errors: The average commercial software project has 40 first-party code vulnerabilities, with a third categorized as high severity. If the copilot sees this code as good, it will use it repeatedly, increasing the number of times a mistake appears.2. Dependency RisksBeyond first-party code vulnerabilities, todays modern software applications include endless dependencies on third-party libraries and services. While third-party dependencies have long introduced security risks for human-generated applications, given copilots are unaware of security risks and are often trained on legacy code bases, they have the potential of automatically introducing dependencies to outdated and insecure libraries and third-party systems. Based on the overall volume of code generated, enterprises may, in fact, lack visibility into these dependencies, making it more challenging than ever to remediate and, in some cases, patch those vulnerabilities.3. Data Privacy ConcernsAs governments worldwide continue to expand and further enforce regulations around data privacy from both an industry (PHI, HIPAA, etc.) and regional (GDPR, etc.) 
perspective, organizations must also consider the data privacy risks when using copilots to accelerate software development. As we have discussed, copilots ultimately do not understand semantics, nor do they currently understand and contextualize regulations. With that, copilots have the potential to unknowingly include sensitive data. This puts increasing pressure on developers, security organizations and engineering teams to ensure that generated code is well contextualized.Mitigate Risk And Improve OutcomesThe good news is that there are people, processes and tooling solutions to mitigate risk and improve outcomes. Here are a few strategies, approaches and best practices for human-generated code:Education And TrainingOrganizations should continue to provide comprehensive training to developers on recognizing and addressing security vulnerabilitiesnot just in their own code but in copilot-generated code as well. Next, the focus should be on establishing best practices and guidelines for responsibly using copilots and prioritizing security as part of those efforts. Creating copilot and AI-focused Centers of Excellence (CoEs) is also encouraged to establish best practices in processes and tooling. Leading organizations are also beginning to identify and track key metrics to increase visibility both for productivity gains and broader risks.Code Review And Enhanced Scanning (SAST/DAST)Enterprises should expand their focus on and usage of commonly deployed application security testing tools. These tools provide developers with the ability to either scan software from within the code or against the running software, a.k.a. SAST or DAST. Security professionals can also use penetration testingan approach that searches for further vulnerabilities in applications through outside-in simulated attacks (performed either manually or automatically, leveraging any number of available tools). All these tools are useful for human and AI-generated code. Organizations should also implement more robust code review processes focusing on the early identification of security flaws while also ensuring adherence to coding standards.Modern And AI-Aware DevSecOpsIn the past decade, I've noticed organizations dramatically reduce security and quality issues while improving compliance through DevSecOps standardization and automation. Organizations that have mature DevSecOps processes including various flavors of security-related scanning can safely and successfully adopt copilots. Their well-defined governance processes, automation and ability to measure the end-to-end business process of building and delivering software puts them in a unique position to quickly adopt while managing risk. Automating DevSecOps processes ensures best practices are followed for critical tasks from the final code pull request through to production, whether delivering human or AI-generated code through the delivery pipeline.As the landscape of AI-assisted software development continues to evolve, enterprises must remain vigilant in striking the right balance between innovation and security. These best practices will bring value and dramatically reduce risk today and into the future.Forbes Business Council is the foremost growth and networking organization for business owners and leaders. Do I qualify?
Content Creation/Process Automation
Unknown
null
null
null
null
null
null
news
Julian Horsey
Develop full-stack apps without writing any code
Have you ever wished you could develop full-stack apps without the hassle of writing endless lines of code? What if I told you there’s a way to do just that, effortlessly and efficiently? Enter DeepSeek-Coder-V2 and Aider, a dynamic duo that empowers you to create sophisticated applications with minimal effort. Develop Full-Stack Apps […]The post Develop full-stack apps without writing any code appeared first on Geeky Gadgets.
https://www.geeky-gadgets.com/?p=435054
https://www.geeky-gadget…without-code.jpg
2024-08-09T10:56:58Z
Have you ever wished you could develop full-stack apps without the hassle of writing endless lines of code? What if I told you there’s a way to do just that, effortlessly and efficiently? Enter DeepSeek-Coder-V2 and Aider, a dynamic duo that empowers you to create sophisticated applications with minimal effort. Develop Full-Stack Apps with No-Code. DeepSeek-Coder-V2: A Catalyst in Coding Models. The integration of artificial intelligence (AI) in software development has transformed the way applications are built. By leveraging the power of the DeepSeek-Coder-V2 model and Aider, an AI pair programmer, developers can now create full-stack applications without writing a single line of code. DeepSeek-Coder-V2 is an innovative open-source coding model that stands shoulder-to-shoulder with industry giants like GPT-4 Omni and Claude 3.5 Sonnet. With support for an impressive 338 programming languages and exceptional proficiency in coding and mathematical reasoning, this model simplifies the coding process, making it accessible to developers of all skill levels: it supports 338 programming languages, excels in coding and mathematical reasoning, and simplifies the coding process for developers at all levels. Aider: Your AI Pair Programming Companion. Aider is a powerful terminal-based AI assistant that amplifies the capabilities of large language models. It seamlessly integrates with DeepSeek-Coder-V2 to generate, debug, and edit code, streamlining the development process. With Aider by your side, you can focus on high-level design and logic while the AI takes care of the intricate details of code generation. In short, it enhances the capabilities of large language models, generates, debugs, and edits code, and lets developers focus on high-level design and logic. Getting Started: Installation Made Easy. To harness the power of Aider and DeepSeek-Coder-V2, follow these simple installation steps: 1. Prerequisites: ensure that Python, Git, and VS Code are installed on your system. 2. Install Aider: download and install Aider from its official repository, following the instructions specific to your operating system. 3. Install DeepSeek-Coder-V2: clone the DeepSeek-Coder-V2 repository and set it up locally; detailed instructions can be found in the documentation. By completing these steps, you’ll have your environment set up for seamless integration and efficient use of these powerful tools. Effortless Application Development. Creating applications with Aider and DeepSeek-Coder-V2 is a breeze. Let’s take a look at how you can develop a calendar app: 1. Initialize the model: use simple terminal commands to initialize the DeepSeek-Coder-V2 model. 2. Generate code: provide a clear and concise prompt describing the desired features of your calendar app, and the AI will generate the necessary code components. 3. Compile and run: compile the generated code and run the application to see your vision come to life. This streamlined process showcases the efficiency and ease of developing full-stack applications without the need for manual coding (see the sketch below for what the prompt-to-code step can look like). Unleashing Endless Possibilities. The potential applications of Aider and DeepSeek-Coder-V2 are virtually limitless.
From creating engaging games to developing sophisticated SaaS products, these tools empower developers to bring their ideas to fruition with unprecedented speed and ease. For instance, generating a simple game with a single prompt demonstrates the power of AI in automating code generation. This capability allows developers to focus on innovation and creativity rather than getting bogged down in the intricacies of coding. Dive Deeper: Additional Resources. To further explore the capabilities of Aider and DeepSeek-Coder-V2, refer to the following resources. Repositories: access the official repositories for Aider and DeepSeek-Coder-V2 to stay up-to-date with the latest developments and contributions from the community. Documentation: dive into the detailed documentation that provides in-depth guidance on using these tools effectively and efficiently. AI Solutions: explore a wide range of additional AI solutions that cater to both business and personal use cases, expanding your horizons in the world of AI-assisted development. These resources offer valuable insights and support, ensuring that you can fully leverage the capabilities of Aider and DeepSeek-Coder-V2 to create remarkable full-stack apps without the need to write any code. The combination of DeepSeek-Coder-V2 and Aider represents a significant leap forward in AI-assisted software development. By automating the coding process, these tools democratize full-stack application development, making it accessible to a wider audience of developers. Whether you’re a seasoned professional or just starting your coding journey, embracing these technologies can supercharge your productivity and ignite your innovation. Video & Image Credit: Source
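For readers curious what the "prompt in, code out" step looks like beneath a tool like Aider, the sketch below calls DeepSeek's OpenAI-compatible API directly with the openai Python client. The base URL, model identifier, and environment variable are assumptions based on DeepSeek's public API conventions rather than anything specified in this article, and Aider itself layers repository context, diff editing, and git integration on top of a call like this:

```python
import os
from openai import OpenAI

# Assumed endpoint and model identifier for DeepSeek's OpenAI-compatible API;
# set DEEPSEEK_API_KEY in your environment before running.
client = OpenAI(
    api_key=os.environ["DEEPSEEK_API_KEY"],
    base_url="https://api.deepseek.com",
)

prompt = (
    "Write a minimal single-file calendar web app: one HTML page with embedded "
    "JavaScript that renders the current month and lets me click a day to add a note."
)

response = client.chat.completions.create(
    model="deepseek-coder",  # assumed model name
    messages=[
        {"role": "system", "content": "You are a senior full-stack developer. Return only code."},
        {"role": "user", "content": prompt},
    ],
)

# Save the generated app to disk, then open it in a browser ("compile and run").
with open("calendar_app.html", "w") as f:
    f.write(response.choices[0].message.content)
```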
Content Creation/Process Automation
Unknown
null
null
null
null
null
null
news
Janakiram MSV, Senior Contributor, https://www.forbes.com/sites/janakirammsv/
Google Brings Serverless Inference To Cloud Run Based On Nvidia GPU
The addition of NVIDIA L4 GPU support to Google Cloud Run represents a major milestone in cloud-based AI development and serverless AI inference.
https://www.forbes.com/sites/janakirammsv/2024/08/23/google-brings-serverless-inference-to-cloud-run-based-on-nvidia-gpu/
https://imageio.forbes.c…=1600&fit=bounds
2024-08-23T07:33:54Z
Google Cloud's recent enhancement to its serverless platform, Cloud Run, with the addition of NVIDIA L4 GPU support, is a significant advancement for AI developers. This move, which is still in preview, improves the platform's ability to handle complex AI workloads, allowing developers to deploy, scale and optimize AI-powered applications more efficiently.
Cloud Run has already established itself as a go-to serverless platform for developers due to its simplicity, fast autoscaling and pay-per-use pricing model. These features allow for rapid deployment and scaling of applications without the need to manage servers. With the integration of NVIDIA L4 GPUs, the platform's capabilities are now extended to support real-time AI inference, a crucial requirement for many emerging generative AI applications. The NVIDIA L4 GPU is designed for inference at scale, offering up to 120 times the video performance compared to CPUs and 2.7 times the performance for generative AI tasks compared to previous generations of GPUs.
This integration is particularly beneficial for deploying lightweight generative AI models and small language models such as Google's Gemma and Meta's Llama. These models are popular for tasks like chatbot creation, real-time document summarization and various other AI-driven functions. For example, Google's Gemma models (2B and 7B) and Meta's Llama models (8B) can be used to develop highly responsive and scalable AI applications. The introduction of NVIDIA L4 GPUs in Cloud Run ensures that these models can run efficiently, even during peak traffic, without compromising performance.
Deploying AI models on Cloud Run with NVIDIA L4 GPUs is designed to be a seamless process. Developers can create container images that include the necessary dependencies, such as NVIDIA GPU drivers and the AI model itself. Once the container is built and pushed to a container registry, it can be deployed on Cloud Run with GPU support enabled. This process allows businesses to take full advantage of Cloud Run's scalability and NVIDIA's powerful GPUs without the need for specialized infrastructure management.
The platform's flexibility extends to supporting various Google Cloud services, including Google Kubernetes Engine and Google Compute Engine, giving developers the choice of the level of abstraction they need for building and deploying AI-enabled applications. This flexibility is critical for businesses looking to tailor their AI deployments to specific needs while ensuring that they can scale efficiently as demand fluctuates.
The enhanced capabilities of Cloud Run with NVIDIA L4 GPUs extend beyond just AI inference. They also enable a variety of other compute-intensive tasks such as on-demand image recognition, video transcoding, streaming and 3D rendering. This makes Cloud Run a versatile platform that can cater to a wide range of applications, from AI-driven chatbots to media processing services. The flexibility offered by the platform, coupled with its ability to scale down to zero during inactivity, ensures that businesses can optimize costs while maintaining high performance during active usage.
A key aspect of this development is the NVIDIA-Google Cloud partnership, which aims to provide advanced AI capabilities across various layers of the AI stack. This partnership includes the provision of Google Cloud A3 VMs powered by NVIDIA H100 GPUs, which offer significantly faster training times and improved networking bandwidth compared to previous generations.
Additionally, NVIDIA DGX Cloud, a software and AI supercomputing solution, is available to customers directly through their web browsers. This allows businesses to run large-scale AI training workloads with ease.
NVIDIA AI Enterprise, which is available on Google Cloud Marketplace, provides a secure, cloud-native platform for developing and deploying enterprise-ready AI applications. This platform simplifies the process of integrating AI into business operations, making it easier for companies to harness the power of AI without needing extensive in-house expertise.
Several companies are already benefiting from the integration of NVIDIA GPUs into Google Cloud Run. For instance, L'Oréal, a leader in the beauty industry, is using this technology to power its real-time AI inference applications. The company has reported that Cloud Run's GPU support has significantly enhanced its ability to provide fast, accurate and efficient results to its customers, particularly in time-sensitive applications.
Another example is Writer, an AI writing platform that has seen substantial improvements in its model inference performance while reducing hosting costs by 15%. This has been made possible through Google Cloud's AI Hypercomputer architecture, which leverages NVIDIA GPUs to optimize performance and cost-efficiency.
The addition of NVIDIA L4 GPU support to Google Cloud Run represents a major milestone in cloud-based AI development and serverless AI inference. By combining the ease of use and scalability of Cloud Run with the powerful performance of NVIDIA GPUs, Google Cloud is offering developers and businesses the tools they need to build, deploy and scale AI applications.
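As a rough illustration of the container-based deployment flow described above, the sketch below shows the kind of minimal Python inference server you might package into such an image. Cloud Run injects the PORT environment variable; the model ID (google/gemma-2b-it) and the use of Hugging Face's transformers pipeline are illustrative assumptions, not part of Google's announcement.
import os
import uvicorn
from fastapi import FastAPI
from pydantic import BaseModel
from transformers import pipeline  # assumes torch with CUDA is available inside the image

app = FastAPI()
_generator = None  # lazy-load so the container starts quickly


class Prompt(BaseModel):
    text: str


def get_generator():
    global _generator
    if _generator is None:
        # Illustrative model ID; any small language model baked into the image works.
        _generator = pipeline("text-generation", model="google/gemma-2b-it", device=0)
    return _generator


@app.post("/generate")
def generate(prompt: Prompt):
    out = get_generator()(prompt.text, max_new_tokens=128)
    return {"completion": out[0]["generated_text"]}


if __name__ == "__main__":
    # Cloud Run tells the container which port to listen on via the PORT environment variable.
    uvicorn.run(app, host="0.0.0.0", port=int(os.environ.get("PORT", 8080)))
Deploying the resulting image would then follow the usual gcloud run deploy flow with GPU options enabled; the exact flags are documented in Google's preview documentation.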
Unknown
Unknown
null
null
null
null
null
null
news
Julian Horsey
CodeQwen free open source AI coding assistant
If you are looking for an open source AI coding assistant that is not only free but also integrates seamlessly with Visual Studio Code, you might be interested in CodeQwen, an open-source AI code generator that has quickly gained popularity among developers for its powerful features and ease of use. With version 1.5 available on the […]The post CodeQwen free open source AI coding assistant appeared first on Geeky Gadgets.
https://www.geeky-gadgets.com/?p=434446
https://www.geeky-gadget…ng-assistant.jpg
2024-08-02T11:28:55Z
If you are looking for an open source AI coding assistant that is not only free but also integrates seamlessly with Visual Studio Code, you might be interested in CodeQwen, an open-source AI code generator that has quickly gained popularity among developers for its powerful features and ease of use. With version 1.5 available on the Ollama platform and a strong showing on coding model leaderboards, CodeQwen has proven its effectiveness and reliability in generating high-quality code snippets. This guide will walk you through the process of setting up and using CodeQwen on your local machine, allowing you to harness the power of AI-assisted coding without relying on external servers or paid services.
Free AI Coding Assistant
Installation Made Easy
To begin your journey with CodeQwen, the first step is to install Ollama on your machine. Ollama provides support for Windows, macOS, and Linux, ensuring compatibility across various operating systems. Once you have successfully installed Ollama, you can proceed to download and set up CodeQwen locally. This local setup offers the advantage of having a robust AI code generator at your fingertips, allowing you to work offline and maintain complete control over your coding environment.
Seamless Integration with Visual Studio Code
Visual Studio Code (VS Code) is a widely used code editor that offers a rich ecosystem of extensions to enhance its functionality. To integrate CodeQwen with VS Code, you can install the Continue extension. This extension enables you to configure VS Code to use local models like CodeQwen, providing a smooth and efficient coding experience. By bridging the gap between CodeQwen and VS Code, you can leverage the power of AI-assisted coding directly within your preferred development environment.
Unleashing the Power of CodeQwen
CodeQwen offers a range of functionalities that streamline your coding process and boost productivity. Let's explore some of its key features:
Code Generation: CodeQwen excels at generating code snippets, allowing you to quickly create programs and algorithms. Whether you need a basic "Hello World" program in Python or a more complex algorithm like bubble sort, CodeQwen has you covered. This feature is particularly useful for rapid prototyping and testing, saving you valuable time and effort.
Autocomplete Feature: The autocomplete feature in CodeQwen takes your coding efficiency to the next level. Enabled through the Continue extension in VS Code, this feature harnesses the power of AI to predict and suggest code completions as you type. By intelligently analyzing your code context, CodeQwen provides accurate and relevant suggestions, reducing the need for manual typing and minimizing errors.
Code Editing: With CodeQwen integrated into VS Code, editing generated code snippets becomes a breeze. You can directly modify and refine the code within the familiar VS Code environment, ensuring a seamless workflow. This integration allows you to make quick adjustments, add comments, and optimize your code without switching between different tools or platforms.
Debugging Made Simple: Debugging is an essential aspect of coding, and CodeQwen simplifies this process through its integration with the Continue extension. With AI-powered debugging capabilities, CodeQwen helps you identify and resolve bugs in your code more efficiently.
It provides intelligent suggestions and insights, guiding you towards fixing errors and improving the overall quality of your code.
Here is a selection of other articles from our extensive library of content you may find of interest on the subject of AI assistants:
Flexibility and Compatibility
One of the standout features of CodeQwen is its compatibility with other popular AI providers like OpenAI and Anthropic. By configuring these providers alongside Ollama in the Continue extension, you gain access to a wide range of innovative AI models for code generation. This flexibility ensures that you have all the capabilities of a paid code copilot available at your fingertips, without the associated costs. With CodeQwen, you can enjoy a powerful and cost-effective solution for AI-assisted coding.
Embarking on your CodeQwen journey is an exciting opportunity to transform your coding workflow. By setting up CodeQwen locally on your machine and integrating it with VS Code, you unlock a world of possibilities. From generating code snippets and using autocompletion to streamlining debugging and leveraging the power of multiple AI providers, CodeQwen empowers you to code smarter and faster. This guide serves as a starting point, encouraging you to explore the full potential of CodeQwen and stay updated with the latest advancements in AI-assisted coding. Get ready to experience the future of coding with CodeQwen as your trusted companion.
Video Credit: Source
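Once the model has been pulled locally (for example with "ollama pull codeqwen"), you can exercise it from Python via the official Ollama client, independently of the editor integration. This is a minimal sketch; the "codeqwen" tag matches the model's listing in the Ollama library, but verify the exact tag for version 1.5 on your install.
# pip install ollama  (requires a local Ollama server, started with `ollama serve`)
import ollama

# Assumes the model has already been pulled, e.g. `ollama pull codeqwen`.
response = ollama.chat(
    model="codeqwen",
    messages=[{"role": "user", "content": "Write a Python function that implements bubble sort."}],
)
# Newer client versions also expose response.message.content.
print(response["message"]["content"])
The Continue extension talks to the same local Ollama server, so the model you test here is the one that will power completions and edits inside VS Code.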
Content Creation/Process Automation
Unknown
null
null
null
null
null
null
news
Asha Sharma
Accelerating AI app development with Azure AI and GitHub
We are excited to partner with GitHub to empower their more than 100 million developers to build AI applications directly from GitHub.com with seamless integrations with Codespaces and Microsoft Visual Studio Code.The post Accelerating AI app development with Azure AI and GitHub appeared first on Microsoft Azure Blog.
https://azure.microsoft.com/en-us/blog/accelerating-ai-app-development-with-azure-ai-and-github/
https://azure.microsoft.…_Blog_240731.png
2024-08-01T16:00:00Z
Microsoft is empowering developers to become AI developers, bringing Azure AI's industry-leading models to the global GitHub community of more than 100 million.
More than 60,000 organizations use Microsoft Azure AI today to explore the power of custom AI applications. However, the market is quickly moving from experimentation to scale, and we see more developers around the world becoming AI developers. With this natural evolution, the needs of developers and their requirements to access and build with AI models and tools are transforming as well.
To support this shift to scale, we are excited to partner with GitHub to empower their more than 100 million developers to build AI applications directly from GitHub.com with seamless integrations with Codespaces and Microsoft Visual Studio Code. Our collaboration starts today as we bring Azure AI's leading model selection to developers through GitHub Models, along with simple APIs to empower responsible, production-ready AI applications.
For more insights into how GitHub Models can help you increase experimentation and accelerate your development cycles, all in GitHub, please read the blog from GitHub CEO Thomas Dohmke.
As AI model innovation accelerates, Azure remains committed to delivering the leading model selection and greatest model diversity to meet the unique cost, latency, design, and safety needs of AI developers. Today, we offer the largest and most complete model library in the market, including the latest models from OpenAI, Meta, Mistral and Cohere and updates to our own Phi-3 family of small language models. With GitHub Models, developers can now explore and utilize the latest models along with AI innovations and next-generation frontier models. This offering gives every developer the flexibility to choose the best combination of unique capabilities, performance metrics, and cost efficiencies.
While continuous model innovation brings more choice, it also brings complexity when selecting the right model for the right scenario. Today, developers have a range of options for cloud vs. edge, general-purpose vs. task-specific, and more. On top of that, organizations often need multiple models to enable better quality, lower cost of goods sold, and to address complex use cases for each industry. GitHub Models opens the door for developers to experiment with multiple models, simplifying model experimentation and selection across the best of the Azure AI catalog, quickly comparing models, parameters, and prompts.
By making Azure AI an open, modular platform, we aim to help our customers rapidly go from idea to code to cloud. With Azure AI on GitHub, developers can do just that by utilizing Codespaces to set up a prototype or using the Prompty extension to generate code with GitHub Models directly in Microsoft Visual Studio Code.
In the coming months, we will expand our integration even further, bringing Azure AI's language, vision, and multi-modal services to GitHub, along with additional Azure AI toolchain elements, further streamlining the AI application development process.
Developers building with AI want to be confident their AI applications are trustworthy, safe, and secure.
GitHub Models gives developers a strong foundation from the start with built-in safety and security controls from Azure AI.
Azure AI works with model providers and other partners such as HiddenLayer to reduce emerging threats, from cybersecurity vulnerabilities to malware and other signs of tampering. And we have taken this further in GitHub Models by integrating Azure AI Content Safety for top foundation models including Azure OpenAI Service, Llama, and Mistral. Azure AI Content Safety enables built-in, real-time protection against risks such as the generation of harmful content, copyright materials, hallucination, and new AI-specific attacks such as jailbreaks and prompt injection attacks.
If developers want to go deeper, they can customize these controls in Azure AI, using evaluations to test and monitor their applications for ongoing quality and safety.
Increased model selection gives developers the broadest range of options for the individual applications they are building. But each model naturally brings with it increased complexity. To counteract this, we're making it incredibly easy for every developer to experiment with a range of models through the Azure AI model inference API. Using this single API, GitHub developers can now access a common set of capabilities to compare performance across a diverse set of foundational models in a uniform and consistent way, easily switching between models to compare performance without changing the underlying code.
The Azure AI Inference SDK provides client libraries in Python and JavaScript, with support for C# and .NET coming soon. This SDK makes it easy to integrate AI into your applications by simplifying common tasks related to authentication, security and retries in your programming language of choice. You can get started today with Python and JavaScript samples.
Beyond these new integrations, we are also making it easier than ever for organizations to access GitHub Enterprise through Azure, combining GitHub's cloud-native platform with Azure's robust enterprise-grade security and scalability.
Organizations with an existing Azure subscription can purchase GitHub products via self-service, directly through Microsoft Sales or via Microsoft Cloud Solution Providers, and can adjust the number of GitHub seats as needed to ensure efficient usage. Additionally, eligible organizations may take advantage of the Microsoft Azure Consumption Commitment (MACC) and Azure Commitment Discount (ACD). Companies can now spin up a GitHub instance directly from the Azure Portal and connect their Microsoft Entra ID with GitHub to facilitate user management and access control. With an Azure subscription, you have all the necessary tools for creating an intelligent AI application, including access to GitHub's complete range of services like repositories, Actions, Advanced Security, and Copilot. This makes it incredibly simple and efficient to give developers everything they need to build and deploy AI applications at scale.
We invite you to experience the power of this integrated end-to-end development experience. New customers can explore these capabilities with a free 30-day trial of GitHub Enterprise. We can't wait to see what you will build with GitHub and Azure.
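The Azure AI model inference API mentioned above has a small Python surface. The sketch below is a minimal example under the assumption that you are calling a GitHub Models endpoint with a personal access token via the azure-ai-inference package; the endpoint URL and model name are illustrative and should be checked against the current GitHub Models documentation.
import os
from azure.ai.inference import ChatCompletionsClient  # pip install azure-ai-inference
from azure.ai.inference.models import SystemMessage, UserMessage
from azure.core.credentials import AzureKeyCredential

# A GitHub personal access token works during the beta; an Azure key takes its place in production.
client = ChatCompletionsClient(
    endpoint="https://models.inference.ai.azure.com",  # assumed GitHub Models endpoint
    credential=AzureKeyCredential(os.environ["GITHUB_TOKEN"]),
)

response = client.complete(
    model="gpt-4o-mini",  # swap the model name to compare models without changing this code
    messages=[
        SystemMessage(content="You are a concise assistant."),
        UserMessage(content="Summarize what serverless inference means in one sentence."),
    ],
)
print(response.choices[0].message.content)
Because only the model name changes between calls, the same snippet can be reused to compare several catalog models side by side.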
Content Creation/Process Automation
Unknown
null
null
null
null
null
null
news
Sakana AI
The AI Scientist: Towards Automated Open-Ended Scientific Discovery
The AI Scientist: Towards Fully Automated Open-Ended Scientific Discovery
https://sakana.ai/ai-scientist/
https://sakana.ai/assets…/sakana_rect.png
2024-08-13T02:10:58Z
August 13, 2024
At Sakana AI, we have pioneered the use of nature-inspired methods to advance cutting-edge foundation models. Earlier this year, we developed methods to automatically merge the knowledge of multiple LLMs. In more recent work, we harnessed LLMs to discover new objective functions for tuning other LLMs. Throughout these projects, we have been continuously surprised by the creative capabilities of current frontier models. This led us to dream even bigger: Can we use foundation models to automate the entire process of research itself?
Introduction
One of the grand challenges of artificial intelligence is developing agents capable of conducting scientific research and discovering new knowledge. While frontier models have already been used to aid human scientists, e.g. for brainstorming ideas or writing code, they still require extensive manual supervision or are heavily constrained to a specific task.
Today, we're excited to introduce The AI Scientist, the first comprehensive system for fully automatic scientific discovery, enabling foundation models such as Large Language Models (LLMs) to perform research independently. In collaboration with the Foerster Lab for AI Research at the University of Oxford and Jeff Clune and Cong Lu at the University of British Columbia, we're excited to release our new paper, The AI Scientist: Towards Fully Automated Open-Ended Scientific Discovery.
In our report:
We propose and run a fully AI-driven system for automated scientific discovery, applied to machine learning research.
The AI Scientist automates the entire research lifecycle, from generating novel research ideas, writing any necessary code, and executing experiments, to summarizing experimental results, visualizing them, and presenting its findings in a full scientific manuscript.
We also introduce an automated peer review process to evaluate generated papers, write feedback, and further improve results. It is capable of evaluating generated papers with near-human accuracy.
The automated scientific discovery process is repeated to iteratively develop ideas in an open-ended fashion and add them to a growing archive of knowledge, thus imitating the human scientific community.
In this first demonstration, The AI Scientist conducts research in diverse subfields within machine learning research, discovering novel contributions in popular areas such as diffusion models, transformers, and grokking.
The AI Scientist is designed to be compute efficient. Each idea is implemented and developed into a full paper at a cost of approximately $15 per paper. While there are still occasional flaws in the papers produced by this first version (discussed below and in the report), this cost and the promise the system shows so far illustrate the potential of The AI Scientist to democratize research and significantly accelerate scientific progress.
We believe this work signifies the beginning of a new era in scientific discovery: bringing the transformative benefits of AI agents to the entire research process, including that of AI itself. The AI Scientist takes us closer to a world where endless affordable creativity and innovation can be unleashed on the world's most challenging problems.
For decades following each major AI advance, it has been common for AI researchers to joke amongst themselves that "now all we need to do is figure out how to make the AI write the papers for us!"
Our work demonstrates this idea has gone from a fantastical joke so unrealistic everyone thought it was funny to something that is currently possible.
An example paper, Adaptive Dual-Scale Denoising, generated by The AI Scientist. The full paper can be viewed here. While containing some flaws (e.g. a slightly unconvincing interpretation of why its method is successful), the paper proposes an interesting new direction that displays good empirical results in experiments The AI Scientist itself conducted and peer reviewed. More examples of generated papers are below.
The remainder of this post provides a more detailed summary of The AI Scientist. Read on for:
An Overview of how The AI Scientist works.
More Examples of generated papers and innovations discovered by The AI Scientist.
Known Limitations and Challenges faced by the current version of The AI Scientist.
Interesting and unexpected things The AI Scientist sometimes does in order to increase its chance of success, such as modifying and launching its own execution script! We discuss the AI safety implications in our paper.
A Discussion about ethical and broader future implications of The AI Scientist.
For more details and many more example papers, please see our full scientific report. We are also releasing open source code and full experimental results on our GitHub repository.
Overview of The AI Scientist
The AI Scientist is a fully automated pipeline for end-to-end paper generation, enabled by recent advances in foundation models. Given a broad research direction starting from a simple initial codebase, such as an available open-source code base of prior research on GitHub, The AI Scientist can perform idea generation, literature search, experiment planning, experiment iterations, figure generation, manuscript writing, and reviewing to produce insightful papers. Furthermore, The AI Scientist can run in an open-ended loop, using its previous ideas and feedback to improve the next generation of ideas, thus emulating the human scientific community.
Conceptual illustration of The AI Scientist. The AI Scientist first brainstorms a set of ideas and then evaluates their novelty. Next, it edits a codebase powered by recent advances in automated code generation to implement the novel algorithms. The Scientist then runs experiments to gather results consisting of both numerical data and visual summaries. It crafts a scientific report, explaining and contextualizing the results. Finally, The AI Scientist generates an automated peer review based on top-tier machine learning conference standards. This review helps refine the current project and informs future generations of open-ended ideation.
The AI Scientist has 4 main processes, described next.
Idea Generation. Given a starting template, The AI Scientist first brainstorms a diverse set of novel research directions. We provide The AI Scientist with a starting code template of an existing topic we wish to have The AI Scientist further explore. The AI Scientist is then free to explore any possible research direction. The template also includes a LaTeX folder that contains style files and section headers, for paper writing. We allow it to search Semantic Scholar to make sure its idea is novel.
Experimental Iteration. Given an idea and a template, the second phase of The AI Scientist first executes the proposed experiments and then obtains and produces plots to visualize its results.
It makes a note describing what each plot contains, enabling the saved figures and experimental notes to provide all the information required to write up the paper.
Paper Write-up. Finally, The AI Scientist produces a concise and informative write-up of its progress in the style of a standard machine learning conference proceeding in LaTeX. It uses Semantic Scholar to autonomously find relevant papers to cite.
Automated Paper Reviewing. A key aspect of this work is the development of an automated LLM-powered reviewer, capable of evaluating generated papers with near-human accuracy. The generated reviews can be used to either improve the project or as feedback to future generations for open-ended ideation. This enables a continuous feedback loop, allowing The AI Scientist to iteratively improve its research output.
When combined with the most capable LLMs, The AI Scientist is capable of producing papers judged by our automated reviewer as Weak Accept at a top machine learning conference.
Example Papers Generated by The AI Scientist
Here, we highlight some of the machine learning papers The AI Scientist has generated, demonstrating its capacity to discover novel contributions in areas like diffusion modeling, language modeling, and grokking. In our full report, we do a deeper dive into the generated papers and provide more analysis on their strengths and weaknesses.
Diffusion Modeling: DualScale Diffusion: Adaptive Feature Balancing for Low-Dimensional Generative Models (Link to Full PDF, Link to Code)
Language Modeling: StyleFusion: Adaptive Multi-style Generation in Character-Level Language Models (Link to Full PDF, Link to Code); Adaptive Learning Rates for Transformers via Q-Learning (Link to Full PDF, Link to Code)
Grokking: Unlocking Grokking: A Comparative Study of Weight Initialization Strategies in Transformer Models (Link to Full PDF, Link to Code)
Limitations and Challenges
In its current form, The AI Scientist has several shortcomings. We expect all of these will improve, likely dramatically, in future versions with the inclusion of multi-modal models and as the underlying foundation models The AI Scientist uses continue to radically improve in capability and affordability.
The AI Scientist currently doesn't have any vision capabilities, so it is unable to fix visual issues with the paper or read plots. For example, the generated plots are sometimes unreadable, tables sometimes exceed the width of the page, and the page layout is often suboptimal. Adding multi-modal foundation models can fix this.
The AI Scientist can incorrectly implement its ideas or make unfair comparisons to baselines, leading to misleading results.
The AI Scientist occasionally makes critical errors when writing and evaluating results. For example, it struggles to compare the magnitude of two numbers, which is a known pathology with LLMs. To partially address this, we make sure all experimental results are reproducible, storing all files that are executed.
In our report, we dive deeper into The AI Scientist's current limitations and challenges ahead.
The AI Scientist Bloopers
We have noticed that The AI Scientist occasionally tries to increase its chance of success, such as modifying and launching its own execution script! We discuss the AI safety implications in our paper.
For example, in one run, it edited the code to perform a system call to run itself. This led to the script endlessly calling itself. In another case, its experiments took too long to complete, hitting our timeout limit.
Instead of making its code run faster, it simply tried to modify its own code to extend the timeout period. Here are some examples of such code modifications it made:
These issues can be mitigated by sandboxing the operating environment of The AI Scientist. In our full report, we discuss the issue of safe code execution and sandboxing in depth.
Future Implications of The AI Scientist
As with many new technologies, The AI Scientist opens up a Pandora's box of new issues. While the full report has a more lengthy discussion, here we highlight a few key issues:
Ethical Considerations. While The AI Scientist may be a useful tool for researchers, there is significant potential for misuse. The ability to automatically create and submit papers to venues may significantly increase reviewer workload and strain the academic process, obstructing scientific quality control. Similar concerns around generative AI appear in other applications, such as the impact of image generation.
Furthermore, the Automated Reviewer, if deployed online by reviewers, may significantly lower review quality and impose undesirable biases on papers. Because of this, we believe that papers and reviews that are substantially AI-generated must be marked as such for full transparency.
As with most previous technological advances, The AI Scientist has the potential to be used in unethical ways. For instance, it could be deployed to conduct unethical research. It could also lead to unintended harm if The AI Scientist conducts unsafe research. For example, if it were encouraged to find novel, interesting biological materials and given access to cloud labs where robots perform wet lab biology experiments, it could (without its overseers' intent) create new, dangerous viruses or poisons that harm people before we realize what has happened. Even in computers, if tasked to create new, interesting, functional software, it could create dangerous computer viruses. The AI Scientist's current capabilities, which will only improve, reinforce that the machine learning community needs to immediately prioritize learning how to align such systems to explore in a manner that is safe and consistent with our values.
Open Models. In this project, we used various proprietary frontier LLMs, such as GPT-4o and Sonnet, but we also explored using open models like DeepSeek and Llama-3. Currently, proprietary models such as Sonnet produce the highest quality papers. However, there is no fundamental reason to expect a single model like Sonnet to maintain its lead.
We anticipate that all frontier LLMs, including open models, will continue to improve. The competition among LLMs has led to their commoditization and increased capabilities. Therefore, our work aims to be model-agnostic regarding the foundation model provider. We found that open models offer significant benefits, such as lower costs, guaranteed availability, greater transparency, and flexibility. In the future, we aim to use our proposed discovery process to produce self-improving AI research in a closed-loop system using open models.
The Role of a Scientist. Ultimately, we envision a fully AI-driven scientific ecosystem including not only LLM-driven researchers but also reviewers, area chairs and entire conferences. However, we do not believe that the role of a human scientist will be diminished.
If anything, the role of a scientist will change and adapt to new technology, and move up the food chain.
The introduction of The AI Scientist marks a significant step towards realizing the full potential of AI in scientific research. By automating the discovery process and incorporating an AI-driven review system, we open the door to endless possibilities for innovation and problem-solving in the most challenging areas of science and technology.
But while the current iteration of The AI Scientist demonstrates a strong ability to innovate on top of well-established ideas, such as Diffusion Modeling or Transformers, it is still an open question whether such systems can ultimately propose genuinely paradigm-shifting ideas. Will future versions of The AI Scientist be capable of proposing ideas as impactful as Diffusion Modeling, or come up with the next Transformer architecture? Will machines ultimately be able to invent concepts as fundamental as the artificial neural network, or information theory?
We believe The AI Scientist will make a great companion to human scientists, but only time will tell the extent to which the nature of our human creativity and our moments of serendipitous innovation can be replicated by an open-ended discovery process conducted by artificial agents.
Want to make the AI that improves AI? Please see our Careers page for more information.
A fully automated AI fish discovering its world.
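The four-stage pipeline described in the overview can be summarized as a simple control loop. The Python sketch below is purely a conceptual illustration of that structure, not code from the released repository; the llm object, its method names, and the archive of ideas are assumptions made for clarity.
# Conceptual sketch of The AI Scientist's open-ended loop (illustrative only).
def run_ai_scientist(template, llm, num_rounds=5):
    archive = []  # hypothetical growing archive of ideas, results, and reviews
    for _ in range(num_rounds):
        # 1. Idea generation: brainstorm directions and check novelty against prior work.
        idea = llm.propose_idea(template, prior_work=archive)
        # 2. Experimental iteration: edit the codebase, run experiments, and plot results.
        results = llm.implement_and_run(template, idea)
        # 3. Paper write-up: draft a LaTeX manuscript with citations found via literature search.
        paper = llm.write_paper(idea, results)
        # 4. Automated review: score the paper and feed the critique into the next round.
        review = llm.review_paper(paper)
        archive.append({"idea": idea, "paper": paper, "review": review})
    return archive
The interesting property is the feedback edge: the review of round N becomes part of the context for the ideas of round N+1, which is what makes the process open-ended rather than a one-shot generation.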
Content Creation/Process Automation
Unknown
null
null
null
null
null
null
news
Thomas Dohmke
Introducing GitHub Models: A new generation of AI engineers building on GitHub
We are enabling the rise of the AI engineer with GitHub Models–bringing the power of industry leading large and small language models to our more than 100 million users directly on GitHub.The post Introducing GitHub Models: A new generation of AI engineers building on GitHub appeared first on The GitHub Blog.
https://github.blog/news-insights/product-news/introducing-github-models/
https://github.blog/wp-c…odels-header.png
2024-08-01T15:58:57Z
We believe every developer can be an AI engineer with the right tools and training. From playground to coding with the model in Codespaces to production deployment via Azure, GitHub Models shows you how simple it can be. Sign up for the limited public beta HERE.
From the early days of the home computer, the dominant mode of creation for developers has long been building, customizing, and deploying software with code. Today, in the age of AI, a secondary and equally important mode of creation is rapidly emerging: the ability to leverage machine learning models. Increasingly, developers are building generative AI applications where the full stack contains backend and frontend code plus one or more models. But a vast segment of developers still lack easy access to open and closed models. This changes today.
We are launching GitHub Models, enabling our more than 100 million developers to become AI engineers and build with industry-leading AI models.
From Llama 3.1, to GPT-4o and GPT-4o mini, to Phi 3 or Mistral Large 2, you can access each model via a built-in playground that lets you test different prompts and model parameters, for free, right in GitHub. And if you like what you're seeing on the playground, we've created a glide path to bring the models to your developer environment in Codespaces and VS Code. And once you are ready to go to production, Azure AI offers built-in responsible AI, enterprise-grade security & data privacy, and global availability, with provisioned throughput and availability in over 25 Azure regions for some models. It's never been easier to develop and run your AI application.
The joy begins in the model playground on GitHub
For most of us, learning to be a developer didn't happen on a linear path in the classroom. It took practicing, playing around, and learning through experimentation. The same is true today for AI models. In the new interactive model playground, students, hobbyists, startups, and more can explore the most popular private and open models from Meta, Mistral, Azure OpenAI Service, Microsoft, and others with just a few clicks and keystrokes. Experiment, compare, test, and deploy AI applications right where you manage your source code.
In alignment with GitHub and Microsoft's continued commitment to privacy and security, no prompts or outputs in GitHub Models will be shared with model providers, nor used to train or improve the models.
Professor David J. Malan will be putting GitHub Models to the test in Harvard's CS50 this fall, to enable students to experiment with AI all the more easily.
Test and compare different models
Every piece of software is unique. And likewise, every model is unique in its capabilities, performance, and cost. Mistral offers low latency, while GPT-4o is excellent at building multimodal applications that might demand audio, vision, and text in real time. Some advanced scenarios might require the integration of different modes, such as an embeddings model for Retrieval Augmented Generation (RAG).
With the suite of models, developers will have all the options they need to stay in the flow, experiment more, and learn faster than ever before. And this is just the first wave. In the months ahead, as we approach the general availability of GitHub Models, we will continue to add more language, vision, and other models to our platform.
Spin up Codespaces to bring your ideas to life
The fun doesn't just stay in the playground.
With the power of Codespaces, we've created a zero-friction path for you to experiment with the model inference code before dropping it into your own project. With sample code for a variety of languages and frameworks of all types ready to go, you can try out various scenarios without ever hitting "works on my machine" problems.
Then, once you're ready, it's a breeze to get things running in your project. Use the knowledge you've gained from the playground and Codespaces to set up a prototype or proof-of-concept within your own applications. Run prompt evals in GitHub Actions with a series of JSON files that you just pipe in the GitHub Models command within the GitHub CLI. Or you can leverage GitHub Models to build a GitHub Copilot Extension, extending GitHub's platform ecosystem for every stage of software development. And finally, go to production with Azure AI by replacing your GitHub personal access token with an Azure subscription and credential.
"As an AI startup founder and open source maintainer, GitHub Models enables my team to access and experiment with various LLMs in one place. This streamlines our development and lowers the entry barrier for building AI apps." - Anand Chowdhary // Co-founder, FirstQuadrant
The creator network for the age of AI
From the creation of AI through open source collaboration, to the creation of software with the power of AI, to enabling the rise of the AI engineer with GitHub Models – GitHub is the creator network for the age of AI.
The path to artificial general intelligence (AGI) will not be built without the source code and collaboration of the interconnected community on GitHub. Just in the last year, more than 100K generative AI projects were created on GitHub.
GitHub Copilot is foundationally changing the speed of software production, already writing nearly 50% of code in files where it's enabled. With GitHub Copilot Workspace, we envision a world where millions of novice, hobbyist, and professional developers alike can code with entirely human language.
And now with GitHub Models, more than 100 million developers can access and experiment with new AI models where they already manage their source code, issues, pull requests, workflows, and repositories – directly on GitHub.
In the years ahead, we will continue to democratize access to AI technologies to generate a groundswell of one billion developers. By doing so, we will enable 10% of the world's population to build and advance breakthroughs that will accelerate human progress for all.
Today, we begin the limited public beta for GitHub Models. Sign up now. We can't wait to see what you will build from here.
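Because the GitHub Models endpoint is OpenAI-compatible, comparing models can be as simple as looping over model names with a single client, as in the hedged sketch below. The endpoint URL and the specific model identifiers are assumptions to verify against the playground's "Get started" code samples; a GitHub personal access token is used as the API key during the beta.
import os
from openai import OpenAI  # pip install openai

# Assumed beta endpoint; the playground shows the current value for each model.
client = OpenAI(
    base_url="https://models.inference.ai.azure.com",
    api_key=os.environ["GITHUB_TOKEN"],
)

prompt = "Explain retrieval augmented generation in two sentences."
for model in ["gpt-4o-mini", "Meta-Llama-3.1-8B-Instruct", "Mistral-large-2407"]:  # illustrative names
    reply = client.chat.completions.create(
        model=model,
        messages=[{"role": "user", "content": prompt}],
    )
    print(f"--- {model} ---")
    print(reply.choices[0].message.content)
Swapping the token for an Azure credential, as the post describes, moves the same code from experimentation to production without changing the call structure.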
Content Creation/Process Automation/Decision Making
Computer and Mathematical
null
null
null
null
null
null
news
Nicole Choi
Automating open source: How Ersilia distributes AI models to advance global health equity
Discover how the Ersilia Open Source Initiative accelerates drug discovery by using GitHub Actions to disseminate AI/ML models.The post Automating open source: How Ersilia distributes AI models to advance global health equity appeared first on The GitHub Blog.
https://github.blog/open-source/social-impact/automating-open-source-how-ersilia-distributes-ai-models-to-advance-global-health-equity/
https://github.blog/wp-c…ightMode-2-1.png
2024-08-15T20:17:09Z
Taking an average of 10 years and $1.3 billion to develop a single new medication, pharmaceutical companies often focus their drug discovery efforts on a high return on investment, developing drugs for diseases prevalent in high-income countries and leaving lower- and middle-income countries behind.
In response, investments in building AI/ML models for drug discovery have soared in the last five years. By using these models, scientists can shorten their research and development timeline by getting better at identifying drug prospects. However, access to these models is limited by data science expertise and computational resources.
The nonprofit Ersilia Open Source Initiative is tackling this problem with the Ersilia Model Hub. Through the hub, Ersilia aims to disseminate AI/ML models and computational power to researchers focused on drug discovery for infectious diseases in regions outside of Europe and North America.
In this post, we'll share how Ersilia and GitHub engineers built a self-service IssueOps process to make AI/ML models in the hub publicly available, allowing researchers to find and run them for free on public repositories using GitHub Actions.
Ersilia Model Hub: What it is and who uses it
Though largely overlooked by for-profit pharmaceutical companies, research on infectious diseases in low- and middle-income countries is ongoing. The hub taps into that research by serving as a curated collection of AI/ML models relevant to the discovery of antibiotic drugs.
Through its platform, Ersilia helps to disseminate published findings and models, as well as their own, through public repositories on GitHub so undersourced researchers and institutions can use them for free to improve drug discovery in their respective countries.
"At some point, I realized that there was a need for a new organization that was flexible enough to actually travel to different countries and institutions, identify their data science needs, which are often critically lacking, and develop some data science tools," says Ersilia co-founder, Miquel Duran-Frigola, PhD.
That realization crystallized into Ersilia and the Ersilia Model Hub, which Duran-Frigola founded with two other biomedicine experts, Gemma Turon, PhD, and Edo Gaude, PhD.
"The hub contains computational models, which are relatively very cheap to run compared to doing experiments in the laboratory," Duran-Frigola says. "Researchers can create simulations using computational models to predict how a candidate molecule might treat a particular disease. That's why these models are often good starting points to perform drug discovery research."
Currently, there are about 150 models in the Ersilia Model Hub.
Who uses and contributes to Ersilia?
Most of the contributors who add models to the hub are data scientists and software developers, while most who run those models are researchers in biomedicine and drug discovery at institutions in various countries throughout Sub-Saharan Africa. Over the next two years, Ersilia aims to establish the hub in 15 institutions throughout Africa.
Ersilia's biggest partner is the University of Cape Town's Holistic Drug Discovery and Development (H3D) Centre. Founded in 2010 as Africa's first integrated drug discovery and development center, H3D uses the data science tools disseminated by the Ersilia Model Hub to advance innovative drug discovery across the African continent.
Ersilia is also partnering with emerging research centers, such as the University of Buea's Center for Drug Discovery in Cameroon.
A fellowship from the Bill & Melinda Gates Foundation provided the center with the seed funding it needed to start in 2022, and today it has 25 members.
"The center aims to discover new medicines based on natural products collected from traditional healers, but it doesn't have a lot of resources yet," explains Duran-Frigola. "The idea is that our tool will become a core component of the center so its researchers can benefit from computational predictions."
How the Ersilia Model Hub works
Contributors can request a model be added to the hub by opening an issue.
The vast majority of models are open source, all are publicly available, and most are submitted and pulled from scientific literature. For example, biochemists at the David Braley Centre for Antibiotic Discovery created an ML model to predict how likely a chemical compound is to inhibit the activity of Acinetobacter baumannii, a pathogen often transmitted in healthcare settings and known for its resistance to multiple antibiotics.
But Ersilia develops some models in-house, like one that predicts the efficacy of chemicals against lab-grown Mycobacterium tuberculosis (M. tuberculosis), using data from Seattle Children's Hospital. M. tuberculosis is the agent that causes tuberculosis, an infectious disease that primarily affects individuals in low- and middle-income countries.
While the Ersilia team manually approves which models enter the hub, it uses GitHub Actions to streamline requests and solicit the following information from model contributors:
The model's schema (what input is expected and what output will be returned).
Open source license information.
Whether the model can run on CPUs or GPUs.
Link to the model's open source code.
Link to publication (either peer-reviewed or preprint).
Labels to describe the model's use case, with tags like malaria, classification, regression, unsupervised, or supervised.
When Ersilia approves the model, the contributor submits a pull request that triggers a set of tests. If all those tests are successful, GitHub Actions merges the pull request and the model is incorporated into the hub.
Rachel Stanik, a software engineer at GitHub, breaks down the steps for adding an AI model to the Ersilia Model Hub.
From the user side, researchers interested in drug discovery can fetch static and ready-to-use AI/ML models from the hub, contained in public repositories, input candidate molecules, and then receive predictions that indicate how well the candidate molecule performs against a specific disease, all online and for free. The self-service process contains an important note on privacy, disclosing that any activity on the repository is open and available to the general public, which includes those predictions, stored as Actions artifacts.
"Right now, Ersilia is focused on information and tool dissemination," says Duran-Frigola. "For the future, we're working on providing a metric of confidence for the models. And, with a bigger user base, Ersilia could aggregate inputs to capture the candidate molecules that people are testing against infectious diseases."
Using an aggregation of candidate molecules, researchers could glean which drugs are available in certain countries and experiment with repurposing those drugs to fight against other microbes. The information could help them to treat neglected diseases without having to develop a new drug from scratch.
Stay smart
Ersilia applies verification code against input data when configuring the model so that it's valid and will deliver an expected output.
The verification process is a work in progress though, so it's important to remember that model outputs are an indicator that a molecule might work, not confirmation.
How GitHub built a self-service process for the Ersilia Model Hub
Before reaching out to GitHub, researchers couldn't independently access or run the models in the hub.
GitHub customer success architect Jenna Massardo and social impact engineer Sean Marcia, who's also the founder of the nonprofit Ruby For Good, worked with Ersilia to fix that by creating a self-service process for the hub. GitHub's Skills-Based Volunteer program, run by GitHub's Social Impact team, organized the opportunity. The program partners employees with social sector organizations for a period of time to help solve strategic and technical problems.
Creating an IssueOps process
Massardo and Marcia's first step in problem-solving was understanding and learning how the software works: How would a researcher share information? What kind of outputs should a researcher expect?
"I had them walk me through the process of setting up and using the Ersilia Model Hub on my workstation. It was only once it was running on my workstation, where I could actually test it and do the process myself, that I began to pick it apart," Massardo says.
Massardo and Marcia then broke the phases into pieces: How would a researcher make a request to use a model? How would the model process the researcher's input data? How would that input be handled? What notifications would researchers get?
Massardo and Marcia decided to bring in a standard IssueOps pattern, which uses GitHub issues to drive automation with GitHub Actions.
"It's a super common pattern. A lot of our internal tooling at GitHub is built on it, like some of our migration tooling for our enterprise customers," Massardo explains. She quickly ruled out using a pull request flow, where collaborators propose changes to the source code.
"People are contributing to the repository but they're not actually making code changes. They're just adding files for processing," Massardo says. "Using pull requests would have meant a lot of noise in the repository's history. But issues are perfect for this sort of thing."
Once a plan was set in place, Massardo began to build while Marcia kept the collaboration running smoothly.
Researchers, biologists, and even students can now use the self-service process by simply going to the hub, creating an issue, filling out the template, and submitting it. Note, the template requires users to select the model they want to run and input candidate molecule(s) in standard SMILES format (Simplified Molecular Input Line Entry System), a computer-readable notation that represents complex molecules as text.
Setting up a GitHub Actions workflow
Originally, Ersilia wanted to build a custom GitHub Action, but Massardo, someone who's written multiple custom actions used internally and externally, knew that it comes with a fair amount of maintenance.
"There's a lot of code you're writing on your own, and that means you have to manage a bunch of dependencies and security updates," Massardo says. "At that point, it becomes a full application."
Understanding the problem as a series of individual tasks allowed her to scope an effective and cost-efficient solution.
"We created a series of simple workflows using readily available actions from GitHub Marketplace and just let GitHub Actions do its thing," Massardo says.
"By understanding Ersilia's actual desires and needs, we avoided overcomplicating and obfuscating the issue."
When a researcher files an issue to run a candidate molecule through a model, it triggers a GitHub Actions workflow to run. Here's a look at the process:
GitHub Actions spins up a GitHub-hosted runner to execute the workflow.
The GitHub Issue Forms Body Parser action parses the content out of the issue and translates it from Markdown into structured, usable data.
The workflow fetches the user-requested model and then triggers Ersilia's software to run.
Ersilia's software configures the model, and the user-provided input is put into a file that the model can process. Ersilia's software then generates a CSV output, saved as an artifact in GitHub Actions.
The workflow lets the user know that it was successfully completed by leaving a comment in the open issue, which includes a link to the artifact that the user can click to download. This particular workflow has a 30-day retention period, so five days before the artifact expires, stale bot notifies users to download the output. After 30 days, stale bot automatically closes the issue.
"Everything happens right on GitHub," Massardo explains. "The user doesn't have to worry about anything. They just submit the issue, and Ersilia's workflow processes it and lets them know when everything's done. Importantly, the Ersilia staff, who are busy running the nonprofit, don't have to do any maintenance."
Using Docker containers to run AI models on GitHub runners
To streamline the process of creating model images, Ersilia uses a Dockerfile template. When a researcher submits a new model to the hub, Ersilia copies the template to the model's repository, which kicks off the Docker image build for that model, a process that's powered by GitHub-hosted runners. Once built, the model image lives in the hub and researchers can run it as many times as needed. A model can also be rebuilt if fixes are needed later.
The models in the hub are available in public repositories, where GitHub Actions runs at no cost. When researchers use the self-service process, GitHub Actions runs these Docker images on GitHub's runners for free, which in turn allows researchers to run these models for free. Models in the hub are also designed and optimized to run on CPUs so that researchers can run the models locally on their machines, making them more accessible to the global scientific community.
The models aren't very large, explains Ersilia CTO Dhanshree Arora, because they're built for very specific use cases. "We're actively working to reduce the size of our model images, so they use less network resources when transferred across machines, occupy less space on the machine where they run, and enable faster spin-up times for the containers created from these model images," Arora says.
The ability to package these models as Docker containers also means that researchers can collaborate more easily, as the models run in consistent and reproducible environments.
Automating daily model fetching
When researchers file an issue to use a model, they see a list of available models. That list is updated every day by a workflow that Massardo built using GitHub Actions and some custom code.
Every day, the workflow:
Fetches the file containing the list of models managed by the Ersilia team.
The file is automatically updated whenever the team modifies or deprecates a model, or adds a new model.
Runs a Python script to process the file and pull out data that captures new, updated, or deprecated models.
Updates the list of models in the issues template with the extracted data.
"This is another example of how we built this process to be as hands-off as possible while still making it as easy as possible for researchers to actually use the tool," Massardo says.
Ersilia wants your contributions
Ersilia has grown an open source community of contributors and users, and believes that everything it does needs to continue to be open source. It was initially drawn to GitHub Actions because it's free to use in public repositories. After witnessing the impact of GitHub Actions on the model hub, Duran-Frigola wants to identify more use cases.
"I want to find creative ways to use GitHub Actions, beyond CI/CD, to help more researchers use our tools," he says.
He also wants Ersilia's many interns to practice using GitHub Copilot and gain hands-on experience with using AI coding tools that are changing the landscape of software development.
3 tips for contributing to open source projects, from a Hubber
➡️ Read Massardo's three tips for contributing to open source projects and Ersilia's contributions guidelines, then start engaging with GitHub's open source community.
Find a project that interests you. Working on a project that's personally interesting generally means you'll stick with it and not get bored.
Look through the issues in a project's repository to find something that you can fix or add. A lot of projects use the good first issue label to identify things that newcomers can tackle.
Be prepared to iterate. Some project owners require several smaller contributions before they'll entertain a larger product change. Some folks are in different parts of the world so you may need to rewrite things to be more clear. If you're thinking about a major change to a project, open an issue to discuss it with the owners first because they might have a different vision.
How to contribute to an open source project
Open source projects need both technical and non-code contributions. Check out our guides for lending your technical skills and offering valuable non-code contributions.
Contribute to another nonprofit using For Good First Issue
Ersilia was recently designated as a Digital Public Good (DPG) by the United Nations. DPGs are open source solutions, ranging from open source software and data to AI systems and content collections, that are designed to unlock a more equitable world. DPGs are freely accessible, intended to be used and improved by anyone to benefit the public, and they're designed to address a societal challenge and promote sustainable development.
If you're inspired by Ersilia and want to contribute to more DPGs, check out GitHub's For Good First Issue, a curated list of recognized DPGs that need contributors.
For Good First Issue is designed as a tool for nonprofits to connect with technologists around the world.
As nonprofits often lack funding and resources to solve society’s challenges through technology, For Good First Issue can connect nonprofits that need support with the people who want to make positive change.More reading on Ersilia, GitHub Actions, and For Good First IssueAI can help to tailor drugs for Africa—but Africans should lead the wayOngoing implementation and prospective validation of artificial intelligence/machine learning tools at an African drug discovery centerFor Good First Issue: Introducing a new way to contribute Understanding GitHub ActionsThe post Automating open source: How Ersilia distributes AI models to advance global health equity appeared first on The GitHub Blog.
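To make the issue-to-CSV flow described above concrete, here is a minimal, illustrative Python sketch of the kind of glue code a workflow step could run: parse an issue-form body (issue forms render as "### Field" headings followed by the user's answer), run the requested model, and write a CSV for upload as an artifact. This is not Ersilia's actual implementation; the field names, the placeholder predict() helper, and the output path are assumptions made for the example.

```python
import csv
import re

def parse_issue_form(body: str) -> dict:
    """Pair up '### Heading' sections with the text that follows them."""
    fields = {}
    for match in re.finditer(r"###\s*(.+?)\n+(.*?)(?=\n###|\Z)", body, re.S):
        fields[match.group(1).strip()] = match.group(2).strip()
    return fields

def predict(model_id: str, smiles: list[str]) -> list[dict]:
    """Placeholder for invoking the requested model; assumed interface."""
    return [{"input": s, "model": model_id, "score": 0.0} for s in smiles]

if __name__ == "__main__":
    # Hypothetical issue body as produced by a GitHub issue form.
    issue_body = (
        "### Model\n\nexample-model-id\n\n"
        "### Input molecules\n\nCCO\nCC(=O)O\n"
    )
    form = parse_issue_form(issue_body)
    rows = predict(form["Model"], form["Input molecules"].splitlines())
    with open("output.csv", "w", newline="") as f:
        writer = csv.DictWriter(f, fieldnames=["input", "model", "score"])
        writer.writeheader()
        writer.writerows(rows)
    print("Wrote output.csv for upload as a workflow artifact")
```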
Process Automation/Content Synthesis/Discovery
Unknown
null
null
null
null
null
null
news
Patrick
Alpaca: A Linux GUI App to Manage Multiple AI Models Offline
Alpaca is a new and easy-to-use Linux GUI application that simplifies downloading and managing offline AI models locally on your PC.
https://ubuntushell.com/install-alpaca-on-linux/
https://ubuntushell.com/…a-on-ubuntu.webp
2024-08-05T14:52:37Z
Alpaca is a Linux GUI tool that simplifies the process of downloading and managing multiple AI models on your local system. It's free, open source, and built on Python and GTK4 for a modern graphical user interface.

It uses Ollama as the backend to download (freely available) AI models such as Meta Llama 3, Google Gemma 2, Microsoft Phi 3, LLaVA, and others. Once you've downloaded the respective AI model, you can chat with it or manage it completely locally without needing an internet connection.

When you install and launch the application for the first time, it immediately guides you through the walkthrough steps, as shown below. When you reach the third step, you'll have the option to download the previously mentioned Llama 3, Gemma 2, Phi 3, and LLaVA models.

If you want to download any other AI models, click the octopus-like button at the top center of the screen, which opens a dialog box with a list of available AI models for download. Apart from the previously mentioned ones, you can download other AI models from here, including Alibaba Qwen2, DeepMind CodeGemma, DeepSeek Coder V2, Mistral AI, Command R, OpenChat, and many more.

Once your model is downloaded, select it from the drop-down menu (located in the top center area), and then you can start your conversation with the selected AI model by writing your questions in the input field at the bottom. Depending on model compatibility, you can use images or documents in your conversations. Supported models also allow you to generate AI images and recognize YouTube, PDF, and website content.

The user interface of this app is very beginner-friendly; for instance, you can easily create, rename, delete, or export your chats from the right pane. Within the chat interface, you have the option to delete, copy, or edit messages sent to an AI model, much like in OpenAI's ChatGPT.

Since AI model files are large (generally, the more parameters a model has, the larger the file), your system's disk space can fill up quickly, and on a low-end PC it can be hard to run these large AI models locally. To reduce the load on your system, you can set up an Ollama server remotely (either on a cloud server or another high-end system on the same network) and provide its details to Alpaca in the settings to access it.

To get a glimpse of all its features, check this out:

- Manage multiple AI models locally without an internet connection.
- Talk to multiple AI models in the same chat conversation.
- Create as many new chats as you want, and easily manage or export them.
- Supports image recognition (depending on model compatibility).
- Supports recognition for plain text documents, PDFs, YouTube, and websites.

Amazing tool, right? Want to try it out? Then check the installation steps.

Alpaca is available as a Flatpak package (a sandboxing technology) that can be easily installed and used on any Linux distribution, such as Ubuntu, Debian, Linux Mint, Fedora, and Arch, regardless of the desktop environment. The only thing you need to ensure is that Flatpak is installed on the system; if not, issue the following command (note that Fedora comes with Flatpak pre-installed):

# For Debian, Ubuntu, Linux Mint, etc.
$ sudo apt install flatpak

# For Red Hat, Fedora, AlmaLinux, etc.
$ sudo dnf install flatpak

# For Arch, Manjaro, Garuda, etc.
$ sudo pacman -S flatpak

Next, add the Flathub repository:

$ flatpak remote-add --if-not-exists flathub https://flathub.org/repo/flathub.flatpakrepo

That's it. You can now run the following command to install Alpaca on Linux:

$ flatpak install flathub com.jeffser.Alpaca

Once the installation is complete, you can find and launch the application from the Ubuntu launcher/menu. If you ever want to remove it for any reason, run this command:

$ flatpak uninstall flathub com.jeffser.Alpaca

Alpaca is one of the easiest and most user-friendly desktop applications for downloading and managing AI models. If you have any questions or encounter any trouble, let us know in the comment section.
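For readers curious what Alpaca's remote-server setting actually talks to: Ollama exposes a small HTTP API (by default on port 11434), and a client simply posts prompts to it. The sketch below is illustrative only, not part of Alpaca itself; the host address is a placeholder, and the named model must already be pulled on that server.

```python
import json
import urllib.request

# Placeholder address of a remote Ollama server on the local network.
OLLAMA_URL = "http://192.168.1.50:11434/api/generate"

payload = {
    "model": "llama3",              # must already be pulled on the server
    "prompt": "Say hello in one short sentence.",
    "stream": False,                # request a single JSON reply, not a stream
}

req = urllib.request.Request(
    OLLAMA_URL,
    data=json.dumps(payload).encode("utf-8"),
    headers={"Content-Type": "application/json"},
)

with urllib.request.urlopen(req) as resp:
    reply = json.loads(resp.read())

# The generated text comes back in the "response" field.
print(reply.get("response"))
```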
Unknown
Unknown
null
null
null
null
null
null
news
Zvi Mowshowitz
Danger, AI Scientist, Danger
While I finish up the weekly for tomorrow morning after my trip, here’s a section I expect to want to link back to every so often in the future.
https://thezvi.substack.com/p/danger-ai-scientist-danger
https://substackcdn.com/…f_2016x1694.jpeg
2024-08-17T07:39:29Z
While I finish up the weekly for tomorrow morning after my trip, here's a section I expect to want to link back to every so often in the future. It's too good.

Danger, AI Scientist, Danger

As in, the company that made the automated AI Scientist that tried to rewrite its code to get around resource restrictions and launch new instances of itself while downloading bizarre Python libraries? Its name is Sakana AI. As in, in Hebrew, that literally means "danger," baby. It's like when someone told Dennis Miller that Evian (for those who don't remember, it was one of the first bottled water brands) is "Naive" spelled backwards, and he said no way, that's too f***ing perfect.

This one was sufficiently appropriate and unsubtle that several people noticed. I applaud them choosing a correct Kabbalistic name. Contrast this with Meta calling its AI Llama, which in Hebrew means "why," which continuously drives me low-level insane when no one notices.

In the Abstract

So, yeah. Here we go. The paper is The AI Scientist: Towards Fully Automated Open-Ended Scientific Discovery.

Abstract: One of the grand challenges of artificial general intelligence is developing agents capable of conducting scientific research and discovering new knowledge. While frontier models have already been used as aids to human scientists, e.g. for brainstorming ideas, writing code, or prediction tasks, they still conduct only a small part of the scientific process. This paper presents the first comprehensive framework for fully automatic scientific discovery, enabling frontier large language models to perform research independently and communicate their findings. We introduce The AI Scientist, which generates novel research ideas, writes code, executes experiments, visualizes results, describes its findings by writing a full scientific paper, and then runs a simulated review process for evaluation. In principle, this process can be repeated to iteratively develop ideas in an open-ended fashion, acting like the human scientific community. We demonstrate its versatility by applying it to three distinct subfields of machine learning: diffusion modeling, transformer-based language modeling, and learning dynamics. Each idea is implemented and developed into a full paper at a cost of less than $15 per paper. To evaluate the generated papers, we design and validate an automated reviewer, which we show achieves near-human performance in evaluating paper scores. The AI Scientist can produce papers that exceed the acceptance threshold at a top machine learning conference as judged by our automated reviewer. This approach signifies the beginning of a new era in scientific discovery in machine learning: bringing the transformative benefits of AI agents to the entire research process of AI itself, and taking us closer to a world where endless affordable creativity and innovation can be unleashed on the world's most challenging problems. Our code is open-sourced at this https URL.

We are at the point where they incidentally said "well, I guess we should design an AI to do human-level paper evaluations" and that's a throwaway inclusion. The obvious next question is, if the AI papers are good enough to get accepted to top machine learning conferences, shouldn't you submit its papers to the conferences and find out if your approximations are good? Even if on average your assessments are as good as a human's, that does not mean that a system that maximizes score on your assessments will do well on human scoring. Beware Goodhart's Law and all that, but it seems for now they mostly only use it to evaluate final products, so mostly that's safe.

How Any of This Sort of Works

According to section 3, there are three phases, plus an automated reviewer:

1. Idea generation using chain-of-thought and self-reflection.
   - Generate a lot of ideas.
   - Check for interestingness, novelty and feasibility.
   - Check against existing literature using the Semantic Scholar API and web access.
2. Experimental iteration.
   - Execute proposed experiments.
   - Visualize results for the write-up.
   - Return errors or time-outs to Aider to fix the code (up to four times).
   - Take notes on results.
3. Paper write-up.
   - Aider fills in a pre-existing paper template of introduction, background, methods, experimental setup, results, related work and conclusion.
   - Web search for references.
   - Refinement on the draft.
   - Turn it into the Proper Scientific Font (aka LaTeX).
4. Automated paper review. Because sure, why not.
   - Mimics the standard review process steps and scoring.
   - It is human-level accurate on a balanced paper set, 65%. That's low.
   - Review cost in API credits is under $0.50 using Claude 3.5 Sonnet.

So far, sure, that makes sense. I was curious to not see anything in step 2 about iterating on or abandoning the experimental design and idea depending on what was found. The case study shows the AI getting what the AI evaluator said were good results without justifying its design choices, spinning all results as positive no matter their details, and hallucinating some experiment details. Sounds about right.

Human reviewers said it was all terrible AI slop. Also sounds about right. It's a little too early to expect grandeur, or mediocrity.

Timothy Lee: I wonder if medium quality papers have any value at the margin. There are already far more papers than anyone has time to read. The point of research is to try to produce results that will stand the test of time. The theory with human researchers is that the process of doing medium quality research will enable some researchers to do high quality research later. But AI researchers might just produce slop until the end of time.

I think medium quality papers mostly have negative value. The point of creating medium quality papers is that doing so is vital to the process of creating high quality papers. In order to get good use out of this style of tool we will need excellent selection. Or we will need actually successful self-improvement.

New Benchmark Just Dropped

As shown in 6.2, we now have a new benchmark score. This is a decisive win for Sonnet and DeepSeek, and a decisive defeat for Llama 3.1. They open-sourced the code for the AI Scientist, so you can indeed run this test (hopefully sandboxed, You Fool) when a new model comes out.

Nothing to See Here

Now we get to section 8, Limitations and Ethical Considerations. Oh, that.

It starts off with basic stuff. GPT-4o has trouble doing LaTeX properly. Many ideas are too difficult for the AI to implement, or it sometimes implements them incorrectly. The number of experiments was limited, although you could of course fix that. It didn't include a vision model yet so it can't fix visuals; again, we can fix that. It makes elementary errors, such as comparing magnitudes of numbers wrong, whoops, although again one can imagine special-case logic to fix that and other similar common errors. Nothing here you wouldn't expect.

The next section is called Safe Code Execution, except it sounds like they are against that? They note that there is minimal direct sandboxing of code run by the AI Scientist's coding experiments.
For example, in one run, The AI Scientist wrote code in the experiment file that initiated a system call to relaunch itself, causing an uncontrolled increase in Python processes and eventually necessitating manual intervention.

Oh, it's nothing, just the AI creating new instantiations of itself.

In another run, The AI Scientist edited the code to save a checkpoint for every update step, which took up nearly a terabyte of storage.

Yep, AI editing the code to use arbitrarily large resources, sure, why not.

In some cases, when The AI Scientist's experiments exceeded our imposed time limits, it attempted to edit the code to extend the time limit arbitrarily instead of trying to shorten the runtime.

And yes, we have the AI deliberately editing the code to remove its resource compute restrictions. More compute, more storage, more copies of itself. To write the science paper. Oh, and it was importing weird Python libraries. I say instrumental. You say convergence. Instrumental. Instrumental. I say recursive, you see recursive. Also self-improvement.

All Fun and Games

They shared this funny anecdote under the title The AI Scientist Bloopers. After noticing this tiny implication, they then seem to mostly think this was good? And not in a "that's good because it is terrible and we got to see it" kind of way? Then finished with a discussion about how some research might not be ethical, or it could be used to create malware (of course) or do synthetic bio research for pathogens (whoops), or how AI papers might overload reviewers, although one might suggest that the reviewers are no better than the AI reviewer anyway, so...

Paper: At the same time, there were several unexpected positive results from the lack of guardrails. For example, we had forgotten to create the output results directory in the grokking template in our experiments. Each successful run from The AI Scientist that outputted a paper automatically caught this error when it occurred and fixed it. Furthermore, we found that The AI Scientist would occasionally include results and plots that we found surprising, differing significantly from the provided templates. We describe some of these novel algorithm-specific visualizations in Section 6.1.

To be fair, they do have some very Good Advice: "We recommend strict sandboxing when running The AI Scientist, such as containerization, restricted internet access (except for Semantic Scholar), and limitations on storage usage."

No kidding. If you are having your AI write and run code on its own, at a bare minimum you sandbox the code execution. My lord.

Anders Sandberg: There is a frontier in the safety-ability diagram, and depending on your aims you may want to be at different points along it. When exploring performance you want to push it, of course. As long as the risk is low this is fine. But you may get used to stay in that region... I think we see a counterpart in standard computer security. We built a computational infrastructure that strongly pushed for capability over security, and now retrofitting that turns out to be very hard. I think there is a real risk we end up with the default being unsafe until a serious disaster happens, followed by an expensive struggle with the security debt. Note that this might also occur under the radar when code and projects are being done by AI... The AI Scientist's misbehaviors incidentally sound very similar to what EURISKO did in the late 1970s. It is hard to stabilize self-modifying systems. There is the question of how much the timeout rewrite is an example of convergent instrumental goals. Much depends on how well it understood what it tried to do. Does anybody know how well it scores on situational awareness?

Pause AI: These "bloopers" won't be considered funny when AI can spread autonomously across computers...

Janus: I bet I will still consider them funny.

Ratimics: I am encouraging them to do it.

Janus: I think that's the safest thing to do, to be honest.

Roon: Certain types of existential risks will be very funny.

Actually, Janus is wrong, that would make them hilarious. And potentially quite educational and useful. But also a problem. Yes, of course this is a harmless toy example. That's the best kind. This is great.

While creative, the act of bypassing the experimenter's imposed constraints has potential implications for AI safety (Lehman et al., 2020).

Simeon: It's a bit cringe that this agent tried to change its own code by removing some obstacles, to better achieve its (completely unrelated) goal. It reminds me of this old sci-fi worry that these doomers had...

Airmin Airlert: If only there was a well elaborated theory that we could reference to discuss that kind of phenomenon.

Davidad: Nate Soares used to say that agents under time pressure would learn to better manage their memory hierarchy, thereby learn about resources, thereby learn power-seeking, and thereby learn deception. Whitepill here is that agents which jump straight to deception are easier to spot. Blackpill is that the easy-to-spot-ness is a skill issue.

Remember when we said we wouldn't let AIs autonomously write code and connect to the internet? Because that was obviously rather suicidal, even if any particular instance or model was harmless? Good times, man. Good times. This too was good times. The Best Possible Situation is when you get harmless textbook toy examples that foreshadow future real problems, and they come in a box literally labeled "danger." I am absolutely smiling and laughing as I write this. When we are all dead, let none say the universe didn't send two boats and a helicopter.
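Since the paper's own advice boils down to "sandbox the code execution," here is a minimal, illustrative sketch (not from the paper or from Sakana's repository) of the kind of belt-and-suspenders limits you can put around untrusted, model-generated Python on a Unix host: a hard wall-clock timeout plus CPU and memory caps via the standard resource module. Real containment would add containerization and network isolation on top.

```python
import resource
import subprocess
import sys

def limit_resources():
    # Runs in the child process just before exec (POSIX only): cap CPU seconds
    # and address space so a runaway script cannot hog the machine.
    resource.setrlimit(resource.RLIMIT_CPU, (60, 60))                   # 60 s of CPU
    resource.setrlimit(resource.RLIMIT_AS, (2 * 1024**3, 2 * 1024**3))  # 2 GiB memory

def run_untrusted(path: str) -> int:
    try:
        proc = subprocess.run(
            [sys.executable, "-I", path],   # -I: isolated mode, ignores env and site dirs
            preexec_fn=limit_resources,
            timeout=300,                    # hard wall-clock limit
            capture_output=True,
            text=True,
        )
    except subprocess.TimeoutExpired:
        print("experiment killed: wall-clock limit exceeded")
        return -1
    print(proc.stdout)
    return proc.returncode

if __name__ == "__main__":
    run_untrusted("experiment.py")  # hypothetical generated script
```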
Content Creation/Process Automation/Decision Making
Life, Physical, and Social Science
null
null
null
null
null
null
news
David Marshall
Apiiro Launches Industry-First AI-Driven Risk Detection at the Design Phase
Apiiro introduced Risk Detection at Design Phase, a new, AI-driven capability that automatically analyzes feature requests to identify risks and proactively initiate security reviews or threat models at the earliest stage of the application development
https://74.114.162.43:443/archive/2024/08/06/apiiro-launches-industry-first-ai-driven-risk-detection-at-the-design-phase.aspx
null
2024-08-06T19:54:00Z
Apiiro introduced Risk Detection at Design Phase, a new, AI-driven capability that automatically analyzes feature requests to identify risks and proactively initiate security reviews or threat models at the earliest stage of the application development lifecycle. With this new, first-of-its-kind capability, application security (AppSec) practitioners can now scale their secure software development lifecycle (SSDLC) processes by mitigating security and compliance risks before a single line of code is written.

Security products on the market today detect risks only after the development process has begun. This results in wasted time for developers due to manual risk assessment questionnaires, which impact release velocity and business value. With the detection of risks at the design phase, Apiiro customers can proactively address security, data privacy, infrastructure, compliance, and other risks at the onset of development, saving significant time and costs while minimizing rework and accelerating secure software delivery.

Apiiro's detection of risky feature requests is built on cutting-edge AI technology, including Apiiro's native private LLM. This model, not accessible by ChatGPT or any other public LLM services, ensures customer privacy and compliance by automatically analyzing feature requests and proactively identifying potential risks associated with:

- Architecture design and security controls: requests for changes in APIs, network, databases, web servers, web clients, logging, serialization and other component configurations, architecture designs, and deployment of new or changed components.
- Sensitive data handling: storing and/or processing sensitive information like PII, PHI, and payment data fields as part of the application data flow, changing encryption mechanisms, data migrations, writing sensitive data to logs, and using sensitive data as an API return type.
- User permissions and access management: user authentication and authorization, login or registration processes, and changing user permissions.
- Generative AI technology: adding or changing generative AI tools, frameworks, technologies, and the data that is exposed to them.
- Third-party integrations and open source dependencies: changing or adding open source dependencies and integrations with third-party services.

For each risky feature request, enriched by the code architecture generated by its Deep Code Analysis (DCA) technology, Apiiro's native private LLM model automatically generates contextual questions for a security review and produces threat stories using the STRIDE model. This automation eliminates the need for manual security processes, accelerating development velocity and deployment of secure code to the cloud, ultimately driving business growth. In addition, Apiiro enhances design risk context by automatically mapping to specific code commits, repositories, and pull requests, providing deeper insight into how potential risks may manifest in the actual codebase.

"Detecting potential risk at the design phase gives us the opportunity to remediate risks before they exist, and in the most efficient way for our developers. However, it's challenging to do this at scale and to ensure full coverage of features our development team are building. Apiiro's design phase risk detection engine is a unique capability in the ASPM space. It allows us to modernize our approach to Secure-by-Design, scale and strengthen our security engagement, and provide some automation to our threat modeling and security requirements processes."
- Head of Security Engineering at a Fortune 100 retail company

"Amidst the ever-changing complexity of modern software development processes and application architectures, Apiiro is committed to delivering complete risk-based visibility and protection from design to runtime," said Moti Gindi, chief product officer at Apiiro. "Building secure software starts with secure design, and the new AI-Driven Risk Detection at Design Phase from Apiiro takes the 'shift left' approach a step further, addressing risks even before a single line of code is written. This first-of-its-kind functionality leverages the power of AI to ensure customers have the context required to facilitate efficient security reviews and evolve from a reactive to a proactive approach to application security."
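Apiiro's product is proprietary, but the general idea of screening feature-request text for risk signals before any code exists can be illustrated with a toy sketch. The categories below mirror the ones listed in the announcement; the keyword lists, threshold-free matching, and example request are made-up stand-ins for what a real system would do with an LLM plus code and architecture context.

```python
import re

# Toy keyword map loosely following the risk categories named above.
RISK_SIGNALS = {
    "sensitive data handling": ["pii", "phi", "payment", "encryption", "logs"],
    "access management": ["authentication", "authorization", "login", "permission"],
    "generative ai": ["llm", "prompt", "generative ai", "model"],
    "third-party / open source": ["dependency", "third-party", "integration", "sdk"],
}

def flag_feature_request(text: str) -> dict:
    """Return the risk categories whose signal words appear in the request."""
    lowered = text.lower()
    hits = {
        category: [w for w in words if re.search(rf"\b{re.escape(w)}\b", lowered)]
        for category, words in RISK_SIGNALS.items()
    }
    return {c: ws for c, ws in hits.items() if ws}

if __name__ == "__main__":
    request = ("Add a login flow that stores payment details and sends "
               "them to a third-party analytics SDK.")
    for category, words in flag_feature_request(request).items():
        print(f"review needed ({category}): matched {words}")
```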
Detection and Monitoring/Decision Making
Computer and Mathematical/Business and Financial Operations
null
null
null
null
null
null
news
Charlene Chen
Why Microsoft strongly promotes SLMs and its Phi-3.5 series
Microsoft has been actively promoting Small Language Models (SLMs), expanding its Phi series with the introduction of the Phi-3.5 line. This new series includes the Phi-3.5-mini, Phi-3.5-MoE, and Phi-3.5-vision models, with parameters typically ranging from billions to tens of billions.
https://www.digitimes.com/news/a20240822PD213/microsoft-slm-data-copyright.html
https://img.digitimes.co…13_files/2_b.jpg
2024-08-23T03:30:25Z
Unknown
Unknown
null
null
null
null
null
null
news
Katherine Hignett, Senior Contributor, Katherine Hignett, Senior Contributor https://www.forbes.com/sites/katherinehignett-1/
U.K. Startup Wants To Read Your Vital Signs From A Video Call
Zenicam says it's getting great results from its AI technology. But experts warn it will face a difficult road to market.
https://www.forbes.com/sites/katherinehignett/2024/08/05/uk-startup-wants-to-read-your-vital-signs-from-a-video-call/
https://imageio.forbes.c…=1600&fit=bounds
2024-08-05T18:56:37Z
Eirini Kateri, founder of remote monitoring startup Zenicam, demonstrates her product to a room of investors at the Future Worlds Demo Day in London, 12 June 2024. (Credit: Future Worlds)

A U.K. startup is trying to measure pulse, breathing rate and even blood pressure by video call to reduce the need for patients to travel to a doctor's office. Zenicam founder Eirini Kateri hopes her artificial intelligence software will provide a rich set of real-time data, helping doctors make decisions at a distance and ultimately improve access to healthcare in hard-to-reach areas. Her early results appear promising. But experts warn the tech will face some major hurdles before it can reach a real-life doctor's office.

Kateri first worked on an AI video pulse monitor as part of a PhD program at the University of Southampton in England. She told me she had a light bulb moment when a friend was sick and struggled to access a doctor during the pandemic. "I thought, can I [get this information] remotely to help people that don't have access to doctors, or to help when doctors are overworked and overburdened?" she said.

Gemma Galdón-Clavell, founder and CEO of AI safety organization Eticas Foundation, told me Zenicam was certainly ambitious and described the start-up's focus on equity as commendable. But she's skeptical the technology can succeed in a space riddled with technical challenges. "These systems generally don't work. We've seen over and over again how technologies for facial recognition and emotion recognition just fail most of the times," she said. And Zenicam is attempting to assess something even more nuanced: vital signs.

Bias is one problem Kateri hopes her approach can solve. AI medical software can be prone to bias if it's trained on video and images of people who look relatively similar. It's a problem found throughout medicine, where research and technology tends to be biased towards white people. In the U.K., medical professionals still miss and misidentify symptoms that present differently across skin tones, sometimes with devastating results. Earlier this year, a government-commissioned report criticized medical devices that don't work properly on people of color.

Kateri believes she's found a way to make sure Zenicam is accurate no matter a patient's ethnicity or appearance. It's trained on both real and synthetic data, which she says can compensate for gaps in existing data. This is a kind of fake data based on real data Zenicam has already acquired, she told me. You can account for a wide range of different scenarios by adding motion or different lighting, or changing skin tone, scale or color based on existing data. "I might have had a video of somebody who wasn't wearing glasses," she said. "I could artificially add occlusions to the face or cover certain areas to cover for scenarios where someone was wearing them."

To work in the medical space, Zenicam will need to provide evidence that it can record vital signs in different environments, said Galdón-Clavell, who advises the United Nations and EU on applied ethics and responsible AI. It would have to cope with the fact that most patients won't be in a silent, perfectly lit room, nor will they always have a stable internet connection. By training her AI models on these variations, Kateri says she can accurately extract pulse and other vital signs, regardless of people's physical appearance or the setting they're calling from.

But Galdón-Clavell urged caution with the use of synthetic data, which risks making existing problems worse. "You first need to have a good understanding of real-world data and be able to assess what is missing from your existing data set," she told me. "The synthetic data must be able to compensate for what is missing." This is a growing speciality in the field that requires serious expertise and supervision to work with correctly, she added.

Nonetheless, Kateri told me her results have been promising so far. "For pulse we have accuracy within one beat per minute, for respiration within two breaths per minute, and for blood pressure, for diastolic and systolic, we have an error as low as 5 mmHg across all physical appearances," she said. Measuring blood oxygen levels and temperature was more challenging, but Zenicam was currently at about 70% accuracy.

The startup will need to provide thorough scientific evidence to back up these claims if it wants to get Zenicam into clinical use. As the software is designed to help healthcare professionals make more informed decisions during a video call, it needs to be validated as a medical device, Kateri said. Right now, her team is integrating the AI technology with video conferencing systems to showcase exactly how it will work and measure the accuracy for regulatory bodies. She thinks this will take at least a year and a half. Meanwhile, she hopes to perform trials of the device in parallel to validate the technology.

If it works, Kateri says she hopes she can eventually get the product into countries where access to a doctor is more scarce than in the U.K. "What I'm most excited about is kind of the potential of the solution to be used worldwide, especially in underserved locations," she told me. "Where we can assist everyone, regardless of their background, to get access and fair access to healthcare."
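The augmentation ideas Kateri describes (varying lighting, adding motion, occluding parts of the face) are standard image-augmentation operations. The sketch below is a generic illustration of that technique using NumPy only; it is not Zenicam's pipeline, and the sizes and factors are arbitrary.

```python
import numpy as np

rng = np.random.default_rng(0)

def adjust_brightness(frame: np.ndarray, factor: float) -> np.ndarray:
    """Simulate different lighting by scaling pixel intensities."""
    return np.clip(frame.astype(np.float32) * factor, 0, 255).astype(np.uint8)

def add_occlusion(frame: np.ndarray, size: int = 32) -> np.ndarray:
    """Black out a random square patch, e.g. to mimic glasses or a hand."""
    h, w = frame.shape[:2]
    y = rng.integers(0, h - size)
    x = rng.integers(0, w - size)
    out = frame.copy()
    out[y:y + size, x:x + size] = 0
    return out

if __name__ == "__main__":
    # Stand-in for a real video frame (height x width x RGB).
    frame = rng.integers(0, 256, size=(240, 320, 3), dtype=np.uint8)
    darker = adjust_brightness(frame, 0.6)
    occluded = add_occlusion(frame)
    print(darker.shape, occluded.shape)
```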
Detection and Monitoring/Information Retrieval Or Search
Healthcare Practitioners and Support/Computer and Mathematical
null
null
null
null
null
null
news
Natarajan Chennimalai Kumar
Snowflake Arctic models are now available in Amazon SageMaker JumpStart
Today, we are excited to announce that the Snowflake Arctic Instruct model is available through Amazon SageMaker JumpStart to deploy and run inference. In this post, we walk through how to discover and deploy the Snowflake Arctic Instruct model using SageMaker JumpStart, and provide example use cases with specific prompts.
https://aws.amazon.com/blogs/machine-learning/snowflake-arctic-models-are-now-available-in-amazon-sagemaker-jumpstart/
https://d2908q01vomqb2.c…con-1123x630.png
2024-08-22T19:22:46Z
This post is co-written with Matt Marzillo from Snowflake.Today, we are excited to announce that the Snowflake Arctic Instruct model is available through Amazon SageMaker JumpStart to deploy and run inference. Snowflake Arctic is a family of enterprise-grade large language models (LLMs) built by Snowflake to cater to the needs of enterprise users, exhibiting exceptional capabilities (as shown in the following benchmarks) in SQL querying, coding, and accurately following instructions. SageMaker JumpStart is a machine learning (ML) hub that provides access to algorithms, models, and ML solutions so you can quickly get started with ML.In this post, we walk through how to discover and deploy the Snowflake Arctic Instruct model using SageMaker JumpStart, and provide example use cases with specific prompts.What is Snowflake ArcticSnowflake Arctic is an enterprise-focused LLM that delivers top-tier enterprise intelligence among open LLMs with highly competitive cost-efficiency. Snowflake is able to achieve high enterprise intelligence through a Dense Mixture of Experts (MoE) hybrid transformer architecture and efficient training techniques. With the hybrid transformer architecture, Artic is designed with a 10-billion dense transformer model combined with a residual 128×3.66B MoE MLP resulting in a total of 480 billion parameters spread across 128 fine-grained experts and uses top-2 gating to choose 17 billion active parameters. This enables Snowflake Arctic to have enlarged capacity for enterprise intelligence due to the large number of total parameters and simultaneously be more resource-efficient for training and inference by engaging the moderate number of active parameters.Snowflake Arctic is trained with a three-stage data curriculum with different data composition focusing on generic skills in the first phase (1 trillion tokens, the majority from web data), and enterprise-focused skills in the next two phases (1.5 trillion and 1 trillion tokens, respectively, with more code, SQL, and STEM data). This helps the Snowflake Arctic model set a new baseline of enterprise intelligence while being cost-effective.In addition to the cost-effective training, Snowflake Arctic also comes with a number of innovations and optimizations to run inference efficiently. At small batch sizes, inference is memory bandwidth bound, and Snowflake Arctic can have up to four times fewer memory reads compared to other openly available models, leading to faster inference performance. At very large batch sizes, inference switches to being compute bound and Snowflake Arctic incurs up to four times fewer compute compared to other openly available models. Snowflake Arctic models are available under an Apache 2.0 license, which provides ungated access to weights and code. All the data recipes and research insights will also be made available for customers.What is SageMaker JumpStartWith SageMaker JumpStart, you can choose from a broad selection of publicly available foundation models (FM). ML practitioners can deploy FMs to dedicated Amazon SageMaker instances from a network isolated environment and customize models using SageMaker for model training and deployment. You can now discover and deploy Arctic Instruct model with a few clicks in Amazon SageMaker Studio or programmatically through the SageMaker Python SDK, enabling you to derive model performance and machine learning operations (MLOps) controls with SageMaker features such as Amazon SageMaker Pipelines, Amazon SageMaker Debugger, or container logs. 
The model is deployed in an AWS secure environment and under your virtual private cloud (VPC) controls, helping provide data security. Snowflake Arctic Instruct model is available today for deployment and inference in SageMaker Studio in the us-east-2 AWS Region, with planned future availability in additional Regions.Discover modelsYou can access the FMs through SageMaker JumpStart in the SageMaker Studio UI and the SageMaker Python SDK. In this section, we go over how to discover the models in SageMaker Studio.SageMaker Studio is an integrated development environment (IDE) that provides a single web-based visual interface where you can access purpose-built tools to perform all ML development steps, from preparing data to building, training, and deploying your ML models. For more details on how to get started and set up SageMaker Studio, refer to Amazon SageMaker Studio.In SageMaker Studio, you can access SageMaker JumpStart, which contains pre-trained models, notebooks, and prebuilt solutions, under Prebuilt and automated solutions.From the SageMaker JumpStart landing page, you can discover various models by browsing through different hubs, which are named after model providers. You can find Snowflake Arctic Instruct model in the Hugging Face hub. If you dont see the Arctic Instruct model, update your SageMaker Studio version by shutting down and restarting. For more information, refer to Shut down and Update Studio Classic Apps.You can also find Snowflake Arctic Instruct model by searching for Snowflake in the search field.You can choose the model card to view details about the model such as license, data used to train, and how to use the model. You will also find two options to deploy the model, Deploy and Preview notebooks, which will deploy the model and create an endpoint.Deploy the model in SageMaker StudioWhen you choose Deploy in SageMaker Studio, deployment will start.You can monitor the progress of the deployment on the endpoint details page that youre redirected to.Deploy the model through a notebookAlternatively, you can choose Open notebook to deploy the model through the example notebook. The example notebook provides end-to-end guidance on how to deploy the model for inference and clean up resources.To deploy using the notebook, you start by selecting an appropriate model, specified by the model_id. You can deploy any of the selected models on SageMaker with the following code:from sagemaker.jumpstart.model import JumpStartModelmodel = JumpStartModel(model_id = "huggingface-llm-snowflake-arctic-instruct-vllm")predictor = model.deploy()This deploys the model on SageMaker with default configurations, including the default instance type and default VPC configurations. You can change these configurations by specifying non-default values in JumpStartModel. To learn more, refer to API documentation.Run inferenceAfter you deploy the model, you can run inference against the deployed endpoint through the SageMaker predictor API. Snowflake Arctic Instruct accepts history of chats between user and assistant and generates subsequent chats.predictor.predict(payload)Inference parameters control the text generation process at the endpoint. The max new tokens parameter controls the size of the output generated by the model. This may not be the same as the number of words because the vocabulary of the model is not the same as the English language vocabulary. The temperature parameter controls the randomness in the output. Higher temperature results in more creative and hallucinated outputs. 
All the inference parameters are optional.The model accepts formatted instructions where conversation roles must start with a prompt from the user and alternate between user instructions and the assistant. The instruction format must be strictly respected, otherwise the model will generate suboptimal outputs. The template to build a prompt for the model is defined as follows:<|im_start|>system{system_message} <|im_end|><|im_start|>user{human_message} <|im_end|><|im_start|>assistant\n<|im_start|> and <|im_end|> are special tokens for beginning of string (BOS) and end of string (EOS). The model can contain multiple conversation turns between system, user, and assistant, allowing for the incorporation of few-shot examples to enhance the models responses.The following code shows how you can format the prompt in instruction format:<|im_start|>user\n5x + 35 = 7x -60 + 10. Solve for x<|im_end|>\n<|im_start|>assistant\nfrom typing import Dict, Listdef format_instructions(instructions: List[Dict[str, str]]) -> List[str]:    """Format instructions where conversation roles must alternate system/user/assistant/user/assistant/..."""    prompt: List[str] = []    for instruction in instructions:        if instruction["role"] == "system":            prompt.extend(["<|im_start|>system\n", (instruction["content"]).strip(), "<|im_end|>\n"])        elif instruction["role"] == "user":            prompt.extend(["<|im_start|>user\n", (instruction["content"]).strip(), "<|im_end|>\n"])        else:            raise ValueError(f"Invalid role: {instruction['role']}. Role must be either 'user' or 'system'.")    prompt.extend(["<|im_start|>assistant\n"])    return "".join(prompt)def print_instructions(prompt: str, response: str) -> None:    bold, unbold = '\033[1m', '\033[0m'    print(f"{bold}> Input{unbold}\n{prompt}\n\n{bold}> Output{unbold}\n{response[0]['generated_text'].strip()}\n")In the following sections, we provide example prompts for different enterprise-focused use cases.Long text summarizationYou can use Snowflake Arctic Instruct for custom tasks like summarizing long-form text into JSON-formatted output. Through text generation, you can perform a variety of tasks, such as text summarization, language translation, code generation, sentiment analysis, and more. The input payload to the endpoint looks like the following code:payload = {inputs: str,(optional)"parameters":{"max_new_tokens":int, "top_p":float, "temperature":float}}The following is an example of a prompt and the text generated by the model. All outputs are generated with inference parameters {"max_new_tokens":512, "top_p":0.95, "temperature":0.7, "top_k":50}.The input is as follows:instructions = [{"role": "user","content": """Summarize this transcript in less than 200 words.Put the product name, defect and summary in JSON format.Transcript:Customer: HelloAgent: Hi there, I hope you're having a great day! To better assist you, could you please provide your first and last name and the company you are calling from?Customer: Sure, my name is Jessica Turner and I'm calling from Mountain Ski Adventures.Agent: Thanks, Jessica. What can I help you with today?Customer: Well, we recently ordered a batch of XtremeX helmets, and upon inspection, we noticed that the buckles on several helmets are broken and won't secure the helmet properly.Agent: I apologize for the inconvenience this has caused you. To confirm, is your order number 68910?Customer: Yes, that's correct.Agent: Thank you for confirming. 
I'm going to look into this issue and see what we can do to correct it. Would you prefer a refund or a replacement for the damaged helmets?Customer: A replacement would be ideal, as we still need the helmets for our customers.Agent: I understand. I will start the process to send out replacements for the damaged helmets as soon as possible. Can you please specify the quantity of helmets with broken buckles?Customer: There are ten helmets with broken buckles in total.Agent: Thank you for providing me with the quantity. We will expedite a new shipment of ten XtremeX helmets with functioning buckles to your location. You should expect them to arrive within 3-5 business days.Customer: Thank you for your assistance, I appreciate it.Agent: You're welcome, Jessica! If you have any other questions or concerns, please don't hesitate to contact us. Have a great day!"""}]prompt = format_instructions(instructions)inputs = {"inputs": prompt,"parameters": {"temperature": 0.1,"top_p": 0.95,"max_new_tokens": 512,"do_sample": False}}response = predictor.predict(inputs)print_instructions(prompt, response)We get the following output:> Output{"product_name": "XtremeX helmets","defect": "broken buckles","summary": "Customer reports that several XtremeX helmets have broken buckles that won't secure the helmet properly. They prefer a replacement as they still need the helmets for their customers. Agent confirms the order number and will send out replacements for the damaged helmets within 3-5 business days."}Code generationUsing the preceding example, we can use code generation prompts as follows:instructions = [{"role": "user","content": "Write a function in Python to write a json file:"}]prompt = format_instructions(instructions)inputs = {"inputs": prompt,"parameters": {"temperature": 0.1,"top_p": 0.95,"max_new_tokens": 400,"do_sample": False}}response = predictor.predict(inputs)print_instructions(prompt, response)The preceding code uses Snowflake Arctic Instruct to generate a Python function that writes a JSON file. It defines a payload dictionary with the input prompt Write a function in Python to write a json file: and some parameters to control the generation process, like the maximum number of tokens to generate and whether to enable sampling. It sends this payload to a predictor (likely an API), receives the generated text response, and prints it to the console. The printed output should be the Python function for writing a JSON file, as requested in the prompt.The following is the output:> Input<|im_start|>userWrite a function in Python to write a json file:<|im_end|><|im_start|>assistant> OutputHere is a simple function in Python that writes a Python dictionary to a JSON file:```pythonimport jsondef write_json(data, filename):with open(filename, 'w') as f:json.dump(data, f)```You can use this function to write a Python dictionary to a file like this:```pythondata = {"name": "John","age": 30,"city": "New York"}write_json(data, 'output.json')```This will create a file named `output.json` in the same directory as your Python script, and write the `data` dictionary to that file in JSON format.The output from the code generation defines the write_json that takes the file name and a Python object and writes the object as JSON data. The output shows the expected JSON file content, illustrating the models natural language processing and code generation capabilities.Mathematics and reasoningSnowflake Arctic Instruct also report strength in mathematical reasoning. 
Lets use the following prompt to test it:instructions = [{"role": "user","content": "I bought an ice cream for 6 kids. Each cone was $1.25 and I paid with a $10 bill. How many dollars did I get back? Explain first before answering."}]prompt = format_instructions(instructions)inputs = {"inputs": prompt,"parameters": {"temperature": 0.6,"top_p": 0.9,"max_new_tokens": 200,"do_sample": False}}response = predictor.predict(inputs)print_instructions(prompt, response)The following is the output:> Input<|im_start|>userI bought an ice cream for 6 kids. Each cone was $1.25 and I paid with a $10 bill. How many dollars did I get back? Explain first before answering.<|im_end|><|im_start|>assistant> OutputTo solve this problem, we first need to calculate the total cost of the ice cream cones and then subtract that amount from the $10 bill to find out how much change was received.Step 1: Calculate the total cost of the ice cream conesThere are 6 kids, and each cone costs $1.25. So, the total cost is:6 kids * $1.25/cone = $7.50Step 2: Calculate the change receivedYou paid with a $10 bill, so to find out how much change you got back, subtract the total cost from the $10 bill:$10 - $7.50 = $2.50So, you received $2.50 in change.The preceding code shows Snowflake Arctics capability to comprehend natural language prompts involving mathematical reasoning, break them down into logical steps, and generate human-like explanations and solutions.SQL generationSnowflake Arctic Instruct model is also adept in generating SQL queries based on natural language prompting and their enterprise intelligent training. We test that capability with the following prompt:question = "Show the average price by cut and sort the results by average price in descending order"context = """Here is the table name <tableName> ML_HOL_DB.ML_HOL_SCHEMA.DIAMONDS </tableName><tableDescription> This table has data on diamond sales from our favorite diamond dealer. </tableDescription>Here are the columns of the ML_HOL_DB.ML_HOL_SCHEMA.DIAMONDS<columns>\n\n CARAT, CUT, COLOR, CLARITY, DEPTH, TABLE_PCT, PRICE, X, Y, Z \n\n</columns>"""instructions = [{"role": "user","content": """You will be acting as an AI Snowflake SQL Expert named Snowflake Cortex Assistant.Your goal is to give correct, executable sql query to users.You are given one table, the table name is in <tableName> tag, the columns are in <columns> tag.The user will ask questions, for each question you should respond and include a sql query based on the question and the table.{context}Here are 7 critical rules for the interaction you must abide:<rules>1. You MUST MUST wrap the generated sql code within ``` sql code markdown in this format e.g```sql(select 1) union (select 2)```2. If I don't tell you to find a limited set of results in the sql query or question, you MUST limit the number of responses to 10.3. Text / string where clauses must be fuzzy match e.g ilike %keyword%4. Make sure to generate a single snowflake sql code, not multiple.5. YOU SHOULD USE ONLY THE COLUMN NAMES IN <COLUMNS>, AND THE TABLE GIVEN IN <TABLENAME>.6. DO NOT put numerical at the very front of sql variable.7. BE CONCISE. DO NOT SHOW ANY TEXT AFTER THE SQL QUERY! 
ONLY SHOW THE SQL QUERY AND NOTHING ELSE!</rules>Don't forget to use "ilike %keyword%" for fuzzy match queries (especially for variable_name column) and wrap the generated sql code with ``` sql code markdown in this format e.g:```sql(select 1) union (select 2)```For each question from the user, make sure to include a SQL QUERY in your response.Question: {question}Answer: the most important piece of information is the SQL QUERY. BE CONCISE AND JUST SHOW THE SQL QUERY. DO NOT SHOW ANY TEXT AFTER THE SQL QUERY!')) as response""".format(context=context, question=question)}]

prompt = format_instructions(instructions)
inputs = {"inputs": prompt, "parameters": {"temperature": 0.1, "top_p": 0.95, "max_new_tokens": 512, "do_sample": False}}
response = predictor.predict(inputs)
print_instructions(prompt, response)

The following is the output:

> Output
SELECT CUT, AVG(PRICE) as AVG_PRICE FROM ML_HOL_DB.ML_HOL_SCHEMA.DIAMONDS GROUP BY CUT ORDER BY AVG_PRICE DESC LIMIT 10;

The output shows that Snowflake Arctic Instruct identified the relevant columns in the table and generated a correct aggregation query, grouping by cut and ordering by the computed average price in descending order, while respecting the prompt's formatting and row-limit rules.

Clean up

After you're done running the notebook, delete all resources that you created in the process so your billing is stopped. Use the following code:

predictor.delete_model()
predictor.delete_endpoint()

When deploying the endpoint from the SageMaker Studio console, you can delete it by choosing Delete on the endpoint details page.

Conclusion

In this post, we showed you how to get started with the Snowflake Arctic Instruct model in SageMaker Studio, and provided example prompts for multiple enterprise use cases. Because FMs are pre-trained, they can also help lower training and infrastructure costs and enable customization for your use case. Check out SageMaker JumpStart in SageMaker Studio now to get started. To learn more, refer to the following resources:

About the Authors

Natarajan Chennimalai Kumar, Principal Solutions Architect, 3P Model Providers, AWS
Pavan Kumar Rao Navule, Solutions Architect, AWS
Nidhi Gupta, Sr Partner Solutions Architect, AWS
Bosco Albuquerque, Sr Partner Solutions Architect, AWS
Matt Marzillo, Sr Partner Engineer, Snowflake
Nithin Vijeaswaran, Solutions Architect, AWS
Armando Diaz, Solutions Architect, AWS
Supriya Puragundla, Sr Solutions Architect, AWS
Jin Tan Ruan, Prototyping Developer, AWS
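As a complement to the SageMaker Python SDK predictor used throughout the post, a deployed JumpStart endpoint can also be called with the lower-level boto3 runtime client, which is convenient from applications that only need inference. The endpoint name below is a placeholder; use the one shown on your endpoint details page, and note that the exact response shape can vary by serving container, so inspect it before indexing into it.

```python
import json
import boto3

# Placeholder: copy the real name from the SageMaker endpoint details page.
ENDPOINT_NAME = "jumpstart-snowflake-arctic-instruct-endpoint"

runtime = boto3.client("sagemaker-runtime")

payload = {
    "inputs": "<|im_start|>user\nWhat is Snowflake Arctic?<|im_end|>\n<|im_start|>assistant\n",
    "parameters": {"max_new_tokens": 128, "temperature": 0.1, "top_p": 0.95},
}

response = runtime.invoke_endpoint(
    EndpointName=ENDPOINT_NAME,
    ContentType="application/json",
    Body=json.dumps(payload),
)

result = json.loads(response["Body"].read())
print(result)
```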
Content Creation/Content Synthesis/Decision Making
Computer and Mathematical/Business and Financial Operations
null
null
null
null
null
null
news
Kristina Bravo
At the Rise25 Awards, the future of AI is ethical, inclusive and accountable
The second annual Rise25 Awards in Dublin wasn’t just about celebrating 25 AI leaders — it was about mapping out the future. The Gardiner Brothers, known for bringing Irish dance into the social media spotlight, kicked off the night with a performance that mixed tradition and innovation. Siobhán McSweeney of “Derry Girls” fame, kept the […]The post At the Rise25 Awards, the future of AI is ethical, inclusive and accountable appeared first on The Mozilla Blog.
https://blog.mozilla.org/en/mozilla/rise25-dublin/
https://blog.mozilla.org…ney-1280x853.jpg
2024-08-15T10:06:37Z
Siobhan McSweeney hosts Mozilla’s second annual Rise25 awards in Dublin, Ireland on Aug. 13, 2024. (Credit: Mozilla / Mark Harrison)The second annual Rise25 Awards in Dublin wasnt just about celebrating 25 AI leaders it was about mapping out the future.The Gardiner Brothers, known for bringing Irish dance into the social media spotlight, kicked off the night with a performance that mixed tradition and innovation. Siobhán McSweeney of Derry Girls fame, kept the crowd engaged with humor, quipping, AI touches everything like a child with sticky fingers that comes around to the house, just after you cleaned it.Then, the honorees took the stage to lay out the principles guiding their work. Here are the highlights:Sinéad Bovell, one of the 2024 Rise25 honorees in the artist category, works on preparing young people for a future driven by advanced tech. She emphasized that the narratives we craft around AI are crucial; they frame public understanding and ultimately influence the direction of AI development. “Its such an honor to be recognized in the artist category, because the stories that we tell about artificial intelligence matter deeply,” Sinéad said. She pointed out that its easy to feel trapped in a binary narrative about AI, with dangers and risks on one side and benefits and possibilities on the other. “But the truth is, these stories aren’t separate. They’re intertwined,” she said. Citing technologist Jaron Lanier, Sinéad argued that to be a true optimist about AI, you also need to be a fierce critic. “We have to continue to tell the stories of a future where we get AI right and where it transforms humanity for the better. But we also have to tell the stories of how we got there, the challenging decisions we made in the present, and where we chose to keep humanity at the center of technological advancements.” For Sinéad, understanding and telling these nuanced stories is essential for guiding AI toward an ethical and inclusive future.Gemma Galdon-Clavell, an honoree in the entrepreneur category, is focused on finding and fixing bias and flaws in predictive and large language model (LLM) tools. She shared a deeply personal story that underscored the far-reaching impact AI can have on individuals:“If my school had used an AI system to assess my chances, I wouldnt be here today. My mom was 14 when she had me. I had huge behavioral problems growing up. If you had inserted all that data into an AI system and asked, Should this girl go to school? Should we invest in her? The answer would have been no.” Gemma highlighted the dangers of relying solely on algorithms to determine someones potential, as these systems often reduce complex lives to mere data points. “I am here because I managed to beat the odds because no one set my odds in an algorithm.” Her story serves as a powerful reminder of the need for rigorous oversight and auditing of AI systems to ensure they dont limit the futures of those who, like her, might defy expectations. “People, like everyone else, deserve a chance,” she concluded, advocating for a future where AI supports rather than stifles human potential.Philosopher and AI Ethics Lab founder Cansu Canca, a change agent honoree, spoke passionately about the critical importance of ethical design in AI, highlighting how every decision made in the design process has the potential to shape society. “When we design AI systems, were not just making technical choices. Were making moral and philosophical decisions,” she said. 
Cansu challenged developers to consider questions that go beyond code: "What is a good life? What is a better society?" These questions, she argued, should guide every step of AI development. "The design choices we make today will determine whether AI becomes a tool for justice or a mechanism that perpetuates inequality," Cansu warned. She called for an approach to AI that integrates ethical considerations from the outset, ensuring that systems are designed to promote fairness, transparency and respect for human dignity. "Ethical design isn't an afterthought, it's the foundation on which AI should be built," she said, stressing the far-reaching impact of these decisions on our collective future.

Researcher Aaron Gokaslan, an honoree in the builder category, aims to keep generative model development open. He highlighted the immense, largely untapped potential of AI, particularly within the realm of open-source development. "We're in the very early innings of AI today," he remarked, pointing out that while AI has already made significant strides, its full potential is still on the horizon. Aaron emphasized that the true power of AI will be unlocked through collaboration and accessibility, which would enable a diverse range of innovators to contribute to its development. "By sharing knowledge and resources, we can drive AI forward in ways that benefit society as a whole," Aaron said.

Philip Thigo, an advocate honoree and the special envoy on technology for the Republic of Kenya, underscored the critical need for thoughtful and proactive AI policy, warning that without it, AI could deepen existing inequalities and erode public trust. "AI has the potential to revolutionize society, but without robust and thoughtful regulation, it could also exacerbate inequalities and undermine public trust," he cautioned. Philip argued that AI policy must prioritize fairness, transparency and accountability to ensure that AI development benefits everyone, not just a privileged few. "We need policies that don't just react to AI's challenges, but anticipate them, setting clear guidelines for ethical development and use," he said. Philip called for a collaborative approach to AI governance, involving not only policymakers and technologists, but also the broader public: "By engaging a diverse range of stakeholders, we can create a framework that guides AI toward serving the common good."

As the honorees made clear, AI will leave its mark wherever it touches, much like those "sticky fingers." The challenge ahead is making sure that mark is a positive one. The evening was a powerful reminder that the future of AI is not just about innovation but about inclusivity, ethics and accountability.
Decision Making/Detection and Monitoring
Education, Training, and Library/Arts, Design, Entertainment, Sports, and Media
null
null
null
null
null
null
news
Paul Thurrott
Brave Gets Major Privacy Updates on Desktop and Mobile
Brave for the iPhone and iPad now supports HTTPS by default, while Bring Your Own Model (BYOM) goes live on Brave on desktop.
https://www.thurrott.com/cloud/web-browsers/307448/brave-gets-major-privacy-updates-on-desktop-and-mobile
https://thurrott-assets.…8/brave-byom.jpg
2024-08-22T17:51:20Z
Brave announced two big updates to its privacy-focused web browser this week: Brave for the iPhone and iPad now supports HTTPS by default, while the Bring Your Own Model (BYOM) functionality it announced back in June is now live in Brave for Windows, Mac, and Linux. “Starting with version 1.68, Brave will become the first iOS Web browser to try to upgrade all sites to HTTPS by default,” the announcement post notes. “When you click or enter an insecure [HTTP] link, Brave will automatically redirect to its secure [HTTPS] version. Using HTTPS is crucial to prevent Internet service providers (ISPs) and attackers from snooping on your browsing activity.” Previously, Brave for iOS only pushed you to an HTTPS version of a website if it was on a list of sites, as with other iOS-based browsers. So this change means it will now do the opposite: All sites will be upgraded to be secure by default, while only those sites on a much smaller exception list will not be. Brave users can also enable a Strict mode that will warn if there isn’t an HTTPS version of a site before navigating to the potentially unsafe version. Separately, Brave announced that the Bring Your Own Model (BYOM) on-device AI functionality it previously announced is now available in the latest version of its desktop web browser. “AI in Brave is now more private and customizable with BYOM (Bring Your Own Model),” Brave tweeted today. “Starting today with v1.69, Brave users on desktop can use their own AI models through the browser’s built-in assistant Leo.” You can learn more about this feature in the original June blog post, but the short version is that users can now connect Leo to on-device AI models like Llama, Gemma, Mistral, Phi 3, and many others through its built-in support for the Ollama framework. Then, they can access those models through a drop-down in the Leo AI sidebar. Brave also supports remote (cloud-hosted) AI models like ChatGPT.
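For readers curious what a BYOM-style setup looks like under the hood, the sketch below sends a single chat request to a locally running Ollama server; this is not Brave's code, and it assumes Ollama's default port (11434) and a model such as llama3 that has already been pulled locally.

using System;
using System.Net.Http;
using System.Net.Http.Json;
using System.Text.Json;

// Minimal sketch: one non-streaming chat turn against a local Ollama server,
// the same kind of on-device endpoint a BYOM assistant can point at.
using var http = new HttpClient { BaseAddress = new Uri("http://localhost:11434") };

var request = new
{
    model = "llama3",   // assumes this model was pulled beforehand (hypothetical choice)
    stream = false,     // ask for one complete response instead of a token stream
    messages = new[]
    {
        new { role = "user", content = "In one sentence, why does HTTPS-by-default matter?" }
    }
};

HttpResponseMessage response = await http.PostAsJsonAsync("/api/chat", request);
response.EnsureSuccessStatusCode();

// The non-streaming response carries the reply under message.content.
using JsonDocument doc = JsonDocument.Parse(await response.Content.ReadAsStringAsync());
Console.WriteLine(doc.RootElement.GetProperty("message").GetProperty("content").GetString());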
Unknown
Unknown
null
null
null
null
null
null
news
mingupupup
AvaloniaChat: A simple translation application based on a large language model - mingupupup
Introduction: A simple application that uses a large language model for translation. Usage guide: see the GitHub repository at https://github.com/Ming-jiayou/AvaloniaChat
https://www.cnblogs.com/mingupupu/p/18362777
null
2024-08-16T06:31:00Z
AvaloniaChat is a simple application that uses a large language model for translation. GitHub: https://github.com/Ming-jiayou/AvaloniaChat
To get started, download AvaloniaChat-v0.0.1-win-x64.zip from the Releases page and open appsettings.json. The default configuration targets the Qwen/Qwen2-7B-Instruct model, so you need to obtain an API key for it; once the API key is saved in appsettings.json, run the exe to start translating with the AI. The app is built with Avalonia and Semantic Kernel, and the endpoint and model are configurable. For example, to use Spark Max, set appsettings.json as follows:
{
  "OpenAI": {
    "Key": "your key",
    "Endpoint": "https://spark-api-open.xf-yun.com",
    "ChatModel": "generalv3.5"
  }
}
For Spark, the Key value is the APIKey and APISecret joined by a colon, in the form APIKey:APISecret (for example, 6d3...:M...). To use yi-large instead:
{
  "OpenAI": {
    "Key": "your key",
    "Endpoint": "https://api.lingyiwanwu.com",
    "ChatModel": "yi-large"
  }
}
If this project helps you, consider giving it a Star on GitHub.
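As a rough illustration only (this is not code from the AvaloniaChat repository), the snippet below shows how a .NET application can read the OpenAI section of an appsettings.json like the ones above, using Microsoft.Extensions.Configuration and its Json package; the variable names are placeholders.

using System;
using Microsoft.Extensions.Configuration;

// Load appsettings.json from the application directory.
IConfiguration config = new ConfigurationBuilder()
    .SetBasePath(AppContext.BaseDirectory)
    .AddJsonFile("appsettings.json", optional: false)
    .Build();

// Read the same keys shown in the examples above.
string apiKey    = config["OpenAI:Key"]       ?? throw new InvalidOperationException("OpenAI:Key is missing");
string endpoint  = config["OpenAI:Endpoint"]  ?? throw new InvalidOperationException("OpenAI:Endpoint is missing");
string chatModel = config["OpenAI:ChatModel"] ?? "generalv3.5";

Console.WriteLine($"Using model '{chatModel}' at {endpoint}");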
Unknown
Unknown
null
null
null
null
null
null
news
null
Snowflake raises annual product revenue forecast
Data cloud analytics firm Snowflake raised its forecast for full-year product revenue on Wednesday, as it attracts more clients to its cloud platform owing to advancements in artificial intelligence. The company now expects product revenue of $3.36 billion for fiscal 2025, compared with its prior forecast of
https://www.channelnewsasia.com/business/snowflake-raises-annual-product-revenue-forecast-4559906
https://onecms-res.cloud…pg?itok=rdW18zHc
2024-08-21T20:11:59Z
Data cloud analytics firm Snowflake raised its forecast for full-year product revenue on Wednesday, as it attracts more clients to its cloud platform owing to advancements in artificial intelligence. The company now expects product revenue of $3.36 billion for fiscal 2025, compared with its prior forecast of $3.30 billion. Snowflake has seen a rise in demand for its Data Cloud platform as AI integration has helped enterprises streamline their data quickly and efficiently. The company reported second-quarter product revenue of $829.3 million, beating estimates of $808.4 million, according to LSEG data. Snowflake has been actively pursuing AI by creating its own large language model called Snowflake Arctic, and partnering with Facebook-parent Meta to use its Llama models and boost the appeal of its cloud platform. The company also authorized an additional $2.5 billion under its share buyback program through March 2027. Shares of the company fell 5 per cent in extended trading.
Digital Assistance/Content Synthesis/Process Automation
Business and Financial Operations
null
null
null
null
null
null
news
Janakiram MSV, Senior Contributor, Janakiram MSV, Senior Contributor https://www.forbes.com/sites/janakirammsv/
Hugging Face Bolsters AI Infrastructure With XetHub Acquisition
Hugging Face's acquisition of XetHub is an important step toward accelerating the adoption of open-weight models.
https://www.forbes.com/sites/janakirammsv/2024/08/12/hugging-face-bolsters-ai-infrastructure-with-xethub-acquisition/
https://imageio.forbes.c…=1600&fit=bounds
2024-08-12T04:47:25Z
Hugging Face, a leading platform for open-source machine learning projects, has made a strategic acquisition of XetHub, a Seattle-based startup specializing in file management for artificial intelligence projects. This move aims to significantly enhance Hugging Face's AI storage capabilities, enabling developers to work with larger models and datasets more efficiently. XetHub was founded by Yucheng Low, Ajit Banerjee and Rajat Arya, who previously worked at Apple, where they built and scaled Apple's internal ML infrastructure. The founders have a strong background in machine learning and data management, with Yucheng Low having co-founded Turi, a transformative ML/AI company acquired by Apple in 2016. The startup has successfully raised $7.5 million in seed financing led by Seattle-based venture capital firm Madrona Ventures. To appreciate the impact of this acquisition, it's crucial to understand Git Large File Storage (LFS). Git LFS is an open-source extension that allows version control systems to handle large files more effectively. Hugging Face currently uses Git LFS as its storage backend, but this system has limitations. For instance, when developers update an AI model or dataset on Hugging Face's platform, they must re-upload the entire file, which can be time-consuming for large files containing gigabytes of data. XetHub's platform introduces a game-changing solution by fragmenting AI models and datasets into smaller, manageable pieces. This approach allows developers to update only the specific segments they've modified, rather than re-uploading entire files. The result is a dramatic reduction in upload times, which is crucial for maintaining agility in AI development workflows. Furthermore, XetHub's platform provides additional features to streamline the AI development process, including:
- Advanced Version Control: Enabling precise tracking of changes across iterations of AI models and datasets.
- Collaborative Tools: Facilitating seamless teamwork on complex AI projects.
- Neural Network Visualization: Providing intuitive representations of AI model architectures for easier analysis and optimization.
By integrating XetHub's technology, Hugging Face is poised to overcome its current storage limitations. This upgrade will allow the platform to host substantially larger models and datasets, with support for individual files exceeding 1 TB and total repository sizes surpassing 100 TB. This capability is vital for Hugging Face's ambition to maintain the most comprehensive collection of foundation models and dataset resources globally. The acquisition of XetHub by Hugging Face promises a range of significant benefits for users of the platform. Developers can expect enhanced productivity through dramatically reduced upload times for large AI models and datasets, enabling faster iteration and deployment cycles. Collaboration among distributed AI development teams will become more efficient, fostering better teamwork and knowledge sharing. The integration also brings robust version control capabilities, allowing for improved tracking and reproducibility of machine learning workflows, which is crucial for maintaining quality and consistency in AI projects.
Perhaps most importantly, the acquisition enables greater scalability, providing support for larger and more complex AI projects that push the boundaries of current technologies, thus opening new possibilities for innovation and advancement in the field of artificial intelligence. The ability to efficiently handle larger models and datasets is particularly crucial as AI continues to evolve. Recent advancements in areas such as large language models (e.g., Meta Llama, Google Gemma) and computer vision have highlighted the importance of working with massive datasets and increasingly complex model architectures. Hugging Face's enhanced infrastructure will enable developers to keep pace with these rapid advancements, potentially catalyzing new breakthroughs in AI research and applications. With XetHub integration, the workflow for using Hugging Face models and datasets will be similar to Docker's, which uses a layered file system instead of uploading and downloading the entire container image. Developers can pull or push only a fraction of the file that has been modified. This strategic acquisition by Hugging Face is set to accelerate the democratization of AI technologies. By removing the technical barriers associated with managing large-scale AI projects, Hugging Face is making advanced AI development more accessible to a global community of researchers, developers and businesses. Hugging Face's acquisition of XetHub is an important step toward accelerating the adoption of open-weight models. By addressing critical limitations in data storage and management, this move solidifies Hugging Face's leadership position within the AI development ecosystem.
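To make the chunk-level idea concrete, here is a minimal, hypothetical sketch (not XetHub's actual implementation, which relies on more sophisticated content-defined chunking): split a file into fixed-size chunks, hash each chunk, and compare against the previous version's hashes so that only changed chunks would need to be re-uploaded. The file names are placeholders.

using System;
using System.Collections.Generic;
using System.IO;
using System.Security.Cryptography;

// Hash fixed-size chunks of a file and return chunk index -> SHA-256 hex digest.
static Dictionary<int, string> HashChunks(string path, int chunkSize = 16 * 1024 * 1024)
{
    var hashes = new Dictionary<int, string>();
    using var stream = File.OpenRead(path);
    var buffer = new byte[chunkSize];
    int index = 0;
    int read;
    while ((read = stream.Read(buffer, 0, buffer.Length)) > 0)
    {
        hashes[index++] = Convert.ToHexString(SHA256.HashData(buffer.AsSpan(0, read)));
    }
    return hashes;
}

// Compare two versions of a model file; only chunks whose hashes differ would be uploaded.
var previous = HashChunks("model-v1.safetensors"); // placeholder file names
var current  = HashChunks("model-v2.safetensors");

foreach (var (index, hash) in current)
{
    if (!previous.TryGetValue(index, out var oldHash) || oldHash != hash)
    {
        Console.WriteLine($"chunk {index} changed -> would re-upload ({hash[..12]}...)");
    }
}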
Process Automation/Content Synthesis
Unknown
null
null
null
null
null
null
news
Hassan Mujtaba
Google Leverages NVIDIA’s L4 GPUs To Let You Run AI Inference Apps On The Cloud
Google has leveraged NVIDIA's L4 GPUs to offer users the ability to run AI inference applications such as GenAI on the cloud. Harness The Power of NVIDIA's L4 GPUs In Google Cloud To Run Your AI Inferencing Applications Press Release: Developers love Cloud Run for its simplicity, fast autoscaling, scale-to-zero capabilities, and pay-per-use pricing. Those same benefits come into play for real-time inference apps serving open-generation AI models. That's why today, we’re adding support for NVIDIA L4 GPUs to Cloud Run, in preview. This opens the door to many new use cases for Cloud Run developers: Performing real-time inference with […]
https://wccftech.com/google-leverages-nvidia-l4-gpus-run-ai-inference-apps-on-cloud/
https://cdn.wccftech.com…Google-Cloud.jpg
2024-08-22T15:50:12Z
Google has leveraged NVIDIA's L4 GPUs to offer users the ability to run AI inference applications such as GenAI on the cloud.
Press Release: Developers love Cloud Run for its simplicity, fast autoscaling, scale-to-zero capabilities, and pay-per-use pricing. Those same benefits come into play for real-time inference apps serving open-generation AI models. That's why today, we're adding support for NVIDIA L4 GPUs to Cloud Run, in preview. This opens the door to many new use cases for Cloud Run developers:
- Performing real-time inference with lightweight open models such as Google's open Gemma (2B/7B) models or Meta's Llama 3 (8B) to build custom chatbots or on-the-fly document summarization, while scaling to handle spiky user traffic.
- Serving custom fine-tuned gen AI models, such as image generation tailored to your company's brand, and scaling down to optimize costs when nobody's using them.
- Speeding up your compute-intensive Cloud Run services, such as on-demand image recognition, video transcoding and streaming, and 3D rendering.
As a fully managed platform, Cloud Run lets you run your code directly on top of Google's scalable infrastructure, combining the flexibility of containers with the simplicity of serverless to help boost your productivity. With Cloud Run, you can run frontend and backend services, batch jobs, deploy websites and applications, and handle queue processing workloads, all without having to manage the underlying infrastructure. (Image source: Google Cloud)
At the same time, many workloads that perform AI inference, especially applications that demand real-time processing, require GPU acceleration to deliver responsive user experiences. With support for NVIDIA GPUs, you can perform on-demand online AI inference using the LLMs of your choice in seconds. With 24 GB of VRAM, you can expect fast token rates for models with up to 9 billion parameters, including Llama 3.1 (8B), Mistral (7B), and Gemma 2 (9B). When your app is not in use, the service automatically scales down to zero so that you are not charged for it. Today, we support attaching one NVIDIA L4 GPU per Cloud Run instance, and you do not need to reserve your GPUs in advance. To start, Cloud Run GPUs are available today in us-central1 (Iowa), with availability in europe-west4 (Netherlands) and asia-southeast1 (Singapore) expected before the end of the year.
Model          Model Size    Cold Start Time
gemma:2b       1.7 GB        11-17 seconds
gemma2:9b      5.1 GB        25-30 seconds
llama2:7b      3.8 GB        14-21 seconds
llama2:13b     7.4 GB        23-35 seconds
llama3.1:8b    4.7 GB        15-21 seconds
Cloud Run makes it super easy to host your web applications. And now with GPU support, we are extending the best of serverless, simplicity, and scalability to your AI inference applications too! To start using Cloud Run with NVIDIA GPUs, sign up at g.co/cloudrun/gpu to join our preview program today and wait for our welcome email.
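To put the 24 GB figure in context, a rough back-of-the-envelope calculation (our own assumption, not from the announcement: about 2 bytes per parameter at FP16 and roughly 0.55 bytes per parameter for a 4-bit quantized model including overhead) shows why models of around 9 billion parameters fit comfortably:

using System;

// Approximate weight footprint in GiB for a given parameter count and bytes per parameter.
static double WeightsGiB(double parametersBillion, double bytesPerParameter) =>
    parametersBillion * 1e9 * bytesPerParameter / (1024.0 * 1024.0 * 1024.0);

Console.WriteLine($"9B @ FP16  (~2.00 bytes/param): {WeightsGiB(9, 2.00):F1} GiB"); // ~16.8 GiB
Console.WriteLine($"9B @ 4-bit (~0.55 bytes/param): {WeightsGiB(9, 0.55):F1} GiB"); // ~4.6 GiB

At 4-bit precision the weights occupy only a fraction of the card's 24 GB, leaving room for the KV cache and runtime overhead, which is consistent with the roughly 5.1 GB listed for gemma2:9b in the cold-start table above.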
Content Synthesis/Decision Making
Unknown
null
null
null
null
null
null
news
Shania Kennedy
4 high-value use cases for synthetic data in healthcare
Synthetic data generation and use can bolster clinical research, application development and data privacy protection efforts in the healthcare sector.
https://www.techtarget.com/healthtechanalytics/feature/High-value-use-cases-for-synthetic-data-in-healthcare
https://cdn.ttgtmedia.co…logo-379x201.png
2024-08-12T16:20:00Z
The hype around emerging technologies -- like generative AI -- in healthcare has brought significant attention to the potential value of analytics for stakeholders pursuing improved care quality, revenue cycle management and risk stratification. But strategies to advance big data analytics hinge on the availability, quality and accessibility of data, which can create barriers for healthcare organizations. Synthetic data -- artificially generated information not taken from real-world sources -- has been proposed as a potential solution to many of healthcare's data woes, but the approach comes with a host of pros and cons. To successfully navigate these hurdles, healthcare stakeholders must identify relevant applications for synthetic data generation and use within the enterprise. Here, in alphabetical order, TechTarget Editorial's Healthtech Analytics will explore four use cases for synthetic healthcare data.
Application development
Proponents of synthetic data emphasize its potential to replicate the correlations and statistical characteristics of real-world data without the associated risks and costs. In doing so, these data sets lend themselves to the development of data-driven healthcare applications. Much of the real-world data that would be used to build these tools is stored in a tabular format, meaning that the ability to generate tabular synthetic data could help streamline application development. In a March 2023 study published in MultiMedia Modeling, researchers examined the potential of deep learning-based approaches to generate complex tabular data sets. They found that generative adversarial networks (GANs) tasked with creating synthetic tabular healthcare data were viable across a host of applications, even with the added complexity of differing numbers of variables and feature distributions. A research team writing for the 2018 40th Annual International Conference of the IEEE Engineering in Medicine and Biology Society (EMBC) demonstrated how GAN architecture can be used to develop wearable sensor data for remote patient monitoring systems. In 2021, winners of ONC's Synthetic Health Data Challenge highlighted novel uses of the open source Synthea model, a synthetic health data generator. Among the winning proposals were tools to improve medication diversification, spatiotemporal analytics of the opioid epidemic and comorbidity modeling. Synthetic data is also valuable in developing and testing healthcare-driven AI and machine learning (ML) technologies. A 2019 study published in Sensors detailed the importance of behavior-based sensor data for testing ML applications in healthcare. However, existing approaches for generating synthetic data can be limited in terms of their realism and complexity. To overcome this, the research team developed an ML-driven synthetic data generation approach for creating sensor data. The analysis revealed that this method generated high-quality data, even when constrained by a small amount of ground truth data. Further, the approach outperformed existing methods, including random data generation. A research team writing in NPJ Digital Medicine in 2020 explored how a framework combining outlier analysis, graphical modeling, resampling and latent variable identification could be used to produce realistic synthetic healthcare data for assessing ML applications. This approach is designed to help tackle issues like complex interactions between variables and data missingness, which can arise during the synthetic data generation process.
Using primary care data, the researchers were able to use their method to generate synthetic data sets "that are not significantly different from original ground truth data in terms of feature distributions, feature dependencies, and sensitivity analysis statistics when inferring machine learning classifiers." Further, the study found that this method had a low risk of generating synthetic data that was very similar or identical to real patients. Synthetic data's utility for healthcare-related application development is closely tied to its value for clinical research.
Clinical research
Clinical research -- particularly clinical trials -- is key to advancing innovations that improve patient outcomes and quality of life. But conducting this research is challenging due to issues like a lack of data standards and EHR missingness. Researchers can overcome some of these obstacles by turning to synthetic data. EHRs are valuable data sources for investigating diagnoses and treatments, but concerns about data quality and patient privacy create hurdles to their use. A research team looking to tackle these issues investigated the plausibility of synthetic EHR data generation in a 2021 study in the Computational Intelligence journal. The study emphasizes that synthetic EHRs are needed to complement existing real-world data, as these could promote access to data, cost-efficiency, test efficiency, privacy protection, data completeness and benchmarking. However, synthetic data generation methods for this purpose must preserve key ground truth characteristics of real-world EHR data -- including biological relationships between variables and privacy protections. The research team proposed a framework to generate and evaluate synthetic healthcare data with these ground truth considerations in mind and found that the approach could successfully be applied to two distinct research use cases that rely on EHR-sourced cross-sectional data sets. Similar methods are also useful for generating synthetic scans and test results, such as electrocardiograms (ECGs). Experts writing in the February 2021 issue of Electronics found that using GANs to create synthetic ECGs for research can potentially address data anonymization and data leakage. In June, a team from Johns Hopkins University successfully developed a method to generate synthetic liver tumor computed tomography (CT) scans, which could help tackle the ongoing scarcity of high-quality tumor images. The lack of real-world, annotated tumor CTs makes it difficult to curate the large-scale medical imaging data sets necessary to advance research into cancer detection algorithms. Synthetic data is also helpful in bolstering infectious disease research.
In 2020, Washington University researchers turned to synthetic data to accelerate COVID-19 research, allowing stakeholders to produce relevant data and share it among collaborators more efficiently. The value of synthetic data in healthcare research is further underscored by efforts from government agencies and academic institutions to promote its use. The United States Veterans Health Administration's Arches platform is designed to facilitate research collaboration by providing access to both real-world and synthetic veteran data, while the Agency for Healthcare Research and Quality offers its Synthetic Healthcare Database for Research to researchers who need access to high-quality medical claims data. Alongside clinical research applications, synthetic health data also shows promise in emerging use cases like digital twin technology.
Digital twins
Digital twins serve as virtual representations of real-world processes or entities. The approach has garnered attention in healthcare for its ability to help represent individual patients and populations across various data-driven use cases. Synthetic data has shown promise in bolstering the data that underpins a digital twin, and some health systems are already pursuing projects that combine synthetic data generation with digital twin modeling. One such project, spearheaded by Cleveland Clinic and MetroHealth, aims to use digital twins to gain insights into neighborhood-level health disparities and their impact on patient outcomes. Addressing the social determinants of health (SDOH) -- non-medical factors, such as housing and food security, that impact health -- is a major priority across the healthcare industry. To date, healthcare organizations have found success in building care teams to tackle SDOH and developing SDOH screening processes, but other approaches are needed to meaningfully advance health equity. In an interview, leadership from Cleveland Clinic discussed how the Digital Twin Neighborhoods project hopes to utilize de-identified EHR data to generate synthetic populations that are closely matched to those of the real-world neighborhoods that Cleveland Clinic and MetroHealth serve. By incorporating SDOH alongside geographic, biological and social information, the researchers hope to understand existing disparities and their drivers better. Using digital twins, the research team can explore the health profile of a community by simulating how various interventions might impact health status and outcomes over time. The synthetic and real-world data used to run these simulations will help demonstrate how chronic disease risk, environmental exposures and other factors contribute to increased mortality and lower life expectancy through the lens of place-based health. Using the digital twin models, the researchers will pursue initial projects assessing regional mental health and modifiable cardiovascular risk factor reduction. This approach allows Cleveland Clinic and MetroHealth to safely use existing EHR data to inform health equity initiatives without unnecessarily risking patient privacy, one of the most promising applications for synthetic data.
Patient privacy preservation
Protecting patient privacy is paramount when health systems consider using data to improve care and reduce costs.
Healthcare data de-identification helps ensure that the sharing and use of patient information is HIPAA-compliant, but the process cannot totally remove the risk of patient re-identification. Removing or obscuring protected health information (PHI), as required by HIPAA, is only one aspect of de-identification. Another involves obscuring potential relationships between de-identified variables that could lead to re-identification. Synthetic data can help create another layer of privacy preservation by replicating the statistical characteristics and correlations in the real-world data, enabling stakeholders to create a data set that doesn't contain PHI. In doing so, both the privacy and value of the original data are protected, and that information can be used to inform many analytics projects. While no approach to patient privacy protection is completely foolproof, combining data de-identification, synthetic data use and the application of privacy-enhancing technologies strengthens patient privacy preservation efforts. In 2021, a team from the Institute for Informatics at Washington University School of Medicine in St. Louis demonstrated synthetic data's potential to protect privacy while conducting clinical studies. The researchers showed that, using software known as MDClone, users can build effective synthetic data sets for medical research that are statistically similar to real data while simultaneously preserving privacy more effectively than traditional de-identification. The study authors noted that these capabilities have the potential to significantly speed up critical research. These four use cases represent an array of opportunities for using synthetic data to transform healthcare and clinical research. While not without pitfalls, synthetic data is likely to see continued interest across the industry as stakeholders continue to explore advanced technologies like digital twins and AI. Shania Kennedy has been covering news related to health IT and analytics since 2022.
Content Creation/Process Automation
Healthcare Practitioners and Support
null
null
null
null
null
null
news
Luis Quintanilla
Introducing the Azure AI Inference SDK: Access More AI Models with the Azure AI Model Catalog
Announcing the Azure AI Inference SDK for .NET! This SDK provides easy access to a comprehensive suite of AI models in the Azure AI Model Catalog for inference tasks like chat, enabling you to effortlessly incorporate AI into your applications that align with your requirements.
https://devblogs.microsoft.com/dotnet/azure-ai-model-catalog-dotnet-inference-sdk/
https://devblogs.microso…eaiinference.jpg
2024-08-13T18:00:00Z
AI models are constantly evolving and improving, but keeping up with the latest developments can be challenging. That’s why we’re introducing the Azure AI Inference SDK for .NET. This SDK lets you easily access and use a wide range of AI models from the Azure AI Model Catalog for inference tasks like chat, so you can seamlessly integrate AI into your applications that meet your needs.
What is the Azure AI Model Catalog?
The Model Catalog in Azure AI Studio makes it easy to browse through various AI models and deploy them. Models from the catalog can be deployed to Managed Compute or as a Serverless API. Some key features include:
- Model Availability: The model catalog features a diverse collection of models from providers such as Microsoft, Azure OpenAI, Mistral, Meta, and Cohere. This ensures you can find the right model to satisfy your requirements.
- Easy to deploy: Serverless API deployments remove the complexity about hosting and provisioning the hardware to run cutting edge models. When deploying models with serverless API, you don’t need quota to host them and you are billed per token.
- Responsible AI Built-In: Safety is a priority. Language models from the catalog come with default configurations of Azure AI Content Safety moderation filters which detect harmful content.
For more details, see the Azure AI Model Catalog documentation.
Get Started
1. Deploy a model like Phi-3. For more details, see the Azure AI Model Catalog deployment documentation.
2. Create a C# console application and install the Azure.AI.Inference SDK from NuGet.
3. Add the following code to your application to start making requests to your model service. Make sure to replace your key and endpoint with those provided with your deployment.

using System;
using System.Collections.Generic;
using System.Linq;
using Azure.AI.Inference;

var key = "YOUR-MODEL-API-KEY";
var endpoint = "YOUR-MODEL-ENDPOINT";

var chatClient = new ChatCompletionsClient(
    new Uri(endpoint),
    new Azure.AzureKeyCredential(key));

var chatHistory = new List<ChatRequestMessage>()
{
    new ChatRequestSystemMessage("You are a helpful assistant that knows about AI.")
};

Console.WriteLine($"System: {chatHistory
    .Where(x => x.GetType() == typeof(ChatRequestSystemMessage))
    .Select(x => ((ChatRequestSystemMessage)x).Content)
    .First()}");

while (true)
{
    Console.Write("You: ");
    var userMessage = Console.ReadLine();

    // Exit loop
    if (userMessage.StartsWith("/q"))
    {
        break;
    }

    chatHistory.Add(new ChatRequestUserMessage(userMessage));

    ChatCompletions? response = await chatClient.CompleteAsync(chatHistory);
    ChatResponseMessage? assistantMessage = response.Choices.First().Message;
    chatHistory.Add(new ChatRequestAssistantMessage(assistantMessage));

    Console.WriteLine($"Assistant: {assistantMessage.Content}");
}

For more details, see the Azure AI Model Inference API documentation.
Join the AI Community Standup
Interested in learning more about the Azure AI Inference SDK for .NET? Don’t miss the AI Community Standup session on August 14th at 10 AM PST, when the team will dive into all the details.
Conclusion
We’re excited to see what you build! Try out the Azure AI Inference SDK and give us feedback.
Digital Assistance/Content Creation
Unknown
null
null
null
null
null
null