AI & ML interests

Welcome to the SCAI Hugging Face Space! 🎓🤖 Join us in our mission to advance interdisciplinary research and education in AI, fostering collaboration between researchers, students, and industry partners. Together, we're shaping the future of artificial intelligence! 🚀🔬🌟

🔍 Vision: Dive into image recognition and perception, driving advancements in mathematics, computer science, and robotics.
🧠 Explanation/Explicability: Enhance the transparency of complex systems, with a focus on health and medicine.
🌍 Ethics: Develop ethical AI solutions for climate, the environment, and the universe, ensuring responsible and sustainable practices.
📚 Digital Humanities: Discover how AI transforms our understanding of history, literature, and the social sciences.

#SorbonneAI #Innovation #EthicalAI #DigitalHumanities

Recent Activity

SorbonneUniversity's activity

abidlabs posted an update 5 months ago
👋 Hi Gradio community,

I'm excited to share that Gradio 5 will launch in October with improvements across security, performance, SEO, design (see the screenshot for Gradio 4 vs. Gradio 5), and user experience, making Gradio a mature framework for web-based ML applications.

Gradio 5 is currently in beta, so if you'd like to try it out early, please refer to the instructions below:

---------- Installation -------------

Gradio 5 requires Python 3.10 or higher, so if you are running Gradio locally, please ensure you have a compatible Python version, or download one here: https://www.python.org/downloads/

* Locally: If you are running Gradio locally, simply install the release candidate with pip install gradio --pre
* Spaces: If you would like to update an existing Gradio Space to Gradio 5, simply set the sdk_version to 5.0.0b3 in the README.md file on Spaces.

In most cases, that's all you have to do to run Gradio 5.0. When you start your Gradio application, you should see it running with a fresh new UI.
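If you want a quick sanity check after installing the beta, a minimal app along these lines (the greet function is just a placeholder) should come up with the refreshed UI:

import gradio as gr  # requires the 5.x release candidate, e.g. pip install gradio --pre

# Placeholder function purely to confirm the installation works; swap in your own logic.
def greet(name: str) -> str:
    return f"Hello, {name}!"

demo = gr.Interface(fn=greet, inputs="text", outputs="text")

if __name__ == "__main__":
    demo.launch()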

-----------------------------

For more information, please see: https://github.com/gradio-app/gradio/issues/9463
XavierF updated a Space 8 months ago
abidlabs posted an update 9 months ago
𝗣𝗿𝗼𝘁𝗼𝘁𝘆𝗽𝗶𝗻𝗴 holds an important place in machine learning. But it has traditionally been quite difficult to go from prototype code to production-ready APIs.

We're working on making that a lot easier with 𝗚𝗿𝗮𝗱𝗶𝗼 and will unveil something new on June 6th: https://www.youtube.com/watch?v=44vi31hehw4&ab_channel=HuggingFace
abidlabs posted an update 10 months ago
Open Models vs. Closed APIs for Software Engineers
-----------------------------------------------------------------------

If you're an ML researcher / scientist, you probably don't need much convincing to use open models instead of closed APIs -- open models give you reproducibility and let you deeply investigate the model's behavior.

But what if you are a software engineer building products on top of LLMs? I'd argue that open models are a much better option even if you are using them as APIs. For at least 3 reasons:

1) The most obvious reason is the reliability of your product. Relying on a closed API means that your product has a single point of failure. On the other hand, there are already at least 7 different API providers that offer Llama 3 70B, as well as libraries that abstract over these providers so that a single request can be routed to whichever provider makes sense given availability / latency (see the sketch after this list).

2) Another benefit is consistency if you eventually move inference in-house. If your product takes off, it will be more economical and lower latency to run a dedicated inference endpoint in your VPC than to call external APIs. If you've started with an open-source model, you can always deploy the same model locally: no need to modify prompts or change any surrounding logic to get consistent behavior. Minimize your technical debt from the beginning.

3) Finally, open models give you much more flexibility. Even if you keep using APIs, you might want to trade off latency vs. cost, or use APIs that support batches of inputs, etc. Because different API providers have different infrastructure, you can use the provider that makes the most sense for your product -- or even use multiple providers for different users (free vs. paid) or different parts of your product (priority features vs. nice-to-haves).
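As a rough sketch of points 1 and 2 (not any specific provider's API): many hosts, and local servers such as vLLM, expose an OpenAI-compatible endpoint for open models, so swapping providers or moving to your own VPC can be as small as changing a base URL. The URLs, keys, and model name below are placeholders.

from openai import OpenAI  # any OpenAI-compatible client works here

# Placeholder endpoints: a hosted provider and a local/VPC deployment of the same open model.
PROVIDERS = {
    "hosted": {"base_url": "https://api.example-provider.com/v1", "api_key": "PROVIDER_KEY"},
    "local": {"base_url": "http://localhost:8000/v1", "api_key": "not-needed"},
}

def complete(prompt: str, provider: str = "hosted") -> str:
    cfg = PROVIDERS[provider]
    client = OpenAI(base_url=cfg["base_url"], api_key=cfg["api_key"])
    response = client.chat.completions.create(
        model="meta-llama/Meta-Llama-3-70B-Instruct",  # same open model on every provider
        messages=[{"role": "user", "content": prompt}],
    )
    return response.choices[0].message.content

# Moving from an external API to a dedicated endpoint in your VPC is then a one-line change:
# complete("Hello!", provider="local")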
abidlabs posted an update 11 months ago
3344
Introducing the Gradio API Recorder 🪄

Every Gradio app now includes an API recorder that lets you reconstruct your interaction in a Gradio app as code using the Python or JS clients! Our goal is to make Gradio the easiest way to build ML APIs, not just UIs 🔥
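For reference, the snippets it produces for the Python client look roughly like this (the Space name, endpoint, and argument below are made-up placeholders):

from gradio_client import Client  # pip install gradio_client

client = Client("username/your-space")                  # hypothetical Space ID
result = client.predict("Hello!", api_name="/predict")  # endpoint and argument are placeholders
print(result)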

abidlabs posted an update about 1 year ago
Necessity is the mother of invention, and of Gradio components.

Sometimes we realize that we need a Gradio component to build a cool application and demo, so we just build it. For example, we just added a new gr.ParamViewer component because we needed it to display information about Python & JavaScript functions in our documentation.

Of course, our users should be able to do the same thing for their machine learning applications, so that's why Gradio lets you build custom components and publish them to the world 🔥
abidlabs posted an update about 1 year ago
Lots of cool Gradio custom components out there, but this is the most generally useful one I've seen so far: insert a Modal into any Gradio app by using the gradio_modal component!

import gradio as gr
from gradio_modal import Modal  # custom Modal component from the gradio_modal package

with gr.Blocks() as demo:
    gr.Markdown("### Main Page")
    gr.Textbox("lorem ipsum " * 1000, lines=10)

    # The modal overlays the main page; visible=True shows it on load.
    with Modal(visible=True) as modal:
        gr.Markdown("# License Agreement")

demo.launch()
abidlabs posted an update about 1 year ago
Just out: a new custom Gradio component specifically designed for code completion models 🔥
abidlabs posted an update about 1 year ago
The next version of Gradio will be significantly more efficient (as well as a bit faster) for anyone who uses Gradio's streaming features. Looking at you, chatbot developers @oobabooga @pseudotensor :)

The major change is in how streamed data is sent: Gradio used to send the entire payload at each token, which is generally the most robust way to ensure all the data is correctly transmitted. We've now switched to sending "diffs": at each time step, we automatically compute the diff between the most recent updates and send only the latest token (or whatever the diff may be). Coupled with the fact that we are now using SSE, a more robust communication protocol than WS (SSE will resend packets if any are dropped), we should have the best of both worlds: efficient *and* robust streaming.

Very cool stuff @aliabid94 ! PR: https://github.com/gradio-app/gradio/pull/7102
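As a toy illustration of the idea (not Gradio's actual implementation), diffing two successive streamed strings and sending only the new suffix looks something like this:

def text_diff(previous: str, current: str) -> str:
    """Return only what was appended since the last update (or the whole string if it changed otherwise)."""
    return current[len(previous):] if current.startswith(previous) else current

def stream_diffs(updates):
    previous = ""
    for current in updates:
        yield text_diff(previous, current)  # e.g. just the latest token
        previous = current

# list(stream_diffs(["Hel", "Hello", "Hello, world"])) -> ["Hel", "lo", ", world"]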
abidlabs posted an update about 1 year ago
Gradio 4.16 introduces a new flow: you can hide/show Tabs or make them interactive/non-interactive.

Really nice for multi-step machine learning demos ⚡️
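A minimal sketch of how that might look (the tab labels and button are illustrative):

import gradio as gr

with gr.Blocks() as demo:
    with gr.Tabs():
        with gr.Tab("Step 1: upload"):
            done = gr.Button("Continue")
        with gr.Tab("Step 2: results", visible=False) as results_tab:
            gr.Markdown("Results appear here once step 1 is complete.")

    # Returning a Tab with visible=True acts as an update, revealing the second step
    # only after the first one finishes.
    done.click(lambda: gr.Tab(visible=True), outputs=results_tab)

demo.launch()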
abidlabs posted an update about 1 year ago
✨ Excited to release gradio 4.16. New features include:

🐻‍❄️ Native support for Polars Dataframe
🖼️ Gallery component can be used as an input
⚡ Much faster streaming for low-latency chatbots
📄 Auto generated docs for custom components

... and much more! This is a HUGE release, so check out everything else in our changelog: https://github.com/gradio-app/gradio/blob/main/CHANGELOG.md
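For instance, the Polars support means a polars DataFrame can be passed straight to gr.Dataframe. A minimal sketch with made-up data:

import gradio as gr
import polars as pl

# Made-up data purely to illustrate passing a Polars DataFrame directly to the component.
df = pl.DataFrame({"model": ["Llama 3 70B", "Mistral 7B"], "score": [0.82, 0.74]})

with gr.Blocks() as demo:
    gr.Dataframe(value=df, label="Benchmark results (illustrative)")

demo.launch()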