Monday, February 20, 2023

OpenAI on how AI systems should behave 🤔, GitHub Copilot upgrade 🧑‍💻, scaling media ML at Netflix 📺

TLDR

Daily Update 2023-02-20

🚀

Headlines & Launches

GitHub Copilot gets an upgrade (4 minute read)

With a stronger underlying model, a new lightweight client-side model, and Fill-In-the-Middle (FIM) capabilities, Copilot has improved consistently since launch. The acceptance rate of synthesized code has risen from 27% to 35%!
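As a rough illustration of the FIM idea: a left-to-right language model can fill a gap in the middle of a file if the prompt rearranges the surrounding code so that generation begins at the gap. A minimal sketch (the sentinel strings below are hypothetical placeholders, not Copilot's actual tokens):

```python
def build_fim_prompt(prefix: str, suffix: str) -> str:
    """Rearrange (prefix, suffix) so a left-to-right model sees both
    sides of the gap before generating the missing middle.
    The <fim_*> sentinels are illustrative placeholders."""
    return f"<fim_prefix>{prefix}<fim_suffix>{suffix}<fim_middle>"

# The model would be asked to continue this prompt, producing the
# code that belongs between the prefix and the suffix.
prompt = build_fim_prompt(
    prefix="def add(a, b):\n    return ",
    suffix="\n\nprint(add(2, 3))",
)
```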
How should AI systems behave, and who should decide? (6 minute read)

ChatGPT users have raised concerns about biased or objectionable outputs. ChatGPT's behavior is shaped by fine-tuning massive neural networks, but the process is imperfect. The company is working to improve its methods for aligning AI systems with human values, and this blog post summarizes how ChatGPT's behavior is shaped, how the company plans to improve it, and its efforts to get more public input on its decision-making.
Coda AI (Product Launch)

If you're a Coda power user, you'll love Coda AI. Combine the power of Coda's building blocks with OpenAI's GPT to accelerate your document creation. Coda AI also integrates with 600+ other applications to improve your workflow.
🧠

Research & Innovation

1B multimodal model outperforms 175B counterpart (33 minute read)

Narrow models can still, unsurprisingly, beat larger general-purpose models. This paper is another example: the authors use a small multimodal model to beat GPT-3 by 16% on the visual/text benchmark ScienceQA. It's only one benchmark, but it's good to see improvements from more reasonably sized models.
The stable entropy hypothesis (45 minute read)

This paper proposes the hypothesis that "human-like" text generation lies in a narrow region of stable, low entropy. Degenerate text, with its repeated words, incoherent sentences, and grammatical errors, lies away from this region. The authors provide supporting evidence and present an entropy-aware decoding scheme.
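As a toy illustration of the idea (not the paper's exact scheme), a decoder can monitor the Shannon entropy of each next-token distribution and intervene when it drifts out of a "stable" band:

```python
import math
import random

def entropy(probs):
    """Shannon entropy (in nats) of a next-token distribution."""
    return -sum(p * math.log(p) for p in probs if p > 0)

def entropy_aware_sample(probs, high=3.0, rng=random):
    """Toy decoding step: sample normally while entropy stays below a
    threshold; fall back to greedy argmax when entropy is too high,
    which pulls the generation back toward the stable region."""
    if entropy(probs) > high:
        return max(range(len(probs)), key=probs.__getitem__)
    # ordinary ancestral sampling inside the stable band
    r, acc = rng.random(), 0.0
    for i, p in enumerate(probs):
        acc += p
        if r <= acc:
            return i
    return len(probs) - 1
```

The `high` threshold here is an arbitrary illustrative value; the paper derives its band empirically from human text.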
Moral self-correction in Large Language Models (20 minute read)

Large language models (LLMs) exhibit harmful biases that can get worse with size. Reinforcement learning from human feedback (RLHF) helps, but not always enough. The authors show that simple prompting approaches can help LLMs trained with RLHF produce less harmful outputs.
🧑‍💻

Engineering & Resources

Navigating the Manifold of Latents in Stable Diffusion (8 minute read)

This blog post offers a simple, intuitive metaphor for understanding the manifold of latents in Stable Diffusion and other diffusion-based generative models. The author shares their journey of exploring the latent space and various techniques to navigate it, offering insights into the main components of Stable Diffusion. The article is a good resource for those interested in controlling image generation and interpolating between images.
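One common technique for moving between points in such latent spaces is spherical linear interpolation (slerp). A minimal sketch:

```python
import numpy as np

def slerp(z0, z1, t):
    """Spherical linear interpolation between two latent vectors.
    Plain lerp cuts through the interior of the hypersphere where
    Gaussian latents concentrate; slerp stays near that shell, which
    tends to keep intermediate latents 'on-manifold'."""
    z0, z1 = np.asarray(z0, dtype=float), np.asarray(z1, dtype=float)
    cos_omega = np.dot(z0 / np.linalg.norm(z0), z1 / np.linalg.norm(z1))
    omega = np.arccos(np.clip(cos_omega, -1.0, 1.0))
    if np.isclose(omega, 0.0):          # nearly parallel: fall back to lerp
        return (1.0 - t) * z0 + t * z1
    s = np.sin(omega)
    return np.sin((1.0 - t) * omega) / s * z0 + np.sin(t * omega) / s * z1
```

For image-shaped latents you would flatten to 1-D before interpolating and reshape afterwards.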
Langchain for the web (GitHub repo)

Similar to other recently released libraries, Langchain now has a JavaScript library. Use it to build LM-powered web apps. Notably, interoperability with the Python version, particularly around object serialization, is a major focus.
Talk-To-ChatGPT (GitHub Repo)

Talk-To-ChatGPT is a Google Chrome extension that lets users talk to ChatGPT with their voice and hear the AI's answers read aloud.
🎁

Miscellaneous

Scaling media ML at Netflix (7 minute read)

In this blog, Netflix outlines how its media and artwork ML pipeline works and scales. They discuss preprocessing, training, productionizing, and storage, and present a case study demonstrating how these components improve an existing pipeline's scalability, optimization, and reliability. There are some interesting tidbits here, notably that they use Ray to power their GPU cluster.
How to fine-tune the most powerful open source LLM (11 minute read)

FlanT5 may be one of the best available open-source language models. The largest versions (3B and 11B parameters) are the most performant but the hardest to tune because of the need for parallelism. This post outlines how to use DeepSpeed to tune these models across multiple GPUs for summarization.
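As a rough sketch of what such a setup involves (the batch sizes below are illustrative, not the post's exact settings), a DeepSpeed ZeRO Stage 3 configuration with CPU offload is the usual route when a model's parameters and optimizer states won't fit on one GPU:

```python
# Minimal DeepSpeed ZeRO Stage 3 config with CPU offload, expressed as
# the Python dict DeepSpeed accepts at initialization. Values are
# illustrative, not the post's exact settings.
ds_config = {
    "train_micro_batch_size_per_gpu": 1,   # keep small for 3B/11B models
    "gradient_accumulation_steps": 8,      # recover a larger effective batch
    "bf16": {"enabled": True},             # mixed-precision training
    "zero_optimization": {
        "stage": 3,                        # partition params, grads, optimizer
        "offload_optimizer": {"device": "cpu"},
        "offload_param": {"device": "cpu"},
    },
}
```

Stage 3 shards parameters, gradients, and optimizer states across GPUs; the offload entries push state to CPU memory at some cost in throughput.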
The Physics Principle That Inspired Modern AI Art (10 minute read)

This article dives into the system that underpins DALL-E and other generative AI models, which is heavily inspired by nonequilibrium thermodynamics.
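The thermodynamics-inspired idea the article describes is the diffusion process: gradually adding Gaussian noise to data until structure dissolves, then learning to reverse it. The forward direction has a simple closed form, sketched here with a common linear noise schedule (the schedule values are standard defaults, not specifics from the article):

```python
import numpy as np

def forward_diffuse(x0, t, betas, rng):
    """Sample x_t ~ q(x_t | x_0) in closed form:
    x_t = sqrt(alpha_bar_t) * x_0 + sqrt(1 - alpha_bar_t) * eps,
    where alpha_bar_t is the cumulative product of (1 - beta)."""
    alpha_bar = np.cumprod(1.0 - betas)[t]
    eps = rng.standard_normal(np.shape(x0))
    return np.sqrt(alpha_bar) * np.asarray(x0) + np.sqrt(1.0 - alpha_bar) * eps

betas = np.linspace(1e-4, 0.02, 1000)            # common linear schedule
rng = np.random.default_rng(0)
x0 = np.ones(8)                                  # toy "image"
x_early = forward_diffuse(x0, 10, betas, rng)    # barely noised
x_late = forward_diffuse(x0, 999, betas, rng)    # nearly pure Gaussian noise
```

Generative models like DALL-E's diffusion component learn the reverse of this process, denoising step by step from pure noise back to data.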

Quick Links

Summary of creative AI research at Sony (3 minute read)

A list of papers from the Sony creative AI research group. They focus on deep generative modeling, music, and cinematic AI. Lots of great links here to put on your reading list if you want to understand the state of the art for music.
AI Can Help Design Opioid Drugs (2 minute read)

Artificial intelligence is now being used to design drugs that block the kappa-opioid receptor, a key receptor in the fight against opioid addiction, much faster than humans can.
AI breast cancer diagnostics (15 minute read)

Development and validation of an AI-enabled digital breast cancer assay to predict early-stage breast cancer recurrence within 6 years.
AI becomes Silicon Valley's next buzzy bandwagon as crypto boom fizzles (4 minute read)

Technologists agree that the generative AI that powers systems like ChatGPT has the potential to change how we live and work, despite the technology's clear flaws. But some people see signs of froth that remind them of the crypto boom that recently fizzled.
If you are in a hiring position, you may want to hire AI talent through our free job board.

If your company is interested in reaching an audience of AI decision-makers, researchers, and engineers, you may want to advertise with us.

If you have any comments or feedback, just respond to this email!

Thanks for reading,
Andrew Tan (@andrewztan) & Andrew Carr (@andrew_n_carr)

If you don't want to receive future editions of TLDR, please click here to unsubscribe.

