<?xml version="1.0" encoding="UTF-8"?><rss xmlns:dc="http://purl.org/dc/elements/1.1/" xmlns:content="http://purl.org/rss/1.0/modules/content/" xmlns:atom="http://www.w3.org/2005/Atom" version="2.0" xmlns:itunes="http://www.itunes.com/dtds/podcast-1.0.dtd" xmlns:googleplay="http://www.google.com/schemas/play-podcasts/1.0"><channel><title><![CDATA[fastpapers.ai]]></title><description><![CDATA[TL;DR summaries of latest AI research papers and news]]></description><link>https://www.fastpapers.ai</link><image><url>https://substackcdn.com/image/fetch/$s_!x9M8!,w_256,c_limit,f_auto,q_auto:good,fl_progressive:steep/https%3A%2F%2Fbucketeer-e05bbc84-baa3-437e-9518-adb32be77984.s3.amazonaws.com%2Fpublic%2Fimages%2F7ccdcb99-b50a-47e7-b9c4-f130bd87585f_1024x1024.png</url><title>fastpapers.ai</title><link>https://www.fastpapers.ai</link></image><generator>Substack</generator><lastBuildDate>Wed, 06 May 2026 11:33:27 GMT</lastBuildDate><atom:link href="https://www.fastpapers.ai/feed" rel="self" type="application/rss+xml"/><copyright><![CDATA[fastpapers.ai]]></copyright><language><![CDATA[en]]></language><webMaster><![CDATA[fastpapers@substack.com]]></webMaster><itunes:owner><itunes:email><![CDATA[fastpapers@substack.com]]></itunes:email><itunes:name><![CDATA[Amine B.]]></itunes:name></itunes:owner><itunes:author><![CDATA[Amine B.]]></itunes:author><googleplay:owner><![CDATA[fastpapers@substack.com]]></googleplay:owner><googleplay:email><![CDATA[fastpapers@substack.com]]></googleplay:email><googleplay:author><![CDATA[Amine B.]]></googleplay:author><itunes:block><![CDATA[Yes]]></itunes:block><item><title><![CDATA[Test-Time Training (TTT) Layers for Efficient LLMs]]></title><description><![CDATA[RNNs with Expressive Hidden States]]></description><link>https://www.fastpapers.ai/p/test-time-training-ttt-layers-for</link><guid isPermaLink="false">https://www.fastpapers.ai/p/test-time-training-ttt-layers-for</guid><dc:creator><![CDATA[Amine B.]]></dc:creator><pubDate>Tue, 09 Jul 2024 14:26:51 GMT</pubDate><enclosure url="https://substack-post-media.s3.amazonaws.com/public/images/52154382-3dc7-4dc0-a177-98dcaa09e9e3_1024x1024.webp" length="0" type="image/webp"/><content:encoded><![CDATA[<p>Hi there!</p><p>We're excited to share a recent paper that caught our attention: "Learning to (Learn at Test Time): RNNs with Expressive Hidden States". It introduces Test-Time Training (TTT) layers, a new class of sequence modeling layers that combine the efficiency of RNNs with the expressiveness of self-attention.</p>
<h2><strong>Key Innovations</strong></h2><ol><li><p><strong>TTT Layers</strong>: The core idea is to make the hidden state of an RNN a machine learning model itself, with the update rule being a step of self-supervised learning.</p></li><li><p><strong>Linear Complexity</strong>: TTT layers scale linearly with sequence length, like RNNs, rather than quadratically like self-attention.</p></li><li><p><strong>Two Instantiations</strong>: The paper introduces TTT-Linear and TTT-MLP, where the hidden state is a linear model and a two-layer MLP, respectively.</p></li></ol><h2><strong>Performance Highlights</strong></h2><ul><li><p>Both TTT-Linear and TTT-MLP match or exceed the performance of Transformer and Mamba (a modern RNN) baselines.</p></li><li><p>TTT layers show strength in long-context scenarios, continuing to reduce perplexity with more tokens, unlike Mamba.</p></li><li><p>TTT-Linear is faster than the Transformer at 8k context and matches Mamba in wall-clock time.</p></li></ul><h2><strong>Technical Details</strong></h2><ul><li><p>The hidden state in TTT layers is updated using gradient descent on a self-supervised loss (see the toy sketch below).</p></li><li><p>The paper introduces "mini-batch TTT" and a "dual form" implementation to make these updates hardware-efficient.</p></li><li><p>TTT layers can be integrated into existing network architectures.</p></li></ul><h2><strong>Scientific Insights</strong></h2><p>The research highlights the advantage of TTT layers in improving the adaptability and efficiency of recurrent neural networks (RNNs). By integrating more expressive hidden states that learn at test time, these models can better capture and utilize context, leading to superior performance on tasks that involve long sequences. This innovation addresses a critical challenge in sequence modeling: efficiently managing long-range dependencies without excessive computational overhead.</p><h2><strong>Future Directions</strong></h2><p>This work opens up new possibilities for efficient language modeling, especially for tasks requiring long context understanding. There are several promising directions for future research, including:</p><ul><li><p>Exploring more sophisticated parameterizations of self-supervised tasks</p></li><li><p>Further systems optimizations for even better efficiency</p></li><li><p>Scaling to longer contexts (millions of tokens) and larger models</p></li><li><p>More ambitious instantiations of the inner-loop model</p></li></ul>
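<p>To make the update rule concrete, here is a minimal toy sketch of a TTT-Linear-style step in PyTorch. This is our illustration, not the authors' code: the corruption-based reconstruction loss, the learning rate, and the plain per-token gradient step are all simplifying assumptions (the paper learns its self-supervised views and uses mini-batch TTT with a dual form for efficiency).</p><pre><code># Toy sketch of a TTT-Linear-style layer: the hidden state is the weight
# matrix W of a linear model, updated at test time by one gradient step
# on a self-supervised reconstruction loss per token. Illustrative only.
import torch

def ttt_linear_step(W, x_t, lr=0.1):
    W = W.detach().requires_grad_(True)
    # Simplified self-supervised task: reconstruct x_t from a corrupted view.
    # (Assumption: the paper parameterizes and learns these views instead.)
    x_corrupt = x_t + 0.1 * torch.randn_like(x_t)
    loss = ((x_corrupt @ W - x_t) ** 2).mean()
    (grad,) = torch.autograd.grad(loss, W)
    W_new = (W - lr * grad).detach()  # updated hidden state
    z_t = x_t @ W_new                 # output for this token
    return W_new, z_t

d = 16
W = torch.zeros(d, d)                 # initial hidden state
for x_t in torch.randn(8, d):         # a toy sequence of 8 tokens
    W, z_t = ttt_linear_step(W, x_t)
</code></pre>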
<p>Did you find this newsletter valuable? Please share it with your colleagues who might be interested in staying up to date with the latest AI research: <a href="https://www.fastpapers.ai/p/test-time-training-ttt-layers-for?utm_source=substack&utm_medium=email&utm_content=share&action=share">Share</a></p><p><strong>Dive Deeper:</strong> Discover the future of sequence modeling by exploring this pioneering research. <a href="https://arxiv.org/pdf/2407.04620">Read the full paper</a>.</p><p>Happy reading!</p>]]></content:encoded></item><item><title><![CDATA[AI This Week]]></title><description><![CDATA[Keeping up with exponentials]]></description><link>https://www.fastpapers.ai/p/ai-this-week-bab</link><guid isPermaLink="false">https://www.fastpapers.ai/p/ai-this-week-bab</guid><dc:creator><![CDATA[Amine B.]]></dc:creator><pubDate>Thu, 17 Nov 2022 17:43:19 GMT</pubDate><enclosure url="https://bucketeer-e05bbc84-baa3-437e-9518-adb32be77984.s3.amazonaws.com/public/images/7ccdcb99-b50a-47e7-b9c4-f130bd87585f_1024x1024.png" length="0" type="image/png"/><content:encoded><![CDATA[<p>Hey there! I hope you're enjoying your week so far. It's been busy, and I know it can be hard to keep up with the latest news in AI (who can keep up with exponentials?), so here are the most important things you need to know.</p><p>In today's issue:</p><ul><li><p>Interesting papers from the week</p></li><li><p>Noteworthy open-source</p></li><li><p>From around the web</p></li><li><p>Featured blog article</p></li></ul><h2>1. Interesting papers</h2><div><hr></div><h4>Direct Inversion: Optimization-Free Text-Driven Real Image Editing with Diffusion Models</h4><p>This paper proposes an optimization-free, zero fine-tuning framework that applies complex and non-rigid edits to a single real image via a text prompt. Using widely available generic pre-trained text-to-image diffusion models, the authors demonstrate the ability to modulate pose, scene, background, style, color, and even racial identity in a flexible manner through a single target text detailing the desired edit.
</p><figure><img src="https://bucketeer-e05bbc84-baa3-437e-9518-adb32be77984.s3.amazonaws.com/public/images/ac686be3-aa96-42bb-a5e3-924505012bb8_1382x358.png" alt="" /></figure><p>&#8594; <a href="http://arxiv.org/pdf/2211.07825v1">Go to paper</a></p><div><hr></div><h4>Large Language Models Struggle to Learn Long-Tail Knowledge</h4><p>This paper investigates the relationship between the knowledge memorized by large language models and the information in their pre-training datasets. The authors find a strong correlation between a model's accuracy in answering a fact-based question and how many documents associated with that question were seen during pre-training.
They also find that retrieval augmentation can reduce this dependence on relevant-document count.</p><figure><img src="https://bucketeer-e05bbc84-baa3-437e-9518-adb32be77984.s3.amazonaws.com/public/images/c1d97995-c61a-4e94-a758-7e72165dcf57_656x640.png" alt="" /></figure><p>&#8594; <a href="http://arxiv.org/pdf/2211.08411v1">Go to paper</a></p><div><hr></div><h4>Seeing Beyond the Brain: Conditional Diffusion Model with Sparse Masked Modeling for Vision Decoding</h4><p>This paper presents MinD-Vis, a system that can reconstruct highly plausible images with semantically matching details from brain recordings using very few paired annotations.</p><figure><img src="https://bucketeer-e05bbc84-baa3-437e-9518-adb32be77984.s3.amazonaws.com/public/images/d9a221fc-7b1e-423c-badc-66b0d3058089_1540x554.png" alt="" /></figure><p>&#8594; <a href="http://arxiv.org/pdf/2211.06956v2">Go to paper</a></p><div><hr></div>
<h2>2. Noteworthy Open-Source</h2><ul><li><p><strong><a href="https://github.com/cmudig/AutoProfiler">Pandas AutoProfiler</a>:</strong> automatically profiles and visualizes Pandas dataframes in Jupyter notebooks.</p></li><li><p><strong><a href="https://huggingface.co/spaces/sasha/StableDiffusionBiasExplorer">Diffusion Bias Explorer</a>:</strong> lets you explore how text-to-image models like Stable Diffusion v1.4 and DALL&#183;E 2 represent different professions and adjectives.</p></li><li><p><strong><a href="https://github.com/jessevig/bertviz">BertViz</a>:</strong> an interactive tool for visualizing attention in Transformer language models such as BERT, GPT-2, or T5. It runs inside a Jupyter notebook (a minimal usage sketch follows below).</p></li></ul><figure><img src="https://bucketeer-e05bbc84-baa3-437e-9518-adb32be77984.s3.amazonaws.com/public/images/f3bb4b78-2d2b-4715-8229-fdc368e341b0_1644x876.png" alt="" /></figure>
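<p>If you want to try BertViz quickly, here is a minimal usage sketch based on the usage shown in the project's README (the model and sentence are arbitrary choices, and the call assumes a notebook environment):</p><pre><code># Visualize BERT's attention heads for one sentence inside a Jupyter notebook.
from transformers import AutoModel, AutoTokenizer
from bertviz import head_view

tokenizer = AutoTokenizer.from_pretrained("bert-base-uncased")
model = AutoModel.from_pretrained("bert-base-uncased", output_attentions=True)

inputs = tokenizer("The cat sat on the mat", return_tensors="pt")
outputs = model(**inputs)
tokens = tokenizer.convert_ids_to_tokens(inputs["input_ids"][0])

head_view(outputs.attentions, tokens)  # renders the interactive attention view
</code></pre>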
<div><hr></div><h2>3. From Around The Web</h2><ul><li><p><strong>Visualizing 1,000,000,000 points</strong>: <a href="https://www.youtube.com/watch?v=LKIRAzsqLb0">watch on YouTube</a></p></li><li><p><strong>Small rant about LLMs by Linus: don't ship the API to the user. Text generation is not the product!</strong></p><blockquote><p>"Small rant about LLMs and how I see them being put, rather thoughtlessly IMO, into productivity tools.
TL;DR &#8212; Most knowledge work isn't a text-generation task, and your product shouldn't ship an implementation detail of LLMs as the end-user interface." (<a href="https://twitter.com/thesephist/status/1592924891208372224">@thesephist on Twitter</a>; full post at <a href="https://stream.thesephist.com/updates/1668617521">stream.thesephist.com</a>)</p></blockquote></li><li><p><strong><a href="https://www.chula.ai/">An AI assistant that creates graphics for presentations</a></strong></p><figure><img src="https://bucketeer-e05bbc84-baa3-437e-9518-adb32be77984.s3.amazonaws.com/public/images/14bb0af5-b6bd-44ac-99d5-cfe13d3614e4_1848x1094.png" alt="" /></figure></li></ul>
<div><hr></div><h2>4. Featured Blog Post</h2><p>This is a new section of the newsletter in which I feature interesting posts from applied machine learning professionals. The goal is to share knowledge that you can directly use in your work.</p><h3><strong><a href="https://medium.com/@zz1409/using-large-language-models-for-data-labeling-1357f2880a38">Using Large Language Models for Data Labeling</a></strong></h3><p>By <a href="https://www.linkedin.com/in/zachzhanggridx/">Zachariah Zhang</a></p><p><strong>TLDR</strong>: We can leverage the text-generation power of large language models like GPT-3 to generate labeled data for supervised learning. We do so using prompting: we give the LM a description of the task, some examples, and a new example for which to generate a label. The resulting data is generally noisy and of lower quality than human labels. However, the speed of data collection, together with the ability to keep humans in the loop, makes this an effective way to collect a significant amount of labeled data for tough-to-label tasks.</p>
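<p>As a concrete illustration of the recipe, here is a minimal, hypothetical sketch of few-shot label generation. The prompt template and the <code>complete</code> callable are our own stand-ins for whatever LLM completion API you use; none of the names here come from the article itself:</p><pre><code># Minimal sketch of LLM-based data labeling via few-shot prompting.
# `complete` stands in for any prompt-in, text-out completion API;
# the template and all names are illustrative.
FEW_SHOT_TEMPLATE = """Label the sentiment of each review as positive or negative.

Review: The battery lasts all day. Label: positive
Review: Broke after two uses. Label: negative
Review: {review} Label:"""

def label_review(review: str, complete) -> str:
    prompt = FEW_SHOT_TEMPLATE.format(review=review)
    # The model's completion becomes a (noisy) training label; a human-in-the-
    # loop pass over low-confidence items is still advisable.
    return complete(prompt).strip().lower()

if __name__ == "__main__":
    fake_llm = lambda prompt: " positive"  # toy stand-in for a real API call
    print(label_review("Works exactly as described.", fake_llm))
</code></pre>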
<div><hr></div><p>Thanks for reading this week's issue of the newsletter! If you found it useful, please share it with your friends and colleagues: <a href="https://www.fastpapers.ai/p/ai-this-week-bab?utm_source=substack&utm_medium=email&utm_content=share&action=share">Share</a></p>]]></content:encoded></item><item><title><![CDATA[Weekend Reads]]></title><description><![CDATA[Three favorite papers from this week]]></description><link>https://www.fastpapers.ai/p/weekend-reads</link><guid isPermaLink="false">https://www.fastpapers.ai/p/weekend-reads</guid><dc:creator><![CDATA[Amine B.]]></dc:creator><pubDate>Fri, 11 Nov 2022 16:58:26 GMT</pubDate><enclosure url="https://bucketeer-e05bbc84-baa3-437e-9518-adb32be77984.s3.amazonaws.com/public/images/9f6f8338-b2d7-4f9d-ad5f-8afd92a6c67e_964x964.jpeg" length="0" type="image/jpeg"/><content:encoded><![CDATA[<p>Hey there! I've been keeping an eye on the latest AI papers this week, and these are three of my favorites.</p><div><hr></div><h4>BLOOM: A 176B-Parameter Open-Access Multilingual Language Model</h4><p>This paper presents BLOOM, a 176B-parameter open-access language model designed and built through a collaboration of hundreds of researchers. BLOOM is a decoder-only Transformer language model trained on the ROOTS corpus, a dataset comprising hundreds of sources in 46 natural and 13 programming languages (59 in total). The paper finds that BLOOM achieves competitive performance on a wide variety of benchmarks, with stronger results after multitask prompted finetuning.</p><p>&#8594; <a href="http://arxiv.org/pdf/2211.05100v1">Go to paper</a></p><div><hr></div><h4>CLOOB: Modern Hopfield Networks with InfoLOOB Outperform CLIP</h4><p>The paper discusses how the CLIP model can be improved by using modern Hopfield networks to tackle the problem of explaining away.
The new model, CLOOB, is shown to outperform CLIP in zero-shot transfer learning across all considered architectures and datasets.</p><p>&#8594; <a href="http://arxiv.org/pdf/2110.11316v4">Go to paper</a></p><div><hr></div><h4>Self-conditioned Embedding Diffusion for Text Generation</h4><p>This paper proposes a new approach to continuous diffusion models for text that overcomes the limitations of previous models. The new model, self-conditioned embedding diffusion, is more efficient on accelerator hardware and produces results comparable to standard autoregressive language models.</p><p>&#8594; <a href="http://arxiv.org/pdf/2211.04236v1">Go to paper</a></p><div><hr></div><p>Hope you found this issue useful and informative. If so, please share it with your network!</p>]]></content:encoded></item><item><title><![CDATA[AI This Week]]></title><description><![CDATA[Interesting papers published recently and more.]]></description><link>https://www.fastpapers.ai/p/ai-this-week</link><guid isPermaLink="false">https://www.fastpapers.ai/p/ai-this-week</guid><dc:creator><![CDATA[Amine B.]]></dc:creator><pubDate>Tue, 08 Nov 2022 14:00:45 GMT</pubDate><enclosure url="https://bucketeer-e05bbc84-baa3-437e-9518-adb32be77984.s3.amazonaws.com/public/images/6b97546f-9e0f-469a-8160-89aea421042e_964x964.jpeg" length="0" type="image/jpeg"/><content:encoded><![CDATA[<p>&#128075; Hey there! Welcome to fastpapers, a weekly newsletter about the latest developments in artificial intelligence. It summarizes the best of the week's papers and news, plus a few fun links.</p><h4>In this week's issue:</h4><ol><li><p><a href="https://www.fastpapers.ai/i/82552836/research-papers-summaries">Summaries of research papers</a></p></li><li><p><a href="https://www.fastpapers.ai/i/82552836/noteworthy-open-source-and-datasets">Noteworthy open-source &amp; datasets</a></p></li><li><p><a href="https://www.fastpapers.ai/i/82552836/icymi-interviews">ICYMI interviews</a></p></li><li><p><a href="https://www.fastpapers.ai/i/82552836/from-around-the-web">From around the web</a></p></li></ol><h3>1. Summaries of research papers</h3><div><hr></div><h4>ImageNet-X: Understanding Model Mistakes with Factor of Variation Annotations</h4><p>This paper introduces ImageNet-X, a set of sixteen human annotations of factors such as pose, background, or lighting for the entire ImageNet-1k validation set as well as a random subset of 12k training images. The paper investigates 2,200 current recognition models and studies the types of mistakes as a function of a model's (1) architecture, e.g. transformer vs.
convolutional, (2) learning paradigm, e.g. supervised vs. self-supervised, and (3) training procedures, e.g. data augmentation.</p><p>&#8594; <a href="http://arxiv.org/pdf/2211.01866v1">Go to paper</a></p><div><hr></div><h4>TextCraft: Zero-Shot Generation of High-Fidelity and Diverse Shapes from Text</h4><p>This paper introduces TextCraft, a method for generating high-fidelity and diverse 3D shapes without the need for (text, shape) pairs for training. TextCraft uses CLIP and a multi-resolution approach to improve the fidelity of the generated shape. To improve shape diversity, TextCraft uses a discrete latent space modeled with a bidirectional transformer. TextCraft also uses a novel variant of classifier-free guidance to further improve the accuracy-diversity trade-off.</p><p>&#8594; <a href="http://arxiv.org/pdf/2211.01427v2">Go to paper</a></p><div><hr></div><h4>Don't Prompt, Search! Mining-based Zero-Shot Learning with Language Models</h4><p>This paper proposes an alternative mining-based approach for zero-shot learning that is more flexible and interpretable than prompting. The method outperforms prompting on a wide range of tasks when using comparable templates.</p><p>&#8594; <a href="http://arxiv.org/pdf/2210.14803v1">Go to paper</a></p><div><hr></div><h4>Large Language Models Are Human-Level Prompt Engineers</h4><p>This paper presents Automatic Prompt Engineer (APE), a method for automatically generating and selecting natural language instructions for large language models (LLMs). APE treats the instruction as the "program," optimized by searching over a pool of instruction candidates proposed by an LLM in order to maximize a chosen score function. Experiments on 24 NLP tasks show that APE-engineered prompts outperform the prior LLM baseline by a large margin and achieve better or comparable performance to instructions written by human annotators on 19 of 24 tasks.</p><p>&#8594; <a href="http://arxiv.org/pdf/2211.01910v1">Go to paper</a></p><div><hr></div><h4>Rickrolling the Artist: Injecting Invisible Backdoors into Text-Guided Image Generation Models</h4><p>This paper discusses backdoor attacks against text-guided generative models. These attacks exploit the fact that many text-guided image generation models rely on pre-trained text encoders from external sources. By slightly altering an encoder, an attacker can trigger the model to generate images with pre-defined attributes, or images following a hidden, potentially malicious description. The paper demonstrates the high effectiveness of these attacks and highlights that injecting a single backdoor takes less than two minutes.</p><p>&#8594; <a href="http://arxiv.org/pdf/2211.02408v1">Go to paper</a></p><div><hr></div><h4>Logits are predictive of network type</h4><p>It is possible to predict which deep network generated a given logit vector with accuracy well above chance. A classifier is trained on logit vectors from a dataset's training set to map each logit vector to the index of the network that generated it (a toy illustration follows below). Results are better with randomly initialized networks, but also generalize to pretrained networks as well as fine-tuned ones. Classification accuracy is higher using unnormalized logits than normalized ones.</p><p>&#8594; <a href="http://arxiv.org/pdf/2211.02272v1">Go to paper</a></p><div><hr></div>
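<p>A toy illustration of that setup, on entirely synthetic stand-in data (the paper uses logits from real trained networks; here each "network" is simulated as a characteristic bias pattern over the logit dimensions):</p><pre><code># Toy illustration: predict which "network" produced a logit vector.
import numpy as np
from sklearn.linear_model import LogisticRegression
from sklearn.model_selection import train_test_split

rng = np.random.default_rng(0)
n_nets, n_classes, n_per_net = 3, 10, 500
# Each simulated network gets its own bias pattern over logit dimensions.
biases = rng.normal(0, 1.0, size=(n_nets, n_classes))
X = np.concatenate([biases[i] + rng.normal(0, 1.0, (n_per_net, n_classes))
                    for i in range(n_nets)])
y = np.repeat(np.arange(n_nets), n_per_net)   # label = network index

X_tr, X_te, y_tr, y_te = train_test_split(X, y, random_state=0)
clf = LogisticRegression(max_iter=1000).fit(X_tr, y_tr)
print(f"accuracy: {clf.score(X_te, y_te):.2f}")  # well above 1/n_nets chance
</code></pre>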
<h4>Robustness of Fusion-based Multimodal Classifiers to Cross-Modal Content Dilutions</h4><p>This paper investigates the robustness of multimodal classifiers to cross-modal dilutions. The authors develop a model that generates additional dilution text that leads to misclassification of the multimodal input. Experiments on two tasks show that the performance of task-specific fusion-based multimodal classifiers drops significantly in the presence of dilutions generated by the model.</p><p>&#8594; <a href="http://arxiv.org/pdf/2211.02646v1">Go to paper</a></p><div><hr></div><h4>Evaluating and Improving Factuality in Multimodal Abstractive Summarization</h4><p>This paper proposes a new metric, CLIPBERTScore, for evaluating the factuality of abstractive document summarization that takes the vision modality into account. The metric combines CLIPScore and BERTScore, and is designed to leverage their robustness and strong factuality-detection performance on image-summary and document-summary pairs, respectively. The paper shows that the new metric outperforms existing factuality metrics for document summarization and performs competitively with strong multimodal factuality metrics specifically fine-tuned for the task.</p><p>&#8594; <a href="http://arxiv.org/pdf/2211.02580v1">Go to paper</a></p><div><hr></div><h4>The Path to Autonomous Learners</h4><p>This paper presents a new theoretical approach for enabling domain-knowledge acquisition by intelligent systems. A hybrid model is introduced that starts with minimal input knowledge in the form of an upper ontology of concepts, stores and reasons over this knowledge through a knowledge-graph database, and learns new information through a Logic Neural Network. The paper studies how this architecture handles new data and shows that the final system is capable of enriching its current knowledge as well as extending it to new domains.</p><p>&#8594; <a href="http://arxiv.org/pdf/2211.02403v1">Go to paper</a></p><div><hr></div><h3>2. Noteworthy open-source &amp; datasets</h3><ul><li><p><strong><a href="https://huggingface.co/spaces/facebook/ov-seg">Open-Vocabulary Semantic Segmentation</a></strong> with Mask-adapted CLIP</p></li><li><p><strong><a href="https://github.com/microsoft/FocalNet">Focal Modulation Networks</a></strong>: an attention-free architecture that outperforms SoTA self-attention baselines</p></li><li><p><strong><a href="https://github.com/NVIDIA/warp">Warp</a></strong>: a Python framework for writing high-performance simulation and graphics code (a minimal kernel sketch follows after this list)</p></li><li><p><strong><a href="https://huggingface.co/datasets/bigcode/the-stack">The Stack</a></strong>: over 3TB of permissively licensed source-code files covering 30 programming languages, crawled from GitHub</p></li><li><p><strong><a href="https://facebookresearch.github.io/imagenetx/site/home">ImageNet-X</a></strong>: a set of human annotations pinpointing failure types for the popular ImageNet dataset</p></li></ul><div><hr></div>
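<p>To give a flavor of Warp's programming model, here is a minimal sketch based on its documented kernel/launch API; the kernel itself is a trivial example of ours, not from the repo:</p><pre><code># Minimal NVIDIA Warp example: define a kernel and launch it over an array.
import numpy as np
import warp as wp

wp.init()

@wp.kernel
def scale(a: wp.array(dtype=float), s: float):
    i = wp.tid()        # this thread's index
    a[i] = a[i] * s

a = wp.array(np.ones(8, dtype=np.float32), dtype=float, device="cpu")
wp.launch(scale, dim=8, inputs=[a, 2.0], device="cpu")
print(a.numpy())        # -> array of 2.0s
</code></pre>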
<h3>3. ICYMI interviews</h3><ul><li><p><a href="https://www.youtube.com/watch?v=Rp3A5q9L_bg">OpenAI's Greg Brockman: The Future of LLMs, Foundation &amp; Generative Models (DALL&#183;E 2 &amp; GPT-3)</a></p></li><li><p><a href="https://www.youtube.com/watch?v=_3MBQm7GFIM&amp;t=269s">Codex demo: solving complex problems with multiple iterations (from minute ~4:30)</a></p></li></ul><div><hr></div><h3>4. From around the web</h3><ul><li><p><a href="http://gradientscience.org/photoguard/">PhotoGuard: Defending Against Diffusion-based Image Manipulation</a></p><figure><img src="https://bucketeer-e05bbc84-baa3-437e-9518-adb32be77984.s3.amazonaws.com/public/images/9aea156c-cf5e-4743-aad9-f16041551b53_2496x1632.jpeg" alt="An overview of PhotoGuard's &quot;immunization&quot; methodology." /><figcaption>An overview of PhotoGuard's "immunization" methodology.</figcaption></figure></li><li><p><a href="https://colab.research.google.com/github/pharmapsychotic/clip-interrogator/blob/main/clip_interrogator.ipynb#scrollTo=3jm8RYrLqvzz">CLIP Interrogator 2.1: figure out what a good prompt might be to create new images like an existing one</a></p><figure><img src="https://bucketeer-e05bbc84-baa3-437e-9518-adb32be77984.s3.amazonaws.com/public/images/7ac3a3ee-03cf-4295-b50e-3210cf4030b2_1664x794.png" alt="" /></figure></li>
data-component-name="Image2ToDOM"><div class="image2-inset"><picture><source type="image/webp" srcset="https://substackcdn.com/image/fetch/$s_!1gZ0!,w_424,c_limit,f_webp,q_auto:good,fl_progressive:steep/https%3A%2F%2Fbucketeer-e05bbc84-baa3-437e-9518-adb32be77984.s3.amazonaws.com%2Fpublic%2Fimages%2F7ac3a3ee-03cf-4295-b50e-3210cf4030b2_1664x794.png 424w, https://substackcdn.com/image/fetch/$s_!1gZ0!,w_848,c_limit,f_webp,q_auto:good,fl_progressive:steep/https%3A%2F%2Fbucketeer-e05bbc84-baa3-437e-9518-adb32be77984.s3.amazonaws.com%2Fpublic%2Fimages%2F7ac3a3ee-03cf-4295-b50e-3210cf4030b2_1664x794.png 848w, https://substackcdn.com/image/fetch/$s_!1gZ0!,w_1272,c_limit,f_webp,q_auto:good,fl_progressive:steep/https%3A%2F%2Fbucketeer-e05bbc84-baa3-437e-9518-adb32be77984.s3.amazonaws.com%2Fpublic%2Fimages%2F7ac3a3ee-03cf-4295-b50e-3210cf4030b2_1664x794.png 1272w, https://substackcdn.com/image/fetch/$s_!1gZ0!,w_1456,c_limit,f_webp,q_auto:good,fl_progressive:steep/https%3A%2F%2Fbucketeer-e05bbc84-baa3-437e-9518-adb32be77984.s3.amazonaws.com%2Fpublic%2Fimages%2F7ac3a3ee-03cf-4295-b50e-3210cf4030b2_1664x794.png 1456w" sizes="100vw"><img src="https://substackcdn.com/image/fetch/$s_!1gZ0!,w_1456,c_limit,f_auto,q_auto:good,fl_progressive:steep/https%3A%2F%2Fbucketeer-e05bbc84-baa3-437e-9518-adb32be77984.s3.amazonaws.com%2Fpublic%2Fimages%2F7ac3a3ee-03cf-4295-b50e-3210cf4030b2_1664x794.png" width="1456" height="695" data-attrs="{&quot;src&quot;:&quot;https://bucketeer-e05bbc84-baa3-437e-9518-adb32be77984.s3.amazonaws.com/public/images/7ac3a3ee-03cf-4295-b50e-3210cf4030b2_1664x794.png&quot;,&quot;srcNoWatermark&quot;:null,&quot;fullscreen&quot;:null,&quot;imageSize&quot;:null,&quot;height&quot;:695,&quot;width&quot;:1456,&quot;resizeWidth&quot;:null,&quot;bytes&quot;:643344,&quot;alt&quot;:null,&quot;title&quot;:null,&quot;type&quot;:&quot;image/png&quot;,&quot;href&quot;:null,&quot;belowTheFold&quot;:true,&quot;topImage&quot;:false,&quot;internalRedirect&quot;:null,&quot;isProcessing&quot;:false,&quot;align&quot;:null,&quot;offset&quot;:false}" class="sizing-normal" alt="" srcset="https://substackcdn.com/image/fetch/$s_!1gZ0!,w_424,c_limit,f_auto,q_auto:good,fl_progressive:steep/https%3A%2F%2Fbucketeer-e05bbc84-baa3-437e-9518-adb32be77984.s3.amazonaws.com%2Fpublic%2Fimages%2F7ac3a3ee-03cf-4295-b50e-3210cf4030b2_1664x794.png 424w, https://substackcdn.com/image/fetch/$s_!1gZ0!,w_848,c_limit,f_auto,q_auto:good,fl_progressive:steep/https%3A%2F%2Fbucketeer-e05bbc84-baa3-437e-9518-adb32be77984.s3.amazonaws.com%2Fpublic%2Fimages%2F7ac3a3ee-03cf-4295-b50e-3210cf4030b2_1664x794.png 848w, https://substackcdn.com/image/fetch/$s_!1gZ0!,w_1272,c_limit,f_auto,q_auto:good,fl_progressive:steep/https%3A%2F%2Fbucketeer-e05bbc84-baa3-437e-9518-adb32be77984.s3.amazonaws.com%2Fpublic%2Fimages%2F7ac3a3ee-03cf-4295-b50e-3210cf4030b2_1664x794.png 1272w, https://substackcdn.com/image/fetch/$s_!1gZ0!,w_1456,c_limit,f_auto,q_auto:good,fl_progressive:steep/https%3A%2F%2Fbucketeer-e05bbc84-baa3-437e-9518-adb32be77984.s3.amazonaws.com%2Fpublic%2Fimages%2F7ac3a3ee-03cf-4295-b50e-3210cf4030b2_1664x794.png 1456w" sizes="100vw" loading="lazy"></picture><div class="image-link-expand"><div class="pencraft pc-display-flex pc-gap-8 pc-reset"><button tabindex="0" type="button" class="pencraft pc-reset pencraft icon-container restack-image"><svg role="img" width="20" height="20" viewBox="0 0 20 20" fill="none" stroke-width="1.5" stroke="var(--color-fg-primary)" stroke-linecap="round" stroke-linejoin="round" 
xmlns="http://www.w3.org/2000/svg"><g><title></title><path d="M2.53001 7.81595C3.49179 4.73911 6.43281 2.5 9.91173 2.5C13.1684 2.5 15.9537 4.46214 17.0852 7.23684L17.6179 8.67647M17.6179 8.67647L18.5002 4.26471M17.6179 8.67647L13.6473 6.91176M17.4995 12.1841C16.5378 15.2609 13.5967 17.5 10.1178 17.5C6.86118 17.5 4.07589 15.5379 2.94432 12.7632L2.41165 11.3235M2.41165 11.3235L1.5293 15.7353M2.41165 11.3235L6.38224 13.0882"></path></g></svg></button><button tabindex="0" type="button" class="pencraft pc-reset pencraft icon-container view-image"><svg xmlns="http://www.w3.org/2000/svg" width="20" height="20" viewBox="0 0 24 24" fill="none" stroke="currentColor" stroke-width="2" stroke-linecap="round" stroke-linejoin="round" class="lucide lucide-maximize2 lucide-maximize-2"><polyline points="15 3 21 3 21 9"></polyline><polyline points="9 21 3 21 3 15"></polyline><line x1="21" x2="14" y1="3" y2="10"></line><line x1="3" x2="10" y1="21" y2="14"></line></svg></button></div></div></div></a><figcaption class="image-caption"></figcaption></figure></div><p></p></li><li><p><a href="https://twitter.com/shubroski/status/1587136794797244417">Run GPT-3 prompts in Google Sheets</a></p></li></ul><div class="twitter-embed" data-attrs="{&quot;url&quot;:&quot;https://twitter.com/shubroski/status/1587136794797244417&quot;,&quot;full_text&quot;:&quot;This weekend I built =GPT3(), a way to run GPT-3 prompts in Google Sheets.\n\nIt's incredible how tasks that are hard or impossible to do w/ regular formulas become trivial.\n\nFor example: sanitize data, write thank you cards, summarize product reviews, categorize feedback... &quot;,&quot;username&quot;:&quot;shubroski&quot;,&quot;name&quot;:&quot;Shubhro Saha&quot;,&quot;profile_image_url&quot;:&quot;&quot;,&quot;date&quot;:&quot;Mon Oct 31 17:37:54 +0000 2022&quot;,&quot;photos&quot;:[{&quot;img_url&quot;:&quot;https://substackcdn.com/image/upload/w_1028,c_limit,q_auto:best/l_twitter_play_button_rvaygk,w_88/ytkp1s170ivzh50vjhfb&quot;,&quot;link_url&quot;:&quot;https://t.co/4fXOTpn2vz&quot;,&quot;alt_text&quot;:null}],&quot;quoted_tweet&quot;:{},&quot;reply_count&quot;:0,&quot;retweet_count&quot;:2891,&quot;like_count&quot;:21200,&quot;impression_count&quot;:0,&quot;expanded_url&quot;:{},&quot;video_url&quot;:&quot;https://video.twimg.com/ext_tw_video/1587134657878966272/pu/vid/408x270/kxT1bSFFmLFnkLom.mp4?tag=12&quot;,&quot;belowTheFold&quot;:true}" data-component-name="Twitter2ToDOM"></div><div><hr></div><p>Thank you for reading this week's newsletter! I hope you found it useful and that you learned something new. I'd love to know if you have feedback on any topic about which you'd like me to write more about or you just like to share some thoughts.</p><div class="poll-embed" data-attrs="{&quot;id&quot;:30124}" data-component-name="PollToDOM"></div><div class="subscription-widget-wrap-editor" data-attrs="{&quot;url&quot;:&quot;https://www.fastpapers.ai/subscribe?&quot;,&quot;text&quot;:&quot;Subscribe&quot;,&quot;language&quot;:&quot;en&quot;}" data-component-name="SubscribeWidgetToDOM"><div class="subscription-widget show-subscribe"><div class="preamble"><p class="cta-caption">Thanks for reading fastpapers.ai! 
<div><hr></div><p>Thank you for reading this week's newsletter! I hope you found it useful and learned something new. I'd love to hear from you: let me know if there's a topic you'd like me to write more about, or just share your thoughts.</p><div class="poll-embed" data-attrs="{&quot;id&quot;:30124}" data-component-name="PollToDOM"></div><p class="cta-caption">Thanks for reading fastpapers.ai! Subscribe for free to receive new posts and support my work.</p>]]></content:encoded></item></channel></rss>