<rss xmlns:atom="http://www.w3.org/2005/Atom" version="2.0">
    <channel>
        <title>Development - Category - Steven Purcell</title>
        <link>http://stevenpurcell.ninja/categories/development/</link>
        <description>Development - Category - Steven Purcell</description>
        <generator>Hugo -- gohugo.io</generator><language>en</language><managingEditor>steven.ray.purcell@gmail.com (Steven Purcell)</managingEditor>
            <webMaster>steven.ray.purcell@gmail.com (Steven Purcell)</webMaster><lastBuildDate>Fri, 08 Mar 2024 10:04:29 -0500</lastBuildDate><atom:link href="http://stevenpurcell.ninja/categories/development/" rel="self" type="application/rss+xml" /><item>
    <title>Fine Tuning Justice: The Role of Pre-Trained LLMs in Enhancing Federal Investigations and Legal Procedures</title>
    <link>http://stevenpurcell.ninja/posts/fine-tuning-justice/</link>
    <pubDate>Fri, 08 Mar 2024 10:04:29 -0500</pubDate>
    <author>Steven Purcell</author>
    <guid>http://stevenpurcell.ninja/posts/fine-tuning-justice/</guid>
    <description><![CDATA[Introduction Large language models (LLMs) represent the next generation of artificial intelligence applications, attracting widespread attention and adoption. These models demand substantial energy and resources to train. Consequently, the field has shifted towards pre-trained models such as Bidirectional Encoder Representations from Transformers (BERT), which learn millions of parameters from large text corpora to produce general-purpose models. An evolution of this approach involves fine-tuning such models on domain-specific data, enhancing their utility in fields such as medicine, law, and science.]]></description>
</item>
</channel>
</rss>
