<rss xmlns:atom="http://www.w3.org/2005/Atom" version="2.0">
    <channel>
        <title>Data Science - Category - Steven Purcell</title>
        <link>http://stevenpurcell.ninja/categories/data-science/</link>
        <description>Data Science - Category - Steven Purcell</description>
        <generator>Hugo -- gohugo.io</generator><language>en</language><managingEditor>steven.ray.purcell@gmail.com (Steven Purcell)</managingEditor>
            <webMaster>steven.ray.purcell@gmail.com (Steven Purcell)</webMaster><lastBuildDate>Fri, 08 Mar 2024 10:04:29 -0500</lastBuildDate><atom:link href="http://stevenpurcell.ninja/categories/data-science/" rel="self" type="application/rss+xml" /><item>
    <title>Fine Tuning Justice: The Role of Pre-Trained LLMs in Enhancing Federal Investigations and Legal Procedures</title>
    <link>http://stevenpurcell.ninja/posts/fine-tuning-justice/</link>
    <pubDate>Fri, 08 Mar 2024 10:04:29 -0500</pubDate>
    <author>Steven Purcell</author>
    <guid>http://stevenpurcell.ninja/posts/fine-tuning-justice/</guid>
    <description><![CDATA[Introduction Large language models (LLMs) represent the next generation of artificial intelligence applications, attracting widespread attention and adoption. These models demand substantial energy and resources for training. Consequently, there has been a shift towards developing pre-trained models like bidirectional encoder representations from transformers (BERT), which learn millions of parameters from training texts to create general-purpose models. An evolution of this approach involves fine-tuning models with domain-specific data, enhancing their utility in fields such as medicine, law, and science.]]></description>
</item>
<item>
    <title>Optimizing Gradient Boosting Models</title>
    <link>http://stevenpurcell.ninja/posts/optimizing_gradient_boosted_models/</link>
    <pubDate>Sun, 17 Dec 2023 14:56:54 -0500</pubDate>
    <author>Steven Purcell</author>
    <guid>http://stevenpurcell.ninja/posts/optimizing_gradient_boosted_models/</guid>
    <description><![CDATA[Gradient Boosting Models Gradient boosting classifier models are a powerful type of machine learning algorithm that outperforms many other types of classifiers. In simplest terms, gradient boosting algorithms learn from the mistakes they make by optimizing via gradient descent. Because the gradient points in the direction of the steepest increase of a function, a gradient boosting model adjusts its parameters in the opposite direction, so that the loss decreases rapidly over each iteration. Gradient boosting models can be used for classification or regression.]]></description>
</item>
<item>
    <title>Traditional Methods vs AI Methods in Forecasting</title>
    <link>http://stevenpurcell.ninja/posts/tradtional_forecasting_vs_ai/</link>
    <pubDate>Wed, 13 Sep 2023 09:42:06 -0400</pubDate>
    <author>Steven Purcell</author>
    <guid>http://stevenpurcell.ninja/posts/tradtional_forecasting_vs_ai/</guid>
    <description><![CDATA[Traditional Methods vs AI Methods in Forecasting: When Simplicity Outperforms Complexity Forecasting remains an indispensable tool for businesses to make informed decisions about future trends, demands, and opportunities. Accurate forecasts can lead to better resource allocation, inventory management, and strategic planning. When it comes to forecasting, data scientists have many tools at their disposal, ranging from traditional and time-tested methods to advanced artificial intelligence (AI) techniques. While AI methods have gained significant attention in recent years, traditional forecasting methods like Moving Average, Exponential Moving Average, and Linear Regression Forecasting still hold their ground in many real-world scenarios.]]></description>
</item>
</channel>
</rss>
