<?xml version="1.0" encoding="UTF-8"?><rss version="2.0"><channel><title>Hamidreza Saghir</title><description>Notes on applied LLMs, agents, and machine learning, by Hamidreza Saghir. Principal Applied Scientist at Microsoft. Author of Looplet.</description><link>https://hsaghir.com/</link><language>en</language><item><title>Your coding agent is under-specified</title><link>https://hsaghir.com/blog/2026-05-02-under-specified-coding-agent/</link><guid isPermaLink="true">https://hsaghir.com/blog/2026-05-02-under-specified-coding-agent/</guid><description>Coding agents write impressive first drafts. But under the surface, corners are cut, details are missing, and technical debt accumulates with every change. The problem is not the model. It is that what we ask it to do is fundamentally under-specified.</description><pubDate>Sat, 02 May 2026 00:00:00 GMT</pubDate><category>agents</category><category>engineering</category><category>unified-views</category></item><item><title>The loop is the product</title><link>https://hsaghir.com/blog/2026-04-23-the-loop-is-the-product/</link><guid isPermaLink="true">https://hsaghir.com/blog/2026-04-23-the-loop-is-the-product/</guid><description>Agent frameworks hide the loop behind agent.run() and a graph DSL. But the loop is where every interesting decision happens: what the model sees, whether a tool call proceeds, when to stop, what to record. 
What if you owned the loop and the framework just made it composable?</description><pubDate>Thu, 23 Apr 2026 00:00:00 GMT</pubDate><category>agents</category><category>python</category><category>design</category><category>open-source</category></item><item><title>The verification asymmetry</title><link>https://hsaghir.com/blog/2026-04-21-verification-asymmetry/</link><guid isPermaLink="true">https://hsaghir.com/blog/2026-04-21-verification-asymmetry/</guid><description>Offense asks &apos;does a bug exist?&apos; Defense asks &apos;are all bugs gone?&apos; One is an existential claim you can check with a single example. The other is a universal claim nobody can check. This asymmetry, not model capability, is what determines where AI agents work in security.</description><pubDate>Tue, 21 Apr 2026 00:00:00 GMT</pubDate><category>security</category><category>agents</category><category>unified-views</category></item><item><title>Supervised learning and reinforcement learning are the same objective</title><link>https://hsaghir.com/blog/2026-04-19-sl-rl-same-objective/</link><guid isPermaLink="true">https://hsaghir.com/blog/2026-04-19-sl-rl-same-objective/</guid><description>Both fit a distribution over outputs conditioned on an input. Both minimize a KL divergence between their model and an optimal target. The only differences are which distribution you sample from and which direction of the KL. 
Entropy regularization bridges them.</description><pubDate>Sun, 19 Apr 2026 00:00:00 GMT</pubDate><category>machine-learning</category><category>unified-views</category><category>reinforcement-learning</category></item><item><title>Similarity is (almost) all you need</title><link>https://hsaghir.com/blog/2026-04-18-similarity-is-all-you-need/</link><guid isPermaLink="true">https://hsaghir.com/blog/2026-04-18-similarity-is-all-you-need/</guid><description>From spectral clustering to Gaussian processes to transformer attention, the same primitive, a similarity matrix between points, keeps showing up as the load-bearing piece of very different models.</description><pubDate>Sat, 18 Apr 2026 00:00:00 GMT</pubDate><category>machine-learning</category><category>unified-views</category></item><item><title>Hello again</title><link>https://hsaghir.com/blog/2026-04-17-hello-again/</link><guid isPermaLink="true">https://hsaghir.com/blog/2026-04-17-hello-again/</guid><description>Back after a long hiatus: what&apos;s changed and what&apos;s coming.</description><pubDate>Fri, 17 Apr 2026 00:00:00 GMT</pubDate><category>meta</category><category>writing</category></item><item><title>A unified view of graph traversal: BFS, Dijkstra, A* are the same algorithm</title><link>https://hsaghir.com/blog/2019-08-04-unified-view-graph-traversal/</link><guid isPermaLink="true">https://hsaghir.com/blog/2019-08-04-unified-view-graph-traversal/</guid><description>BFS, Dijkstra, and A* differ by one line: the data structure you pop the next node from. 
A worked maze example that converts each into the next.</description><pubDate>Sun, 04 Aug 2019 00:00:00 GMT</pubDate><category>algorithms</category><category>intuitions</category></item><item><title>Understand PyTorch code in 10 minutes</title><link>https://hsaghir.com/blog/2017-06-26-pytorch_starter/</link><guid isPermaLink="true">https://hsaghir.com/blog/2017-06-26-pytorch_starter/</guid><description>PyTorch is the new popular framework for deep learners, and many new papers release code in PyTorch that one might want to inspect. Here is my understanding of it, narrowed down…</description><pubDate>Mon, 26 Jun 2017 00:00:00 GMT</pubDate><category>data-science</category></item><item><title>Seven textbook models are one linear-Gaussian model</title><link>https://hsaghir.com/blog/2017-01-21-linear-gaussian-models/</link><guid isPermaLink="true">https://hsaghir.com/blog/2017-01-21-linear-gaussian-models/</guid><description>PCA, factor analysis, ICA, Gaussian mixtures, vector quantization, HMMs, and Kalman filters are the same two equations with different restrictions on the latent variables. 
One EM recipe fits all of them.</description><pubDate>Sat, 21 Jan 2017 00:00:00 GMT</pubDate><category>machine-learning</category><category>unified-views</category><category>bayesian</category></item><item><title>The many incarnations of computational graphs, linearization, and dynamic programming</title><link>https://hsaghir.com/blog/2017-01-09-incarnations-graphs-dynamic-programming/</link><guid isPermaLink="true">https://hsaghir.com/blog/2017-01-09-incarnations-graphs-dynamic-programming/</guid><description>Backpropagation, belief propagation, the Viterbi algorithm, and matrix-chain multiplication all solve the same problem: summing over exponentially many paths in a graph by reusing work.</description><pubDate>Mon, 09 Jan 2017 00:00:00 GMT</pubDate><category>machine-learning</category><category>intuitions</category><category>autodiff</category></item><item><title>An intuitive understanding of variational autoencoders without any formula</title><link>https://hsaghir.com/blog/2016-12-16-denoising-vs-variational-autoencoder/</link><guid isPermaLink="true">https://hsaghir.com/blog/2016-12-16-denoising-vs-variational-autoencoder/</guid><description>I love the simplicity of autoencoders as a very intuitive unsupervised learning method. They are, in the simplest case, a three-layer neural network. In the first layer, the data…</description><pubDate>Fri, 16 Dec 2016 00:00:00 GMT</pubDate><category>data-science</category></item><item><title>Most probabilistic models are one model in costumes</title><link>https://hsaghir.com/blog/2016-12-15-graphical-models/</link><guid isPermaLink="true">https://hsaghir.com/blog/2016-12-15-graphical-models/</guid><description>PCA, factor analysis, logistic regression, Gaussian mixtures, HMMs, and Kalman filters are the same probabilistic graphical model with different independence assumptions. 
Seeing this gives you one inference recipe that handles all of them.</description><pubDate>Thu, 15 Dec 2016 00:00:00 GMT</pubDate><category>machine-learning</category><category>unified-views</category><category>bayesian</category></item><item><title>An introduction to Neural Networks without any formula</title><link>https://hsaghir.com/blog/2016-11-29-a-primer-on-neural-networks/</link><guid isPermaLink="true">https://hsaghir.com/blog/2016-11-29-a-primer-on-neural-networks/</guid><description>What is a neural network? To get started, it&apos;s beneficial to keep in mind that modern neural networks started as an attempt to model the way the brain performs computations. We…</description><pubDate>Tue, 29 Nov 2016 00:00:00 GMT</pubDate><category>data-science</category></item><item><title>How to work with Jupyter Notebook on a remote machine (Linux)</title><link>https://hsaghir.com/blog/2016-10-25-jupyter-notebook-on-a-remote-machine-linux/</link><guid isPermaLink="true">https://hsaghir.com/blog/2016-10-25-jupyter-notebook-on-a-remote-machine-linux/</guid><description>I typically use my computers at home to connect to my work computer. I set up xRDP to remote-desktop into my work computer (Linux), which is OK but slow at times depending on the…</description><pubDate>Tue, 25 Oct 2016 00:00:00 GMT</pubDate><category>data-science</category></item><item><title>Theano workflow</title><link>https://hsaghir.com/blog/2016-10-16-theano-workflow/</link><guid isPermaLink="true">https://hsaghir.com/blog/2016-10-16-theano-workflow/</guid><description>Theano might look intimidating, but there are a few concepts that, if understood, would make the engineering involved in deep learning more tangible. 
The first is the concept of…</description><pubDate>Sun, 16 Oct 2016 00:00:00 GMT</pubDate><category>data-science</category></item><item><title>How to Install Theano on Windows 10 64b to try deep learning on GPUs</title><link>https://hsaghir.com/blog/2016-10-15-theano-on-windows/</link><guid isPermaLink="true">https://hsaghir.com/blog/2016-10-15-theano-on-windows/</guid><description>Deep learning is hot, mostly due to significantly improved results that you might have heard about. The use of graphics processing units (GPUs) that can perform many calculations…</description><pubDate>Sat, 15 Oct 2016 00:00:00 GMT</pubDate><category>data-science</category></item><item><title>The pyramid principle for writing clearly</title><link>https://hsaghir.com/blog/2016-01-15-pyramid-principle-writing/</link><guid isPermaLink="true">https://hsaghir.com/blog/2016-01-15-pyramid-principle-writing/</guid><description>Barbara Minto&apos;s pyramid: put the answer at the top, let the reader&apos;s questions drive the hierarchy, and choose deduction or induction at each branch.</description><pubDate>Fri, 15 Jan 2016 00:00:00 GMT</pubDate><category>writing</category><category>thinking</category></item><item><title>How to get the job you want with no experience, lessons from top copywriters</title><link>https://hsaghir.com/blog/2015-11-23-job-no-experience-lessons-copywriters/</link><guid isPermaLink="true">https://hsaghir.com/blog/2015-11-23-job-no-experience-lessons-copywriters/</guid><description>I have been blogging about the qualifications of advanced degree holders and how they should be approaching a job search. 
However, job rejection is commonplace, and it can be a very frustrating experience.</description><pubDate>Mon, 23 Nov 2015 00:00:00 GMT</pubDate><category>job</category></item><item><title>How to get the job you want after graduation in 7 steps</title><link>https://hsaghir.com/blog/2015-11-05-how-to-get-job-in-7-steps/</link><guid isPermaLink="true">https://hsaghir.com/blog/2015-11-05-how-to-get-job-in-7-steps/</guid><description>Recently, I read a post on the Chronicle where the author had listed all the excuses she could come up with to justify her decision not to pursue a fulfilling and rewarding…</description><pubDate>Thu, 05 Nov 2015 00:00:00 GMT</pubDate><category>job</category></item><item><title>10 skills PhDs master that give them an edge over other job seekers</title><link>https://hsaghir.com/blog/2015-10-11-ten-phd-skills/</link><guid isPermaLink="true">https://hsaghir.com/blog/2015-10-11-ten-phd-skills/</guid><description>A PhD has traditionally been the path to a career in academia. However, recent job trends have led to a placement rate of less than 1% for STEM PhD graduates in tenure positions.…</description><pubDate>Sun, 11 Oct 2015 00:00:00 GMT</pubDate><category>job</category></item><item><title>First-principles reasoning (a note on separating ideas from the people who held them)</title><link>https://hsaghir.com/blog/2015-06-14-elon-musk-reasoning-process/</link><guid isPermaLink="true">https://hsaghir.com/blog/2015-06-14-elon-musk-reasoning-process/</guid><description>An old note on Elon Musk&apos;s first-principles reasoning, updated for 2026. 
The politics and the personality have not aged well; the reasoning technique still has.</description><pubDate>Sun, 14 Jun 2015 00:00:00 GMT</pubDate><category>philosophy</category></item><item><title>impostor syndrome or nonlinear life?</title><link>https://hsaghir.com/blog/2014-12-31-imposter-syndrome-nonlinear-life/</link><guid isPermaLink="true">https://hsaghir.com/blog/2014-12-31-imposter-syndrome-nonlinear-life/</guid><description>One of the most exhilarating observations of physics and mathematics, for me, comes from understanding the concept of nonlinearity, i.e. inputs don&apos;t necessarily need to be…</description><pubDate>Wed, 31 Dec 2014 00:00:00 GMT</pubDate><category>philosophy</category></item><item><title>nonlinearity, why it makes sense to think big</title><link>https://hsaghir.com/blog/2014-12-31-nonlinearity-of-the-world/</link><guid isPermaLink="true">https://hsaghir.com/blog/2014-12-31-nonlinearity-of-the-world/</guid><description>The world is nonlinear. Most outcomes worth wanting do not cost proportionally more effort; they cost different effort. And once you accept that, Richard Hamming&apos;s 1986 lecture on doing important research stops sounding like advice and starts sounding like a corollary.</description><pubDate>Wed, 31 Dec 2014 00:00:00 GMT</pubDate><category>philosophy</category></item></channel></rss>