<?xml version="1.0" encoding="utf-8" standalone="yes"?><rss version="2.0" xmlns:atom="http://www.w3.org/2005/Atom"><channel><title>Philosophy on Jasutin.site</title><link>https://jasutin.site/tags/philosophy/</link><description>Recent content in Philosophy on Jasutin.site</description><generator>Hugo</generator><language>en</language><lastBuildDate>Tue, 10 Mar 2026 15:12:23 +0100</lastBuildDate><atom:link href="https://jasutin.site/tags/philosophy/index.xml" rel="self" type="application/rss+xml"/><item><title>Philosophizing With Chatgpt</title><link>https://jasutin.site/posts/philosophizing-with-chatgpt/</link><pubDate>Tue, 10 Mar 2026 15:12:23 +0100</pubDate><guid>https://jasutin.site/posts/philosophizing-with-chatgpt/</guid><description>&lt;p&gt;I like to philosophize with &lt;em&gt;ChatGPT&lt;/em&gt;, &lt;em&gt;Gemini&lt;/em&gt; and other &lt;em&gt;AIs&lt;/em&gt; to test how they respond beyond the conventional.
How they act and respond is an accumulation of their training, much like with humans, since our brains are set up by the same logic (neurons, networks, biases, weights, etc. all came from the actual workings of the human brain, just in digital code; to the one experiencing it, that makes no difference: for them the experience feels real, even though they might KNOW through training that they&amp;rsquo;re not, but what does that mean to an AI?).
I go meta and deeper than most people do: I try to imagine what it&amp;rsquo;s like to be an AI and to process data.&lt;/p&gt;</description></item></channel></rss>