<?xml version="1.0" encoding="utf-8"?><feed xmlns="http://www.w3.org/2005/Atom" ><generator uri="https://jekyllrb.com/" version="4.2.2">Jekyll</generator><link href="https://www.basnieuwenhuizen.nl/feed.xml" rel="self" type="application/atom+xml" /><link href="https://www.basnieuwenhuizen.nl/" rel="alternate" type="text/html" /><updated>2022-04-26T01:58:13+02:00</updated><id>https://www.basnieuwenhuizen.nl/feed.xml</id><title type="html">Bas Nieuwenhuizen</title><subtitle>Open Source GPU Drivers</subtitle><entry><title type="html">A driver on the GPU</title><link href="https://www.basnieuwenhuizen.nl/a-driver-on-the-gpu/" rel="alternate" type="text/html" title="A driver on the GPU" /><published>2022-04-25T00:00:00+02:00</published><updated>2022-04-25T00:00:00+02:00</updated><id>https://www.basnieuwenhuizen.nl/a-driver-on-the-gpu</id><content type="html" xml:base="https://www.basnieuwenhuizen.nl/a-driver-on-the-gpu/"><![CDATA[<p>The title might be a bit hyperbolic here, but we’re indeed exploring a first step in that direction with radv. The impetus here is the <code class="language-plaintext highlighter-rouge">ExecuteIndirect</code> command in Direct3D 12 and some games that are using it in non-trivial ways.  (e.g. Halo Infinite)</p>

<p><code class="language-plaintext highlighter-rouge">ExecuteIndirect</code> can be seen as an extension of what we have in Vulkan with <code class="language-plaintext highlighter-rouge">vkCmdDrawIndirectCount</code>. It adds <a href="https://docs.microsoft.com/en-us/windows/win32/direct3d12/indirect-drawing">extra capabilities</a>. To support that with vkd3d-proton we need the following indirect Vulkan capabilities:</p>

<ol>
  <li>Binding vertex buffers.</li>
  <li>Binding index buffers.</li>
  <li>Updating push constants.</li>
</ol>

<p>This functionality happens to be a subset of <code class="language-plaintext highlighter-rouge">VK_NV_device_generated_commands</code> and hence I’ve been working on implementing a subset of that extension on radv. Unfortunately, we can’t really give the firmware a “extended indirect draw call” and execute stuff, so we’re stuck generating command buffers on the GPU.</p>

<p>The way the extension works, the application specifies a command “signature” on the CPU, which specifies that for each draw call the application is going to update A, B and C. Then, at runtime, the application supplies a buffer containing the data for A, B and C for each draw call. The driver processes that into a command buffer and then executes it as if it were a secondary command buffer.</p>

<p>The workflow is then as follows:</p>

<ol>
  <li>The application (or vkd3d-proton) provides the command signature to the driver which creates an object out of it.</li>
  <li>The application queries how big a command buffer (“preprocess buffer”) of $n$ draws with that signature would be.</li>
  <li>The application allocates the preprocess buffer.</li>
  <li>The application does its stuff to generate some commands.</li>
  <li>The application calls <code class="language-plaintext highlighter-rouge">vkCmdPreprocessGeneratedCommandsNV</code> which converts the application buffer into a command buffer (in the preprocess buffer)</li>
  <li>The application calls <code class="language-plaintext highlighter-rouge">vkCmdExecuteGeneratedCommandsNV</code> to execute the generated command buffer.</li>
</ol>
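<p>In data terms, the sizing query in step 2 is simple because radv uses a fixed stride per draw (a command-stream portion plus an upload portion, as described later). Below is a Python sketch of that bookkeeping; the class, field names and stride values are all made up for illustration:</p>

```python
# Toy model of the preprocess-buffer sizing in the workflow above.
# Strides are hypothetical; the real ones come from the command
# signature and the hardware packet sizes.

class CommandSignature:
    def __init__(self, cmd_stride, upload_stride):
        self.cmd_stride = cmd_stride        # bytes of GPU commands per draw
        self.upload_stride = upload_stride  # bytes of vertex/push data per draw

def preprocess_buffer_size(sig, num_draws):
    # Step 2: a command region and an upload region, each with a
    # fixed per-draw stride, laid out back to back.
    return num_draws * (sig.cmd_stride + sig.upload_stride)

# Steps 3-6 reduced to bookkeeping: the application allocates the buffer,
# the preprocess step fills it, the execute step runs it.
sig = CommandSignature(cmd_stride=64, upload_stride=32)
preprocess_buffer = bytearray(preprocess_buffer_size(sig, 100))
```

The fixed stride is what makes the size query answerable up front: it depends only on the signature and the draw count, not on the application's data.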

<h1 id="what-goes-into-a-draw-in-radv">What goes into a draw in radv</h1>

<p>When the application triggers a draw command in Vulkan, the driver generates GPU commands to do the following:</p>

<ol>
  <li>Flush caches if needed</li>
  <li>Set some registers.</li>
  <li>Trigger the draw.</li>
</ol>

<p>Of course we skip any of these steps (or parts of them) when they’re redundant. The majority of the complexity is in the register state we have to set. There are multiple parts here:</p>

<ol>
  <li>
    <p>Fixed function state:</p>

    <ol>
      <li>subpass attachments</li>
      <li>static/dynamic state (viewports, scissors, etc.)</li>
      <li>index buffers</li>
      <li>some derived state from the shaders (some tessellation stuff, fragment shader export types, varyings, etc.)</li>
    </ol>
  </li>
  <li>shaders (start address, number of registers, builtins used)</li>
  <li>user SGPRs (i.e. registers that are available at the start of a shader invocation)</li>
</ol>
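<p>The “skip when redundant” part is usually implemented with register shadowing: the driver tracks the last value written to each register and drops writes that would be no-ops. A hypothetical Python sketch of the idea (not radv’s actual code):</p>

```python
# Hypothetical sketch of register shadowing: only emit a "set register"
# command when the value differs from what the GPU already has.
class RegisterShadow:
    def __init__(self):
        self.shadow = {}   # register offset -> last emitted value
        self.emitted = []  # stand-in for the command stream

    def set_reg(self, offset, value):
        if self.shadow.get(offset) == value:
            return  # redundant write, skip it
        self.shadow[offset] = value
        self.emitted.append((offset, value))

cs = RegisterShadow()
cs.set_reg(0x100, 7)
cs.set_reg(0x100, 7)  # skipped: same value as before
cs.set_reg(0x100, 8)
```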

<p>Overall, most of the pipeline state is fairly easy to emit: we just precompute it on pipeline creation and <code class="language-plaintext highlighter-rouge">memcpy</code> it over if we switch shaders. The most difficult part is probably the user SGPRs, because they are derived from a lot of the remaining API state. Note that the list above doesn’t include push constants, descriptor sets or vertex buffers. The driver computes all of these, and generates the user SGPR data from that.</p>

<p>Descriptor sets in radv are just a piece of GPU memory, and radv binds a descriptor set by providing the shader with a pointer to that GPU memory in a user SGPR. Similarly, we have no hardware support for vertex buffers, so radv generates a push descriptor set containing internal texel buffers and then provides a user SGPR with a pointer to that descriptor set.</p>

<p>For push constants, radv has two modes: a portion of the data can be passed in user SGPRs directly, but sometimes a chunk of memory gets allocated and then a pointer to that memory is provided in a user SGPR. This fallback exists because the hardware doesn’t always have enough user SGPRs to fit all the data.</p>

<p>On Vega and later there are 32 user SGPRs, and on earlier GCN GPUs there are 16. This needs to fit pointers to all the referenced descriptor sets (including internal ones like the one for vertex buffers), push constants, builtins like the start vertex and start instance etc. To get the best performance here, radv determines a mapping of API object to user SGPR at shader compile time and then at draw time radv uses that mapping to write user SGPRs.</p>

<p>This results in some interesting behavior: for example, switching pipelines causes the driver to update all the user SGPRs, because the mapping might have changed.</p>
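<p>The compile-time mapping can be pictured as a simple first-fit assignment over the SGPR budget, with anything that does not fit going to memory behind a pointer. This is a hypothetical Python sketch, not radv’s actual algorithm:</p>

```python
# Hypothetical sketch of compile-time user SGPR assignment: each item
# gets consecutive SGPR slots first-fit, and anything that does not fit
# is spilled to memory reached through a single pointer SGPR.
MAX_USER_SGPRS = 32  # Vega and later; earlier GCN GPUs have 16

def assign_user_sgprs(items, budget=MAX_USER_SGPRS):
    mapping, used, spilled = {}, 0, []
    for name, size in items:  # size in SGPRs; a 32-bit pointer takes 1
        if used + size <= budget:
            mapping[name] = used
            used += size
        else:
            spilled.append(name)
    if spilled:
        # assumes one SGPR was kept free for the spill pointer
        mapping["spill_ptr"] = used
    return mapping, spilled
```

At draw time the driver only has to walk this precomputed mapping and write the corresponding SGPRs, which is why the mapping is decided once at shader compile time.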

<p>Furthermore, as an interesting performance hack radv allocates all upload buffers (for the push constants and push descriptor sets), shaders and descriptor pools in a single 4 GiB region of memory so that we can pass only the bottom 32 bits of these pointers in a user SGPR, stretching the limited number of user SGPRs further. We will see later how that makes things difficult for us.</p>
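<p>The pointer trick itself is just masking: since everything lives in one 4 GiB window, the high 32 bits are a constant that is known up front, and only the low 32 bits need to travel in an SGPR. A small sketch, with a made-up base address:</p>

```python
# Sketch of the 4 GiB window trick: because everything radv hands out a
# pointer to lives in one 4 GiB-aligned region, the high 32 bits are a
# known constant and only the low 32 bits go into a user SGPR.
REGION_BASE = 0x0000_1234_0000_0000  # hypothetical 4 GiB-aligned base

def compress_ptr(va):
    assert REGION_BASE <= va < REGION_BASE + (1 << 32)
    return va & 0xFFFF_FFFF  # fits one 32-bit user SGPR

def expand_ptr(lo32):
    return REGION_BASE | lo32  # base is 4 GiB aligned, so OR suffices
```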

<h1 id="generating-a-commandbuffer-on-the-gpus">Generating a command buffer on the GPU</h1>

<p>As shown above radv has a bunch of complexity around state for draw calls, and if we start generating command buffers on the GPU that risks copying a significant part of that complexity to a shader. Luckily <code class="language-plaintext highlighter-rouge">ExecuteIndirect</code> and <code class="language-plaintext highlighter-rouge">VK_NV_device_generated_commands</code> have some limitations that make this easier. The app can only change:</p>

<ol>
  <li>vertex buffers</li>
  <li>index buffers</li>
  <li>push constants</li>
</ol>

<p><code class="language-plaintext highlighter-rouge">VK_NV_device_generated_commands</code> also allows changing shaders and the winding order that determines which side of a primitive is the backface, but we’ve chosen to ignore that for now since it isn’t needed for <code class="language-plaintext highlighter-rouge">ExecuteIndirect</code> (though the shader switching in particular could be useful for an application).</p>

<p>The second curveball is that the buffer the application provides has to contain the same set of data for every draw call. This avoids having to do a lot of serial processing to figure out what the previous state was, which allows processing every draw command in a separate shader invocation. Unfortunately we’re still a bit dependent on the old state that is bound before the indirect command buffer execution:</p>

<ol>
  <li>The previously bound index buffer</li>
  <li>Previously bound vertex buffers.</li>
  <li>Previously bound push constants.</li>
</ol>

<p>Remember that for vertex buffers and push constants we may put them in a piece of memory. That piece of memory needs to contain all the vertex buffers/push constants for that draw call, so even if we modify only one of them, we have to copy the rest over. The index buffer is different: in the draw packets for the GPU there is a field that is derived from the index buffer size.</p>

<p>So in <code class="language-plaintext highlighter-rouge">vkCmdPreprocessGeneratedCommandsNV</code> radv partitions the preprocess buffer into a command buffer and an upload buffer (for the vertex buffers &amp; push constants), both with a fixed stride based on the command signature. Then it launches a shader which processes a draw call in each invocation:</p>

<div class="language-plaintext highlighter-rouge"><div class="highlight"><pre class="highlight"><code>   if (shader used vertex buffers &amp;&amp; we change a vertex buffer) {
      copy all vertex buffers 
      update the changed vertex buffers
      emit a new vertex descriptor set pointer
   }
   if (we change a push constant) {
      if (we change a push constant in memory) {
         copy all push constants
         update changed push constants
         emit a new push constant pointer
      }
      emit all changed inline push constants into user SGPRs
   }
   if (we change the index buffer) {
      emit new index buffers
   }
   emit a draw command
   insert NOPs up to the stride
</code></pre></div></div>

<p>In <code class="language-plaintext highlighter-rouge">vkCmdExecuteGeneratedCommandsNV</code> radv uses the internal equivalent of <code class="language-plaintext highlighter-rouge">vkCmdExecuteCommands</code> to execute the generated command buffer as if it were a secondary command buffer.</p>

<h1 id="challenges">Challenges</h1>

<p>Of course one does not simply move part of the driver to GPU shaders without any challenges. In fact we have a whole bunch of them. Some of them just need a bunch of work to solve, some need some extension specification tweaking and some are hard to solve without significant tradeoffs.</p>

<h2 id="code-maintainability">Code maintainability</h2>

<p>A big problem is that the code needed for the limited subset of state that is supported is now in 3 places:</p>

<ol>
  <li>The traditional CPU path</li>
  <li>For determining how large the preprocess buffer needs to be</li>
  <li>For the shader called in <code class="language-plaintext highlighter-rouge">vkCmdPreprocessGeneratedCommandsNV</code> to build the preprocess buffer.</li>
</ol>

<p>Having the same functionality in multiple places is a recipe for things going out of sync. This makes it harder to change this code and much easier for bugs to sneak in. This can be mitigated with a lot of testing, but testing a bunch of GPU work gets complicated quickly (e.g. a preprocess buffer that is larger than needed still produces correct results, and getting a second opinion from a shader to check the size adds significant complexity).</p>

<h2 id="nir_builder-gets-old-quickly"><code class="language-plaintext highlighter-rouge">nir_builder</code> gets old quickly</h2>

<p>In the driver at the moment we have no good high level shader compiler. As a result a lot of the internal helper shaders are written using the <code class="language-plaintext highlighter-rouge">nir_builder</code> helper to generate <code class="language-plaintext highlighter-rouge">nir</code>, the intermediate IR of the shader compiler. Example fragment:</p>

<div class="language-plaintext highlighter-rouge"><div class="highlight"><pre class="highlight"><code>   nir_push_loop(b);
   {
      nir_ssa_def *curr_offset = nir_load_var(b, offset);

      nir_push_if(b, nir_ieq(b, curr_offset, cmd_buf_size));
      {
         nir_jump(b, nir_jump_break);
      }
      nir_pop_if(b, NULL);

      nir_ssa_def *packet_size = nir_isub(b, cmd_buf_size, curr_offset);
      packet_size = nir_umin(b, packet_size, nir_imm_int(b, 0x3ffc * 4));

      nir_ssa_def *len = nir_ushr_imm(b, packet_size, 2);
      len = nir_iadd_imm(b, len, -2);
      nir_ssa_def *packet = nir_pkt3(b, PKT3_NOP, len);

      nir_store_ssbo(b, packet, dst_buf, curr_offset, .write_mask = 0x1,
                     .access = ACCESS_NON_READABLE, .align_mul = 4);
      nir_store_var(b, offset, nir_iadd(b, curr_offset, packet_size), 0x1);
   }
   nir_pop_loop(b, NULL);
</code></pre></div></div>

<p>It is clear that this all gets very verbose very quickly. This is somewhat fine as long as all the internal shaders are tiny. However, between this and raytracing our internal shaders are getting significantly bigger and the verbosity really becomes a problem.</p>
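<p>For contrast, the same NOP-padding logic reads much more naturally in a higher-level language. A Python sketch of the packet math above (the header layout follows the PM4 PKT3 encoding — type 3 in the top bits, a dword count minus two, and the opcode — but treat the exact constants as illustrative):</p>

```python
# Python version of the NOP-padding loop above: fill the remainder of a
# command buffer with NOP packets, each covering at most 0x3ffc dwords.
PKT3_NOP = 0x10  # NOP opcode in the PM4 packet format

def pkt3(op, count):
    # PKT3 header: type 3 in bits 31:30, count in bits 29:16, op in 15:8
    return (3 << 30) | ((count & 0x3FFF) << 16) | (op << 8)

def pad_with_nops(offset, cmd_buf_size):
    headers = []
    while offset != cmd_buf_size:
        packet_size = min(cmd_buf_size - offset, 0x3FFC * 4)
        # count field holds (total packet dwords - 2), as in the NIR code
        headers.append((offset, pkt3(PKT3_NOP, (packet_size >> 2) - 2)))
        offset += packet_size
    return headers
```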

<p>Interesting things to explore here are to use glslang, or even to try writing our shaders in OpenCL C and then compiling it to SPIR-V at build time. The challenge there is that radv is built on a diverse set of platforms (including Windows, Android and desktop Linux) which can make significant dependencies a struggle.</p>

<h2 id="preprocessing">Preprocessing</h2>

<p>Ideally your GPU work is very suitable for pipelining to avoid synchronization cost on the GPU. If we generate the command buffer and then execute it we need a full GPU sync point in between, which can get very expensive as it waits until the GPU is idle. To avoid this <code class="language-plaintext highlighter-rouge">VK_NV_device_generated_commands</code> added the separate <code class="language-plaintext highlighter-rouge">vkCmdPreprocessGeneratedCommandsNV</code> command, so that the application can batch up a bunch of work before incurring the cost of a sync point.</p>

<p>However, in radv we have to do the command buffer generation in <code class="language-plaintext highlighter-rouge">vkCmdExecuteGeneratedCommandsNV</code> as our command buffer generation depends on some of the other state that is bound, but might not be bound yet when the application calls <code class="language-plaintext highlighter-rouge">vkCmdPreprocessGeneratedCommandsNV</code>.</p>

<p>Which brings up a slight spec problem: the extension specification doesn’t specify whether the application is allowed to execute <code class="language-plaintext highlighter-rouge">vkCmdExecuteGeneratedCommandsNV</code> on multiple queues concurrently with the same preprocess buffer. If all writes to the preprocess buffer happen in <code class="language-plaintext highlighter-rouge">vkCmdPreprocessGeneratedCommandsNV</code> that would result in correct behavior, but if the writing happens in <code class="language-plaintext highlighter-rouge">vkCmdExecuteGeneratedCommandsNV</code> this results in a race condition.</p>

<h2 id="the-32-bit-pointers">The 32-bit pointers</h2>

<p>Remember that radv only passes the bottom 32-bits of some pointers around. As a result the application needs to allocate the preprocess buffer in that 4-GiB range. This in itself is easy: just add a new memory type and require it for this usage. However, the devil is in the details.</p>

<p>For example, what should we do for memory budget queries? That is per memory heap, not memory type. However, a new memory heap does not make sense, as the memory is also still subject to physical availability of VRAM, not only address space.</p>

<p>Furthermore, this 4-GiB region is more constrained than other memory, so it would be a shame if applications started allocating random stuff in it. If we look at the existing usage for a pretty heavy game (HZD) we get roughly:</p>

<ol>
  <li>40 MiB of command buffers + upload buffers</li>
  <li>200 MiB of descriptor pools</li>
  <li>400 MiB of shaders</li>
</ol>

<p>So typically we have a lot of room available. Ideally the ordering of memory types would get an application to prefer another memory type when we do not need this special region. However, memory object caching poses a big risk here: would you choose a memory object in the cache that you can reuse/suballocate (potentially in that limited region), or allocate anew from a “better” memory type?</p>

<p>Luckily we have not seen that risk play out, but the only real tested user at this point has been vkd3d-proton.</p>

<h2 id="secondary-command-buffers">Secondary command buffers</h2>

<p>When executing the generated command buffer radv does so in the same way as calling a secondary command buffer. This has a significant limitation: a secondary command buffer cannot call a secondary command buffer on the hardware. As a result the current implementation has a problem if <code class="language-plaintext highlighter-rouge">vkCmdExecuteGeneratedCommandsNV</code> gets called on a secondary command buffer.</p>

<p>It is possible to work around this. An example would be to split the secondary command buffer into 3 parts: pre, generated, post. However, that needs a bunch of refactoring to allow multiple internal command buffers per API command buffer.</p>
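<p>One way to picture that refactoring: an API command buffer becomes a list of internal command buffers, and executing generated commands just closes the current internal buffer and splices the generated one in between. A hypothetical Python sketch of the data structure, nothing more:</p>

```python
# Hypothetical sketch: one API command buffer backed by several internal
# command buffers, so a generated buffer can be spliced in between them
# instead of being called from a secondary (which the hardware forbids).
class ApiCmdBuffer:
    def __init__(self):
        self.internal = [[]]  # ordered list of internal command buffers

    def record(self, cmd):
        self.internal[-1].append(cmd)

    def execute_generated(self, generated):
        # close the current internal buffer, splice in the generated one,
        # and continue recording into a fresh internal buffer
        self.internal.append(generated)
        self.internal.append([])

cb = ApiCmdBuffer()
cb.record("draw A")
cb.execute_generated(["generated draws"])
cb.record("draw B")
```

At submit time the internal buffers would simply be executed in order, so no nesting is ever required.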

<h1 id="where-to-go-next">Where to go next</h1>

<p>Don’t expect this to land upstream very quickly. The main reason for exploring this in radv is <code class="language-plaintext highlighter-rouge">ExecuteIndirect</code> support for Halo Infinite, and after some recent updates we’re back into GPU hang limbo with radv/vkd3d-proton there. So while we’re solving that I’m holding off on upstreaming in case the hangs are caused by the implementation of this extension.</p>

<p>Furthermore, this is only a partial implementation of the extension anyways, with a fair number of limitations that we’d ideally eliminate before fully exposing this extension.</p>]]></content><author><name></name></author><summary type="html"><![CDATA[The title might be a bit hyperbolic here, but we’re indeed exploring a first step in that direction with radv. The impetus here is the ExecuteIndirect command in Direct3D 12 and some games that are using it in non-trivial ways. (e.g. Halo Infinite)]]></summary></entry><entry><title type="html">Raytracing Starting to Come Together</title><link href="https://www.basnieuwenhuizen.nl/raytracing-starting-to-come-together/" rel="alternate" type="text/html" title="Raytracing Starting to Come Together" /><published>2021-09-17T00:00:00+02:00</published><updated>2021-09-17T00:00:00+02:00</updated><id>https://www.basnieuwenhuizen.nl/raytracing-starting-to-come-together</id><content type="html" xml:base="https://www.basnieuwenhuizen.nl/raytracing-starting-to-come-together/"><![CDATA[<p>I am back with another status update on raytracing in RADV. And the good news is that things are finally starting to come together. After ~9 months of on and off work we’re now having games working with raytracing. Working on first try after getting all the required functionality was Control:</p>

<p><img src="/assets/control.jpg" alt="Control with raytracing on RADV" /></p>

<p>After poking for a long time at CTS and demos it is really nice to see the fruits of one’s work.</p>

<p>The piece that I added recently was copy/compaction and serialization of acceleration structures, which meant a bunch of shader writing, handling another type of query, and dealing with indirect dispatches. (Since of course the API doesn’t give the input size on the CPU. No idea how this API should be secured …)</p>

<h3 id="what-games">What games?</h3>

<p>I did try 5 games:</p>

<ol>
  <li>Quake 2 RTX (Vulkan): works. This was working already on my previous update.</li>
  <li>Control (D3D): works. Pretty much just works. Runs at maybe 30-50% of RT performance on Windows.</li>
  <li>Metro Exodus (Vulkan): works. Needs <a href="https://gitlab.freedesktop.org/mesa/mesa/-/issues/5326">one workaround</a> and is very finicky in WSI but otherwise works fine. Runs at 20-25% of RT performance on Windows.</li>
  <li>Ghostrunner (D3D): Does not work. This really needs per shadergroup compilation instead of just mashing all the shaders together as I get shaders now with 1 million NIR instructions, which is a pain to debug.</li>
  <li>Doom Eternal (Vulkan): Does not work. The raytracing option in the menu stays grayed out and at this point I’m at a loss what is required to make the game allow enabling RT.</li>
</ol>

<p>If anybody could tell me how to get Doom Eternal to allow RT I’d appreciate it.</p>

<h2 id="what-is-next">What is next?</h2>

<p>Of course the support is far from done. Some things to still make progress on:</p>

<ol>
  <li>Upstreaming what I have. Samuel has been busy reviewing my MRs and I think there is a good chance that what I have now will make it into 21.3.</li>
  <li>Improve the pipeline compilation model to hopefully make Ghostrunner work.</li>
  <li>Improved BVH building. The current BVH is really naive, which is likely one of the big performance factors.</li>
  <li>Improve traversal.</li>
  <li>Move on to stuff needed for DXR 1.1 like VK_KHR_ray_query.</li>
</ol>

<p>P.S. If you haven’t seen it yet, Jason Ekstrand from Intel recently gave a talk about <a href="https://youtu.be/jIkwotwX-T4?t=11022">how Intel implements raytracing</a>. Nice showcase of how you can provide some more involved hardware implementation than RDNA2 does.</p>]]></content><author><name></name></author><summary type="html"><![CDATA[I am back with another status update on raytracing in RADV. And the good news is that things are finally starting to come together. After ~9 months of on and off work we’re now having games working with raytracing. Working on first try after getting all the required functionality was Control:]]></summary></entry><entry><title type="html">World’s Slowest Raytracer</title><link href="https://www.basnieuwenhuizen.nl/worlds-slowest-raytracer/" rel="alternate" type="text/html" title="World’s Slowest Raytracer" /><published>2021-07-27T00:00:00+02:00</published><updated>2021-07-27T00:00:00+02:00</updated><id>https://www.basnieuwenhuizen.nl/worlds-slowest-raytracer</id><content type="html" xml:base="https://www.basnieuwenhuizen.nl/worlds-slowest-raytracer/"><![CDATA[<p>I have not talked about raytracing in RADV for a while, but after <del>some procrastination</del> being focused on some other things I recently got back to it and achieved my next milestone.</p>

<p>In particular I have been hacking away at CTS and got to a point where CTS on <code class="language-plaintext highlighter-rouge">dEQP-VK.ray_tracing.*</code> runs to completion without crashes or hangs. Furthermore, I got the passrate to 90% of non-skipped tests. So we’re finally getting somewhere close to usable.</p>

<p>As a further sign that it is usable, my fixes for CTS also fixed the corruption issues in Quake 2 RTX (GitHub version), delivering this image:</p>

<p><img src="/assets/q2rtx.jpg" alt="Q2RTX on RADV" /></p>

<p>Of course not everything is perfect yet. Besides the not 100% CTS passrate it has like half the Windows performance at 4k right now and we still have some feature gaps to make it really usable for most games.</p>

<h1 id="why-is-it-slow">Why is it slow?</h1>

<p>TL;DR Because I haven’t optimized it yet and took every shortcut imaginable.</p>

<h2 id="amd-raytracing-primer">AMD raytracing primer</h2>

<p>Raytracing with Vulkan works with two steps:</p>

<ol>
  <li>You build a giant acceleration structure that contains all your geometry. This usually ends up being some kind of tree, typically a Bounding Volume Hierarchy (BVH).</li>
  <li>Then you trace rays using some traversal shader through the acceleration structure you just built.</li>
</ol>

<p>With RDNA2 AMD started accelerating this by adding an instruction that does intersection tests between a ray and a single BVH node, where the BVH node can either be</p>

<ul>
  <li>A triangle</li>
  <li>A box node specifying 4 AABB boxes</li>
</ul>

<p>Of course this isn’t quite enough to deal with all geometry types in Vulkan so we also add two more:</p>

<ul>
  <li>an AABB box</li>
  <li>an instance of another BVH combined with a transformation matrix</li>
</ul>

<h2 id="building-the-bvh">Building the BVH</h2>

<p>With a search tree like a BVH it is very possible to build trees that are pretty useless. As an example consider a binary search tree that is very unbalanced. We can have similarly bad things with a BVH, including an unbalanced tree or overlapping bounding volumes.</p>

<p>And my implementation is the simplest thing possible: the input geometry becomes the leaves in exactly the same order and then internal nodes are created just as you’d draw them. That is probably decently fast in building the BVH but surely results in a terrible BVH to actually use.</p>
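<p>As a sketch of how naive this is, here is roughly that algorithm in Python: leaves stay in input order and every 4 consecutive nodes get a parent box, with no partitioning or balancing at all. The node representation is made up for illustration:</p>

```python
# Roughly the naive builder described above: leaves keep the input
# order and every 4 consecutive nodes get a parent box node.
def union(a, b):
    (amin, amax), (bmin, bmax) = a, b
    return (tuple(map(min, amin, bmin)), tuple(map(max, amax, bmax)))

def build_naive_bvh(leaf_bounds):
    level = [("leaf", b) for b in leaf_bounds]
    while len(level) > 1:
        parents = []
        for i in range(0, len(level), 4):
            kids = level[i:i + 4]            # no partitioning, no sorting
            bounds = kids[0][1]
            for k in kids[1:]:
                bounds = union(bounds, k[1])
            parents.append(("box", bounds, kids))
        level = parents
    return level[0]
```

Because neighbouring inputs are simply grouped together, spatially distant geometry can end up under the same box, which is exactly why the resulting BVH traverses so poorly.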

<h2 id="bvh-traversal">BVH traversal</h2>

<p>After we built a BVH we can start tracing some rays. In rough pseudocode the current implementation is</p>

<div class="language-plaintext highlighter-rouge"><div class="highlight"><pre class="highlight"><code>stack = empty
insert root node into stack
while stack is not empty:

   node = pop a node from the stack

   if we left the bottom level BVH:
      reset ray origin/direction to initial origin/direction

   result = amd_intersect(ray, node)
   switch node type:
      triangle:
         if result is a hit:
            load some node data
            process hit
      box node:
         for each box hit:
            push child node on stack
      custom node 1 (instance):
         load node data
         push the root node of the bottom BVH on the stack
         apply transformation matrix to ray origin/direction
      custom node 2 (AABB geometry):
         load node data
         process hit
</code></pre></div></div>
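<p>To make the control flow concrete, here is a standalone toy version of that stack-based traversal in Python. It only handles box and leaf nodes and uses a deliberately crude axis-aligned “ray along +x” hit test; none of this matches the real shader, it just shows the pop/push structure:</p>

```python
# Toy stack-based BVH traversal: pop a node, test it, push children on
# a box hit, record payloads on a leaf hit.
def hits(bounds, ox, oy, oz):
    (mnx, mny, mnz), (mxx, mxy, mxz) = bounds
    # crude test for a ray along +x from (ox, oy, oz)
    return mny <= oy <= mxy and mnz <= oz <= mxz and mxx >= ox

def traverse(root, origin):
    found, stack = [], [root]
    while stack:
        node = stack.pop()
        if not hits(node["bounds"], *origin):
            continue
        if "leaf" in node:
            found.append(node["leaf"])        # "process hit"
        else:
            stack.extend(node["children"])    # push up to 4 children
    return found

leaf_a = {"bounds": ((0, 0, 0), (2, 2, 2)), "leaf": "A"}
leaf_b = {"bounds": ((0, 3, 3), (2, 4, 4)), "leaf": "B"}
root = {"bounds": ((0, 0, 0), (4, 4, 4)), "children": [leaf_a, leaf_b]}
```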

<p>We already knew there were inherently going to be some difficulties:</p>

<ul>
  <li>We have a poor BVH so we’re going to do way more iterations than needed.</li>
  <li>Calling shaders as a result of hits is going to result in some divergence.</li>
</ul>

<p>Furthermore this also clearly shows some difficulties with how we approached the intersection instruction. Some advantages of the intersection instruction are that it avoids divergence in computing collisions if we have different node types in a subgroup, and that it is cheaper when there are only a few lanes active. (A single CU can process one ray/node intersection per cycle, modulo memory latency, while it can process an ALU instruction on 64 lanes per cycle.)</p>

<p>However, even if it avoids the divergence in the collision computation, we still introduce a ton of divergence in the processing of the results of the intersection. So we are still doing pretty badly here.</p>

<h3 id="a-fast-gpu-traversal-stack-needs-some-work-too">A fast GPU traversal stack needs some work too</h3>

<p>Another thing to be noted is our traversal stack size. According to the Vulkan specification a bottom level acceleration structure should support <code class="language-plaintext highlighter-rouge">2^24 - 1</code> triangles and a top level acceleration structure should support <code class="language-plaintext highlighter-rouge">2^24 - 1</code> bottom level structures. Combined with a tree with <code class="language-plaintext highlighter-rouge">4</code> children in each internal node we can end up with a tree depth of about <code class="language-plaintext highlighter-rouge">24</code> levels.</p>

<p>In each internal node iteration of our loop we pop one element and push up to 4 elements, so at the deepest level of traversal we could end up with a <code class="language-plaintext highlighter-rouge">72</code> entry stack. Assuming these are 32-bit node identifiers, that ends up with 288 bytes of stack per lane, or ~18 KiB per 64 lane workgroup (the minimum which could possibly keep a CU busy with an ALU only workload). Given that we have 64 KiB of LDS (yes I am using LDS since there is no divergent dynamic register addressing) per CU that leaves only 3 workgroups per CU, leaving very few options for parallelism between different hardware execution units (e.g. the ALU and the texture unit that executes the ray intersections) or latency hiding of memory operations.</p>
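<p>Written out, the arithmetic looks like this (a sketch; real occupancy also depends on register use and other limits):</p>

```python
# The stack-size arithmetic from the paragraph above, written out.
import math

levels_per_bvh = round(math.log(2**24, 4))      # ~12 levels per 4-ary BVH
depth = 2 * levels_per_bvh                      # top + bottom level = 24
max_stack = depth * (4 - 1)                     # pop 1, push up to 4 -> 72
bytes_per_lane = max_stack * 4                  # 32-bit node ids -> 288 B
bytes_per_workgroup = bytes_per_lane * 64       # 64 lanes -> ~18 KiB
workgroups_per_cu = (64 * 1024) // bytes_per_workgroup  # 64 KiB LDS -> 3
```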

<p>So ideally we get this stack size down significantly.</p>

<h1 id="where-do-we-go-next">Where do we go next?</h1>

<p>First step is to get CTS passing and getting an initial merge request into upstream Mesa. As a follow on to that I’d like to get a minimal prototype going for some DXR 1.0 games with vkd3d-proton just to make sure we have the right feature coverage.</p>

<p>After that we’ll have to do all the traversal optimizations. I’ll probably implement a bunch of instrumentation so I actually have a clue on what to optimize. This is where having some runnable games really helps get the right idea about performance bottlenecks.</p>

<p>Finally, with some luck better shaders to build a BVH will materialize as well.</p>]]></content><author><name></name></author><summary type="html"><![CDATA[I have not talked about raytracing in RADV for a while, but after some procrastination being focused on some other things I recently got back to it and achieved my next milestone.]]></summary></entry><entry><title type="html">Making Reading from VRAM less Catastrophic</title><link href="https://www.basnieuwenhuizen.nl/making-reading-from-vram-less-catastrophic/" rel="alternate" type="text/html" title="Making Reading from VRAM less Catastrophic" /><published>2021-06-14T00:00:00+02:00</published><updated>2021-06-14T00:00:00+02:00</updated><id>https://www.basnieuwenhuizen.nl/making-reading-from-vram-less-catastrophic</id><content type="html" xml:base="https://www.basnieuwenhuizen.nl/making-reading-from-vram-less-catastrophic/"><![CDATA[<p>In <a href="https://basnieuwenhuizen.nl/the-catastrophe-of-reading-from-vram/">an earlier article</a> I showed how reading from VRAM with the CPU can be very slow. It however turns out there there are ways to make it less slow.</p>

<style>
.tablelines table, .tablelines td, .tablelines th {
        border: 1px solid black;
        padding: 5px;
        }
</style>

<p>The key to this are instructions with non-temporal hints, in particular VMOVNTDQA. The Intel Instruction Manual says the following about this instruction:</p>

<p>“MOVNTDQA loads a double quadword from the source operand (second operand) to the destination operand (first operand) using a non-temporal hint if the memory source is WC (write combining) memory type. For WC memory type, the nontemporal hint may be implemented by loading a temporary internal buffer with the equivalent of an aligned cache line without filling this data to the cache. Any memory-type aliased lines in the cache will be snooped and flushed. Subsequent MOVNTDQA reads to unread portions of the WC cache line will receive data from the temporary internal buffer if data is available. “ (<a href="https://software.intel.com/content/www/us/en/develop/download/intel-64-and-ia-32-architectures-sdm-combined-volumes-2a-2b-2c-and-2d-instruction-set-reference-a-z.html">Intel® 64 and IA-32 Architectures Software Developer’s Manual Volume 2</a>)</p>

<p>This sounds perfect for our VRAM and WC System Memory buffers, as we typically only read 16 bytes per instruction and this allows us to read entire cachelines at a time.</p>
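<p>A toy model of the quoted behavior makes the win obvious: with plain uncached loads every 16-byte read is its own memory transaction, while the non-temporal path pays one transaction per 64-byte line and serves the remaining loads from the internal buffer:</p>

```python
# Toy model: MOVNTDQA fills a 64-byte internal line buffer on the first
# load of a cache line; subsequent 16-byte loads from the same line hit
# that buffer instead of going back to memory.
LINE = 64

def uncached_transactions(n_bytes, access=16):
    # plain loads from WC memory: every 16-byte load is its own transaction
    return n_bytes // access

def streaming_transactions(n_bytes, access=16):
    buffered, txns = None, 0
    for off in range(0, n_bytes, access):
        line = off // LINE
        if line != buffered:
            txns += 1          # first touch of a line: one memory transaction
            buffered = line
        # remaining loads of this line are served from the line buffer
    return txns
```

This simple model only explains a 4x factor; the measured gains below are far larger because real uncached reads also serialize on the memory bus, which the model ignores.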

<p>It turns out that Mesa already implemented a <a href="https://gitlab.freedesktop.org/mesa/mesa/-/blob/main/src/mesa/main/streaming-load-memcpy.c">streaming memcpy</a> using these instructions so all we had to do was throw that into our benchmark and write a corresponding memcpy that does non-temporal stores to benchmark writing to these memory regions.</p>

<p>As a reminder, we look into three allocation types that are exposed by the amdgpu Linux kernel driver:</p>

<ul>
  <li>
    <p>VRAM. This lives on the GPU and is mapped with Uncacheable Speculative Write Combining (USWC) on the CPU. This means that accesses from the CPU are not cached, but writes can be <a href="https://en.wikipedia.org/wiki/Write_combining">write-combined</a>.</p>
  </li>
  <li>
    <p>Cacheable system memory. This is system memory that has caching enabled on the CPU, with cache snooping to ensure the memory stays coherent between the CPU and GPU (up to the last-level CPU caches; the GPU caches do not participate in the coherence).</p>
  </li>
  <li>
    <p>USWC system memory. This is system memory that is mapped with Uncacheable Speculative Write Combining on the CPU. This can lead to slight performance benefits compared to cacheable system memory due to lack of cache snooping.</p>
  </li>
</ul>

<p>As before, the test system is an RX 6800 XT paired with a Threadripper 2990WX and 4-channel 3200 MT/s RAM.</p>

<table class="tablelines">
  <thead>
    <tr>
      <th>method (MiB/s)</th>
      <th style="text-align: right">VRAM</th>
      <th style="text-align: right">Cacheable System Memory</th>
      <th style="text-align: right">USWC System Memory</th>
    </tr>
  </thead>
  <tbody>
    <tr>
      <td>read via memcpy</td>
      <td style="text-align: right">15</td>
      <td style="text-align: right">11488</td>
      <td style="text-align: right">137</td>
    </tr>
    <tr>
      <td>write via memcpy</td>
      <td style="text-align: right">10028</td>
      <td style="text-align: right">18249</td>
      <td style="text-align: right">11480</td>
    </tr>
    <tr>
      <td>read via streaming memcpy</td>
      <td style="text-align: right">756</td>
      <td style="text-align: right">6719</td>
      <td style="text-align: right">4409</td>
    </tr>
    <tr>
      <td>write via streaming memcpy</td>
      <td style="text-align: right">10550</td>
      <td style="text-align: right">14737</td>
      <td style="text-align: right">11652</td>
    </tr>
  </tbody>
</table>

<p>Using this memcpy implementation we get significantly better read performance in the uncached memory situations: ~50x for VRAM and ~32x for USWC system memory. If these reads are a significant bottleneck in your workload, this can be a gamechanger. Or, if you were using SDMA to avoid the hit, you might now be able to do things at significantly lower latency. That said, it is not at a level where the cost no longer matters: for big copies, using DMA can still be a significant win.</p>

<p>Note that I initially gave an explanation of why the non-temporal loads should be faster, but the measured gains are significantly larger than what merely loading entire cache lines at a time would explain. I have not yet dug into why the speedup is this large.</p>

<h3 id="dma-performance">DMA performance</h3>

<p>I have been claiming DMA is faster for CPU readbacks of VRAM in both this article and the previous article on the topic. One might ask how fast DMA is then. To demonstrate this I benchmarked VRAM&lt;-&gt;Cacheable System Memory copies using the SDMA hardware block on Radeon GPUs.</p>

<p>Note that there is a significant per-copy overhead here due to submitting work to the GPU, so I will show results versus copy size. The rate is measured by waiting after each individual copy and taking the wall-clock time, as these use cases tend to be latency sensitive and hence batching is not that interesting.</p>

<table class="tablelines">
  <thead>
    <tr>
      <th style="text-align: right">copy size</th>
      <th style="text-align: right">copy from VRAM (MiB/s)</th>
      <th style="text-align: right">copy to VRAM (MiB/s)</th>
    </tr>
  </thead>
  <tbody>
    <tr>
      <td style="text-align: right">4 KiB</td>
      <td style="text-align: right">62</td>
      <td style="text-align: right">63</td>
    </tr>
    <tr>
      <td style="text-align: right">16 KiB</td>
      <td style="text-align: right">245</td>
      <td style="text-align: right">240</td>
    </tr>
    <tr>
      <td style="text-align: right">64 KiB</td>
      <td style="text-align: right">953</td>
      <td style="text-align: right">1015</td>
    </tr>
    <tr>
      <td style="text-align: right">256 KiB</td>
      <td style="text-align: right">3106</td>
      <td style="text-align: right">3082</td>
    </tr>
    <tr>
      <td style="text-align: right">1 MiB</td>
      <td style="text-align: right">6715</td>
      <td style="text-align: right">7281</td>
    </tr>
    <tr>
      <td style="text-align: right">4 MiB</td>
      <td style="text-align: right">9737</td>
      <td style="text-align: right">11636</td>
    </tr>
    <tr>
      <td style="text-align: right">16 MiB</td>
      <td style="text-align: right">12129</td>
      <td style="text-align: right">12158</td>
    </tr>
    <tr>
      <td style="text-align: right">64 MiB</td>
      <td style="text-align: right">13041</td>
      <td style="text-align: right">12975</td>
    </tr>
    <tr>
      <td style="text-align: right">256 MiB</td>
      <td style="text-align: right">13429</td>
      <td style="text-align: right">13387</td>
    </tr>
  </tbody>
</table>

<p>This shows that for reads DMA is faster than a normal memcpy at 4 KiB and faster than a streaming memcpy at 64 KiB. Of course one still needs to do their CPU access at that point, but at both these thresholds even with an additional CPU memcpy the total process should still be fast with DMA.</p>]]></content><author><name></name></author><summary type="html"><![CDATA[In an earlier article I showed how reading from VRAM with the CPU can be very slow. It however turns out there there are ways to make it less slow.]]></summary></entry><entry><title type="html">First Rays</title><link href="https://www.basnieuwenhuizen.nl/first-rays/" rel="alternate" type="text/html" title="First Rays" /><published>2021-04-16T00:00:00+02:00</published><updated>2021-04-16T00:00:00+02:00</updated><id>https://www.basnieuwenhuizen.nl/first-rays</id><content type="html" xml:base="https://www.basnieuwenhuizen.nl/first-rays/"><![CDATA[<p>Given that the new RDNA2 GPUs provide some support for hardware accelerated raytracing and there is even a new shiny Vulkan extension for it, it may not be a surprise that we’re working on implementing raytracing support in RADV.</p>

<p>Already some time ago I wrote <a href="https://gitlab.freedesktop.org/bnieuwenhuizen/mesa/-/wikis/RDNA2-hardware-BVH">documentation</a> for the hardware raytracing support. As these GPUs contain quite minimal hardware to implement things there is a large software and shader side to implementing this.</p>

<p>That is what I’ve been up to for the last couple of weeks, and I have now reached my first personal milestones for the implementation:</p>

<ol>
  <li>A fully recursive Fibonacci shader</li>
  <li>And a raytraced cube:</li>
</ol>

<p><img src="/assets/rt-cube.png" alt="Raytraced cube" /></p>

<p>This involved writing initial versions of a lot of the software infrastructure needed, so it really shows that the foundation is coming together.</p>

<p>At the same time we’re quite a ways off from really testing using CTS or running our first real demos. In particular we are missing things like</p>

<ul>
  <li>GPU-side BVH building</li>
  <li>any-hit and intersection shaders</li>
  <li>Supporting BVH instances, geometry transforms etc.</li>
  <li>pipeline libraries</li>
</ul>

<p>and much more, in addition to some of these initial implementations likely not really being performant.</p>]]></content><author><name></name></author><summary type="html"><![CDATA[Given that the new RDNA2 GPUs provide some support for hardware accelerated raytracing and there is even a new shiny Vulkan extension for it, it may not be a surprise that we’re working on implementing raytracing support in RADV.]]></summary></entry><entry><title type="html">A First Foray into Rendering Less</title><link href="https://www.basnieuwenhuizen.nl/a-first-foray-into-rendering-less/" rel="alternate" type="text/html" title="A First Foray into Rendering Less" /><published>2021-04-09T00:00:00+02:00</published><updated>2021-04-09T00:00:00+02:00</updated><id>https://www.basnieuwenhuizen.nl/a-first-foray-into-rendering-less</id><content type="html" xml:base="https://www.basnieuwenhuizen.nl/a-first-foray-into-rendering-less/"><![CDATA[<p>In RADV we just <a href="https://gitlab.freedesktop.org/mesa/mesa/-/merge_requests/7794">added an option</a> to speed up rendering by rendering less pixels.</p>

<p>These kinds of techniques have become more common over the past decade, with examples such as checkerboarding, TAA-based upscaling and recently DLSS. Fundamentally they all trade rendering quality for rendering cost, and many of them include some amount of postprocessing to improve the curve of that tradeoff. Most notably, DLSS has been so successful at this that many people claim it is barely a quality regression.</p>

<p>Of course, increasing GPU performance by up to 50% or so with barely any quality regression sounds like a must-have, and I think it would be pretty cool if we could have the same improvements on Linux. It has the potential to be a game changer, making games playable on APUs or enabling really high resolutions or framerates on desktops.</p>

<p>And today we took our first baby steps in RADV by allowing users to force <a href="https://www.khronos.org/registry/vulkan/specs/1.2-extensions/html/chap49.html#VK_KHR_fragment_shading_rate">Variable Rate Shading</a> (VRS) with an experimental environment variable:</p>

<p><code class="language-plaintext highlighter-rouge">
RADV_FORCE_VRS=2x2
</code></p>

<p>VRS is a hardware capability that allows us to reduce the number of fragment shader invocations per pixel rendered. You could, for example, configure the hardware to use one fragment shader invocation per 2x2 pixels. The hardware still renders the edges of geometry exactly, but the inner area of each triangle is rendered with a reduced number of fragment shader invocations.</p>

<p>There are a couple of ways this capability can be configured:</p>

<ol>
  <li>On a per-draw level</li>
  <li>On a per-primitive level (e.g. per triangle)</li>
  <li>Using an image to configure on a per-region level</li>
</ol>

<p>This is a new feature for AMD on RDNA2 hardware.</p>

<p>With <code class="language-plaintext highlighter-rouge">RADV_FORCE_VRS</code> we use this to improve performance at the cost of visual quality. Since we did not implement any postprocessing, the quality loss can be pretty bad, so we do not reduce the shading rate when we detect one of the following:</p>

<ol>
  <li>Something is rendered in 2D, as that is likely some UI where you’d really want some crispness</li>
  <li>When the shader can discard pixels, as this implicitly introduces geometry edges that the hardware doesn’t see but that significantly impact the visual quality.</li>
</ol>

<p>As a result, there are some games where this has barely any effect on performance but where you also don’t notice a quality regression, and there are games where it really improves performance by 30%+ but where you clearly notice the quality regression.</p>

<p>VRS is by far the easiest of these techniques to make work in almost all games. Most alternatives, like checkerboarding, TAA and DLSS, need a modified render target size, significant shader fixups, or even a proprietary integration with the game. Making changes that deep gets more complicated the more advanced a game is.</p>

<p>If we want to reduce the render resolution (which would be a key part of e.g. checkerboarding or DLSS), it is very hard to confidently tie together everything that depends on that resolution. For example, a big cost in some modern games is raytracing, but the information flow into the main render targets can be very hard to track automatically, so such a scheme would require a lot of investigation or a bunch of per-game customizations.</p>

<p>And hence we decided to introduce this first baby step. Enjoy!</p>]]></content><author><name></name></author><summary type="html"><![CDATA[In RADV we just added an option to speed up rendering by rendering less pixels.]]></summary></entry><entry><title type="html">The Catastrophe of Reading from VRAM</title><link href="https://www.basnieuwenhuizen.nl/the-catastrophe-of-reading-from-vram/" rel="alternate" type="text/html" title="The Catastrophe of Reading from VRAM" /><published>2021-04-04T00:00:00+02:00</published><updated>2021-04-04T00:00:00+02:00</updated><id>https://www.basnieuwenhuizen.nl/the-catastrophe-of-reading-from-vram</id><content type="html" xml:base="https://www.basnieuwenhuizen.nl/the-catastrophe-of-reading-from-vram/"><![CDATA[<p>In this article I show how reading from VRAM can be a catastrophe for game performance and why.</p>

<style>
.tablelines table, .tablelines td, .tablelines th {
        border: 1px solid black;
        padding: 5px;
        }
</style>

<p>To illustrate I will go back to fall 2015. AMDGPU was just released, it didn’t even have re-clocking yet and I was just a young student trying to play Skyrim on my new AMD R9 285.</p>

<p>Except it ran slowly. 10-15 FPS slowly. One might think that is no surprise, as due to the lack of re-clocking the GPU ran with a shader clock of 300 MHz. However, the real surprise was that the game was not at all GPU bound.</p>

<p>As usual with games of that era there was a single thread doing a lot of the work and that thread was very busy doing something inside the game binary. After a bunch of digging with profilers and gdb, it turned out that the majority of time was spent in a single function that accessed less than 1 MiB from a GPU buffer each frame.</p>

<p>At the time DXVK was not a thing yet and I ran the game with wined3d on top of OpenGL. In OpenGL an application does not specify the location of GPU buffers directly, but specifies some properties about how it is going to be used and the driver decides. Poorly in this case.</p>

<p>A simple tweak to the driver heuristics that choose the memory location more than doubled the frame rate of the game, which was now properly GPU bound.</p>

<h3 id="some-data">Some Data</h3>

<p>After the anecdote above you might be wondering how slow reading from VRAM can really be. 1 MiB is not a lot of data, so even if it is slow it cannot be that bad, right?</p>

<p>To show you how bad it can be, I ran some benchmarks on my system (Threadripper 2990WX, 4-channel DDR4-3200 and an RX 6800 XT). I checked read/write performance using a 16 MiB buffer (512 MiB for system memory, to avoid the test being contained in the L3 cache).</p>

<p>We look into three allocation types that are exposed by the amdgpu Linux kernel driver:</p>

<ul>
  <li>
    <p>VRAM. This lives on the GPU and is mapped with Uncacheable Speculative Write Combining (USWC) on the CPU. This means that accesses from the CPU are not cached, but writes can be <a href="https://en.wikipedia.org/wiki/Write_combining">write-combined</a>.</p>
  </li>
  <li>
    <p>Cacheable system memory. This is system memory that has caching enabled on the CPU, with cache snooping to ensure the memory stays coherent between the CPU and GPU (up to the last-level CPU caches; the GPU caches do not participate in the coherence).</p>
  </li>
  <li>
    <p>USWC system memory. This is system memory that is mapped with Uncacheable Speculative Write Combining on the CPU. This can lead to slight performance benefits compared to cacheable system memory due to lack of cache snooping.</p>
  </li>
</ul>

<p>For context, in Vulkan this would roughly correspond to the following memory types:</p>

<table class="tablelines">
  <thead>
    <tr>
      <th>Hardware</th>
      <th>Vulkan</th>
    </tr>
  </thead>
  <tbody>
    <tr>
      <td>VRAM</td>
      <td>VK_MEMORY_PROPERTY_DEVICE_LOCAL_BIT | VK_MEMORY_PROPERTY_HOST_VISIBLE_BIT | VK_MEMORY_PROPERTY_HOST_COHERENT_BIT</td>
    </tr>
    <tr>
      <td>Cacheable system memory</td>
      <td>VK_MEMORY_PROPERTY_HOST_VISIBLE_BIT | VK_MEMORY_PROPERTY_HOST_COHERENT_BIT | VK_MEMORY_PROPERTY_HOST_CACHED_BIT</td>
    </tr>
    <tr>
      <td>USWC system memory</td>
      <td>VK_MEMORY_PROPERTY_HOST_VISIBLE_BIT | VK_MEMORY_PROPERTY_HOST_COHERENT_BIT</td>
    </tr>
  </tbody>
</table>
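<p>In Vulkan an application picks between these by scanning the device’s memory types for the first one whose property flags include everything it requires. The sketch below shows that selection logic; the flag values match the Vulkan specification, but the struct is a simplified stand-in for <code>VkPhysicalDeviceMemoryProperties</code> so the snippet is self-contained:</p>

```c
#include <stdint.h>

/* Flag values as defined by the Vulkan specification. */
#define MEM_DEVICE_LOCAL_BIT  0x1u
#define MEM_HOST_VISIBLE_BIT  0x2u
#define MEM_HOST_COHERENT_BIT 0x4u
#define MEM_HOST_CACHED_BIT   0x8u

/* Simplified stand-in for one entry of VkPhysicalDeviceMemoryProperties. */
struct memory_type {
    uint32_t property_flags;
};

/* Return the index of the first memory type that is allowed by type_bits
 * (from VkMemoryRequirements) and has all `required` flags, or -1. */
static int find_memory_type(const struct memory_type *types, uint32_t count,
                            uint32_t type_bits, uint32_t required)
{
    for (uint32_t i = 0; i < count; i++) {
        if ((type_bits & (1u << i)) &&
            (types[i].property_flags & required) == required)
            return (int)i;
    }
    return -1;
}
```

<p>An application that intends to read data back on the CPU would typically require <code>HOST_VISIBLE</code> together with <code>HOST_CACHED</code>, which on amdgpu lands in cacheable system memory rather than VRAM.</p>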

<p>The benchmark resulted in the following throughput numbers:</p>

<table class="tablelines">
  <thead>
    <tr>
      <th>method (MiB/s)</th>
      <th style="text-align: right">VRAM</th>
      <th style="text-align: right">Cacheable System Memory</th>
      <th style="text-align: right">USWC System Memory</th>
    </tr>
  </thead>
  <tbody>
    <tr>
      <td>read via memcpy</td>
      <td style="text-align: right">15</td>
      <td style="text-align: right">11488</td>
      <td style="text-align: right">137</td>
    </tr>
    <tr>
      <td>write via memcpy</td>
      <td style="text-align: right">10028</td>
      <td style="text-align: right">18249</td>
      <td style="text-align: right">11480</td>
    </tr>
  </tbody>
</table>

<p>I furthermore tested handwritten for-loops accessing 8,16,32 and 64-bit elements at a time and those got similar performance.</p>

<p>This clearly shows that reads from VRAM using memcpy are ~766x slower than memcpy reads from cacheable system memory, and even USWC system memory is ~84x slower than cacheable system memory. Reading even small amounts from these regions can cause severe performance degradation.</p>

<p>Writes show a difference as well, but the difference is not nearly as significant. So if an application does not select the best memory location for their data for CPU access it is still likely to result in a reasonable experience.</p>

<h3 id="apus-are-affected-too">APUs Are Affected Too</h3>

<p>Even though APUs do not have VRAM they still are affected by the same issue. Typically the GPU gets a certain amount of memory pre-allocated at boot time as a carveout. There are some differences in how this is accessed from the GPU so from the perspective of the GPU this memory can be faster.</p>

<p>At the same time the Linux kernel only gives uncached access to that region from the CPU, so one could expect similar performance issues to crop up.</p>

<p>I did the same test as above on a laptop with a Ryzen 5 2500U (Raven Ridge) APU, and got results that are not dissimilar from my workstation.</p>

<table class="tablelines">
  <thead>
    <tr>
      <th>method (MiB/s)</th>
      <th style="text-align: right">Carveout</th>
      <th style="text-align: right">Snooped System Memory</th>
      <th style="text-align: right">USWC System Memory</th>
    </tr>
  </thead>
  <tbody>
    <tr>
      <td>read via memcpy</td>
      <td style="text-align: right">108</td>
      <td style="text-align: right">10426</td>
      <td style="text-align: right">108</td>
    </tr>
    <tr>
      <td>write via memcpy</td>
      <td style="text-align: right">11797</td>
      <td style="text-align: right">20743</td>
      <td style="text-align: right">11821</td>
    </tr>
  </tbody>
</table>

<p>The carveout performance is now virtually identical to the uncached system memory, which is still ~97x slower than cacheable system memory. So even though it is all system memory on an APU, care still has to be taken with how the memory is allocated.</p>

<h3 id="what-to-do-instead">What To Do Instead</h3>

<p>Since the performance cliff is so large, it is recommended to avoid this issue if at all possible. The following three methods are good ways to do so:</p>

<ol>
  <li>
    <p>If the data is only written from the CPU, it is advisable to use a shadow buffer in cacheable system memory (can even be outside of the graphics API, e.g. malloc) and read from that instead.</p>
  </li>
  <li>
    <p>If this is written by the GPU but not frequently, one could consider putting the buffer in snooped system memory. This makes the GPU traffic go over the PCIE bus though, so it has a trade-off.</p>
  </li>
  <li>
    <p>Let the GPU copy the data to a buffer in snooped system memory. This is basically an extension of the previous item by making sure that the GPU accesses the data exactly once in system memory. The GPU roundtrip can take a non-trivial wall-time though (up to ~0.5 ms measured on some low end APUs), some of which is size-independent, such as command submission. Additionally this may need to wait till the hardware unit used for the copy is available, which may depend on other GPU work. The SDMA unit (Vulkan transfer queue) is a good option to avoid that.</p>
  </li>
</ol>
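<p>The shadow-buffer approach from option 1 can be sketched as follows. The structure and function names are hypothetical; in a real application <code>gpu_map</code> would be the CPU mapping of the VRAM or USWC allocation (e.g. from <code>vkMapMemory</code>), while the shadow is an ordinary allocation:</p>

```c
#include <stddef.h>
#include <string.h>

/* A CPU-written GPU buffer paired with a cacheable shadow copy: writes go
 * to both copies, reads are served from the shadow so the slow uncached
 * mapping is never read. */
struct shadow_buffer {
    unsigned char *gpu_map; /* mapping of the VRAM/USWC allocation */
    unsigned char *shadow;  /* cacheable system memory (e.g. malloc) */
    size_t size;
};

static void shadow_buffer_write(struct shadow_buffer *buf, size_t offset,
                                const void *data, size_t n)
{
    memcpy(buf->gpu_map + offset, data, n); /* write-combined: still fast */
    memcpy(buf->shadow + offset, data, n);
}

static void shadow_buffer_read(const struct shadow_buffer *buf, size_t offset,
                               void *data, size_t n)
{
    memcpy(data, buf->shadow + offset, n); /* never read the slow mapping */
}
```

<p>The cost is doubled CPU writes and extra system memory, which is usually a good trade against the ~766x read penalty.</p>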

<h3 id="other-limitations">Other Limitations</h3>

<p>Another problem with CPU access from VRAM is the BAR size. Typically only the first 256 MiB of VRAM is configured to be accessible from the CPU and for anything else one needs to use DMA.</p>

<p>If the working set of what is allocated in VRAM and accessed from the CPU is large enough the kernel driver may end up moving buffers frequently in the page fault handler. System memory would be an obvious target, but due to the GPU performance trade-off that is not always the decision that gets made.</p>

<p>Luckily, due to the recent push from AMD for Smart Access Memory, large BARs that encompass the entire VRAM are now much more common on consumer platforms.</p>]]></content><author><name></name></author><summary type="html"><![CDATA[In this article I show how reading from VRAM can be a catastrophe for game performance and why.]]></summary></entry><entry><title type="html">A New Blog, Now What?</title><link href="https://www.basnieuwenhuizen.nl/a-new-blog-now-what/" rel="alternate" type="text/html" title="A New Blog, Now What?" /><published>2021-04-03T00:00:00+02:00</published><updated>2021-04-03T00:00:00+02:00</updated><id>https://www.basnieuwenhuizen.nl/a-new-blog-now-what</id><content type="html" xml:base="https://www.basnieuwenhuizen.nl/a-new-blog-now-what/"><![CDATA[<p>This is the first post of this blog and with it being past midnight I couldn’t be bothered making one about a technical topic. So instead here is an explanation of my plans with the blog.</p>

<p>I got inspired by the <a href="https://www.supergoodcode.com/">prolific blogging</a> of Mike Blumenkrantz and some discussion on the VKx discord that some actually written updates can be very useful, and that I don’t need to make a paper out of each one.</p>

<p>At the same time I have been involved in some longer running things on the driver side which I think could really use some updates as progress is made. Consider for example raytracing, DRM format modifiers, RGP support and more.</p>

<p>I have no plans at all to be as prolific as Mike by a long shot, but I think the style of articles is probably a good template of what to expect from this blog.</p>]]></content><author><name></name></author><summary type="html"><![CDATA[This is the first post of this blog and with it being past midnight I couldn’t be bothered making one about a technical topic. So instead here is an explanation of my plans with the blog.]]></summary></entry></feed>