Stevus.com

Steven Wright

Written by Steven Wright who lives and works in Sacramento building useful things.

College PC Refresh Project

Sat May 23 2020

Before going to college, I saved up some money to do my first PC build, back when building your own machine was just becoming popular with nerds.

My goal was to build a PC meant for gaming in the dorms, since I was anticipating a heavy gaming community, which turned out to be pretty accurate.

That was back in 2002 when the parts I bought were nearly top of the line. I actually was able to find notes on what I used: the machine had a Pentium 4 3.2GHz CPU, an ASUS P4C800 Deluxe motherboard, 4 x 1GB of PC3200 DDR memory, and a GeForce4 Ti 4800. I was very proud of this build when it came off the assembly line in my bedroom. If I remember correctly, all of the parts ran around ~$2,000 at Fry’s Electronics.

However, time took its toll, and I was forced to make some “upgrades” to it, which really just meant fixing things to keep it working well enough to edit Word documents. Even Fry’s Electronics doesn’t exist anymore in the same fashion it did back in the early 2000s. Everything now is Newegg or Amazon, along with PCPartPicker.

Since the machine had suffered from years of neglect, the motherboard and CPU had gone out, and I was too cheap to buy new equipment in the same price bracket as the former components.

Basically, what exists now is the bare minimum to run: a Celeron D CPU and an Abit IS-10 motherboard with 2 x 512MB of PC2700 DDR memory.

Now that it’s 2020 and these parts are woefully underpowered, I’ve had the idea to repurpose this machine as a media server or something else that can function with minimal CPU utilization.

Intel Celeron 2.66GHz 256K cache 533MHz FSB CPU

I’m not going to be breaking any speed records with this thing.

[photo: dirty CPU 1]

This thing has sat in a closet or in the garage for years, and I think I’ve maybe turned it on 2 or 3 times since 2008? The CPU thermal paste is solid. Good thing I didn’t try to start it up. I’ll definitely have to add new paste before I try to use this again.

[photo: dirty CPU 2]

Intel Socket 478 HeatSink Cooler Fan

The stock Intel CPU cooler was in pretty good shape and just needed to be cleaned up. I also had to scrape off the old CPU thermal paste. I ended up breaking one of the clips on this thing at some point, so I’ll either have to find a way to glue the broken pieces back together or, shrug, buy another one.

[photo: CPU cooler]

Western Digital 100GB HDD

The 100GB HDD I had in this thing is hopefully still good and doesn’t have any bad sectors. It looks to be in good shape.

[photo: HDD]

ABit IS-10 Motherboard

With basic CPUs come basic motherboards. This beast is actually a MicroATX board and doesn’t use all the right standoffs, so it wobbles everywhere =/. Sadly, I don’t think I wanna go down the road of finding an old motherboard just to support a Celeron.

[photo: motherboard]

Corsair 512MB DDR 333MHz Memory

No, that isn’t a typo; it’s actually just “DDR” =(. 512MB of memory isn’t going to get me very far, so I imagine I’ll have to order more. The Abit motherboard supports a max of 2GB, so I’ll go on eBay and pick some up.

[photo: memory]

Bestec 250W 12v ATX Power Supply

When I tested the power supply, it made some clicking noises and failed to power a simple case fan, so I can reliably say this thing is fried. At 250W, who cares anyway? I’ll go pick one up from Amazon for $15 and hope nothing else breaks =/.

This is also a 20-pin ATX power supply, and most ATX supplies out there now are 24-pin, so I need to find one that has a 20+4 pin configuration.

[photo: power supply]

Radeon 7000 64MB Graphics Card

Not sure what happened to the GeForce card…

[photo: graphics card]

React Component Development Studio

Sun Nov 10 2019

It’s easier for developers to work on a new component in isolation. We have some UI elements where, if you want to see the “real” version on the site, you have to do a bunch of steps (log in, pick this option, go to this step, and so on). That gets annoying when you have to repeat the process to test every single change. It’s much faster to just look at Storybook, which shows the component in every possible state.

Doing so also encourages better code quality, like components that are well encapsulated. Without Storybook, we saw more devs writing React components that were not very reusable (for example, I saw a React component that read window.location.search directly instead of taking that value as a prop). If devs write the component in Storybook first, it tends to come out more reusable.
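To sketch that difference with a hypothetical SearchResults component (none of these names come from our codebase): the first version only renders correctly inside a real browser page, while the second can be dropped into a Storybook story, or a test, with any query you like.

import React from 'react';

// Hard to reuse: reaches into the browser for its input
const CoupledSearchResults = () => {
  const query = new URLSearchParams(window.location.search).get('q');
  return <ul>{/* ...render results for `query`... */}</ul>;
};

// Reusable: the caller (a page, a story, a test) supplies the input
const SearchResults = ({ query }) => (
  <ul>{/* ...render results for `query`... */}</ul>
);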

That being said, if you’re a solo developer, I don’t know if it’s really worth the extra time investment to do something like this in the first place. However, it is a fulfilling process to complete if you’ve got a project you’re really passionate about.

I’d like to go over a few choices I investigated when trying to plan a component development solution of my own.

Homegrown

I initially started out building my own, but I quickly realized it was like developing another application just to manage the components for your actual application. It felt like it would get me a little bit of progress up front, but eventually would become unmaintainable and I’d stop using the tool.

Lots of smart developers, and even companies, have created sandbox environments for this. Why not leverage the time and knowledge that’s gone into them to create better components that are easier to maintain?

Possible Component Development Process

  • Add new documentation
  • Add/update sandbox
  • Build the sandbox
  • Test your sandbox

Pros

  • Lots of control
  • Not dependent on other developers, buggy code, or hacky implementations

Cons

  • Need to wire up components to sandbox manually
  • Create separate build and deployment system
  • Create the sandbox app/page (Webpack? static HTML? a Node server?)
  • It’s another thing you have to test
  • It’s another thing you have to maintain and fix when it breaks
  • Too much control?

Styleguidist

React Styleguidist is a React component development environment with a living style guide. It provides a hot reloaded dev server and a living style guide that lists component propTypes and shows editable usage examples based on .md files. It supports ES6, Flow and TypeScript and works with Create React App out of the box. The auto-generated usage docs can help Styleguidist function as a documentation portal for your team’s different components.
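To sketch how that works, assuming a hypothetical Button component: you drop a Markdown file next to the component (say, src/components/Button/Readme.md), and Styleguidist renders each fenced code block in it as a live, editable playground alongside the auto-generated propTypes docs:

Buttons come in a couple of sizes:

```jsx
<Button size="large" onClick={() => alert('clicked!')}>
  Push Me
</Button>
```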

I ultimately didn’t use Styleguidist that much; it was really just something I switched to after I couldn’t get Storybook to work right away due to NFS issues I would later come back and solve.


Pros

  • The automatic documentation creation was a huge plus for me. I might go back to Styleguidist if I can find a way to bring in some of the features Storybook offers.
  • Easy component search functionality

Cons

  • I could not get Hot Module Reload to work at all over NFS, even after finding the polling trick that worked for Storybook (see the Storybook cons below).

Storybook

Storybook is another playground for developing components and their behavior. It also serves as documentation for your component library, which you build up through your stories. You can showcase your components and the different variations driven by props. I used it with React for my project, but it also supports other JavaScript frameworks like Angular and Vue.

The big cool thing is it allows you to browse a component library, view the different states of each component, and interactively develop and test components. When building a library, Storybook is a neat way to visualize and document components, and its addons make it easier to integrate into your different tools and workflows. You can even reuse stories in unit tests to confirm nuanced functionality.
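For a feel of what a story looks like, here’s a minimal sketch in Storybook’s Component Story Format, again assuming a hypothetical Button component; each named export shows up as a browsable state in the sidebar:

// Button.stories.js (hypothetical component, minimal sketch)
import React from 'react';
import Button from './Button';

export default {
  title: 'Button',
  component: Button,
};

// Each named export renders as its own selectable state
export const Primary = () => <Button primary>Save</Button>;
export const Disabled = () => <Button disabled>Save</Button>;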


Pros

  • Webpack is configured out of the box, with no additional configuration needed from the developer
  • Hot Module Reload support
  • Automatically detects your components and loads stories
  • Easy component search functionality

Cons

  • Lots of headaches and issues configuring HMR when files are hosted over NFS mounts (such as when using Vagrant). I had to set up polling for Webpack so it could detect file system changes and trigger HMR; a sketch of that config follows this list.
  • Have to create documentation yourself
  • It takes a little extra time to write the “story” pages in addition to doing the page itself. I didn’t find this to be much of a hindrance, but it’s definitely something to think about.
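For reference, here’s roughly what that polling fix looked like; a sketch assuming Storybook 5’s full-control webpack config, with an interval that was just what I could tolerate:

// .storybook/webpack.config.js
module.exports = async ({ config }) => {
  // inotify events don't propagate across NFS mounts, so have
  // webpack poll the file system for changes instead
  config.watchOptions = {
    ...config.watchOptions,
    poll: 1000,            // check for changes every second
    aggregateTimeout: 300, // debounce rebuilds after a change
  };
  return config;
};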

Intro to the Cloudinary API

Mon Jun 17 2019

I needed a way to upload and serve thousands of images for a website I was designing. I didn’t want to waste storage space hosting these images on my own servers, so I started looking for CDNs that could do the job.

I found Cloudinary and was impressed that they had a free option with a good amount of storage and bandwidth available.

After readying the images, I installed the Python library, set the environment variables it needs, and began to code up a solution:

export CLOUDINARY_URL=<cloudinary URL from your account config>
export CLOUDINARY_API_KEY=<your Cloudinary API key>
export CLOUDINARY_API_SECRET=<your Cloudinary API secret>
export CLOUDINARY_DOMAIN=http://res.cloudinary.com/
export CLOUDINARY_CLOUD_NAME=<your Cloudinary cloud name>

Then I could import the upload helper from the library:

from cloudinary.uploader import upload

Once the environment variables were set and the library was imported, all I had to do was call the upload method and I was in business.

response = upload(
    your_external_image_url,                               # URL of the source image
    public_id="your-folder/%s" % your_cloudinary_file_id,  # target path in Cloudinary
    use_filename=True,      # keep the original file name
    unique_filename=False,  # don't append random characters to it
)
version = response['version']  # save this; it's needed to build the delivery URL

The only thing I needed to make sure to store somewhere was the response['version']. This response field is how I’ll reference the uploaded file. Here is the final link I used:

https://res.cloudinary.com/<CLOUDINARY_CLOUD_NAME>/image/upload/v<response['version']>/your-folder/<your_cloudinary_file_id>.jpg

All in all, I now have thousands and thousands of images available via CDN.

On Discovering GraphQL

Mon Jun 10 2019

While building this blog, I was really impressed with the simplicity of GraphQL, and its ability to quickly query any sort of relational database, given the right Gatsby plugin.

Imagine there is some kind of database with a couple of generic tables, each with some generic columns. Instead of having to write some kind of adapter to get DB data into your view, Gatsby along with GraphQL makes it super simple to keep the queries and data in one place.

If you wanted to query two tables for their data, all you would have to do is define the outline of what you want in GraphQL’s JSON-like query syntax, and you’re all good to go!

query IndexQuery {
    rootNode {
        childNode {
            childNodeAttr
        }
    }
    allAnotherRootNode {
        childNodes {
            anotherChildNode {
                childNodeAttr
                anotherChildNodeAttr
            }
        }
    }
}

If you’re using JavaScript, all you need to do to use the data is write something like this:

allAnotherRootNode.childNodes.forEach((node) => {
    console.log(node)
})
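In Gatsby specifically, the wiring looks roughly like this; a sketch reusing the placeholder names from the query above, where the exported query runs at build time and its result arrives as the page component’s data prop:

import React from 'react';
import { graphql } from 'gatsby';

// Gatsby runs the exported page query at build time and injects
// the result into the component as the `data` prop
export default function IndexPage({ data }) {
  data.allAnotherRootNode.childNodes.forEach((node) => {
    console.log(node);
  });
  return <pre>{JSON.stringify(data.rootNode, null, 2)}</pre>;
}

export const query = graphql`
  query IndexQuery {
    rootNode {
      childNode {
        childNodeAttr
      }
    }
    allAnotherRootNode {
      childNodes {
        anotherChildNode {
          childNodeAttr
        }
      }
    }
  }
`;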

There is a little more to it, but I just wanted to touch on what I’ve covered while writing a couple of super simple blog posts.

Really good stuff here, more to come later. Stay tuned!

TCPDUMP to Postgres

Tue Apr 30 2019

Today I was trying to debug what my OrmLite queries were doing, but didn’t know how to print out the equivalent SQL being generated.

I needed to find a way to see this in real time as the queries were coming through.

After some quick consultation, a colleague recommended I tcpdump the postgresql traffic, but there wasn’t time to elaborate further as they were busy with some other pressing issues. So I turned to Google and went searching myself, armed with the basics of what to look for.

My first search turned up a way to capture the raw traffic. This was sort of helpful, but the output was HEX/ASCII, which I don’t know how to read; I needed a little bit more textual output. Refining the search turned up something much more helpful. I had to read some of the comments, but eventually came to the conclusion to use the following:

sudo tcpdump -i any -s 0 -l -w - dst port postgresql | strings | perl -e '
while(<>) { chomp; next if /^[^ ]+[ ]*$/;
    if(/^(SELECT|UPDATE|DELETE|INSERT|SET|COMMIT|ROLLBACK|CREATE|DROP|ALTER|CALL)/i)
    {
        if (defined $q) { print "$q\n"; }
        $q=$_;
    } else {
        $_ =~ s/^[ \t]+//; $q.=" $_";
    }
}'

To unpack that pipeline: tcpdump -w - streams the raw packets to stdout, strings pulls out the printable text, and the Perl script glues wrapped lines back onto the SQL statement they started with before printing each one. Running it produced:

SELECT min(checkpointnumber) from commits
SELECT max(checkpointnumber) from commits DISCARD ALL
SELECT "bucket_id", "stream_id", "stream_revision"  FROM "todo_projection" WHERE (1=0) OR ((("bucket_id" = $1) AND ("stream_id" = $2)) AND (("stream_revision" >= $3) OR ("stream_revision" < $4))) OR ((("bucket_id" = $5) AND ("stream_id" = $6)) AND (("stream_revision" >= $7) OR ("stream_revision" < $8))) OR ((("bucket_id" = $9) AND ("stream_id" = $10)) AND (("stream_revision" >= $11) OR ("stream_revision" < $12))) OR ((("bucket_id" = $13) AND ("stream_id" = $14)) AND (("stream_revision" >= $15) OR ("stream_revision" < $16))) OR ((("bucket_id" = $17) AND ("stream_id" = $18)) AND (("stream_revision" >= $19) OR ("stream_revision" < $20))) OR ((("bucket_id" = $21) AND ("stream_id" = $22)) AND (("stream_revision" >= $23) OR ("stream_revision" < $24))) OR ((("bucket_id" = $25) AND ("stream_id" = $26)) AND (("stream_revision" >= $27) OR ("stream_revision" < $28))) DISCARD ALL
SELECT min(checkpointnumber) from commits
SELECT max(checkpointnumber) from commits DISCARD ALL
SELECT "bucket_id", "stream_id", "stream_revision"  FROM "rdp" WHERE (1=0) OR ((("bucket_id" = $1) AND ("stream_id" = $2)) AND (("stream_revision" >= $3) OR ("stream_revision" < $4))) DISCARD ALL
SELECT min(checkpointnumber) from commits
SELECT max(checkpointnumber) from commits DISCARD ALL
SELECT min(checkpointnumber) from commits
SELECT max(checkpointnumber) from commits DISCARD ALL bucketid, streamid, streamidoriginal, checkpointnumber, streamrevision, commitstamp FROM commits WHERE bucketid = $1 AND checkpointnumber BETWEEN $2 AND $3 DISCARD ALL bucketid, streamid, streamidoriginal, checkpointnumber, streamrevision, commitstamp FROM commits WHERE bucketid = $1 AND checkpointnumber BETWEEN $2 AND $3 DISCARD ALL bucketid, streamid, streamidoriginal, checkpointnumber, streamrevision, commitstamp FROM commits WHERE bucketid = $1 AND checkpointnumber BETWEEN $2 AND $3 DISCARD ALL bucketid, streamid, streamidoriginal, checkpointnumber, streamrevision, commitstamp FROM commits WHERE bucketid = $1 AND checkpointnumber BETWEEN $2 AND $3 DISCARD ALL bucketid, streamid, streamidoriginal, checkpointnumber, streamrevision, commitstamp FROM commits WHERE bucketid = $1 AND checkpointnumber BETWEEN $2 AND $3 DISCARD ALL bucketid, streamid, streamidoriginal, checkpointnumber, streamrevision, commitstamp FROM commits WHERE bucketid = $1 AND checkpointnumber BETWEEN $2 AND $3 DISCARD ALL bucketid, streamid, streamidoriginal, checkpointnumber, streamrevision, commitstamp FROM commits WHERE bucketid = $1 AND checkpointnumber BETWEEN $2 AND $3 DISCARD ALL
SELECT min(checkpointnumber) from commits

Success! I had never imagined using tcpdump to get this data. I just assumed I would have to use a SQL profiler or some other obnoxious application.