Week 23 of 2020

World-Wide Meltdown / Pandoc Markdown / Knowledge Bases FTW / Producing Real Value / Dumb-Shit Documentation Sites / Fucking Stupid Likes and Dislikes / Back to 4-Space Tabs / Some People Just Don’t Have What It Takes and Don’t Care / Rust is Squeezing Out Go

Sunday, June 14, 2020, 1:21:44PM

Everyone knows how much I love Go. But something interesting has been happening since fully adopting Bash concurrency and learning a bit of Rust. Rust and Bash are squeezing out my personal need for Go.

Let me explain.

In the old days you had shell and C. That was it. C was even called a “high-level” language by Kernighan at the time. Today I’m finding the equivalent to be Bash and Rust. Go replaces Python. But the space occupied by Python is also becoming smaller, at least for me.

I’m not doing a lot of cross-platform general purpose and machine learning programming. I’m not even doing a lot of terminal UI program development or back-end web services development. These are Go’s sweet spots.

Bash has completely taken over almost everything I would ever use Python and Perl for (and I never did use Node, thank God). In fact, now that I am more fluent with coproc and & for backgrounding processes efficiently, Bash has taken over a large part of what I would have grabbed Go for before.
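For anyone wondering what I mean by coproc, here is a minimal sketch: a long-running worker kept alive as a coprocess, with the main script feeding it lines and reading answers back over the two file descriptors Bash sets up. The worker and the send function are mine, just for illustration.

```shell
#!/usr/bin/env bash
# Keep a long-running worker alive as a coprocess. Bash exposes its
# stdin as ${UPPER[1]} and its stdout as ${UPPER[0]}.
coproc UPPER {
    while IFS= read -r line; do
        printf '%s\n' "${line^^}"   # uppercase each incoming line
    done
}

# Write a request to the coprocess and read its reply into REPLY.
# Note: coproc file descriptors are not available in subshells, so
# capture the result in a variable rather than with $(...).
send() {
    printf '%s\n' "$1" >&"${UPPER[1]}"
    IFS= read -r REPLY <&"${UPPER[0]}"
}

send "hello coproc"
printf '%s\n' "$REPLY"
```

The same pattern keeps something like bc or a pandoc filter warm instead of paying process startup cost on every single call.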

I also have been realizing just how bad Go is for beginners specifically learning concurrency. I used to think Go was great for beginners because of its simplicity. But that is exactly why it is dangerous. Go does not protect beginners in any way from writing unsafe concurrent code. In fact, unsafe code is even more likely because goroutines make it so easy for beginners to write concurrent code. Beginners never learn the safeties built into the toolchain to check for race conditions and such. It is never covered in any material — if you can even find any up-to-date material at all. That’s bad.

Meanwhile, despite the complexity of Rust concurrency and the general syntax that I’ve been railing on because it is so hard for beginners to get their heads around, when a beginner finally does learn concurrency in Rust they get safety automatically. They have to forcibly break the safeties already in place. In fact, Rust is a much safer language all around to learn, far safer than Go, even though Rust syntax is wildly more complex than Go’s.

Rust clearly replaces C and C++ for Unix philosophy compliant commands. Rust is a very good candidate to rewrite all the old busted boomer GNU code. Go isn’t. In fact, it is a wish of mine that someone would write a 100% compatible Bash shell clone under a permissive license in Rust.

Then there’s the what-would-I-require approach.

What if I had to run a company and my very life depended on hiring a team able to produce safe, robust code quickly that would remain completely sustainable over time? Would I want to hire a bunch of Go programmers who possibly came from the Node and Python communities? Or would I want the 10x developers who understand Rust and why it is so significant, like Bryan Cantrill?

Picking Rust means that rather than having to vet a potential Go programmer by asking all kinds of questions about compiling with the profiler and race condition checks, I can simply ask to see an example of a candidate’s concurrent Rust code instead. It is much harder to identify a Go programmer experienced with safe concurrent programming than a Rust programmer of equal skill.

Clearly, if my life depended on it, I would want Rust developers over all of the others even if I had to spend 10x the effort to find them. If I made a bad hire I’d still be happy with the fact that it is nearly impossible to write unsafe code in Rust. In other words, Rust covers me on two fronts: (a) only really well-informed and naturally good programmers even learn Rust, and (b) beginning Rust programmers are more likely to produce good, strong code that will stand the test of time.

No wonder it seems like all the good Rust developer jobs are in Germany.

Rust makes it easier to filter out those who just aren’t safe programmers. If my life and company depended on it, the value proposition of the Rust language is much more compelling. It just depends on if Rust truly delivers on the promise of safe concurrency. That’s where I need to focus my research.

What I’m saying, I think, is that even though I’ve barely coded anything of any significance in Rust I now see why the counter-intuitive notion that Rust is better for beginners despite its complexity might, in fact, be objectively true. This compels me to fully master Rust by writing a few very significant projects in it — especially things that benefit from Rust’s strengths: small run-time, no garbage collection, memory safety, the pest PEG parsing library (https://pest.rs), and raw speed.

Therefore, my plan is to entirely complete and grow kn into the primary utility and keep it in Bash. Then I will supplement it modularly by providing additional commands, such as a Pandoc-light parser in Rust, that can be used independently in the Unix-philosophy way and integrated into kn through regular command calls, just like I do with pandoc now.

Sunday, June 14, 2020, 1:01:33PM

Here are my most common needs when it comes to tools and languages:

  1. Stuff to help me automate and simplify life on the command line
  2. Stuff to create highly efficient, Unix-philosophy compliant commands
  3. Stuff to make front-end web sites and apps
  4. Stuff to make back-end web services

I imagine myself looking into the toolbox when I’m about to work on something.

When it comes to getting shit done there is nothing faster than Bash. I have objectively demonstrated that to myself over and over again. The twitch tool, my kn utility, and my repo tools for GitHub and GitLab have been written in Bash and will never be rewritten in any other language. There just is no need. Now that I’ve discovered coproc and started using & to background processes more I simply do not need a complicated, strictly typed language to achieve maximum concurrency as fast as possible — both in terms of execution and development speed.
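A sketch of the & pattern I mean, with sleep standing in for real work; the work function is a made-up placeholder:

```shell
#!/usr/bin/env bash
# Fan three slow jobs out with & and block on all of them with wait.
# `work` is a stand-in for anything slow (curl, pandoc, git clone).
work() {
    sleep "$1"
    echo "job $2 finished"
}

start=$SECONDS
work 1 a & work 1 b & work 1 c &
wait    # returns once every background job has exited
echo "all three done in $((SECONDS - start))s"   # ~1s total, not 3
```

Three one-second jobs finish in roughly one second of wall time because they all run at once.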

Sunday, June 14, 2020, 12:45:17PM

So I have to admit Rust syntax and style is really bork, but I’m definitely okay with it. It did cause me to reconsider my fetish for 2-space tabs that I took to some two years back after doing a lot of JavaScript development. Now that Linus Torvalds and everyone else agree 80 characters is too limiting given modern terminal widths, all the arguments for 2-space really fall apart — especially since four spaces has been the standard for most languages for more than two decades. It is one of the few things that Python, PHP, Perl and shell coders generally all agree on. Plus it is just so much more readable — especially when disabling syntax highlighting of any kind.

So I have been slowly but surely changing all my code and writing to include the new 4-space standard. It’s amazing how much something so seemingly trivial can produce so much work.

I feel like I just can’t get my feet under me lately with so much going on. When I do run into old stuff, like my first knowledge base back in 2013, I realize how much progress I have made. The Knowledge Management Utility (kn) that I have been working with during all of this is so ideal because it just gets so much done so quickly. There is seriously nothing faster than prototyping in Bash. Bash blows away my best productivity with Python, Perl, Go and any other language I’ve used in the past.

It is really such a shame that more people don’t understand this fact because they haven’t been exposed to it. But I’m doing everything in my power to change that and shoot our collective productivity forward for those with the good sense to truly consider and understand why. Just like in any age, most people will continue to simply not understand, but it warms my heart to encounter someone like @gamozo and be like, “Woah, this dude knows what I’m talking about.”

In fact, I find that people in the pentesting and security field almost automatically understand where I’m coming from. That is enough consolation for me since the 37% annual increase in demand for the fastest-growing tech career in the world is specifically full of those people. We are all just hackers at heart even if we are doing the tools engineering and not the pentesting. All the other Java and C++ developers can just play with their toys and leave us alone. I won’t say it is a cliche, but it’s a cliche. You either get it and are one of the cool kids or you don’t and frankly don’t belong. No need to be mean, just realistic. A lot of people just don’t have what it takes to truly master the terminal, including Bash scripting and prototyping. Most don’t have the commitment and ability to even master touch typing, let alone terminal skills. The world will always be filled with such people. They aren’t bad people or stupid people. They are just not hacker material and never will be. The end.

Sunday, June 14, 2020, 10:44:55AM

Today I opened up my Basic Markdown video to find a “dislike” for no apparent reason. Humanity has invented very few things as fucking stupid as “likes” and “dislikes”. The Black Mirror episode about it doesn’t even come close to capturing it.

Someone can be a complete fucking moron and simply dislike something, voting it down. Then another dumb ass can upvote something else. You have no idea about the spineless cowards who refuse to take two minutes to justify their position.

The only reason likes and dislikes exist is so that the service providing them can suck in more shallow participants, ad revenue, and money. In other words, once again marketing and ad revenue are destroying authenticity and intelligent dialog.

Saturday, June 13, 2020, 11:34:18AM

Feels good getting all those Types captured. Now time to rip apart the entire “studio” and reorganize the house. It’s nice to have the weekend “off” but I’ve just filled it up again with other shit to do.

Saturday, June 13, 2020, 10:23:23AM

The list of knowledge node Types is shaping up nicely. It’s evolved a bit from earlier versions which did not allow nodes to be contained within others. This directory organization — along with the constraints on node directory IDs — has made for several improvements that were not possible before.

Paragraph Just a single paragraph with Pandoc Markdown emphasis.

Unordered Simple, one-level list containing markdown that will be rendered by looping through and loading the template partial specified for the node. By default each item in the list is a simple unordered list item with a bullet.

Ordered Simple, numbered, one-level list containing markdown that will be rendered by looping through and loading the template partial specified for the node. By default each item in the list is a simple numbered list item. Numbered means numbered, not lettered.

Blockquote Same as Pandoc Markdown.

Verbatim Same as Pandoc Markdown.

Code File contains only code in a given language.

Article (Default) Akin to a “page”, “document”, magazine article, blog post, definition, encyclopedia entry and such.

Spec A specification can be as informal as a challenge to test skills doing different programming tasks or as formal as a corporate project RFC. Stories contains the user stories: sentences or paragraphs describing examples of how stuff will work. Stories are usually no more than three lines of text. More formal specs can include Reqs lists to fulfill the full INVEST characteristics of a good spec. Think of how a highly skilled team of professional hackers would outline an op, or what a corporate client might want built into an application. Both are Specs with Stories and Reqs describing the outcome at each step along the way to success.

HowTo Step-by-step walkthrough of how to execute a specific or generic task, often outlined in a Spec node. Each step must have a summary of the step, followed by a detailed explanation and demonstration of how to do it. Use header levels to group and implicitly number the steps. There are no special YAML traits. Usually a single HowTo contains several Prereqs to help avoid maintaining an excessively hyperlinked body. Adding the Spoilers boolean will progressively reveal all headings and paragraphs incrementally.

Log Chronological listing where each second-level header is a time stamp, the format of which is defined for the knowledge base or individually in the YAML for a given node. Order is irrelevant since that is a matter of sorting for the consumer of the data, which can easily be altered through DOM manipulation with JavaScript, for example.

Collection A list of maps specified in a Schema header using YAML !! notation for the basic types float, str, list, map, bool, timestamp, set, binary, etc.
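To make the Schema idea concrete, here is a hypothetical Collection node header using the standard YAML !! type tags; the field names and their types are invented for illustration (note that the prose’s “list” and “map” correspond to YAML’s own !!seq and !!map tags):

```yaml
Type: Collection
Schema:
  name: !!str
  rating: !!float
  released: !!timestamp
  maintained: !!bool
  tags: !!seq
  checksums: !!map
```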

ParaList Similar to an Unordered list but each item is a simple paragraph with an initial topical sentence that is usually bolded.

NumParaList Same as ParaList but numbered. Numbered means numbered, not lettered.

Table Simple table where Fields are specified and each record in the list is also a list. Generally a Collection is preferred for its superior ability to capture complex data structures.

Image Single image.

Video§ Single video reference. Usually video will not be stored locally. If it is, the rendered node will contain an embedded player for the video. If the URL is external and Internet access is detected the video should be embedded using the default video partial or one for the specific node if available. A thumbnail.png matching one full frame of the video resolution should always be included. Any additional Markdown can be included in the body of the node to annotate the video. Nothing should refer to any specific video service; use minutes and seconds notation instead to allow the viewer to find the location in the video themselves. Can be combined with other node types and will render differently depending on the combined type. When combined, the Titles are identical.

Audio§ Essentially the same as video but without a thumbnail.png. Can be combined with other node types and will render differently depending on the combined type. When combined, the Titles are identical.

ImageMap§ An image with clickable regions that link to internal nodes and external resources. Rendered as a map in HTML.

Diagram§ Any large image of any type that can be displayed inline as well as providing a link to a file containing the diagram. Usually contains annotations as well.

Slides Follows the specific Pandoc slides format conventions.

Chart§ Any of a number of chart format types. A fallback chart.{png,jpg,svg} must be provided but rendering formats can progressively replace it with interactive chart renderings of any kind.

Quiz Simple YAML list of Qs and As where the questions are single paragraphs of Markdown of reasonable size, which can contain images, and the answers are regular expressions for every possible accepted answer. Renderers can progressively allow the answers to be typed in and validated in real time. There will never be multiple choice of any kind, ever. Just a list of human-readable acceptable Answers that match the regex, which can be revealed with a click or tap on mobile devices.
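A hypothetical Quiz node body showing the Q-and-A shape described above; the property names and the regexes are mine, not a settled spec:

```yaml
Type: Quiz
Questions:
  - Q: Which `grep` flag makes matching case-insensitive?
    A:
      - '^-i$'
      - '^--ignore-case$'
  - Q: What does `wait` do in a Bash script?
    A:
      - '[Bb]locks? until (all )?background jobs? (have )?(finished|exited)'
```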

Outline An automatically generated node that recursively examines all the nodes listed by ID in the Outline property and creates a plain indented outline by title only.

Bookmarks An automatically generated node that reads the Titles of all the listed nodes in the Bookmarks property and links them to their IDs and URLs. Can include both knowledge nodes and external Web URLs.

Catalog An automatically generated node that reads all YAML properties from all of the listed nodes within Catalog making the data within them available to the template renderer. Rendering depends completely on the target rendering format.

Redirect A simple redirect to another page. The Redirect points to the new location. These are picked up by renders and collected into whatever form of redirection functionality is available for the rendered format. For example, an HTML renderer gathers them into a _redirects file.

Aggregate An aggregate of knowledge nodes into a single node containing all the content from the aggregate nodes in the order listed in Aggregate. Useful for composing books, articles, courses, and such from existing nodes. Nodes listed in Aggregate may be local or remote. If remote a full URI is required and the remote node must fully comply with knowledge node specifications, which are mostly just to have a README.md file contained within the URI and that only local dependencies marked with ./ will be pulled over.

Latest An automatically generated node that recursively examines all the nodes within itself for those with the latest changes and displays them. Ignores and other criteria are passed to the Latest property.

I’m really glad to see the notion of optionally inlined header includes again. That was a part of the original Essential Web design in 2014.

Saturday, June 13, 2020, 10:05:42AM

Feeling really triggered by something as simple as an unnecessarily complicated URL like those on the completely brain-dead documentation site https://readthedocs.org. Here’s an example, the URL to their “Getting Started” page:


Yeah right, like anyone is ever going to commit that to memory. How fucking stupid do you have to be to build a system that results in URLs that span more than a screen of text?!

You know what this should have been?


But that is too much to expect from a bunch of Python-loving, Sphinx-pushing non-writers. Perhaps they think that URL will do more for their SEO (like I once did). It doesn’t.

Sphinx is so fucking stupid! Oh my God, I’m so triggered.

You can tell engineers made it and not anyone who actually has to write for a living. You know how stupid their designs are just by noticing their focus on Python and reStructuredText. Hell, they don’t even use standard Markdown nor even seem to know what Pandoc is.

This triggers me because it is — once again — an example of people designing things without even imagining how the shit is going to be used. People completely put out of their mind the person using what they are designing.

Don’t get mad, get busy, Rob. Don’t get mad, get busy.

What we need is not yet another application or site, we need some standards and best practices for the content itself first and then any application can be layered on top of that.

Wednesday, June 10, 2020, 7:17:01PM

I’m revisiting the language and terms I use a lot lately. One term that really holds up is challenge. People really respond to it. It’s essentially the same thing as an exercise but invokes a lot more motivation and fun.

Another place that we see challenge-based learning really skyrocketing is in the CTF and security space. Essentially every “wargame” is a challenge. The only difference is that there isn’t something to find, no prize to unlock or discover. But maybe there should be?

The word challenge also can replace the word project where a specification is written out and the challenge is complete when the project matches the specifications.

After considering the boring term howto there really is no comparison. The howto is simply the walkthrough, the recipe, the solution.

I’m trying to reduce the complexity of terms and number of menus on RWX.GG and think I’ve got them now:

Welcome How to join our community and what we are about, how to connect.
Questions Dos, don’ts, who, what, when, where, why?
Boosts Combinations of Tools, Challenges, Definitions, and Articles
Challenges Projects small and large, howto, walkthroughs.
Latest Automatically listed recently created, updated, or modified content. Also news.

Each of these is actually just an index.

Boosts are the trickiest to categorize.

Wednesday, June 10, 2020, 6:52:35PM

Made the mistake of clicking on my stream stats. My high point was 125 subs. I’m down to 67.

It actually just makes me laugh. It shows how fickle people are and how much they think that somehow by buying a book or subbing to a streamer that all their magical learning problems will go away and they’ll be able to suddenly get great jobs as coders or hackers.

So it’s back to streaming as usual. Eventually I’ll add the schedule back. But I’m not ready yet. I’m on the verge of committing to never do a video destined for YouTube until I have the entire write up complete so that I can read from it and people can follow along, no more summaries and overviews.

I really do love the people who have been tuning in and trying their best — especially those expressing their support for the herculean effort it takes to make all of this available.

That is why I stopped the boost. People were dropping like flies. They couldn’t keep up. I said from the beginning it would require an eight-hour-per-day commitment. But that was just too much. I knew it going into it, but even the fastest among the group couldn’t keep up. These are smart and dedicated people, most of whom have no idea how to learn and take responsibility for their own learning.

The answer is the same as it always was: modular content that they can consume when they want and need at their own pace and a community there to help them with motivation and answers. That is what I will continue to provide.

But I do laugh because as usual, the right way to do something is the opposite of the way that gets the most attention and money. I never did the boost to get any money. I always wanted a way to get feedback on preparation of materials for everyone. It’s a good thing too, because as soon as I made the change everyone just stopped coming.

Then to make matters “worse” (if my priorities were actually on my success as a streamer according to Twitch statistics) I went and started OverTheShoulder streaming again which is when I don’t respond at all to anyone in the chat and just live stream what I’m writing and doing. Only a hearty few really care for such things — especially when it involves just thinking through things like knowledge management and not showing everyone how to hack or make millions in the latest fad language.

I suppose the thing that saddens me the most is how frequently I encounter people unwilling — or worse — unable to simply communicate in written form at all. So much of the value I provide is by writing shit down that no one yet has and working on sustainable ways of keeping it up and maintained. Sometimes it seems that only a very few actually really appreciate that side of the work.

Wednesday, June 10, 2020, 6:10:33PM

The last few days the news has been filled with the gory scene of a 75-year-old activist, doing nothing but standing, being pushed to brain death as he fell on his head and bled out on the sidewalk in front of his city’s town hall. Our shit-for-brains, legally certified insane non-president said essentially that it was his fault for being there and a bunch of other Republicans agreed.

This is why I don’t watch this shit very much despite how important it is. Watching that video is bad, but watching the unapologetic cops that did this be released to cheers from a full crowd of cops and other supporters is enough to seriously activate anyone with a heart. Empathy is dead.

Wednesday, June 10, 2020, 4:19:50PM

So hard going through knowledge base migrations. People want the old content because they don’t know something new is available and sometimes the new stuff has bugs and broken links. I’d go so far as to say that maintaining a knowledge base is way more difficult than maintaining any open source project.

Wednesday, June 10, 2020, 2:52:11PM

The fundamental issue with all of this knowledge is the inability to audit it. If it were code or some type of system you would call it “technical debt” but we don’t have the same concept when it comes to our knowledge systems.

I think the fix might have something to do with purposefully breaking things, meaning that if something does not get an update it automatically disappears thus forcing content maintainers to receive notifications that stuff is explicitly breaking.
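The “expire unless touched” idea is easy to prototype in Bash with find and file modification times; the directory layout, the 365-day threshold, and the stale function name are all assumptions of mine:

```shell
#!/usr/bin/env bash
# List knowledge nodes whose README.md has not been modified in over
# a year. An auditing cron job could hide these from rendering and
# notify maintainers that the content is about to "break".
stale() {
    find "${1:-.}" -name README.md -mtime +365 -print
}

# Example run against the current tree.
stale . | while IFS= read -r node; do
    echo "STALE: $node"
done
```

Hooking this to an Updated or ExpiresOn field in the YAML instead of raw mtimes would be the next refinement.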

What are the things that can be audited in a knowledge base?

“Information is relevant as long as THIS is true. ‘Great phone to get.’ –> ‘This phone is top 5 best phone.’”

“Deep learning” depends on “Machine learning”

Wednesday, June 10, 2020, 9:51:20AM

One of the simplest questions that has driven me nearly insane is what specifically to call what O’Reilly started calling “recipes” and others call “HowTos” and even “tutorials”. That last one is so phenomenally inaccurate yet popular that it actually makes my blood boil more than I’d like to admit. I have blogged about these definitions so much it all starts to glaze over. So here’s another blog post about the same thing.

Thankfully we actually have some solid terms for this stuff in the technical realm, it’s just that muggles have no idea what they mean without some explanation:

| Term | Definition |
|------|------------|
| Operation | Something done with parameters without specifying how it is accomplished. |
| Method | The way an operation is completed. |
| Procedure | Same as method but used more commonly when involving people. |
| Function | Takes optional input and returns output with no side-effects. Also job with people. |
| Task | Sometimes operation, others method or procedure. Usually time-bound. |
| Skill | The capability to successfully execute a specific task. |
| Ability | A capability that is generally more innate and much harder to learn. Superpower. |
| Job | All over the map. A specific task underway. Also person’s occupation. |
| Occupation | What a person does with their time usually to make money and fill a social need. |
| HowTo | Technically a method for people. Often associated with task. Requires skill. |
| Knowledge | The stuff that can be contained in human brain including skills and abilities. |

You can see that there is a lot of overlap between the terms — some of it very confusing. The review has been helpful, however. It seems clear that the term I should use on RWX.gg and in other knowledge bases is howto because it speaks so clearly to most people. “Oh, it’s ‘how to’ do something. Okay. *click*” It also makes the IDs read like English sentences (/tools/ssh/howto/catpub). In a pseudo-OOP form for the human “class” it might be Human.ssh.catpub() and if my specific instance of Human called the method rwxrob.ssh.catpub().

I won’t lie. Imagining what code would look like that described real-world procedures being done by instances of humans makes me smile. I’m just that weird. Later when we find out that we actually are just computers at some level it will be even more entertaining. But I seriously doubt any algorithms in the natural world have ever been coded as opposed to discovered through millions of small improvements as we see with machine learning today (which reminds me, I need to make time on one of my now free weekends to crack open the calculus of machine learning and really sink into it).

There’s another interesting fact that emerges. Human brains can be discussed and approached just like any other device that can be programmed. It’s just that the method of programming a human brain is radically different than programming modern digital computers. This fact can inform decisions about how to organize knowledge and design methods of learning. It is the reason that knowledge source code is an accurate description of any stored knowledge that a human brain can consume, the most fundamental of which is text.

The one thing that the 90s OOP insanity got right was the clear distinction between operations and methods much like Pascal and SQL got functions and procedures right.

Thankfully it is just lazy people that call these things by their incorrect names today because the languages allow them to be referred to correctly in Bash, Python, and JavaScript. You don’t have to use the brain-dead “function” keywords in those languages. I mean, IT’S NOT A FUCKING FUNCTION! Okay, calm down. Rust and Go jumped on to the stupid-train by forcing the use of fn and func for things that have nothing to do with functions. That was sheer laziness by the creators of the language syntax. Perl has sub which is ideal because, after all, that is what most of those things actually are. But God help you if you use the term subroutine today. “OKAY BOOMER,” is the response even though it is so much more accurate it’s actually sad. All these people who don’t know the difference are actually the problem. They cannot decide if their code in this block right here is actually a function or a subroutine/procedure and that is what is causing the problem. Who am I kidding? No one gives a shit. The massive amounts of absolute crap code that hackers are walking through and that is killing people in their cars still isn’t enough to make people care. Besides, the people being affected aren’t the coders writing it. It’s the poor souls being affected by it after the coder has cashed out and moved on to the next easy gig. That trend is only going to get worse as more and more people don’t even bother to vet the code at all.
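For the record, the Bash syntax I mean: the POSIX () form needs no keyword at all, which leaves you free to name the thing honestly as a procedure or a function depending on what it actually does. The example names here are mine.

```shell
#!/usr/bin/env bash
# Declared POSIX-style with () — no `function` keyword anywhere.

# This one honestly is a function: input in, output out, no side effects.
double() {
    echo $(( $1 * 2 ))
}

# This one is really a procedure/subroutine: it exists only for its
# side effect (writing a log line to stderr).
log_msg() {
    printf '%s %s\n' "$(date +%F)" "$1" >&2
}

double 21      # prints 42
log_msg "procedures have side effects; functions do not"
```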

Tuesday, June 9, 2020, 9:45:12PM

Realizing that I have to have a solid look at the data model for RWX before getting too much further into adding content to it. I’m up to some 200 or so nodes and the organization is starting to weigh a bit like it always does when I do a refactor. This time, however, it is rather obvious that YAML and partials are the answer.

There are a few specifics I have to work out for the common YAML meta data fields. I have a solid Category and Type grouping defined from before all this happened. But there are other state properties that need to come into play in order to allow them to be displayed in indices without a problem but also without providing any broken links. Here’s what I’m thinking so far:

Field Description
Title Most important and 100% relevant even if rhetorical.
Query Set to anything (true) to activate an external search query hyperlink for the Title.
Subtitle Snarky or clarifying addition. Not necessarily relevant. More rhetorical.
Category Specifies the topic of content. (ex: boost, course, howto, review)
Type Specifies the format of content. (ex: article (default), table, log, revlog, walkthrough, schedule, toc, index, list)
Summary Only requirement is that it be a single paragraph/line. Should cover everything from reading the whole thing including premise and conclusion with as much justification for the conclusion as possible. Reading the whole thing should be for those who want the details. This is the TLDR but better. In many cases the Summary might be all that is needed — particularly for definitions.
Created Optional date and time in ISO (without the T) when the original content was created. Particularly useful for articles.
Updated Last date and time in ISO of any official update or errata fix. Different than the Created time stamp.
Planned A rhetorical or ISO date or time span when this particular node can be expected to be complete and available for full reading and use. This field allows the framing of content in an organized way without creating broken links. While having a Summary available along with Planned is preferred it is not required.
ExpiresOn Exact date on which this content will definitely have expired if not at least checked for relevance. If omitted let the auditing tool decide on a default age based on the Updated, Created or last modified dates.
PreReqs Stuff that is good to know and strongly recommended before consuming this node. This node cannot exist without the other prerequisite nodes also existing and being up to date. Critical+ relationship. Not only will this node be omitted if a prereq is not available, but the prereqs should be consumed before this node.
DependsOn Creates a composition relationship with the node or external content referenced meaning that if the dependency changes or is removed that this node should be made immediately defunct until the dependency can be resolved or a new dependency can be established. This means, for example, that if a dependency returns a 404 during a link scan that the node is omitted entirely until the link is resolved. Critical relationship.
SeeAlso Set to a list of Markdown strings containing local or external links, references, or even just words referring to things the reader should look into. Be generous in adding entries — especially those that objectively and rationally take an opposing position to conclusions in the content — in order to encourage examining a breadth of input and data on the topic. Loose relationship.
Deprecated A text field explaining why the node is old enough to be ignored but leave it so that others understand that it is deprecated in case other external nodes have linked to it.
Contributors The Name and site ID of anyone who has contributed to this specific node. The ID is also the canonical link to /contrib/<ID>/ which any contributor can add and update for themselves at any time with a merge request. The Contributors of the root node (.) are those who have contributed to more than 20 specific nodes in such a way that being listed as a main or default contributor makes more sense.
Video The full URI to the accompanying video, be it on the Web or elsewhere. If this is set, any indices created will have a television emoji 📺 that links directly to the video, which makes more sense than is possible through any sort of video service playlist organization, even if one is in place as well.
Audio Same as Video but Audio. If Video can be consumed simply as Audio as well then just use Video instead. This distinction is more about the help application that will be used than the format itself. However, whenever possible all content should be consumable entirely as written text, with everything else being secondary unless absolutely required to convey the knowledge (review of music or video, for example). Always take a progressive approach to knowledge. First words, then supplement the words with images, audio, and video. In other words, prepare everything as if for someone with visual and hearing impairments. Aesthetic appeal is fine so long as it does not put the content out of reach for anyone unnecessarily.
TODO Stuff that remains to be done on this node. However, never leave a node with significant stuff on the TODO list unless you have set Planned as well to indicate that work is, in fact, planned for this node.
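Pulling the fields above together, a single node’s metadata block might look something like the following. This is purely a hypothetical sketch: only the field names come from the list above, while every value, path, and link is invented for illustration.

```yaml
---
# Hypothetical node metadata. Field names are from the list above;
# all values, paths, and links are invented for illustration.
Type: article
Summary: >
  Bash and Rust together now cover most of what Go and Python used
  to do for me, for the reasons argued in the body.
Created: 2020-06-14 13:21:44
Updated: 2020-06-14 13:21:44
Planned: late June 2020
PreReqs:
  - /bash/concurrency/
SeeAlso:
  - '[Concurrency in Go](https://example.com/go-concurrency)'
Contributors:
  - Jane Doe (jdoe)
TODO:
  - Add the Rust comparison table
---
```

Note that because the hypothetical TODO list has real work on it, Planned is set as well, per the rule above.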

Well that was more than I was planning on. But looks good. I’ll need to move it into the /contrib/guide/metadata/ at some point. Feels good to have it done though because now I don’t have to change the format and organization yet again. Having a more structured YAML-centric knowledge base is definitely the way to go.

One thing that occurs to me writing up the Planned explanation is that I can frame the content that needs to be written and allow very specific contributions from others in the community who may be willing to flesh out the rest of the node, which prompts me to add a Contributors list to each node.

I did notice some redundancy between hyperlinks in the Summary and entries in SeeAlso but I suppose that is fine since both fit on the same screen when editing. Generally I think SeeAlso will tend to have more in it since it will include opposing sources as well eventually.

Categories are going to be a little tricky, but probably worth it.

| Category | Description |
| -------- | ----------- |
| boost | Just to get beginners going. Not necessarily remedial content, but content that leaves a lot for the person to figure out on their own. Even something like the Join Us welcome node is a boost because it doesn’t get into the particulars of how to join Discord and such. If it did it would be a walkthrough instead. If it were just a bunch of marketing speak trying to convince people to join then it might be an article instead. (By the way, that stuff usually goes in the root/cover node.) |

Because of the nature of Pandoc templates there also needs to be a bunch of mechanical properties that have nothing to do with the metadata itself. Since these are all related specifically to Pandoc template generation (or whatever other rendering format or rules), they must begin with an underscore (_).

| Tag | Description |
| --- | ----------- |
| `_boost`, `_article`, etc. | Same as Category but required for simple Pandoc boolean templates. |
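For context, the template side of these booleans is just Pandoc’s `$if$` conditional. A minimal sketch of an HTML template fragment follows; the element wrappers are invented, and with the underscore convention above the tests would read `$if(_article)$`, assuming the template engine accepts the leading underscore:

```
$if(article)$
<article>
$body$
</article>
$endif$
$if(toc)$
<nav>
$table-of-contents$
</nav>
$endif$
```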

Tuesday, June 9, 2020, 9:44:32AM

I’m really glad I took some time to really dive into the new partials in Pandoc 2.9.2. It is rather obvious at this point that the Pandoc team is moving toward structured data documents with an emphasis on YAML. It’s amazing how like-minded people tend to arrive at the same conclusions when they are all working on the same problems, because those conclusions come from things they actually need. This was the exact reason that I created Hugonot and FADB back in the day. I was obsessed with TOML and thought that it was the best structured data format for such things. Clearly the right format is YAML. I especially believe that now that I understand YAML linking and types, which allow YAML to move deeply into the database space, not just configuration files. People hate on YAML for having significant whitespace and normally I would agree, but this isn’t a programming language, and readability and stylistic expression — without sacrificing consistency — is really the dominant priority. Most of the time when I encounter a YAML hater they have not even tried to use it once — typical.
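The “linking” being referred to is YAML’s anchor and alias mechanism, plus the merge key that most YAML 1.1 parsers support. A tiny sketch, with all field values invented:

```yaml
defaults: &defaults       # anchor: give this mapping a reusable name
  Type: article
  Categories: [boost]

node-one:
  <<: *defaults           # alias + merge key: inherit the anchored mapping
  Summary: Gets Type and Categories from defaults.

node-two:
  <<: *defaults
  Type: walkthrough       # local keys override merged ones
  Summary: Overrides Type but keeps Categories.
```

This is what pushes YAML past plain configuration files and toward something database-like.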

So I am facing a real dilemma but I think I know the answer already. I just have to write it down to process it. It might be more informative to recount how I arrived at it.

The first few knowledge bases I created for skilstak.io, the Essential Web, the Knowledge Net, and SOIL were all focused on content contained in Markdown. At first I forced myself to use only Basic Markdown, but even before that I was using something from 2009 that didn’t even allow long single lines as paragraphs. I would never have allowed “meta data” into the Markdown and thought that mixing the two was a violation of separation of concerns. I abhorred Jekyll for “front-matter” and for having forever ruined the cleanliness of pure Markdown. A lot of people don’t know that having YAML — or anything — at the front of a Markdown file was never part of original Markdown and still is not a part of CommonMark, even though Pandoc and others wisely allowed it. I now have enough experience to realize keeping the metadata with the Markdown is preferred because it allows the file to be migrated rather easily. In fact, YAML specifically allows other body text to come after the YAML section. So in a very real way Pandoc Markdown files are, in fact, YAML files with Markdown in them, which brings me to the main point.
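Concretely, a Pandoc Markdown file like the following reads as a YAML document first, with the Markdown body coming after the document-end marker (`...`, or a second `---` in Pandoc’s metadata-block syntax). The title and text here are invented:

```markdown
---
Title: Nodes are really YAML
Summary: The Markdown body follows the YAML document-end marker.
...

The body starts here, in Pandoc Markdown, after the metadata block.
```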

Knowledge bases are at least half structured YAML data and half free-form written content. But the more I factor out knowledge into automata that can be aggregated in any number of ways — in order to remove redundancies — the more I realize that even a traditional concept like chapters gives way, let alone the whole notion of a book. Instead, indexes that aggregate titles, links, and summaries make way more sense. Taken to its full form you get dynamically composable documents that either an author or a reader can organize for themselves in a cafeteria-style approach to knowledge consumption.

This isn’t hyperlinking but definitely builds on the idea. I abandoned self-populating dynamic documents with the Essential Web. But I find myself doing about the same things now as I create utilities for aggregating the knowledge. The simplest form is an index. A bit more and you get a long index with summaries. But it isn’t much further to load the whole knowledge node, statically or dynamically. This means that books, guides, and such can be compiled in different ways, much like source code.
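The index idea above can be sketched in a few lines of Bash: walk the node directories, pull a couple of fields out of each YAML header, and print one index line per node. Everything here, including the directory layout, file names, and field values, is invented for illustration:

```shell
#!/usr/bin/env bash
# Sketch: build the simplest aggregation, an index, by pulling the
# Title and Summary fields out of each node's YAML header.
# All paths, names, and values below are hypothetical.
set -e

kb=$(mktemp -d)
mkdir -p "$kb/bash" "$kb/rust"

cat > "$kb/bash/README.md" <<'EOF'
---
Title: Bash Concurrency
Summary: Using coproc and & to background processes.
---
Body text here.
EOF

cat > "$kb/rust/README.md" <<'EOF'
---
Title: Learning Rust
Summary: Safety first, even for beginners.
---
More body text.
EOF

# For each node, print one "Title: Summary" index line.
for f in "$kb"/*/README.md; do
  title=$(awk -F': ' '/^Title:/{print $2; exit}' "$f")
  summary=$(awk -F': ' '/^Summary:/{print $2; exit}' "$f")
  printf '%s: %s\n' "$title" "$summary"
done

rm -rf "$kb"
```

A longer index with full summaries, or a full load of every node, is the same loop pulling more fields.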

I am shaking my head at all the massive failures to accomplish similar things: ReadTheDocs, GitBook, wikis, and more. None of them successfully understood and acted on the fundamental concept of factoring knowledge into its most finite form and then aggregating it back without regard for any particular target format or rendering. All of them assume something about the final output.

I’m more than a little annoyed that humans haven’t figured this out already and created something to do this instead of my having to create it. But I take solace in the fact that the very brilliant people on the Pandoc project seem to have arrived closer to that conclusion than anyone else with Pandoc 2.8 and partials. This allows the finite knowledge form to be a YAML document with text fields that include Pandoc Markdown. Conclusion? YAML all the knowledge.

Seriously, JGM and the gang are just so fucking amazing on so many levels. They bring practicality to everything they do rather than just creating yet another over-engineered piece of shit like so many of the knowledge solutions out there that came from engineers (like reStructuredText, Gatsby, Hugo, even Jekyll). The Pandoc team has authors as their priority and never forgets who is using the product. The others seem to always target coders instead, which is why they continue to fail over and over again.