# Notes for Week 28 of 2020

## Sunday, July 19, 2020, 6:17:42PM

Getting the questions about why I’m not writing a book. I get that one a lot. The simple answer is that books take too much time, and knowledge bases are more effective if I can ever keep them updated enough. The challenge is getting your knowledge into others’ view. But the plan for Knowledge Net sharing and subs will cover that. I just have to fucking finish it!

## Sunday, July 19, 2020, 9:34:11AM

PEGn is really grabbing my attention. It’s becoming that perfect thing between ABNF, the original PEG (which has several substantial flaws in the “example” syntax), and the myriad PEG parsing engines out there — all of which suck at creating readable grammars. PEGn will boast the following when complete:

• Self-specifying PEGn grammar
• Most readable grammar specs on the planet
• Nearly identical semantics to original PEG “example”
• Semantic capitalization identifier naming conventions
• Full set of reserved classes and tokens
• Zero ambiguity semantics
• Full Unicode support

And eventually I plan on building the following tools for it as well:

• pegn - linter, validator, and code generator
• vim-pegn - vim plugin with PLS language server support

My code generator won’t clutter up the grammar itself with inline code (as cool as that is). Instead it will allow granular creation of the different language renderings. This is substantially better than anything else out there right now, because existing generators are all language specific, which destroys the usefulness and ubiquity of the grammar itself. Instead, pegn will support modular code generation, allowing different implementations of a rendered parser even in the same language. For example, say you want your grammar generated as interface-centric Go versus struct-centric Go. Or say you want to generate code that builds an AST, or other code that is focused entirely on handling parse events. There are so many different ways to implement a parser for different needs. The one flaw every PEG-to-code generator has right now is the inability to adapt to those needs, along with the fucking gawd-awful grammar specification files that result.
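To make the idea concrete, here is a minimal sketch of two renderings a modular generator might emit for the same grammar rule: one event-centric (the parser calls back into a handler and never builds a tree) and one AST-centric. All the names here (`Handler`, `Node`, `printHandler`) are illustrative assumptions, not pegn’s actual API.

```go
package main

import "fmt"

// Event-centric rendering: a generated parser calls back into a
// handler as it recognizes rules, never allocating a tree.
type Handler interface {
	EnterRule(name string)
	Text(s string)
	ExitRule(name string)
}

// AST-centric rendering: the generated parser allocates nodes instead.
type Node struct {
	Rule     string
	Value    string
	Children []*Node
}

// printHandler records events to demonstrate the event style.
type printHandler struct{ out []string }

func (p *printHandler) EnterRule(name string) { p.out = append(p.out, "enter "+name) }
func (p *printHandler) Text(s string)         { p.out = append(p.out, "text "+s) }
func (p *printHandler) ExitRule(name string)  { p.out = append(p.out, "exit "+name) }

func main() {
	// A generated parser for a rule like `Greeting <- "hello"` might
	// emit this sequence of calls in the event rendering...
	h := &printHandler{}
	h.EnterRule("Greeting")
	h.Text("hello")
	h.ExitRule("Greeting")
	fmt.Println(h.out)

	// ...or build this tree in the AST rendering.
	n := &Node{Rule: "Greeting", Children: []*Node{{Rule: "Text", Value: "hello"}}}
	fmt.Println(n.Rule, n.Children[0].Value)
}
```

Same grammar, two completely different consumers, which is exactly the flexibility a language-specific generator cannot offer.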

## Sunday, July 19, 2020, 7:39:00AM

Been really conflicted about when to use Go interfaces and when to use structs. I tend to be a one-thing-or-the-other kind of guy. Using interfaces gets you immense flexibility, while structs work better with marshaling and require far less code. I’ve decided to follow Goldmark’s lead and create both my parsers leaning on interfaces, even if that means a few accessors and mutators. I am probably too abused by Java to look at them rationally. They probably do have good uses sometimes.

## Saturday, July 18, 2020, 1:43:31PM

Got tinout moved over to https://gitlab.com/rwx.gg/tinout and push-mirroring to GitHub. I’ve decided nothing goes into rwx.gg that isn’t at least version 1.0 or higher. I want to have someplace where people can go and be reasonably sure that stuff will be usable.

## Saturday, July 18, 2020, 11:56:35AM

Having writer’s remorse over writing that slam on tags and structs. As usual the truth is in between them. In fact, I love github.com/ghodss/yaml (and so does the Docker project) for parsing YAML into structs with the least amount of hassle — when structs make sense.

I’ve been really second-guessing my decision to move to interfaces for all the knowledge package stuff. After all, these things are just static data. I’ll move to the struct approach for the AST from Ezmark before I make a final decision on the kn stuff.

## Saturday, July 18, 2020, 11:15:32AM

After facing the quirks of JSON and YAML tagging yet again I went ahead and wrote Golang YAML/JSON Tags Actually Suck.

## Friday, July 17, 2020, 6:51:31PM

Cloudflare just went down, reportedly because of a “bad router rule” on a server in Atlanta taking out 1.1.1.1. The number of people depending on that one central DNS provider is proof of how stupid people are. The entire point of DNS was to allow distributed DNS providers rather than have everyone depend on a single service. It really revealed how stupid some companies are. GitLab was one of them. After seriously fighting with GitLab’s brain-dead flavored Markdown — despite their claim to be moving to CommonMark — I’ve been gathering up reasons to give GitHub another look. But, honestly, I’m kinda tired of depending on any centralized service at all at this point.

## Friday, July 17, 2020, 8:13:13AM

Another amazing and unexpected advantage of writing in PEG is that you can specify ordered priority, so that things more likely to occur in a language are examined first. No other specification language has let an author communicate this. It also surfaces the difficulties when a syntax would be easier to parse out of its preferred position. For example, Text is far more frequent than Tex, but checking for Tex lexically is easier: you look for $ and know you have it right away, instead of maintaining the priority and checking that $ is not present so you can continue with the Text parsing. This does cause a bit of redundancy in the parsing engine, because to check for Text I have to rule out $, and then later have to check for $ again to make sure I have a Tex inline. The cost is easily worth it, though, given all the code that would have to be evaluated otherwise if Text were left as the plain option at the end of the list.
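A tiny sketch of that tradeoff in code. Grammar priority says try Text first, but lexically it is cheaper to peek for `$` (the Tex case) and fall through to Text otherwise, with the redundant `$` check inside the Text branch. The function and the `$...$` delimiting are illustrative, not Ezmark’s final syntax.

```go
package main

import (
	"fmt"
	"strings"
)

// parseInline recognizes a leading Tex inline ($...$) before falling
// back to Text, even though Text has grammar priority.
func parseInline(in string) (kind, val string) {
	// Cheap peek for the Tex case first.
	if strings.HasPrefix(in, "$") {
		if end := strings.Index(in[1:], "$"); end >= 0 {
			return "Tex", in[1 : end+1]
		}
	}
	// Text runs until the next '$' (or end of input): the redundant
	// '$' check described above.
	if i := strings.IndexByte(in, '$'); i >= 0 {
		return "Text", in[:i]
	}
	return "Text", in
}

func main() {
	k, v := parseInline("$x^2$ rest")
	fmt.Println(k, v) // Tex x^2
	k, v = parseInline("plain words")
	fmt.Println(k, v) // Text plain words
}
```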

## Friday, July 17, 2020, 7:52:01AM

I cannot overstate how amazing PEG positive and negative lookahead and lookbehind are for specifying language grammars. They allow specifications to directly communicate the code that needs to be written, including some idea of how much memory will be needed for any lookahead specified by the grammar, as well as how many previous states will need to be saved (memoized) to assert any lookbehind.

This has been particularly useful when dealing with sets that can include other sets except for one specific thing. This is impossible to capture in EBNF or ABNF without resorting to “rhetorical” specification prose.

Here’s an example: Markdown inlines. Often one inline can contain most of the others. In PEG you simply negate the inlines another inline cannot contain (rather than explicitly rewriting every one of them).

```
Inline <- Text / Quote / Emph / Link / Pre
Quote  <- (!Quote Inline)+
```
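That `!Quote` negative lookahead translates almost mechanically to code: before consuming each inline, peek and stop the match if a nested Quote would start there. A minimal sketch, with illustrative types:

```go
package main

import "fmt"

type inline struct{ kind string }

// parseQuoteBody implements `Quote <- (!Quote Inline)+` over an
// already-lexed slice: consume inlines until the negative lookahead
// (a nested Quote) fails the match, and require at least one (the +).
func parseQuoteBody(inlines []inline) ([]inline, bool) {
	var body []inline
	for _, in := range inlines {
		if in.kind == "Quote" { // the !Quote negative lookahead
			break
		}
		body = append(body, in)
	}
	return body, len(body) > 0
}

func main() {
	ins := []inline{{"Text"}, {"Emph"}, {"Quote"}, {"Text"}}
	body, ok := parseQuoteBody(ins)
	fmt.Println(ok, len(body)) // true 2
}
```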

## Thursday, July 16, 2020, 7:53:08PM

Yet another reason not to use Zsh(it). It doesn’t even have variable name references. Zsh is such a script kiddy toy, just so much evidence of that now. I’m beyond trying to listen to people convince me otherwise. “Run along. I have work to do.”

## Thursday, July 16, 2020, 4:44:12PM

Finally got at least all the main pages on rwx.gg working again and put the old kn shell script back in play. It is so great for auditing.

## Thursday, July 16, 2020, 8:19:17AM

Playing around with a new morning routine. Been up since 6am today, yesterday 5:30. I have been naturally waking up earlier as I just write off the end of the day and go to bed around 10:30. They say you need less sleep as you age, but I don’t buy into that idea — especially if you are still regularly exercising. I’ve been running for an hour or more every day now for over a month. It’s been absolute bliss. I love yoga, but running on a good trail has always centered me mentally as much or more. I’m still planning on daily strenuous yoga asana again after I get my base health back.

Here’s my daily schedule lately:

| Hour  | Activity                                                   |
|-------|------------------------------------------------------------|
| 6:30a | Up / Resting Heart Rate / Coffee & Walnuts / Code / Crap   |
| 7     | Eat (Oatmeal, Protein, Coconut Oil, Coffee)                |
| 8     | Code / Write / Think                                       |
| 9     | Run                                                        |
| 10    | Eat (Protein and Avocado Toast or Pickle, Tomato Sandwich) |
| 10:30 | Stream / Teach                                             |
| 11    | Stream / Teach                                             |
| 12p   | Stream / Teach                                             |
| 1     | Eat / Relax / Coffee                                       |
| 2     | Code / Write (Live)                                        |
| 3     | Code / Write (Live)                                        |
| 4     | Eat / Mentor                                               |
| 5     | Mentor                                                     |
| 6     | Mentor                                                     |
| 7     | Eat / Mentor                                               |
| 8     | Mentor                                                     |
| 9     | Walk Dog                                                   |
| 10    | Eat / Relax                                                |
| 11    | Sleep                                                      |

My best brain power of the day is — without a doubt — in the morning. It is also when it is the most peaceful around here.

I’m going to do a better job writing this personal stuff down in case it might help other people heading into old age watching their bodies freak out in ways they could not have anticipated. Mine happens to be chronic inflammation for reasons I cannot explain. Here’s how I’ve started to beat it:

1. Slow-paced running about an hour a day away from people
2. Completely eliminating any refined sugar or processed food
3. Dropping meat and carb-dense food from diet
4. Eating rather small portions of things more often
5. Setting an alarm to eat every three hours or so
6. Focusing on positive things instead of stressful stuff
7. Wearing a mask even in the house during pollen season
8. Taking my Xyzal to keep from reacting to our dog

I don’t have Diabetes, but my family has a long history of both type I and type II, so I figure treating myself as if I could develop it eventually is just safe. So far my blood sugar and insulin response suggest I could be a bit subject to it.

I just read on a site run by the Diabetes association that one way to treat it is to essentially track your food (and your blood glucose, which I’m not going to do yet) and eat about 1,800 calories tops (for an average-size person) with nothing high-carb. It’s kind of like the Keto diet without the huge negative side effects of Ketosis, replacing fruit with veggies, fluids, and fat — glorious, good fats.

In fact, fat is really the secret to a lot of good health (for me). It is so fucking ironic that one misunderstood study sent the entire world into an obesity and Diabetes epidemic mostly because everyone eliminated all fat from their diet.

Fat provides consistent energy and blood sugar without spiking. It satisfies you so you eat less. Some fats are essential to building brain cells.

One thing is for sure: sugar is the fucking devil. It feeds cancer. It spikes insulin and destroys the pancreas. It rots your teeth. Statistically speaking, sugar is more deadly than cocaine, and yet they are basically the same thing: addictive isolates taken from natural sources.

## Thursday, July 16, 2020, 7:46:42AM

I’ve been cleaning up the sites with the old Bash kn script now that I’m taking all this luxurious time to actually finish Ezmark. It is always good to use the prototype again to get a sense of what I was trying to do in the first place. Keeps me grounded.

One thing that looks ludicrous to me now is adding so much data to the YAML metadata header. Back then I was convinced it was easier to use YAML since it is more structured. But the truth is the YAML should always be about the meta. Content specifications can call for certain header names and structure in the Markdown (whatever the flavor). Anything else probably deserves its own file that can be rendered inline, which is what the RenderMark approach is all about.

## Wednesday, July 15, 2020, 4:49:13PM

Great ideas from the stream today about rendering the TexExpr block and Tex inline as SVGs that are inlined into the HTML rather than depending on a JavaScript library at all!

## Wednesday, July 15, 2020, 4:27:32PM

Big great discussion about MathJax or KaTeX and what to call the AST element. Pandoc mistakenly called it Math.

Here is a $\epsilon$ thing.

$$\forall x \in X, \quad \exists y \leq \epsilon$$

So that turns into this:

Here is a ϵ thing.

∀x ∈ X,  ∃y ≤ ϵ

## Wednesday, July 15, 2020, 3:56:02PM

Was reminded that a {{.TOC}} in the template is a good idea, and that TOC content is metadata, not data. It really has no place in the README.md file itself, where it is just bothersome to the content maintainer and redundant for those downloading the content, who already have the TOC heading data in the BASE/json file.
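A minimal sketch of that templating approach with Go’s `text/template`, injecting the TOC from metadata at render time instead of storing it in the README. The `Page` type, its fields, and the template string are illustrative assumptions.

```go
package main

import (
	"fmt"
	"strings"
	"text/template"
)

// Page carries the metadata the template consumes, including the TOC
// heading data that would come from the BASE/json file.
type Page struct {
	Title string
	TOC   []string
}

// renderPage executes a template containing the TOC placeholder.
func renderPage(p Page) string {
	tmpl := template.Must(template.New("page").Parse(
		"# {{.Title}}\n{{range .TOC}}* {{.}}\n{{end}}"))
	var b strings.Builder
	if err := tmpl.Execute(&b, p); err != nil {
		panic(err)
	}
	return b.String()
}

func main() {
	fmt.Print(renderPage(Page{Title: "Notes", TOC: []string{"Intro", "Usage"}}))
}
```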

Also decided that Heading attributes really need to be mandatory to allow Heading text to be changed without impact.

## Wednesday, July 15, 2020, 8:32:37AM

I love that Go’s creators were so fucking experienced that they could leave goto in Go without shame. It seems like the entire world of less-than programmers don’t get why they made this decision. But if you truly want to understand a specific case where it makes a ton of difference in efficiency take a look at Go’s own syntax parser. Yep, there’s goto in all its glory doing what it was meant to do in spectacular fashion. These are yet more reasons to truly understand why Go is a far more thoughtfully designed language than Rust. Very few people on planet Earth would even understand an explanation of why that is objectively true. But it is nice to know they do exist. I am such a Rob Pike fanboy. There I said it.
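For flavor, here is a small sketch of the kind of backward `goto` a scanner can use: re-entering a skip loop after consuming a comment, without recursion, flag variables, or an extra function call. This is my own toy example, not code lifted from Go’s parser.

```go
package main

import "fmt"

// skipSpacesAndComments returns the index of the first byte that is
// neither a space nor part of a // line comment.
func skipSpacesAndComments(s string) int {
	i := 0
redo:
	for i < len(s) && s[i] == ' ' {
		i++
	}
	if i+1 < len(s) && s[i] == '/' && s[i+1] == '/' {
		for i < len(s) && s[i] != '\n' {
			i++
		}
		if i < len(s) {
			i++ // consume the newline
		}
		goto redo // re-enter the skip loop: no recursion, no flags
	}
	return i
}

func main() {
	fmt.Println(skipSpacesAndComments("  // note\n  x := 1"))
}
```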

## Tuesday, July 14, 2020, 3:02:22PM

While doing the PEG for KN Ezmark I realized that BlockQuote and Div are effectively the same thing in the Pandoc AST. Both are containers, but the Div is far superior to maintain and parse. In fact, BlockQuotes have always been a pain in my ass. I’ve decided to try and get away with dropping them entirely from Ezmark. I am sure some people will scream and they can use other full parsers like Pandoc if they need them.

This does mean that Div is actually a SemDiv, because it is not based on style. It denotes a semantic collection of content within the current content, such as a callout, note, or even an actual block quote. People can use them for addresses and such as well. In fact, it is the exact same thing as a fenced, syntax-aware code block, but for other stuff.
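Pandoc’s fenced-div syntax (the `fenced_divs` extension) is the obvious model here; a SemDiv could look roughly the same, though this exact Ezmark syntax is speculative:

```markdown
::: note
This callout is a semantic Div, not a BlockQuote,
and it parses exactly like a fenced block.
:::
```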

## Monday, July 13, 2020, 2:44:42PM

The Pandoc AST isn’t bad, it’s just not what I would do. It’s too much, doesn’t match CommonMark, and is just so damn annoying sometimes. I mean, Inline.SmallCaps? Really?

The biggest problem of all is how little of it you can change.

Goldmark grew its own AST as well. It’s not horrible, just not very well informed by years of obsessively looking at document structure the way I have been.

None of the document and knowledge solutions have ever started with the AST model and worked up. The closest thing we have to that was the DOM and we know how that ended up.

Back when I was doing the ABNF for BaseML (later EzMark), one thing really stood out. None of the existing Markdown formats — from the very beginning — could be rendered inline. They all required a first pass in order to parse all the block types and reattach the reference links and such. That has always annoyed me. Each block should be consumable and renderable immediately after it is read.

I really am having a hard time just shutting up and using Pandoc. There is so much bloat and overkill to use that method. Pandoc Markdown doesn’t even look CommonMark compliant.

What I really want to do is specify a superset of CommonMark that is 100% Pandoc compliant that can be rendered immediately and mastered very quickly. I want RenderMark.

## Monday, July 13, 2020, 5:21:09AM

Up early after a long sleep. Must have been that three hour run/hike on Saturday. Feels good though. I can tell my old body is returning.