<h2><a href="http://kflu.github.io/2022/07/21/2022-07-21-scan-large-book-pages-crop-pdf/">How to scan large-size pages to PDF using GIMP</a> (2022-07-21)</h2>
<h3>Step 1: How to scan larger-than-letter/A4 sized pages from a book?</h3>
<p>For scanning, you don't need to perfectly align the page to a paper size on the scanner.
Just put the page in the <strong>middle</strong> of the scan bed - we'll crop later. Scan all
the pages this way, to JPG files.</p>
<h3>Step 2: How to crop them efficiently?</h3>
<p>In GIMP, drag the first page in, manually select the desired page region, then <strong>right click -> image -> crop to selection</strong>. This will crop the canvas to the desired page size.</p>
<p>Drag the 2nd page in. Note that it lands on a 2nd layer, and only a "window" (of the selection size) of this image is shown. Use the <strong>move tool</strong> to drag the desired portion of the image into the "window".</p>
<p>Repeat the above process for the rest of the pages. Now each page is in its own layer.</p>
<h4>Rotate Layers</h4>
<p>Shift-R (or "Tools -> Rotate") rotates the layer (not the whole image). Drag to rotate; Shift-drag to constrain the rotation to fixed angle steps.</p>
<h3>Step 3: How to compile them into PDF?</h3>
<p>GIMP 2.10 supports exporting layers as pages of a single PDF file: File -> Export As -> set the file name to some_thing.pdf -> press Export. In the PDF export dialog that pops up, choose "Layers as pages" (it lets you specify the layer-to-page order).</p>
<h2><a href="http://kflu.github.io/2022/04/25/2022-04-25-git-http/">Exposing a Git repository via SSH and HTTP</a> (2022-04-25)</h2>
<p>You don't need a git hosting service to manage and share your git repo. All you
need is a server which is:</p>
<ul>
<li>accessible via SSH (to yourself, not others), for pushing changes, and</li>
<li>running a publicly accessible HTTP server (for others to clone)</li>
</ul>
<p>Firstly, on the remote server, let's prepare a <em>bare</em> repo:</p>
<figure class="highlight sh"><table><tr><td class="gutter"><pre><span class="line">1</span><br><span class="line">2</span><br></pre></td><td class="code"><pre><span class="line">mkdir repo.git; <span class="built_in">cd</span> repo.git</span><br><span class="line">git init --bare</span><br></pre></td></tr></table></figure>
<p>Also, move the <code>repo.git</code> directory to a place under the HTTP server's document root. I put it under <code>~/html/git/</code> so that directory can host multiple git repos.</p>
<p>Now you should be able to push changes via SSH already:</p>
<figure class="highlight sh"><table><tr><td class="gutter"><pre><span class="line">1</span><br><span class="line">2</span><br><span class="line">3</span><br><span class="line">4</span><br><span class="line">5</span><br></pre></td><td class="code"><pre><span class="line"><span class="comment"># In your git repo:</span></span><br><span class="line"><span class="comment"># ~USER is standard UNIX way to expand into USER's home directory</span></span><br><span class="line">URL=ssh://USER@SERVER/~USER/path/to/repo</span><br><span class="line">git remote <span class="built_in">set</span>-url origin <span class="string">"<span class="variable">$URL</span>"</span></span><br><span class="line">git push</span><br></pre></td></tr></table></figure>
<p>Exposing the repo over HTTP mostly follows <a href="https://mirrors.edge.kernel.org/pub/software/scm/git/docs/user-manual.html#setting-up-a-public-repository" target="_blank" rel="noopener">this guide</a>:</p>
<figure class="highlight sh"><table><tr><td class="gutter"><pre><span class="line">1</span><br><span class="line">2</span><br><span class="line">3</span><br></pre></td><td class="code"><pre><span class="line"><span class="comment"># in the bare repo:</span></span><br><span class="line">git update-server-info</span><br><span class="line">mv hooks/post-update.sample hooks/post-update</span><br></pre></td></tr></table></figure>
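<p>What <code>git update-server-info</code> does can be checked locally: it (re)generates <code>info/refs</code> (and <code>objects/info/packs</code>), the files a "dumb" HTTP client reads to discover refs. A minimal sketch, assuming <code>git</code> is installed; all paths are throwaway:</p>

```shell
# Simulate a push into a bare repo, then run what the hook runs.
tmp=$(mktemp -d)
git init --bare -q "$tmp/repo.git"

src=$(mktemp -d)
git -C "$src" init -q
git -C "$src" -c user.email=a@b -c user.name=a commit -q --allow-empty -m init
git -C "$src" push -q "$tmp/repo.git" HEAD:master

# This is what hooks/post-update runs after every push:
git -C "$tmp/repo.git" update-server-info

# info/refs now lists refs/heads/master; dumb HTTP clients fetch it first.
cat "$tmp/repo.git/info/refs"
```

Without this file, a plain HTTP server has no way to tell clients which refs exist, which is why the hook must run on every push.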
<p>Note:</p>
<ol>
<li>the hook calls <code>update-server-info</code>, so every time there's a push, the server info is updated (needed to serve the repo via HTTP).</li>
<li>the default hook script seems to have a bug in it:</li>
</ol>
<figure class="highlight plain"><table><tr><td class="gutter"><pre><span class="line">1</span><br></pre></td><td class="code"><pre><span class="line">exec git-update-server-info</span><br></pre></td></tr></table></figure>
<p>On my system <code>git-update-server-info</code> does not exist, so I had to change it to</p>
<figure class="highlight plain"><table><tr><td class="gutter"><pre><span class="line">1</span><br></pre></td><td class="code"><pre><span class="line">exec git update-server-info</span><br></pre></td></tr></table></figure>
<ol start="3">
<li>I host the repo on a public unix system where I have <code>umask 077</code>. So, for the updated git repo to remain publicly readable (over HTTP), I need to <code>chmod</code> all subdirectories/files to <code>go+rX</code>. I do this in the post-update hook as well:</li>
</ol>
<figure class="highlight plain"><table><tr><td class="gutter"><pre><span class="line">1</span><br><span class="line">2</span><br><span class="line">3</span><br><span class="line">4</span><br><span class="line">5</span><br><span class="line">6</span><br><span class="line">7</span><br><span class="line">8</span><br><span class="line">9</span><br><span class="line">10</span><br><span class="line">11</span><br><span class="line">12</span><br></pre></td><td class="code"><pre><span class="line"># hooks/post-update</span><br><span class="line"></span><br><span class="line">(</span><br><span class="line">exec git update-server-info</span><br><span class="line">)</span><br><span class="line"></span><br><span class="line"># Note that the hooks of a bare repo is always run with PWD set to the repo</span><br><span class="line"># root, but just to be safe, let's make sure it looks like a git repo:</span><br><span class="line">if test -f HEAD; then</span><br><span class="line"> echo "updating repo permission..."</span><br><span class="line"> chmod -R go+rX .</span><br><span class="line">fi</span><br></pre></td></tr></table></figure>
<ol>
<li>
<p>exec must be in a sub-shell; otherwise, commands following it would not run.</p>
</li>
<li>
<p>the hook is run in a bare repo's root dir (<a href="https://stackoverflow.com/a/9229463/695964" target="_blank" rel="noopener">link</a>)</p>
</li>
<li>
<p>When you push remotely, the output of the hooks are shown in the <code>git push</code> output.</p>
</li>
</ol>
<p>After these steps, others can access the repo over HTTP:</p>
<ol>
<li>Clone it via HTTP:</li>
</ol>
<figure class="highlight plain"><table><tr><td class="gutter"><pre><span class="line">1</span><br></pre></td><td class="code"><pre><span class="line">git clone http://$SERVER_URL/path/to/repo_dir repo.http</span><br></pre></td></tr></table></figure>
<ol start="2">
<li>Test that repo changes and file permissions are indeed updated via the hooks:</li>
</ol>
<figure class="highlight plain"><table><tr><td class="gutter"><pre><span class="line">1</span><br><span class="line">2</span><br></pre></td><td class="code"><pre><span class="line"># Make some change to the repo and push via SSH</span><br><span class="line">git push ssh://USER@SERVER/~USER/path/to/repo</span><br></pre></td></tr></table></figure>
<p>Then pull via HTTP again</p>
<figure class="highlight plain"><table><tr><td class="gutter"><pre><span class="line">1</span><br><span class="line">2</span><br></pre></td><td class="code"><pre><span class="line"># in repo.http/</span><br><span class="line">git pull</span><br></pre></td></tr></table></figure>
<p>Make sure you see the change you pushed; that confirms the hooks ran.</p>
<h2><a href="http://kflu.github.io/2022/04/18/2022-04-18-pdb-exceptions-traceback/">PDB, Exceptions, Tracebacks</a> (2022-04-18)</h2>
<p><code>sys.exc_info()</code> - the exception info that's currently being handled. Note that the exception could contain inner exceptions via <code>__context__</code> or <code>__cause__</code>, and this can be nested multiple levels deep.</p>
<p><code>pdb.post_mortem(traceback)</code> - once you've found the exception you want to post-mortem debug, use:</p>
<figure class="highlight plain"><table><tr><td class="gutter"><pre><span class="line">1</span><br></pre></td><td class="code"><pre><span class="line">pdb.post_mortem(exception.__traceback__)</span><br></pre></td></tr></table></figure>
<p>About exception chaining (both implicit and explicit), see <a href="https://peps.python.org/pep-3134/" target="_blank" rel="noopener">PEP-3134</a>.</p>
<p>Explicit chaining is via:</p>
<figure class="highlight python"><table><tr><td class="gutter"><pre><span class="line">1</span><br><span class="line">2</span><br><span class="line">3</span><br><span class="line">4</span><br><span class="line">5</span><br></pre></td><td class="code"><pre><span class="line"><span class="keyword">try</span>:</span><br><span class="line"> ...</span><br><span class="line"><span class="keyword">except</span> Exception <span class="keyword">as</span> e:</span><br><span class="line"> <span class="keyword">raise</span> SomeErr(...) <span class="keyword">from</span> e</span><br><span class="line"> <span class="comment"># ~~~~~~ this!</span></span><br></pre></td></tr></table></figure>
<p>In above case, the <code>SomeErr</code> instance contains <code>__cause__</code> which is <code>e</code>.</p>
<p>Implicit chaining:</p>
<figure class="highlight python"><table><tr><td class="gutter"><pre><span class="line">1</span><br><span class="line">2</span><br><span class="line">3</span><br><span class="line">4</span><br><span class="line">5</span><br></pre></td><td class="code"><pre><span class="line"><span class="keyword">try</span>:</span><br><span class="line"> ...</span><br><span class="line"><span class="keyword">except</span> Exception <span class="keyword">as</span> e:</span><br><span class="line"> <span class="keyword">raise</span> SomeErr(...)</span><br><span class="line"> <span class="comment"># ~~~~~~~~ implicit chaining (by interpreter)</span></span><br></pre></td></tr></table></figure>
<p>In implicit chaining, the interpreter <em>automatically</em> sets <code>__context__</code> on the new exception instance <code>SomeErr</code>. TBH, I feel the differentiation of implicit and explicit chaining via <code>__cause__</code> and <code>__context__</code> is unnecessary, and it adds extra complexity when handling them.</p>
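<p>The difference is easy to observe with a small experiment (the exception names here are illustrative):</p>

```python
def explicit():
    try:
        raise ValueError("inner")
    except ValueError as e:
        raise RuntimeError("outer") from e   # explicit: sets __cause__

def implicit():
    try:
        raise ValueError("inner")
    except ValueError:
        raise RuntimeError("outer")          # implicit: interpreter sets __context__

try:
    explicit()
except RuntimeError as e:
    assert isinstance(e.__cause__, ValueError)
    assert e.__context__ is e.__cause__      # explicit chaining sets both

try:
    implicit()
except RuntimeError as e:
    assert e.__cause__ is None
    assert isinstance(e.__context__, ValueError)
```

So code that walks a chain generally has to check both attributes - which is exactly the extra complexity noted above.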
<p><strong>PDB's handling of inner exceptions</strong></p>
<p>...there is none. PDB's post mortem <code>pdb.pm()</code> doesn't debug the inner-most exception of <code>sys.last_value</code>. In fact, <code>sys.last_value</code> isn't always present. So <code>pm()</code> is not a reliable way of post-mortem debugging.</p>
<p>To debug the inner-most exception:</p>
<figure class="highlight python"><table><tr><td class="gutter"><pre><span class="line">1</span><br><span class="line">2</span><br><span class="line">3</span><br><span class="line">4</span><br><span class="line">5</span><br><span class="line">6</span><br><span class="line">7</span><br></pre></td><td class="code"><pre><span class="line"><span class="keyword">import</span> sys, pdb</span><br><span class="line"><span class="function"><span class="keyword">def</span> <span class="title">debug_inner</span><span class="params">()</span>:</span></span><br><span class="line"> <span class="string">'''Debug the inner-most exception's traceback'''</span></span><br><span class="line"> exc = sys.exc_info()[<span class="number">1</span>]</span><br><span class="line"> <span class="keyword">while</span> getattr(exc, <span class="string">'__context__'</span>, <span class="literal">None</span>):</span><br><span class="line"> exc = exc.__context__</span><br><span class="line"> pdb.post_mortem(exc.__traceback__)</span><br></pre></td></tr></table></figure>
<p>I created a <code>.pdbrc</code> alias for this:</p>
<figure class="highlight plain"><table><tr><td class="gutter"><pre><span class="line">1</span><br><span class="line">2</span><br><span class="line">3</span><br><span class="line">4</span><br><span class="line">5</span><br><span class="line">6</span><br><span class="line">7</span><br><span class="line">8</span><br><span class="line">9</span><br><span class="line">10</span><br></pre></td><td class="code"><pre><span class="line">_LOCAL = dict()</span><br><span class="line">_DEF = ""</span><br><span class="line">_DEF += "\ndef debug_inner():"</span><br><span class="line">_DEF += "\n    '''Debug the inner-most exception's traceback'''"</span><br><span class="line">_DEF += "\n    exc = __import__('sys').exc_info()[1]"</span><br><span class="line">_DEF += "\n    while getattr(exc, '__context__', None):"</span><br><span class="line">_DEF += "\n        exc = exc.__context__"</span><br><span class="line">_DEF += "\n    __import__('pdb').post_mortem(exc.__traceback__)"</span><br><span class="line">exec(_DEF, dict(), _LOCAL)</span><br><span class="line">alias di !_LOCAL["debug_inner"]()</span><br></pre></td></tr></table></figure>
<h2><a href="http://kflu.github.io/2022/01/06/2022-01-06-osc-52-hack/">OSC52 hack</a> (2022-01-06)</h2>
<p>OSC52 is a terminal control sequence that allows the remote side to set the system
clipboard of the local side. This is the best way to get clipboard support when
working with remote SSH sessions.</p>
<p>However, here are a few limitations to consider:</p>
<ol>
<li>Your terminal app needs to support OSC52. MacOS Terminal.app doesn't support it; iTerm2 does. xterm actually supports it, but requires some additional settings.</li>
<li>Mosh does not support OSC52. Native ssh supports it, and so does Eternal Terminal (et).</li>
<li>tmux supports passing copied text via OSC52, see tmux man, <code>set-clipboard</code>.</li>
<li>In Vim, <code>ojroques/vim-oscyank</code> is a plugin that supports yanking text via OSC52.</li>
</ol>
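<p>For reference, the sequence itself is just <code>ESC ] 52 ; c ; &lt;base64 payload&gt; BEL</code> (the <code>c</code> selects the clipboard; details vary slightly per terminal). A minimal sketch of emitting it from a shell:</p>

```shell
# Ask the terminal to put "hello" on the local system clipboard via OSC52.
payload=$(printf %s "hello" | base64)
osc52=$(printf '\033]52;c;%s\007' "$payload")
# Writing this to a supporting terminal sets the clipboard:
printf %s "$osc52"
```

This is all a tool like osc52pty has to detect in the byte stream.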
<p>Given the above considerations, so far the best combination that works for me has been iTerm2 + et.</p>
<p>Today this little utility came to my attention: <a href="https://github.com/roy2220/osc52pty" target="_blank" rel="noopener">osc52pty</a>, which adds OSC52 support to any terminal application (e.g., a shell). It basically removes the terminal-app limitation from the above considerations. I.e., with:</p>
<figure class="highlight plain"><table><tr><td class="gutter"><pre><span class="line">1</span><br><span class="line">2</span><br><span class="line">3</span><br></pre></td><td class="code"><pre><span class="line"># Do this in terminal.app:</span><br><span class="line">osc52pty zsh</span><br><span class="line">ssh foo@bar</span><br></pre></td></tr></table></figure>
<p>For any terminal app, it detects OSC52 bytes and pipes the content to pbcopy. However, this seems to only work with the native ssh client; neither et nor mosh works with it.</p>
<h2><a href="http://kflu.github.io/2022/01/03/2022-01-03-macvim-as-system-vim/">MacVim as System Vim</a> (2022-01-03)</h2>
<p>I made a <a href="https://superuser.com/a/1697239/114255" target="_blank" rel="noopener">post</a> on how to use the MacVim installation as the default vim commands (vim, vimdiff, view, etc.)</p>
<p>In short do:</p>
<figure class="highlight plain"><table><tr><td class="gutter"><pre><span class="line">1</span><br><span class="line">2</span><br></pre></td><td class="code"><pre><span class="line">curl https://gist.githubusercontent.com/kflu/c492dfcba2a32965918f920b046f8d19/raw/aa6da495ee8579afd0ec8235911b10d31ec5555f/mvim_as_sys | sh</span><br><span class="line">echo "== Don't forget to add ~/.local/bin to PATH"</span><br></pre></td></tr></table></figure>
<p>The script is <a href="https://gist.github.com/kflu/c492dfcba2a32965918f920b046f8d19" target="_blank" rel="noopener">here</a>.</p>
<h2><a href="http://kflu.github.io/2021/10/26/2021-10-26-tmux-pass-selected-text-to-command/">Useful Tmux Settings</a> (2021-10-26)</h2>
<h1>Pass selected text to system commands in Tmux</h1>
<p>What I want to achieve is similar to Vim's <code>:'<,'>!some_command</code>. In other words, in Tmux, once some text is selected, I want to press a key that triggers a prompt for me to enter a system command. Then the text will be piped to the command as stdin. The stdout of the command will be displayed in a temporary tmux buffer.</p>
<p>I answered this <a href="https://unix.stackexchange.com/a/674820/38968" target="_blank" rel="noopener">here</a>, but it's kind of hard to find, so I'll also document it here:</p>
<figure class="highlight plain"><table><tr><td class="gutter"><pre><span class="line">1</span><br><span class="line">2</span><br></pre></td><td class="code"><pre><span class="line">bind-key -T copy-mode ! command-prompt -p "cmd:" "send-keys -X copy-selection-no-clear \; run-shell \"tmux show-buffer | %1\" "</span><br><span class="line">bind-key -T copy-mode-vi ! command-prompt -p "cmd:" "send-keys -X copy-selection-no-clear \; run-shell \"tmux show-buffer | %1\" "</span><br></pre></td></tr></table></figure>
<p>Several notes:</p>
<ul>
<li><code>-T copy-mode-vi</code> is essential if you use tmux's vi mode.</li>
<li><code>command-prompt</code> prompts for user input and runs the template as a tmux command - this is the core part that makes it run <strong>arbitrary</strong> user-specified commands.</li>
<li><code>tmux show-buffer</code> dumps the previously selected text to stdout.</li>
<li>I could have used <code>pipe-selection</code>, which pipes the selection to an arbitrary command; however, it does not display the command's stdout back in tmux. <code>run-shell</code> does.</li>
</ul>
<h1>KEY BINDING FOR SWAPPING PANES</h1>
<figure class="highlight plain"><table><tr><td class="gutter"><pre><span class="line">1</span><br></pre></td><td class="code"><pre><span class="line">bind-key -T prefix C-s display-panes \; command-prompt -p "<pane1>:,<pane2>:" "swap-pane -s %1 -t %2"</span><br></pre></td></tr></table></figure>
<p>Pressing <code>C-b C-s</code> (you can keep holding Ctrl) will:</p>
<ol>
<li>First, display the pane numbers as if you pressed <code><prefix>q</code></li>
<li>Prompt for 1st pane to swap</li>
<li>Prompt for 2nd pane to swap</li>
</ol>
<p>Note:</p>
<ul>
<li><code>-p "<pane1>:,<pane2>:"</code> - the comma separates the two prompts presented to the user.</li>
</ul>
<h2><a href="http://kflu.github.io/2021/07/19/2021-07-19-pkgsrc-survival-guide/">pkgsrc survival guide</a> (2021-07-19)</h2>
<p>pkgsrc is NetBSD's ports system. It is also cross-platform, available on Mac and Linux, and supports unprivileged use. This is a quick reference for it.</p>
<p><a href="https://www.netbsd.org/docs/pkgsrc/pkgsrc.html" target="_blank" rel="noopener">The document is awesome</a></p>
<p>First let's make a few decisions about the installation locations:</p>
<figure class="highlight plain"><table><tr><td class="gutter"><pre><span class="line">1</span><br><span class="line">2</span><br><span class="line">3</span><br><span class="line">4</span><br><span class="line">5</span><br></pre></td><td class="code"><pre><span class="line"># Location of the pkgsrc tree</span><br><span class="line">PKGSRC_DIR=/data/user/<user>/pkgsrc</span><br><span class="line"></span><br><span class="line"># Location of the built package installation site</span><br><span class="line">PKG_DIR=/data/user/<user>/pkg</span><br></pre></td></tr></table></figure>
<h1>BOOTSTRAPPING</h1>
<p>First download pkgsrc - it's a collection of configurations instructing how to download and build different packages.</p>
<figure class="highlight plain"><table><tr><td class="gutter"><pre><span class="line">1</span><br><span class="line">2</span><br></pre></td><td class="code"><pre><span class="line">wget https://ftp.netbsd.org/pub/pkgsrc/stable/pkgsrc.tar.gz </span><br><span class="line">tar xf pkgsrc.tar.gz -C $PKGSRC_DIR</span><br></pre></td></tr></table></figure>
<p>pkgsrc needs to be bootstrapped on the target machine where packages are to be built. The bootstrapping process builds the necessary tools pkgsrc will be using later on. We will focus on <em>unprivileged</em> use, since that's more fun and less covered by documents everywhere else:</p>
<figure class="highlight plain"><table><tr><td class="gutter"><pre><span class="line">1</span><br><span class="line">2</span><br><span class="line">3</span><br></pre></td><td class="code"><pre><span class="line">cd $PKGSRC_DIR/bootstrap</span><br><span class="line">./bootstrap --help</span><br><span class="line">./bootstrap --unprivileged --prefix $PKG_DIR --make-jobs <num_cpus></span><br></pre></td></tr></table></figure>
<p>Finally, export <code>PATH</code> and <code>MANPATH</code> to include the corresponding directories under <code>$PKG_DIR</code>.</p>
<h1>PARALLEL BUILD</h1>
<figure class="highlight plain"><table><tr><td class="gutter"><pre><span class="line">1</span><br></pre></td><td class="code"><pre><span class="line">MAKE_JOBS=<num_cpus> bmake</span><br></pre></td></tr></table></figure>
<h1>USEFUL TARGETS, GETTING HELP</h1>
<figure class="highlight plain"><table><tr><td class="gutter"><pre><span class="line">1</span><br><span class="line">2</span><br><span class="line">3</span><br><span class="line">4</span><br><span class="line">5</span><br><span class="line">6</span><br><span class="line">7</span><br><span class="line">8</span><br><span class="line">9</span><br></pre></td><td class="code"><pre><span class="line">bmake help</span><br><span class="line">bmake help topic=:index # prints all helps</span><br><span class="line">bmake help topic=<target|option></span><br><span class="line"></span><br><span class="line">bmake show-all</span><br><span class="line">bmake show-depends # seems only incl. direct deps</span><br><span class="line">bmake show-depends-dirs # include transient deps</span><br><span class="line"></span><br><span class="line">bmake show-var VARNAME=SOME_VAR_NAME</span><br></pre></td></tr></table></figure>
<p>See <a href="https://wiki.netbsd.org/pkgsrc/targets/" target="_blank" rel="noopener">pkgsrc targets</a>.
The following targets may be useful to invoke from the command line:</p>
<ul>
<li><code>depends</code> to build and install dependencies</li>
<li><code>fetch</code> to fetch distribution file(s)</li>
<li><code>checksum</code> to fetch and check distribution file(s)</li>
<li><code>extract</code> to look at unmodified source</li>
<li><code>patch</code> to look at initial source</li>
<li><code>configure</code> to stop after configure stage</li>
<li><code>all</code> or <code>build</code> to stop after the build stage</li>
<li><code>stage-install</code> to install under the stage directory</li>
<li><code>test</code> to run the package's self-tests, if any exist and are supported</li>
<li><code>package</code> to create binary package before installing it</li>
<li><code>replace</code> to change (upgrade, downgrade, or just replace) installed package in-place</li>
<li><code>deinstall</code> to deinstall previous package</li>
<li><code>package-install</code> to install the package and build the binary package</li>
<li><code>install</code> to install package</li>
<li><code>bin-install</code> to attempt to skip building from source and use pre-built binary package</li>
<li><code>show-depends</code> to print dependencies for building</li>
<li><code>show-options</code> to print available options from options.mk</li>
</ul>
<p>Cleanup targets (in separate section because of importance):</p>
<ul>
<li><code>clean-depends</code> to remove work directories for dependencies</li>
<li><code>clean</code> to remove work directory</li>
<li><code>distclean</code> to remove distribution file(s)</li>
<li><code>package-clean</code> to remove binary package</li>
</ul>
<p>The following targets are useful in development and thus may be useful for an advanced user:</p>
<ul>
<li><code>makesum</code> to fetch and generate checksum for distributed file(s)</li>
<li><code>makepatchsum</code> to (re)generate checksum for patches</li>
<li><code>makedistinfo</code> to (re)generate distinfo file (creating checksums for distributed file and patches)</li>
<li><code>mps</code> short for makepatchsum</li>
<li><code>mdi</code> short for makedistinfo</li>
<li><code>print-PLIST</code> to attempt to generate correct packaging list (NB! It helps, but it doesn't eliminate manual work.)</li>
</ul>
<h1>USEFUL PKG TOOLS</h1>
<figure class="highlight plain"><table><tr><td class="gutter"><pre><span class="line">1</span><br><span class="line">2</span><br><span class="line">3</span><br><span class="line">4</span><br></pre></td><td class="code"><pre><span class="line">(cd pkgtools && bmake install)</span><br><span class="line"></span><br><span class="line">man pkg_info</span><br><span class="line">pkg_info -L <pkg_name> # list package contents</span><br></pre></td></tr></table></figure>
<h1>DISTFILES, FETCHING ETC</h1>
<figure class="highlight plain"><table><tr><td class="gutter"><pre><span class="line">1</span><br><span class="line">2</span><br><span class="line">3</span><br><span class="line">4</span><br><span class="line">5</span><br></pre></td><td class="code"><pre><span class="line"># generate sh script which downloads distfiles when run</span><br><span class="line">bmake fetch-list</span><br><span class="line"></span><br><span class="line">bmake checksum # fetch distfile and do checksum</span><br><span class="line">bmake depends-checksum # also for deps</span><br></pre></td></tr></table></figure>
<h1>BUILD OPTIONS</h1>
<p><code>$PKG_DIR/etc/mk.conf</code> is where build options are set. For a specific package, add the line:</p>
<figure class="highlight plain"><table><tr><td class="gutter"><pre><span class="line">1</span><br></pre></td><td class="code"><pre><span class="line">PKG_OPTIONS.<pkg_name>+= <option></span><br></pre></td></tr></table></figure>
<p><code>bmake show-all</code> is helpful for discovering available build options and seeing which are enabled/disabled.</p>
<h1>USEFUL ENVIRONMENT VARS</h1>
<p>These can be specified in <code>mk.conf</code>, or per build invocation:</p>
<figure class="highlight plain"><table><tr><td class="gutter"><pre><span class="line">1</span><br></pre></td><td class="code"><pre><span class="line">DISTDIR=... bmake</span><br></pre></td></tr></table></figure>
<ul>
<li><code>DISTDIR</code>: directory where to look for distfiles</li>
<li><code>FETCH_USING</code>: the program for fetching (can set to wget)</li>
</ul>
<h1>TROUBLESHOOTING</h1>
<figure class="highlight plain"><table><tr><td class="gutter"><pre><span class="line">1</span><br><span class="line">2</span><br></pre></td><td class="code"><pre><span class="line"># Prints detailed build command</span><br><span class="line">PKG_DEBUG_LEVEL=1 bmake</span><br></pre></td></tr></table></figure>
<h1>BUILD PHASES</h1>
<p><a href="https://www.netbsd.org/docs/pkgsrc/build.html" target="_blank" rel="noopener">https://www.netbsd.org/docs/pkgsrc/build.html</a></p>
<ul>
<li>The fetch phase</li>
<li>The checksum phase</li>
<li>The extract phase</li>
<li>The patch phase</li>
<li>The tools phase</li>
<li>The wrapper phase</li>
<li>The configure phase</li>
<li>The build phase</li>
<li>The test phase</li>
<li>The install phase</li>
<li>The package phase</li>
</ul>
<h1>USE PKGSRC ALONGSIDE JOYENT PKGSRC BINARY DISTRIBUTION</h1>
<p>Sat May 28 14:58:51 PDT 2022</p>
<ol>
<li>Install Joyent's pkgsrc binary distribution (this must be installed at the pre-determined location. E.g., for MacOS, at <code>/opt/pkg</code>)</li>
<li>At any location, git clone pkgsrc source repo (e.g., clone into <code>~/work/pkgsrc</code>)</li>
<li>To build from source, just <code>cd ~/work/pkgsrc/net/sshping && bmake install</code>
<ul>
<li><code>bmake</code> is from <code>/opt/pkg/bin</code>. Installed package will also go there.</li>
</ul>
</li>
</ol>
<p>Since this installs to a system location, there may be frequent prompts for the root
password. Patching pkgsrc to invoke <code>sudo</code> for privileged steps fixes this. Note that
it assumes you already have <code>sudo</code> installed at /usr/bin/sudo. Otherwise, follow the pkgsrc
<a href="https://www.netbsd.org/docs/pkgsrc/faq.html" target="_blank" rel="noopener">FAQ</a> (use pkgsrc to install
<code>sudo</code> first).</p>
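<p>A common way to make pkgsrc elevate via <code>sudo</code> (this follows the pkgsrc guide's <code>SU_CMD</code> setting and is a sketch, not necessarily the original patch; the mk.conf path assumes the Joyent /opt/pkg layout):</p>

```makefile
# /opt/pkg/etc/mk.conf - run privileged install steps through sudo
SU_CMD=		sudo /bin/sh -c
```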
<h2><a href="http://kflu.github.io/2021/06/29/2021-06-29-stereo-audio-recording/">Stereo audio recording setup</a> (2021-06-29)</h2>
<p>I need this to record a classical grand piano. Setup diagram:</p>
<ul>
<li>LCT 040 x2 <a href="https://www.amazon.com/dp/B07MZXKZVG/?coliid=I1XHTOO4CCDH01&colid=3POR139P6KGAA&psc=1&ref_=lv_ov_lig_dp_it" target="_blank" rel="noopener">link</a></li>
<li>2i2 gen3 USB audio interface <a href="https://www.amazon.com/dp/B07QR73T66/?coliid=I2IU6RKAJY88OW&colid=3POR139P6KGAA&psc=1&ref_=lv_ov_lig_dp_it" target="_blank" rel="noopener">link</a>
<ul>
<li>this is the main magic: it can connect directly to the computer and provides 48V phantom power.</li>
</ul>
</li>
<li>Need another mic stand too!</li>
</ul>
<p>On the Mac side, I can use GarageBand. Because of the audio interface, I shouldn't need to create an aggregate input device.</p>
<p><img src="IMG_4460.jpg"></p>
<h2><a href="http://kflu.github.io/2021/06/04/2021-06-04-irc--znc--and-erc/">irc, znc, and erc</a> (2021-06-04)</h2>
<p>This article discusses ZNC setup and how to connect with Emacs ERC.</p>
<h3>ZNC admin interface</h3>
<p>ZNC's IRC service and web admin service can share the same port - magical, but confusing. I separated them into two ports and restricted the web port to be accessible only from the LAN.</p>
<h3>ZNC Setup & Connection</h3>
<p>The ZNC user and password are different from the IRC network's nick and password, which are usually managed by the network's NickServ.</p>
<p>A ZNC user can define multiple (network, nick) pairs.</p>
<p>To connect to ZNC, using any IRC client, use the following <strong>password</strong> format to authenticate:</p>
<figure class="highlight plain"><table><tr><td class="gutter"><pre><span class="line">1</span><br></pre></td><td class="code"><pre><span class="line">znc_user@client_id/znc_network_name:znc_user_pass</span><br></pre></td></tr></table></figure>
<p>The <code>@client_id</code> part can be omitted, if not using <a href="https://wiki.znc.in/Clientbuffer" target="_blank" rel="noopener">clientbuffer</a> module.</p>
<p>Note for ERC: <code>M-x erc-tls</code> prompts for a nick. DO NOT leave it empty; put in the network nick, so ERC is not confused and doesn't hit the buffer window bug (TODO: link).</p>
<h3>ZNC management inside IRC</h3>
<p>The <code>*status</code> bot (prefixed by <code>*</code>, configurable) is ZNC specific. You can <code>/query *status</code> and issue <code>help</code> from there. It provides all the functionality for managing ZNC.</p>
<p>Each module also has a corresponding bot: <code>*module_name</code>.</p>
<h3>Multiple client and clientbuffer</h3>
<p>This article: <a href="https://blog.jay2k1.com/2016/02/04/how-to-configure-znc-backlog-for-multiple-clients/" target="_blank" rel="noopener">How to configure multiple clients against single ZNC network nick</a> talks about the buffer playback problem and the solution by <strong>using clientbuffer</strong>.</p>
<p>Note that clientbuffer is a network level module, so you'll have to enable it per network.</p>
<figure class="highlight plain"><table><tr><td class="gutter"><pre><span class="line">1</span><br></pre></td><td class="code"><pre><span class="line">msg *status loadmod clientbuffer autoadd</span><br></pre></td></tr></table></figure>
<p>Also, at the ZNC user level, don't forget to <strong>disable</strong> the auto-clearing of chan/query buffers.</p>
<h3>ZNC external module building</h3>
<p>Take clientbuffer as example:</p>
<figure class="highlight plain"><table><tr><td class="gutter"><pre><span class="line">1</span><br><span class="line">2</span><br><span class="line">3</span><br><span class="line">4</span><br><span class="line">5</span><br><span class="line">6</span><br><span class="line">7</span><br><span class="line">8</span><br><span class="line">9</span><br><span class="line">10</span><br><span class="line">11</span><br><span class="line">12</span><br><span class="line">13</span><br><span class="line">14</span><br></pre></td><td class="code"><pre><span class="line"># ----</span><br><span class="line"># DO ALL THESE AS USER znc:</span><br><span class="line"># ----</span><br><span class="line"></span><br><span class="line">git clone https://github.com/CyberShadow/znc-clientbuffer.git</span><br><span class="line"></span><br><span class="line"># requires znc-buildmod</span><br><span class="line"># produces clientbuffer.so</span><br><span class="line">make</span><br><span class="line"></span><br><span class="line"># "install" it</span><br><span class="line">mv clientbuffer.so ~/.znc/modules</span><br><span class="line"></span><br><span class="line"># To load the module, use `loadmod` with *status</span><br></pre></td></tr></table></figure>
</span><br></pre></td></tr></table></figure>
<h1><a href="http://kflu.github.io/2021/05/26/2021-05-26-You-don-t-need-IRC-clients-to-IRC/">You don't need an IRC client to IRC</a></h1>
<p><em>2021-05-26</em></p>
<p>The IRC spec is in <a href="https://datatracker.ietf.org/doc/html/rfc1459#section-4.6.4" target="_blank" rel="noopener">RFC 1459</a>. The IRC protocol is surprisingly elegant and simple. For basic operation, there is no need for a dedicated client (although a client makes chatting more enjoyable).</p>
<p>In general, an IRC message looks like:</p>
<figure class="highlight plain"><table><tr><td class="gutter"><pre><span class="line">1</span><br></pre></td><td class="code"><pre><span class="line"><command> [arg] [arg] ... :[arg with spaces]</span><br></pre></td></tr></table></figure>
<p>or</p>
<figure class="highlight plain"><table><tr><td class="gutter"><pre><span class="line">1</span><br></pre></td><td class="code"><pre><span class="line">[:prefix] <command> [arg] [arg] ... :[arg with spaces]</span><br></pre></td></tr></table></figure>
<p>This format is the same for both client-to-server messages and server replies. The prefix usually identifies the message's originator.</p>
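<p>As a quick illustration of the grammar above, a few lines of shell can split a raw message into its parts (a minimal sketch, not a full RFC 1459 parser; the middle arguments are left unsplit):</p>

```shell
#!/bin/sh
# Split a raw IRC line into prefix, command, and trailing argument.
parse_irc() {
    line=$1
    prefix=
    case $line in
        :*) prefix=${line%% *}    # ":origin" up to the first space
            prefix=${prefix#:}    # drop the leading colon
            line=${line#* } ;;    # rest of the message
    esac
    cmd=${line%% *}               # first word is the command
    trailing=
    case $line in
        *" :"*) trailing=${line#* :} ;;  # everything after " :"
    esac
    printf 'prefix=%s cmd=%s trailing=%s\n' "$prefix" "$cmd" "$trailing"
}

parse_irc ':platinum.libera.chat NOTICE * :*** Checking Ident'
# → prefix=platinum.libera.chat cmd=NOTICE trailing=*** Checking Ident
```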
<h3>Basic Operations</h3>
<p>To connect:</p>
<ol>
<li>Connect to the server using TLS: <code>openssl s_client -connect <server>:<port></code></li>
<li>Set the nick: <code>nick <your_nick></code></li>
<li>Set username: <code>user <your_nick> . . .</code> (the <code>USER</code> command takes four parameters)</li>
</ol>
<p>To chat:</p>
<ol>
<li>Join channel with <code>join <channel_name></code></li>
<li>Chat in channel with <code>privmsg <channel_name> :<msg></code></li>
<li>Chat with other user with <code>privmsg <user> :<msg></code></li>
<li>List channel users with <code>names <channel></code></li>
</ol>
<p>To authenticate your nick with NickServ, you need to issue the following command to the <em>user</em> <code>NickServ</code>:</p>
<pre><code>identify <your nick's password>
</code></pre>
<p>To do this, issue the following:</p>
<pre><code>privmsg nickserv :identify <nick's password>
</code></pre>
<p>Note that <code>channel_name</code> starts with <code>#</code>, while user name doesn't.</p>
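<p>The chat commands above are just text lines, so a tiny (hypothetical) helper can format them before you paste them into the openssl session; the targets and message text below are made up:</p>

```shell
#!/bin/sh
# Format raw privmsg lines; the same command works for
# channels (name starts with #) and users (no # prefix).
privmsg_line() {
    printf 'privmsg %s :%s\n' "$1" "$2"
}

privmsg_line '#test' 'hello from a raw socket'   # to a channel
privmsg_line 'nickserv' 'identify my-password'   # to a user/service
```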
<h3>An Example Log</h3>
<figure class="highlight plain"><table><tr><td class="gutter"><pre><span class="line">1</span><br><span class="line">2</span><br><span class="line">3</span><br><span class="line">4</span><br><span class="line">5</span><br><span class="line">6</span><br><span class="line">7</span><br><span class="line">8</span><br><span class="line">9</span><br><span class="line">10</span><br><span class="line">11</span><br><span class="line">12</span><br><span class="line">13</span><br><span class="line">14</span><br><span class="line">15</span><br><span class="line">16</span><br><span class="line">17</span><br><span class="line">18</span><br><span class="line">19</span><br><span class="line">20</span><br><span class="line">21</span><br><span class="line">22</span><br><span class="line">23</span><br><span class="line">24</span><br><span class="line">25</span><br><span class="line">26</span><br><span class="line">27</span><br><span class="line">28</span><br><span class="line">29</span><br><span class="line">30</span><br><span class="line">31</span><br><span class="line">32</span><br><span class="line">33</span><br><span class="line">34</span><br><span class="line">35</span><br><span class="line">36</span><br></pre></td><td class="code"><pre><span class="line">>>> openssl s_client -connect irc.libera.chat:6697</span><br><span class="line"></span><br><span class="line">:platinum.libera.chat NOTICE * :*** Checking Ident</span><br><span class="line">:platinum.libera.chat NOTICE * :*** Looking up your hostname...</span><br><span class="line">:platinum.libera.chat NOTICE * :*** Found your hostname: xxxx</span><br><span class="line">:platinum.libera.chat NOTICE * :*** No Ident response</span><br><span class="line">nick jsbach</span><br><span class="line">user jsbach . . 
.</span><br><span class="line">:platinum.libera.chat 001 jsbach :Welcome to the Libera.Chat Internet Relay Chat Network jsbach</span><br><span class="line">:platinum.libera.chat 002 jsbach :Your host is platinum.libera.chat[188.240.145.102/6697], running version solanum-1.0-dev</span><br><span class="line">:platinum.libera.chat 003 jsbach :This server was created Sat May 22 2021 at 19:04:12 UTC</span><br><span class="line">:platinum.libera.chat 004 jsbach platinum.libera.chat solanum-1.0-dev </span><br><span class="line">:platinum.libera.chat 005 jsbach SAFELIST ELIST=CTU FNC WHOX CALLERID=g KNOCK MONITOR=100 ETRACE CHANTYPES=# EXCEPTS INVEX CHANMODES=eIbq,k,flj,CFLMPQScgimnprstuz :are supported by this server</span><br><span class="line">:platinum.libera.chat 005 jsbach CHANLIMIT=#:250 PREFIX=(ov)@+ MAXLIST=bqeI:100 MODES=4 NETWORK=Libera.Chat STATUSMSG=@+ CASEMAPPING=rfc1459 NICKLEN=16 MAXNICKLEN=16 CHANNELLEN=50 TOPICLEN=390 DEAF=D :are supported by this server</span><br><span class="line">:platinum.libera.chat 005 jsbach TARGMAX=NAMES:1,LIST:1,KICK:1,WHOIS:1,PRIVMSG:4,NOTICE:4,ACCEPT:,MONITOR: EXTBAN=$,ajrxz CLIENTVER=3.0 :are supported by this server</span><br><span class="line">:platinum.libera.chat 251 jsbach :There are 42 users and 19058 invisible on 19 servers</span><br><span class="line">:platinum.libera.chat 252 jsbach 32 :IRC Operators online</span><br><span class="line">:platinum.libera.chat 253 jsbach 3 :unknown connection(s)</span><br><span class="line">:platinum.libera.chat 254 jsbach 15309 :channels formed</span><br><span class="line">:platinum.libera.chat 255 jsbach :I have 1914 clients and 1 servers</span><br><span class="line">:platinum.libera.chat 265 jsbach 1914 2025 :Current local users 1914, max 2025</span><br><span class="line">:platinum.libera.chat 266 jsbach 19100 19898 :Current global users 19100, max 19898</span><br><span class="line">:platinum.libera.chat 250 jsbach :Highest connection count: 2026 (2025 clients) (15019 connections 
received)</span><br><span class="line">:platinum.libera.chat 375 jsbach :- platinum.libera.chat Message of the Day -</span><br><span class="line">:platinum.libera.chat 372 jsbach :- This server provided by NORDUnet/SUNET</span><br><span class="line">:platinum.libera.chat 372 jsbach :- Welcome to libera.chat, the IRC network for free & open-source software</span><br><span class="line">:platinum.libera.chat 372 jsbach :- and peer directed projects.</span><br><span class="line">:platinum.libera.chat 372 jsbach :-</span><br><span class="line">:platinum.libera.chat 372 jsbach :- Please visit us in #libera for questions and support.</span><br><span class="line">:platinum.libera.chat 376 jsbach :End of /MOTD command.</span><br><span class="line">:jsbach MODE jsbach :+RZi</span><br><span class="line">join #test</span><br><span class="line">:jsbach!~jsbach@xxxx JOIN #test</span><br><span class="line">:platinum.libera.chat 353 jsbach = #test :jsbach NightMonkey Daniel071_ python PowaBanga ExEsPi simong maximagui voliborfrey Koragg jstoker</span><br><span class="line">:platinum.libera.chat 366 jsbach #test :End of /NAMES list.</span><br><span class="line">privmsg #test
</span><br></pre></td></tr></table></figure>
<h1><a href="http://kflu.github.io/2021/05/24/2021-05-24-tmux-tricks/">tmux tricks</a></h1>
<p><em>2021-05-24</em></p>
<h4>Useful key bindings</h4>
<ul>
<li><code>PREFIX-w</code>: <code>choose-window</code> - displays windows for user to choose interactively</li>
<li><code>PREFIX-:</code>: activate the tmux command line (with tab completion)</li>
<li><code>PREFIX-q</code>: display window ID for each window</li>
</ul>
<p><code>choose-window</code> is so useful when I'm terminal'ing from my iPhone that I assign F10 to it:</p>
<pre><code>bind-key -n F10 choose-window
</code></pre>
<h4>Deal with nested sessions</h4>
<p>Tmux offers two main benefits: terminal multiplexing and process persistence. Usually you should avoid nested tmux sessions. But if, inside a tmux session, you ssh to a remote machine and do work there, you'll want a tmux session on the remote as well. First off, I suggest using that inner tmux only for process persistence, not terminal multiplexing, because passing tmux prefix keys through to the inner one is inconvenient.</p>
<p>Here's the trick for the second point: the inner (remote) tmux is only for process persistence, meaning that session hosts a single process (be it a shell, vim, etc.). In this case, <strong>you don't really need</strong> the inner tmux to have a status bar or a prefix key.</p>
<ol>
<li>To get rid of the status bar, use <code>set status off</code>.</li>
<li>To disable prefix keys, use <code>set prefix None</code>.</li>
</ol>
<p>Actually, I have a little shell script for launching this:</p>
<figure class="highlight sh"><table><tr><td class="gutter"><pre><span class="line">1</span><br><span class="line">2</span><br><span class="line">3</span><br></pre></td><td class="code"><pre><span class="line"><span class="meta">#!/bin/sh</span></span><br><span class="line">session=<span class="string">"<span class="variable">$1</span>"</span>; <span class="built_in">shift</span></span><br><span class="line">tmux new -A -s <span class="string">"<span class="variable">$session</span>"</span> \; <span class="built_in">set</span> prefix None \; <span class="built_in">set</span> status off</span><br></pre></td></tr></table></figure>
<p>Note that after disabling the prefix key, you won't be able to manipulate that tmux session from inside it. Most likely you won't need to, because it runs a single process and you have the outer tmux. But just for fun, you can still control it from a shell inside that session:</p>
<figure class="highlight sh"><table><tr><td class="gutter"><pre><span class="line">1</span><br><span class="line">2</span><br><span class="line">3</span><br></pre></td><td class="code"><pre><span class="line">tmux detach</span><br><span class="line"><span class="comment"># or more generally, run any tmux command:</span></span><br><span class="line">tmux
</span><br></pre></td></tr></table></figure>
<h1><a href="http://kflu.github.io/2021/05/18/2021-05-18-gnus--imap--gmail--etc/">emailing with gnus - imap, smtp, gmail, etc</a></h1>
<p><em>2021-05-18</em></p>
<p>You don't need any configuration to use Gnus. Just run <code>emacs -f gnus</code>, then type <code>B</code>; you'll be prompted to enter all the information ephemerally:</p>
<ul>
<li>use <code>nnimap</code> as "method"</li>
<li><code>imap.gmail.com</code> as server (Gmail needs to be prepared to enable IMAP though)</li>
<li>type username and password</li>
</ul>
<h2>Gnus Basic Concepts</h2>
<p>Gnus supports newsgroups (NNTP), reading mail (IMAP/POP), and sending mail (SMTP). The news/mail sources are called "<strong>servers</strong>". Each server has many <strong>groups</strong>; for email servers, groups are just mail folders.</p>
<h3>Groups Buffer</h3>
<p>When Gnus starts up, you see the list of groups you have subscribed to (initially empty). This is called the <strong>group buffer</strong>. In the group buffer, you can:</p>
<ul>
<li>Press <code>^</code> for the servers list, where you can enter each server and subscribe to its groups (by pressing <code>u</code>)
<ul>
<li>Subscribing means new items will show up in the group buffer</li>
</ul>
</li>
<li>Press <code>g</code> to manually fetch</li>
<li>Press <code>m</code> to compose a new message
<ul>
<li>This is different from sending mail with <code>c-x m</code>, which has nothing to do with Gnus. The difference is that Gnus can "Gcc" the sent item into a specified Sent (archive) folder.</li>
</ul>
</li>
</ul>
<h3>Summary Buffer</h3>
<p>Entering each subscribed group, you'll see the articles, or posts, or messages. This is called the <strong>summary buffer</strong>. Here most commands are prefixed with <code>B</code>:</p>
<ul>
<li><code>B <DEL></code> to purge</li>
<li><code>d</code> to mark as read (Gnus has many types of read marks, but they don't matter for beginners)</li>
</ul>
<h3>Draft Buffer</h3>
<p>In the draft buffer, press <code>e</code> to edit a draft (entering the composing buffer).</p>
<h3>Composing Buffer</h3>
<p>This is where new messages (<code>m</code>) or drafts (<code>e</code>) are edited. In this view:</p>
<ul>
<li><code>c-c c-c</code> to send
<ul>
<li>If not configured, <code>smtpmail</code> prompts you on how to send - this is not part of Gnus</li>
</ul>
</li>
<li><code>c-x k</code> to drop message</li>
<li><code>c-x c-s</code> to save message</li>
<li><code>c-c y</code> to yank original message when replying</li>
</ul>
<h2>Setting up Gmail IMAP with GNUS</h2>
<p>Gmail makes it really hard to use IMAP:</p>
<ol>
<li>Turn on IMAP access in gmail settings</li>
<li>Turn on "Less secure app access" (<em>NOTE:</em> if you're signed into multiple Google accounts and the target account isn't your primary one, do NOT do this by switching accounts - it will turn on "less secure" access for the primary account. Do it in an incognito window instead.)</li>
<li>Now in GNUS (or any other client), use <code>imap.gmail.com</code> as server, and <strong>full</strong> gmail address as user name, regular password as password.</li>
</ol>
<h2>Setting up SDF SMTP</h2>
<p>SDF's SMTP server advertises CRAM-MD5 auth, but it's broken: client authentication always fails when this method is used. This breaks Emacs smtpmail, which supports other auth methods (plain, login) but doesn't attempt them after cram-md5 fails (which seems to be a <a href="https://lists.gnu.org/archive/html/bug-gnu-emacs/2021-05/msg01669.html" target="_blank" rel="noopener">bug</a>). To work around it, I had to manually disable cram-md5 when using SDF's SMTP:</p>
<figure class="highlight lisp"><table><tr><td class="gutter"><pre><span class="line">1</span><br><span class="line">2</span><br><span class="line">3</span><br><span class="line">4</span><br><span class="line">5</span><br><span class="line">6</span><br><span class="line">7</span><br></pre></td><td class="code"><pre><span class="line">(<span class="name">setq</span> smtpmail-smtp-server <span class="string">"mx.sdf.org"</span>)</span><br><span class="line">(<span class="name">setq</span> smtpmail-smtp-service <span class="number">587</span>)</span><br><span class="line">(<span class="name">setq</span> smtpmail-stream-type 'starttls)</span><br><span class="line"></span><br><span class="line"><span class="comment">; SDF cram-md5 fails and emacs doesn't know to try the rest,</span></span><br><span class="line"><span class="comment">; so, forcefully set the auth method:</span></span><br><span class="line">(<span class="name">setq</span> smtpmail-auth-supported '(plain))</span><br></pre></td></tr></table></figure>
<h2>References</h2>
<ul>
<li><a href="https://www.gnu.org/software/emacs/manual/html_mono/smtpmail.html" target="_blank" rel="noopener">SMTPMail doc</a></li>
<li><a href="https://gnus.org/manual/gnus_toc.html" target="_blank" rel="noopener">Gnus doc</a></li>
<li><a href="https://github.com/redguardtoo/mastering-emacs-in-one-year-guide/blob/master/gnus-guide-en.org" target="_blank" rel="noopener">A practical guide to
Gnus</a></li>
</ul>
<h1><a href="http://kflu.github.io/2021/05/07/2021-05-07-pelican-notes/">Pelican Site Generator Setup Notes</a></h1>
<p><em>2021-05-07</em></p>
<p>Pelican is a popular static site generator written in Python. I find it has a better design and provides saner tooling than Hexo. So maybe I'll switch at some point.</p>
<h2>HTML directory setup</h2>
<p>The ultimate publish dir will be <code>~/public_html</code>. But to contain the source, create <code>~/public_html.src</code>, and use <code>~/public_html.src/output/</code> to hold the generated site. So, eventually,</p>
<pre><code>ln -s ~/public_html.src/output ~/public_html
</code></pre>
<p>will do.</p>
<p>NOTE: make sure all directories on the path to <code>~/public_html.src/output</code> have <code>go+x</code> set.</p>
<h2>Pelican setup</h2>
<p>Pelican can be installed in an isolated way using venv. From inside <code>~/public_html.src/</code>:</p>
<pre><code>python3 -m venv .ve
# no need to activate the venv:
.ve/bin/python3 -m pip install 'pelican[markdown]'
# initialize the site
.ve/bin/pelican-quickstart
</code></pre>
<p>Now a Makefile is created that drives <code>pelican</code> for various tasks. However, running <code>make</code> against this Makefile requires the venv to be activated. So we create a <code>make</code> wrapper that avoids activation:</p>
<pre><code>$ cat >./make <<'EOF'
#!/bin/sh
cd "$( dirname "$0" )"
PATH=./.ve/bin:"$PATH" make "$@"
EOF
$ chmod +x ./make
</code></pre>
<p>From now on, all site related tasks can be done via:</p>
<pre><code>./make clean publish
</code></pre>
<h2>Pelican config</h2>
<p>Configs are in <code>pelicanconf.py</code> and <code>publishconf.py</code>. Variables can be defined in both, so make sure you only set each variable in one of them. I recommend setting everything in <code>pelicanconf.py</code>.</p>
<p>Notable config variables:</p>
<ul>
<li><code>MENUITEMS</code>: defines the "links" displayed in the site's menu bar.</li>
</ul>
<h2>Pelican themes setup</h2>
<p>All themes are in <a href="https://github.com/getpelican/pelican-themes" target="_blank" rel="noopener">getpelican/pelican-themes</a>.</p>
<p>The most basic way to use one is (in ~/public_html.src):</p>
<pre><code>git clone https://github.com/getpelican/pelican-themes.git
</code></pre>
<p>Then set the <code>THEME</code> variable (which takes relative path):</p>
<pre><code>THEME = 'pelican-themes/<some_theme>'
</code></pre>
<h3>Deep customization</h3>
<p>Sometimes you need to edit the theme template directly, for example, I want to remove the footer of the "simple" theme, and there's no way to do so with configuration.</p>
<p>For deep customization, we also want to version control the theme files, so it's not desirable to clone <code>getpelican/pelican-themes</code>. Instead, create a <code>themes</code> folder with only the ones you need and customize inside:</p>
<pre><code>mkdir themes
# assume you cloned pelican-themes *outside* of the site repo:
cp -r ~/Downloads/pelican-themes/basic themes/basic
</code></pre>
<p>Then make edits to <code>themes/basic</code> and set <code>THEME = 'themes/basic'</code>.</p>
<h3>Customizing theme template which is inherited</h3>
<p>If the template you want to edit is <code>extended</code>, e.g., <code>basic/templates/base.html</code> has:</p>
<pre><code>{% extends "!simple/base.html" %}
</code></pre>
<p>What if it is the <code>simple</code> theme you want to edit? We can make a copy (see note 1) of the simple theme into <code>themes/simple2</code>, and have <code>basic</code> point to it. To do so, set this variable:</p>
<pre><code>THEME_TEMPLATES_OVERRIDES = [ 'themes' ]
</code></pre>
<p>Then change <code>basic</code> theme's <code>extends</code> directive to:</p>
<pre><code>{% extends "simple2/templates/base.html" %}
</code></pre>
<p>Note 1: where do we copy the <code>simple</code> theme from, since it comes with <code>pelican</code> itself? You can list all installed themes' locations with:</p>
<pre><code>.ve/bin/pelican-themes
</code></pre>
<h1><a href="http://kflu.github.io/2021/05/02/2021-05-02-markdown-editors/">Markdown Editors Survey</a></h1>
<p><em>2021-05-02</em></p>
<p>Recently my interest in using Markdown documents as personal notes has re-emerged.</p>
<h2>Typora</h2>
<p>This looks like a really good markdown editor. Desirable features include:</p>
<ul>
<li>Cross-platform and free</li>
<li>Paste images from clipboard
<ul>
<li>There's an option to save image to current folder</li>
</ul>
</li>
<li>Cmd-K to insert link</li>
<li>Follow links with cmd-click</li>
<li>Able to follow local links (important for wiki like personal notes)</li>
<li>Outline mode</li>
<li>Always-on-Top (useful for note-taking)</li>
<li><a href="https://support.typora.io/Use-Typora-From-Shell-or-cmd/" target="_blank" rel="noopener">Launch Typora from command line</a>
<ul>
<li>Mac: <code>open -a typora <file></code></li>
</ul>
</li>
</ul>
<p>This feels like quality software. I predict it will become wildly popular.</p>
<p><strong>Note - launch typora from command line</strong></p>
<figure class="highlight sh"><table><tr><td class="gutter"><pre><span class="line">1</span><br><span class="line">2</span><br><span class="line">3</span><br><span class="line">4</span><br><span class="line">5</span><br><span class="line">6</span><br><span class="line">7</span><br><span class="line">8</span><br><span class="line">9</span><br></pre></td><td class="code"><pre><span class="line">cat ><span class="variable">$HOME</span>/.<span class="built_in">local</span>/bin/typora <<<span class="string">'EOF'</span></span><br><span class="line"><span class="meta">#!/bin/sh</span></span><br><span class="line"></span><br><span class="line">(</span><br><span class="line">/Applications/Typora.app/Contents/MacOS/Typora <span class="string">"<span class="variable">$@</span>"</span> >/dev/null 2>&1 &</span><br><span class="line"><span class="built_in">command</span> osascript -e <span class="string">"delay 0.2"</span> -e <span class="string">"tell application \"Typora\" to activate"</span></span><br><span class="line">)&</span><br><span class="line">EOF</span><br><span class="line">chmod +x <span class="variable">$HOME</span>/.<span class="built_in">local</span>/bin/typora</span><br></pre></td></tr></table></figure>
<h2>Vim</h2>
<p>In the past 10+ years I've been using Vim for basically everything, every day. So my default markdown editor has been vim, and for quick edits I still prefer it. Some nice plugins for markdown editing are:</p>
<ul>
<li>Goyo: distraction free writing</li>
<li>vim-markdown: better syntax highlighting
<ul>
<li>ToC: <code>:TOC</code></li>
<li>Link conceal: <code>:set conceallevel=2 concealcursor=nc</code></li>
</ul>
</li>
<li>Follow local link: the good old <code>gf</code></li>
<li>Preview image (maybe <code>gx</code>)</li>
</ul>
<p>If you choose not to use hard wrapping, the following config makes long lines look nice:</p>
<pre><code>set breakindent " soft wrapped lines are indented too
let &showbreak = '↳ ' " visual line break
</code></pre>
<h1><a href="http://kflu.github.io/2021/01/21/2021-01-21-arch-linux-install/">Archlinux notes</a></h1>
<p><em>2021-01-21</em></p>
<h2>ISSUE: NO INTERNET AFTER INSTALLATION AND REBOOT</h2>
<p>Symptom:</p>
<ul>
<li>the interface is DOWN</li>
<li>no name resolve after manually bring the interface UP</li>
<li>systemd-networkd and systemd-resolved services are not started</li>
</ul>
<p>Enable temporary (lost after reboot) internet access:</p>
<figure class="highlight plain"><table><tr><td class="gutter"><pre><span class="line">1</span><br><span class="line">2</span><br></pre></td><td class="code"><pre><span class="line">ip link set <interface> up</span><br><span class="line">dhcpcd</span><br></pre></td></tr></table></figure>
<p>The persistent fix is to use and configure a network manager. The choices are listed <a href="https://wiki.archlinux.org/index.php/Network_configuration#Network_managers" target="_blank" rel="noopener">here</a>. My experiences:</p>
<ol>
<li>systemd-networkd and systemd-resolved are Arch/systemd built-ins, so they require no additional download. However, they are not "smart enough" to support zero configuration, so you'll have to write a network config.</li>
<li>NetworkManager is used by the anarchy-linux installer. It is zero-configuration (it only requires installing and enabling). However, it's an additional 100+ MB download (maybe due to its ties to Gnome).</li>
</ol>
<p>Below is how to configure systemd-networkd and systemd-resolved.</p>
<p>First, configure systemd-networkd. Find the interface name (e.g., <code>enp0s3</code>),
then create file <code>/etc/systemd/network/200-enp0s3.network</code>:</p>
<figure class="highlight plain"><table><tr><td class="gutter"><pre><span class="line">1</span><br><span class="line">2</span><br><span class="line">3</span><br><span class="line">4</span><br><span class="line">5</span><br></pre></td><td class="code"><pre><span class="line">[Match]</span><br><span class="line">Name=enp0s3</span><br><span class="line"></span><br><span class="line">[Network]</span><br><span class="line">DHCP=yes</span><br></pre></td></tr></table></figure>
<p>Finally, enable the services and reboot:</p>
<figure class="highlight plain"><table><tr><td class="gutter"><pre><span class="line">1</span><br><span class="line">2</span><br></pre></td><td class="code"><pre><span class="line">systemctl enable --now systemd-networkd.service systemd-resolved.service</span><br><span class="line">reboot</span><br></pre></td></tr></table></figure>
<p>You should have internet connection now.</p>
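<p>Depending on how <code>/etc/resolv.conf</code> was created, one extra step may be needed so that name resolution actually goes through systemd-resolved; this is the stub-resolver symlink described in the Arch wiki (shown here as a sketch, run as root):</p>

```shell
# Let systemd-resolved manage /etc/resolv.conf via its stub resolver
ln -sf /run/systemd/resolve/stub-resolv.conf /etc/resolv.conf
```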
<p>References:</p>
<ul>
<li><a href="https://wiki.archlinux.org/index.php/Systemd-networkd" target="_blank" rel="noopener">arch systemd-networkd</a></li>
<li><a href="https://wiki.archlinux.org/index.php/Systemd#Enable_installed_units_by_default" target="_blank" rel="noopener">arch systemd</a></li>
</ul>
<h2>INSTALLERS AND ARCH-BASED DISTROS</h2>
<p>To simplify installation, one can use an installer program (once booted into the Arch live CD), or choose an Arch-based distro.</p>
<p><a href="https://itsfoss.com/arch-based-linux-distros/" target="_blank" rel="noopener">This</a> article lists the Arch-based distros. Manjaro seems to be the most popular. However, I tried Manjaro Architect, and the installer seems too low-level - it basically walks through Arch's installation guide, so one might as well just follow the guide.</p>
<p>Among the installers:</p>
<ul>
<li><a href="https://anarchyinstaller.org/" target="_blank" rel="noopener">Anarchy</a> is most polished. Installer experience is close to what I imagine.</li>
<li><a href="https://github.com/MatMoul/archfi" target="_blank" rel="noopener">archfi</a> feels rough-edged. Disk partitioning (among other things) is very limiting. Not so useful.</li>
<li><a href="https://picodotdev.github.io/alis/" target="_blank" rel="noopener">alis</a> is not interactive, but config based.</li>
</ul>
<p>I picked Anarchy - note it's an ISO bundled with the Arch image, so the line between installer and distro is blurry.</p>
<p>Note on internet connection: it appears that if you choose anarchy-desktop, NetworkManager is configured as the network manager and internet works; if you choose anarchy-server, no network manager is installed and there is no internet.</p>
<h2>ABS & MAKEPKG</h2>
<p>makepkg verifies the GPG signature of each package, but the signing key needs to be explicitly trusted by the user:</p>
<figure class="highlight plain"><table><tr><td class="gutter"><pre><span class="line">1</span><br><span class="line">2</span><br></pre></td><td class="code"><pre><span class="line"># makepkg complains can't verify gpg key xxxxxx</span><br><span class="line">gpg --recv-keys
</span><br></pre></td></tr></table></figure>
<h1><a href="http://kflu.github.io/2020/12/20/2020-12-20-luit-usage/">luit Usage</a></h1>
<p><em>2020-12-20</em></p>
<p>luit can translate non-UTF-8 character encodings into UTF-8, but <strong>cannot</strong> do the reverse. There are many ways to invoke luit; first, read the manual.</p>
<figure class="highlight plain"><table><tr><td class="gutter"><pre><span class="line">1</span><br><span class="line">2</span><br><span class="line">3</span><br></pre></td><td class="code"><pre><span class="line">luit -encoding <enc> -- prog [args...] # (1)</span><br><span class="line">prog_outputs_gbk | luit -c -encoding gbk # (2)</span><br><span class="line">luit -encoding <enc> # (3) invokes a nested shell using given encoding</span><br></pre></td></tr></table></figure>
<p>The limitation of usage (1) is that it gets tricky if the program also redirects its output:</p>
<figure class="highlight plain"><table><tr><td class="gutter"><pre><span class="line">1</span><br></pre></td><td class="code"><pre><span class="line">luit -encoding gbk cat >file</span><br></pre></td></tr></table></figure>
<p>You might hope this writes GBK characters into <code>file</code>, but it doesn't: what you type is echoed to stdout, because luit merely runs <code>cat</code>, not <code>cat >file</code>.</p>
<p>Also, this won't work either:</p>
<figure class="highlight plain"><table><tr><td class="gutter"><pre><span class="line">1</span><br><span class="line">2</span><br><span class="line">3</span><br></pre></td><td class="code"><pre><span class="line">cat | luit -encoding gbk -c >file</span><br><span class="line"># or simpler:</span><br><span class="line">luit -encoding gbk -c >file</span><br></pre></td></tr></table></figure>
<p><code>-c</code> makes <code>luit</code> read from stdin and write to stdout. It doesn't work because luit translates GBK to UTF-8 on the way through, so <code>file</code> ends up with UTF-8 content, not GBK.</p>
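<p>You can check what encoding a file actually ended up with by round-tripping it with <code>iconv</code> (a sketch; assumes <code>iconv</code> is installed, and the sample file name and text are made up):</p>

```shell
#!/bin/sh
# "ni hao" (你好) in GBK is the byte sequence C4 E3 BA C3.
printf '\304\343\272\303\n' > file.gbk

# If the file really holds GBK bytes, converting to UTF-8 succeeds
# and prints the readable text:
iconv -f GBK -t UTF-8 file.gbk

rm file.gbk
```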
<p>Finally, to make it work:</p>
<figure class="highlight plain"><table><tr><td class="gutter"><pre><span class="line">1</span><br></pre></td><td class="code"><pre><span class="line">luit -encoding gbk sh -c 'cat >file'</span><br></pre></td></tr></table></figure>
<p>or run <code>luit -encoding gbk</code> which opens a nested shell, then in that shell, do
<code>cat >file</code>.</p>
<h1><a href="http://kflu.github.io/2020/12/15/2020-12-15-working-with-fifo/">Working with FIFOs (Named Pipes)</a></h1>
<p><em>2020-12-15</em></p>
<p>How to properly read from FIFO:</p>
<figure class="highlight plain"><table><tr><td class="gutter"><pre><span class="line">1</span><br></pre></td><td class="code"><pre><span class="line">cat <>fifo</span><br></pre></td></tr></table></figure>
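<p>A self-contained sketch of why the read-write open matters (file names are made up; <code>head</code> stands in for a reader that stops on its own, since <code>cat <>fifo</code> itself never sees EOF):</p>

```shell
#!/bin/sh
# Demonstrate that opening a FIFO read-write makes open() succeed
# immediately, even before any writer shows up.
dir=$(mktemp -d)
mkfifo "$dir/fifo"

# A writer that arrives "later":
( sleep 1; echo hello > "$dir/fifo" ) &

# <> opens the FIFO read-write: no blocking on open(), and no spurious
# EOF when the writer closes, because we hold a write end ourselves.
head -n 1 <>"$dir/fifo"

wait
rm -r "$dir"
```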
<p>Stéphane Chazelas on StackExchange shared some really good explanation
<a href="https://unix.stackexchange.com/a/392754/38968" target="_blank" rel="noopener">here</a>, especially on the strange
behavior of using <code>tail -f fifo</code> to read.</p>
<blockquote>
<p>Like cat, tail will wait for a process to open a file for writing. But here,
since you didn't specify a -n +1 to copy from the beginning, tail will need to
wait until eof to find out what the last 10 lines were, so you won't see
anything until the writing end is closed.</p>
<p>After that, tail will not close its fd to the pipe which means the pipe instance
won't be destroyed, ... That read() will return with eof ... until some other
process opens the file again for writing.</p>
</blockquote>
<p>Another answer, also by Stéphane, is
<a href="https://unix.stackexchange.com/a/522881/38968" target="_blank" rel="noopener">here</a>. This one made me realize
that <code>dd</code> can also be used for reading FIFOs (more flexibly):</p>
<figure class="highlight plain"><table><tr><td class="gutter"><pre><span class="line">1</span><br></pre></td><td class="code"><pre><span class="line">dd bs=64k if="$my_named_pipe" iflag=nonblock status=noxfer</span><br></pre></td></tr></table></figure>
<p>Another flexible reading approach is socat.</p>
<p>Side note - someone else mentioned that UDP can be used for "non-blocking"
data transfer: one end can send/receive data without the other end present. This
could be quite convenient/easy to work with.</p>
<h1><a href="http://kflu.github.io/2020/09/05/2020-09-05-macos-chinese-mud/">Setting up environment to play Chinese MUD on MacOS (with Tintin++)</a></h1>
<p><em>2020-09-05</em></p>
<p><em>Update: See bottom for an update</em></p>
<p>Old Chinese MUDs use GB* charsets, e.g., GBK and GB2312. Both the Terminal app and iTerm2 can be configured and used to play. To set up the terminal:</p>
<p>Hint: Create a dedicated profile for MUD playing, call it "CN MUD"</p>
<p><strong>To properly display GBK characters</strong></p>
<ol>
<li>Change the terminal app's character encoding to GBK (in "advanced")</li>
<li>Check the box "Unicode East Asian Ambiguous characters are wide" (in
"advanced"). Without this, the ASCII art in the game (e.g. maps) will be
misaligned.</li>
<li>After that, the font can be <strong>left</strong> unchanged, i.e., the English monospace font</li>
</ol>
<p><strong>To input GBK characters in shell</strong></p>
<figure class="highlight plain"><table><tr><td class="gutter"><pre><span class="line">1</span><br><span class="line">2</span><br><span class="line">3</span><br><span class="line">4</span><br><span class="line">5</span><br></pre></td><td class="code"><pre><span class="line"># zsh autocompletion helps prompting the values</span><br><span class="line">export LC_ALL=zh_CN.GBK</span><br><span class="line"></span><br><span class="line"># or, make a function to fire up tintin++:</span><br><span class="line">LC_ALL=zh_CN.GBK tt++</span><br></pre></td></tr></table></figure>
<p><strong>To input GBK characters in Tintin++</strong></p>
<p>When tt++ is launched, type in:</p>
<figure class="highlight plain"><table><tr><td class="gutter"><pre><span class="line">1</span><br></pre></td><td class="code"><pre><span class="line">#config CHARSET GBK</span><br></pre></td></tr></table></figure>
<p>This config is only documented <a href="https://tintin.sourceforge.io/board/viewtopic.php?p=5795&sid=f408c5eb7c78012d690369390b84458d" target="_blank" rel="noopener">here</a>. That post also doesn't mention <code>GBK</code> as a
valid value, but it appears to work.</p>
<p><em>Without this step, typing GBK characters would freeze the tt++ application and
you'd have to kill it from another tty.</em></p>
<h2>Update: Tue Sep 22 23:42:37 PDT 2020</h2>
<p>I found tmux doesn't work with the above solution, because tmux doesn't support
non-UTF-8 locales:</p>
<pre><code> LC_CTYPE The character encoding locale(1). It is used for two separate
purposes. For output to the terminal, UTF-8 is used if the -u
option is given or if LC_CTYPE contains "UTF-8" or "UTF8".
Otherwise, only ASCII characters are written and non-ASCII
characters are replaced with underscores (`_')...
~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~
</code></pre>
<p>Luckily, I found a more elegant way to solve the problem without changing the
terminal locale: using <code>luit</code>.</p>
<p><code>luit</code> can convert a single non-UTF-8 program's input and output between UTF-8 and
a specified locale. Since tmux doesn't support non-UTF-8 locales, it is NOT useful to attempt
<code>luit -encoding GBK tmux</code>. However, within a tmux pane, it is useful to do:</p>
<figure class="highlight plain"><table><tr><td class="gutter"><pre><span class="line">1</span><br><span class="line">2</span><br><span class="line">3</span><br></pre></td><td class="code"><pre><span class="line">luit -encoding GBK tt++</span><br><span class="line"># within tt++ do:</span><br><span class="line">#config CHARSET GBK</span><br></pre></td></tr></table></figure>
<p>In another tmux pane for displaying logging:</p>
<figure class="highlight plain"><table><tr><td class="gutter"><pre><span class="line">1</span><br></pre></td><td class="code"><pre><span class="line">luit -encoding GBK tail -f chat.log</span><br></pre></td></tr></table></figure>
<h2>TMUX setup</h2>
<p>Whether or not you use luit, the terminal app needs to display East Asian wide
characters as wide. However, once that's enabled, tmux's pane splitter looks
strange: the vertical splitter is invisible most of the time.</p>
<p>To fix this, tmux 3.2 introduced a setting <code>pane-border-lines</code> that allows using
only ASCII characters for the splitters.</p>
<p>I was able to build tmux 3.2rc from source tarball easily on MacOS:</p>
<figure class="highlight plain"><table><tr><td class="gutter"><pre><span class="line">1</span><br><span class="line">2</span><br></pre></td><td class="code"><pre><span class="line">./configure && make</span><br><span class="line">sudo make install</span><br></pre></td></tr></table></figure>
<p>Once installed, start tmux and set:</p>
<figure class="highlight plain"><table><tr><td class="gutter"><pre><span class="line">1</span><br></pre></td><td class="code"><pre><span class="line">set pane-border-lines simple</span><br></pre></td></tr></table></figure>
<p>This solves the problem.</p>
<h2>Thu Aug 18 23:29:08 PDT 2022</h2>
<p>I found the easiest way is:</p>
<ol>
<li>use native terminal without using tmux
<ol>
<li>set terminal encoding to GBK</li>
<li>enable "ambiguous characters are double-width"</li>
</ol>
</li>
<li>in tt++, use <code>#config CHARSET GBK</code> to type Chinese characters
<ol>
<li>tt++ also supports GBK to UTF conversion (<code>#config charset GBK1TOUTF8</code>). See <a href="https://tintin.mudhalla.net/faq.php" target="_blank" rel="noopener">tt++
Proof of concept - Minimalistic IRC bot via AWKhttp://kflu.github.io/2020/08/15/2020-08-15-awk-irc-bot/2020-08-15T07:00:00.000Z2023-10-14T22:36:20.135Z
<p>I realized that awk is ideal for expect-like automation. The difficulty I faced
was how awk can "control" another program's (in this case, an IRC client's)
stdin and stdout; i.e., awk needs to both read from the program's stdout
and write to its stdin. I solved this problem by creating a FIFO for taking
inputs and hooking the program to it. Below are the details:</p>
<p>First, let's set up our system:</p>
<figure class="highlight sh"><table><tr><td class="gutter"><pre><span class="line">1</span><br><span class="line">2</span><br><span class="line">3</span><br><span class="line">4</span><br><span class="line">5</span><br><span class="line">6</span><br><span class="line">7</span><br><span class="line">8</span><br><span class="line">9</span><br><span class="line">10</span><br><span class="line">11</span><br><span class="line">12</span><br><span class="line">13</span><br><span class="line">14</span><br><span class="line">15</span><br><span class="line">16</span><br><span class="line">17</span><br></pre></td><td class="code"><pre><span class="line">>>> mkfifo ups <span class="comment"># FIFO for taking inputs to IRC (upstream)</span></span><br><span class="line"></span><br><span class="line"><span class="comment"># Connect to IRC server. The <> style redirection ensures the </span></span><br><span class="line"><span class="comment"># EOF sent from other processes to the fifo doesn't cause this</span></span><br><span class="line"><span class="comment"># pipeline to terminate. It also ensures non-buffering mode</span></span><br><span class="line"><span class="comment"># so outputs are piped through without delay.</span></span><br><span class="line">>>> nc chat.freenode.org 6667 <>ups | awk <span class="string">"<span class="variable">$(cat <<'EOF'</span></span></span><br><span class="line"><span class="string"><span class="variable">{</span></span></span><br><span class="line"><span class="string"><span class="variable"> print "[received] " $0</span></span></span><br><span class="line"><span class="string"><span class="variable">}</span></span></span><br><span class="line"><span class="string"><span class="variable"></span></span></span><br><span class="line"><span class="string"><span class="variable">/kfkfkf/ {</span></span></span><br><span class="line"><span class="string"><span class="variable"> print "+++++ I saw kfkfkf! +++++"</span></span></span><br><span class="line"><span class="string"><span class="variable"> print "hooohaaa" > "ups";</span></span></span><br><span class="line"><span class="string"><span class="variable">}</span></span></span><br><span class="line"><span class="string"><span class="variable">EOF</span></span></span><br><span class="line"><span class="string"><span class="variable">)</span>"</span></span><br></pre></td></tr></table></figure>
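<p>The same FIFO trick can be sketched outside the shell as well. Below is a minimal Python sketch of the pattern (for illustration only, with <code>cat</code> standing in for the IRC client): open the FIFO read-write so a writer closing it never delivers EOF, feed it to the child's stdin, and read the child's stdout:</p>

```python
import os
import subprocess
import tempfile

# Create the FIFO ("ups" as in the post) in a scratch directory.
fifo = os.path.join(tempfile.mkdtemp(), "ups")
os.mkfifo(fifo)

# Open read+write, like the shell's <>ups redirection: the FIFO never
# sees EOF just because some writer closed it.
fd = os.open(fifo, os.O_RDWR)

# `cat` stands in for `nc chat.freenode.org 6667` here.
proc = subprocess.Popen(["cat"], stdin=fd, stdout=subprocess.PIPE, text=True)

# Any other process can inject input, like `cat > ups` in the post.
with open(fifo, "w") as f:
    f.write("hooohaaa\n")

reply = proc.stdout.readline()  # the "bot" side reads the program's output
proc.kill()
os.close(fd)
print(reply, end="")
```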
<p>Now this bot is up and running without human intervention. In case you want to
interact with this IRC session, you can access the input using <code>cat</code>:</p>
<figure class="highlight plain"><table><tr><td class="gutter"><pre><span class="line">1</span><br><span class="line">2</span><br><span class="line">3</span><br><span class="line">4</span><br><span class="line">5</span><br><span class="line">6</span><br><span class="line">7</span><br><span class="line">8</span><br><span class="line">9</span><br></pre></td><td class="code"><pre><span class="line">>>> cat > ups</span><br><span class="line"></span><br><span class="line">nick foobar</span><br><span class="line">user foobar _ _ foobar</span><br><span class="line"></span><br><span class="line">privmsg nickserv :identify <password></span><br><span class="line"></span><br><span class="line">privmsg <nick> :<msg></span><br><span class="line">privmsg <#channel> :<msg></span><br></pre></td></tr></table></figure>
<p>The system looks like this:</p>
<pre><code>+---------------------------+
|                           |
|            cat            |
|                           |
+------------+--------------+
             |
             |
             v
          +--+--+
          | ups +<---------------+
          +--+--+                |
             |                   |
             |                   |
             v                   |
+------------+--------------+    |
|                           |    |
| nc chat.freenode.org 6667 |    |
|                           |    |
+------------+--------------+    |
             |                   |
             |                   |
             v                   |
+------------+--------------+    |
|                           |    |
|            awk            +----+
|                           |
+---------------------------+
</code></pre>
<p>One can connect to Freenode IRC using SSL instead; either of the below works:</p>
<pre><code>openssl s_client -quiet -connect chat.freenode.org:6697
socat stdio openssl-connect:chat.freenode.org:6697
</code></pre>
<h1>Taking it further</h1>
<p>A pattern can be observed here: the main program is the IRC session, whose
outputs stream through a pipeline unmodified. A set of bot programs take the
IRC output as stdin, and their outputs are the bot commands that should be sent
back to IRC. Then, we can use a utility tee program (<a href="https://github.com/kflu/proctee" target="_blank" rel="noopener"><code>proctee</code></a>) to
chain those bots together.</p>
<figure class="highlight sh"><table><tr><td class="gutter"><pre><span class="line">1</span><br><span class="line">2</span><br><span class="line">3</span><br><span class="line">4</span><br><span class="line">5</span><br><span class="line">6</span><br><span class="line">7</span><br><span class="line">8</span><br><span class="line">9</span><br><span class="line">10</span><br><span class="line">11</span><br><span class="line">12</span><br><span class="line">13</span><br><span class="line">14</span><br></pre></td><td class="code"><pre><span class="line">mkfifo ups <span class="comment"># input to IRC</span></span><br><span class="line">mkfifo display <span class="comment"># "display" window </span></span><br><span class="line"></span><br><span class="line"><span class="comment"># read user and password</span></span><br><span class="line"><span class="built_in">echo</span> -n <span class="string">'user: '</span>; <span class="built_in">read</span> user; </span><br><span class="line"><span class="built_in">echo</span> -n <span class="string">'pass: '</span>; stty -<span class="built_in">echo</span>; <span class="built_in">read</span> pass; stty <span class="built_in">echo</span></span><br><span class="line"></span><br><span class="line"><>ups socat stdio openssl-connect:chat.freenode.org:6697 \</span><br><span class="line">| { proctee -o display -- awk <span class="string">'{print}'</span> ;: logging bot } \</span><br><span class="line">| { proctee -o display -o ups -- bots/user.awk \</span><br><span class="line"> -v user=<span class="string">"user"</span> \</span><br><span class="line"> pass=<span class="string">"pass"</span> ;: user bot } \ </span><br><span class="line">| { proctee -o display -o ups -- bots/weather ;: weather bot } \</span><br><span class="line">| { proctee -o display -o ups -- bots/time ;: time bot
Nginx noteshttp://kflu.github.io/2020/05/09/2020-05-09-nginx-notes/2020-05-09T07:00:00.000Z2023-10-14T22:36:20.135Z
<h3>Running in daemon-less (foreground) mode</h3>
<p>Create a config with:</p>
<pre><code># filename: server.conf
daemon off;
</code></pre>
<p>Run:</p>
<pre><code>nginx -c "server.conf"
</code></pre>
<h3>Minimum Config</h3>
<figure class="highlight plain"><table><tr><td class="gutter"><pre><span class="line">1</span><br><span class="line">2</span><br><span class="line">3</span><br><span class="line">4</span><br><span class="line">5</span><br><span class="line">6</span><br><span class="line">7</span><br><span class="line">8</span><br><span class="line">9</span><br><span class="line">10</span><br><span class="line">11</span><br><span class="line">12</span><br></pre></td><td class="code"><pre><span class="line">daemon off; # only for dev mode</span><br><span class="line">events {}</span><br><span class="line"></span><br><span class="line">http {</span><br><span class="line"> server {</span><br><span class="line"> listen 8888;</span><br><span class="line"></span><br><span class="line"> location / {</span><br><span class="line"> proxy_pass http://localhost:3000;</span><br><span class="line"> }</span><br><span class="line"> }</span><br><span class="line">}</span><br></pre></td></tr></table></figure>
React and Typescript noteshttp://kflu.github.io/2020/04/27/2020-04-27-react-typescript/2020-04-27T07:00:00.000Z2023-10-14T22:36:20.135Z
<p><a href="https://create-react-app.dev/" target="_blank" rel="noopener">Create React App</a> is React + Webpack. It scaffolds projects.</p>
<figure class="highlight plain"><table><tr><td class="gutter"><pre><span class="line">1</span><br></pre></td><td class="code"><pre><span class="line">npx create-react-app myapp --template=typescript</span><br></pre></td></tr></table></figure>
<p>React components:</p>
<figure class="highlight plain"><table><tr><td class="gutter"><pre><span class="line">1</span><br><span class="line">2</span><br><span class="line">3</span><br><span class="line">4</span><br><span class="line">5</span><br><span class="line">6</span><br><span class="line">7</span><br><span class="line">8</span><br><span class="line">9</span><br><span class="line">10</span><br><span class="line">11</span><br><span class="line">12</span><br><span class="line">13</span><br><span class="line">14</span><br><span class="line">15</span><br><span class="line">16</span><br><span class="line">17</span><br><span class="line">18</span><br></pre></td><td class="code"><pre><span class="line">// functional component:</span><br><span class="line">function Foo(x: SomeType) {</span><br><span class="line"> return (<div> ... </div>);</span><br><span class="line">}</span><br><span class="line"></span><br><span class="line">// class component:</span><br><span class="line">class Bar extends Component {</span><br><span class="line"></span><br><span class="line"> render() { return (<div>...</div>); }</span><br><span class="line"></span><br><span class="line"> async componentDidMount() {</span><br><span class="line"> let data = await fetch("http://...");</span><br><span class="line"> }</span><br><span class="line">}</span><br><span class="line"></span><br><span class="line"></span><br><span class="line">// Use in JSX:</span><br><span class="line">(<Foo {...x} />)</span><br></pre></td></tr></table></figure>
<ul>
<li><a href="https://create-react-app.dev/" target="_blank" rel="noopener">Create React App</a></li>
<li><a href="https://www.typescriptlang.org/docs/handbook/basic-types.html" target="_blank" rel="noopener">Typescript handbook</a> is simple and clear</li>
<li><a href="https://medium.com/better-programming/building-basic-react-authentication-e20a574d5e71" target="_blank" rel="noopener">React Authentication</a> - token
Zoneminder setup guidehttp://kflu.github.io/2020/04/16/2020-04-16-zoneminder/2020-04-16T07:00:00.000Z2023-10-14T22:36:20.135Z
<p>Installing Zoneminder is not straightforward at all. You need to set up all the dependencies and configure them: e.g., PHP, Apache, MySQL.</p>
<p>An easier way is to use <a href="https://github.com/dlandon/zoneminder" target="_blank" rel="noopener">the ZM docker image</a>.</p>
<p>However, FreeBSD doesn't support Docker. So I installed an Ubuntu Server guest on my FreeBSD machine, and installed Docker on the Ubuntu Server instead. I followed <a href="https://www.digitalocean.com/community/tutorials/how-to-install-and-use-docker-on-ubuntu-18-04" target="_blank" rel="noopener">this guide</a> to install Docker on Ubuntu. Basically, just set up Docker's repository for <code>apt</code>, then simply <code>apt install docker-ce</code>.</p>
<p>Once Docker is ready, use the following commands to install and start the ZM service. Note that it takes a while to start up the first time. You can check the log with <code>docker container logs Zoneminder</code>.</p>
<p>Once ZM starts up, point your browser to <code>http://ip:8080/zm</code> and you should see the ZM landing page.</p>
<figure class="highlight plain"><table><tr><td class="gutter"><pre><span class="line">1</span><br><span class="line">2</span><br><span class="line">3</span><br><span class="line">4</span><br><span class="line">5</span><br><span class="line">6</span><br><span class="line">7</span><br><span class="line">8</span><br><span class="line">9</span><br><span class="line">10</span><br><span class="line">11</span><br><span class="line">12</span><br><span class="line">13</span><br><span class="line">14</span><br><span class="line">15</span><br><span class="line">16</span><br><span class="line">17</span><br><span class="line">18</span><br><span class="line">19</span><br><span class="line">20</span><br><span class="line">21</span><br></pre></td><td class="code"><pre><span class="line">docker pull dlandon/zoneminder</span><br><span class="line"></span><br><span class="line">sudo docker run -d --name="Zoneminder" \</span><br><span class="line">--restart=always \</span><br><span class="line">--net="bridge" \</span><br><span class="line">--privileged="true" \</span><br><span class="line">-p 8080:80/tcp \</span><br><span class="line">-p 9000:9000/tcp \</span><br><span class="line">-e TZ="America/Los_Angeles" \</span><br><span class="line">-e SHMEM="50%" \</span><br><span class="line">-e PUID="99" \</span><br><span class="line">-e PGID="100" \</span><br><span class="line">-e INSTALL_HOOK="0" \</span><br><span class="line">-e INSTALL_FACE="0" \</span><br><span class="line">-e INSTALL_TINY_YOLO="0" \</span><br><span class="line">-e INSTALL_YOLO="0" \</span><br><span class="line">-e MULTI_PORT_START="0" \</span><br><span class="line">-e MULTI_PORT_END="0" \</span><br><span class="line">-v "/mnt/Zoneminder":"/config":rw \</span><br><span class="line">-v "/mnt/Zoneminder/data":"/var/cache/zoneminder":rw \</span><br><span class="line">dlandon/zoneminder</span><br></pre></td></tr></table></figure>
FreeBSD BHyve Setup Notehttp://kflu.github.io/2020/04/08/2020-04-08-freebsd-bhyve/2020-04-08T07:00:00.000Z2023-10-14T22:36:20.135Z
<p>bhyve is FreeBSD's hypervisor. The native setup guide is <a href="https://www.freebsd.org/doc/handbook/virtualization-host-bhyve.html" target="_blank" rel="noopener">here</a>.
However, a higher-level wrapper called <a href="https://github.com/churchers/vm-bhyve" target="_blank" rel="noopener">vm-bhyve</a>
is available, and I'm using that.</p>
<p>I'm following the installation guide in the <a href="https://github.com/churchers/vm-bhyve/blob/master/README.md" target="_blank" rel="noopener">README</a>. In summary:</p>
<figure class="highlight plain"><table><tr><td class="gutter"><pre><span class="line">1</span><br><span class="line">2</span><br><span class="line">3</span><br><span class="line">4</span><br><span class="line">5</span><br><span class="line">6</span><br><span class="line">7</span><br><span class="line">8</span><br><span class="line">9</span><br><span class="line">10</span><br><span class="line">11</span><br><span class="line">12</span><br><span class="line">13</span><br><span class="line">14</span><br><span class="line">15</span><br><span class="line">16</span><br><span class="line">17</span><br><span class="line">18</span><br></pre></td><td class="code"><pre><span class="line">pkg install vm-bhyve</span><br><span class="line">pkg install grub2-bhyve # for linux guests</span><br><span class="line">pkg install bhyve-firmware # for UEFI support</span><br><span class="line">zfs create zroot/vm</span><br><span class="line"></span><br><span class="line"># update /etc/rc.conf</span><br><span class="line">sysrc vm_enable="YES"</span><br><span class="line">sysrc vm_dir="zfs:zroot/vm"</span><br><span class="line"></span><br><span class="line">vm init</span><br><span class="line">cp /usr/local/share/examples/vm-bhyve/* /zroot/vm/.templates/</span><br><span class="line"></span><br><span class="line"># This creates network interface `vm-public`</span><br><span class="line">vm switch create public</span><br><span class="line">vm switch add public em0</span><br><span class="line"></span><br><span class="line">vm iso http://repo1.sea.innoscale.net/centos/8.1.1911/isos/x86_64/CentOS-8.1.1911-x86_64-dvd1.iso</span><br><span class="line">vm iso http://releases.ubuntu.com/18.04.4/ubuntu-18.04.4-live-server-amd64.iso</span><br></pre></td></tr></table></figure>
<p>A few notes:</p>
<p>I created a zfs dataset <code>zroot/vm</code> for storing the vms, and set <code>vm_dir</code> to <code>zfs:zroot/vm</code>.</p>
<p>I followed the guide to name the vm switch <code>public</code>. And I can see a network
interface <code>vm-public</code> was created for me:</p>
<figure class="highlight plain"><table><tr><td class="gutter"><pre><span class="line">1</span><br><span class="line">2</span><br><span class="line">3</span><br><span class="line">4</span><br><span class="line">5</span><br><span class="line">6</span><br><span class="line">7</span><br><span class="line">8</span><br><span class="line">9</span><br></pre></td><td class="code"><pre><span class="line">vm-public: flags=8843<UP,BROADCAST,RUNNING,SIMPLEX,MULTICAST> metric 0 mtu 1500</span><br><span class="line"> ether 06:b3:5f:a8:b6:2c</span><br><span class="line"> nd6 options=1<PERFORMNUD></span><br><span class="line"> groups: bridge vm-switch viid-4c918@</span><br><span class="line"> id 00:00:00:00:00:00 priority 32768 hellotime 2 fwddelay 15</span><br><span class="line"> maxage 20 holdcnt 6 proto rstp maxaddr 2000 timeout 1200</span><br><span class="line"> root id 00:00:00:00:00:00 priority 32768 ifcost 0 port 0</span><br><span class="line"> member: em0 flags=143<LEARNING,DISCOVER,AUTOEDGE,AUTOPTP></span><br><span class="line"> ifmaxaddr 0 port 1 priority 128 path cost 200000</span><br></pre></td></tr></table></figure>
<p><strong>Choice of Linux distro and ISO</strong></p>
<p>I initially wanted to use CentOS. However, CentOS by default uses graphical
installation, and I wasn't able to get that working (bhyve seems to support VNC
for UEFI graphics, but I didn't get that working either).</p>
<p>I had better luck with Ubuntu Server, which offers text-mode installation. I
was able to start and install it:</p>
<figure class="highlight plain"><table><tr><td class="gutter"><pre><span class="line">1</span><br><span class="line">2</span><br><span class="line">3</span><br><span class="line">4</span><br><span class="line">5</span><br><span class="line">6</span><br></pre></td><td class="code"><pre><span class="line">vm iso http://releases.ubuntu.com/18.04.4/ubuntu-18.04.4-live-server-amd64.iso</span><br><span class="line">vm create -t ubuntu ub</span><br><span class="line">vm install ub ubuntu-18.04.4-live-server-amd64.iso</span><br><span class="line"></span><br><span class="line">vm console ub # attaches to the guest OS console</span><br><span class="line"># ... in a few moment Ubuntu ISO started and installation proceeded normally ...</span><br></pre></td></tr></table></figure>
<p>The security update at the end of Ubuntu installation did fail, so I chose
"cancel update and reboot".</p>
<p><strong>Fixing Grub</strong></p>
<p>After reboot, Ubuntu Server booted into the grub prompt. <a href="https://unix.stackexchange.com/a/330852/38968" target="_blank" rel="noopener">This SO post</a> is helpful. It means grub can't find the root partition. So at the grub prompt:</p>
<figure class="highlight plain"><table><tr><td class="gutter"><pre><span class="line">1</span><br><span class="line">2</span><br><span class="line">3</span><br><span class="line">4</span><br><span class="line">5</span><br></pre></td><td class="code"><pre><span class="line">set prefix=(hd0,gpt2)/boot/grub</span><br><span class="line">set root=(hd0,gpt2)</span><br><span class="line">insmod linux</span><br><span class="line">insmod normal</span><br><span class="line">normal</span><br></pre></td></tr></table></figure>
<p>This started Ubuntu normally. Then issue this command to fix grub:</p>
<figure class="highlight plain"><table><tr><td class="gutter"><pre><span class="line">1</span><br></pre></td><td class="code"><pre><span class="line">sudo update-grub</span><br></pre></td></tr></table></figure>
<p><strong>However, after reboot, it got stuck in the grub prompt again...</strong></p>
<p><strong>[WORKING SOLUTION]</strong> It was because Ubuntu Server installs boot to the 2nd
partition, but by default bhyve's grub looks for boot in the 1st partition. See
<a href="https://github.com/churchers/vm-bhyve/wiki/Configuring-Grub-Guests" target="_blank" rel="noopener">here</a>.</p>
<p>The fix is to add:</p>
<pre><code>grub_run_partition="2"
</code></pre>
<p>into the VM's conf (<code>/zroot/vm/ub/ub.conf</code>).</p>
<p>Now the VM can start normally.</p>
<p>Here's the VM config that works for Ubuntu Server 18.04.4:</p>
<figure class="highlight plain"><table><tr><td class="gutter"><pre><span class="line">1</span><br><span class="line">2</span><br><span class="line">3</span><br><span class="line">4</span><br><span class="line">5</span><br><span class="line">6</span><br><span class="line">7</span><br><span class="line">8</span><br><span class="line">9</span><br></pre></td><td class="code"><pre><span class="line">loader="uefi"</span><br><span class="line"># ubuntu server installs boot to 2nd partition</span><br><span class="line">grub_run_partition="2"</span><br><span class="line">cpu=1</span><br><span class="line">memory=512M</span><br><span class="line">network0_type="virtio-net"</span><br><span class="line">network0_switch="public"</span><br><span class="line">disk0_type="virtio-blk"</span><br><span class="line">disk0_name="disk0.img"</span><br></pre></td></tr></table></figure>
<h2>Cloud images</h2>
<p>Trying out cloud images. The Ubuntu Server minimal image is only a little over 100MB:</p>
<figure class="highlight plain"><table><tr><td class="gutter"><pre><span class="line">1</span><br><span class="line">2</span><br></pre></td><td class="code"><pre><span class="line">pkg install qemu-utils # needed for using cloud images</span><br><span class="line">vm img https://cloud-images.ubuntu.com/minimal/releases/bionic/release-20200318/ubuntu-18.04-minimal-cloudimg-amd64.img</span><br></pre></td></tr></table></figure>
<p>Cloud images don't allow password login. Luckily, vm-bhyve also supports <a href="https://github.com/churchers/vm-bhyve#using-cloud-init" target="_blank" rel="noopener">cloud
init</a>, so you can inject
SSH public keys:</p>
<figure class="highlight plain"><table><tr><td class="gutter"><pre><span class="line">1</span><br><span class="line">2</span><br></pre></td><td class="code"><pre><span class="line">pkg install cdrkit-genisoimage # required by cloud init</span><br><span class="line">vm create -t ubuntu_server -i ubuntu-18.04-minimal-cloudimg-amd64.img -C -k ~/.ssh/id_rsa.pub ub2</span><br></pre></td></tr></table></figure>
<h2>Cloud init</h2>
<p>Cloud init has two parts:</p>
<ol>
<li>the cloud-init service running inside the guest machine, reading data sources
(e.g., a mounted ISO filesystem "seed.iso") to read the init config.</li>
<li>the preparation of the "seed.iso" to "inject" the init config. This is done
on the host, before booting the guest VM for the first time.</li>
</ol>
<p>At the moment, vm-bhyve doesn't support all the cloud-init functions. Its
support basically lives in the <code>vm create</code> command:</p>
<ol>
<li>it reads the cloud-init related configs it understands (very limited)</li>
<li>dumps them into the <code><vm>/.cloud-init</code> folder</li>
<li>creates the <code>seed.iso</code> by invoking <code>genisoimage -output ./seed.iso -volid cidata -joliet -rock .cloud-init/*</code></li>
</ol>
<p>And cloud-init will be triggered the first time the VM is booted.</p>
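<p>For reference, the seed preparation in step 2 can be sketched by hand. The sketch below is illustrative (the file contents and names like <code>ub2</code> and the SSH key are placeholders; the file names <code>user-data</code>/<code>meta-data</code> and the volume id <code>cidata</code> come from cloud-init's NoCloud data source):</p>

```python
import os
import tempfile

# Prepare a .cloud-init folder like vm-bhyve's `vm create` does.
cidir = os.path.join(tempfile.mkdtemp(), ".cloud-init")
os.makedirs(cidir)

# meta-data: instance identity (values here are placeholders).
with open(os.path.join(cidir, "meta-data"), "w") as f:
    f.write("instance-id: ub2\nlocal-hostname: ub2\n")

# user-data: the "#cloud-config" header is required; the SSH key
# below is a placeholder for your real public key.
with open(os.path.join(cidir, "user-data"), "w") as f:
    f.write("#cloud-config\n"
            "ssh_authorized_keys:\n"
            "  - ssh-rsa AAAA... user@host\n")

# Then pack it on the host (not run here), exactly as vm-bhyve does:
#   genisoimage -output seed.iso -volid cidata -joliet -rock .cloud-init/*
print(sorted(os.listdir(cidir)))  # ['meta-data', 'user-data']
```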
<p>So to circumvent vm-bhyve's limitation, I:</p>
<ol>
<li>create a new VM</li>
<li>create a <code>.cloud-init</code> folder with proper config files as I desire</li>
<li>create the
Notes About ONVIF IP Camerahttp://kflu.github.io/2020/04/04/2020-04-04-onvif-ip-cam/2020-04-04T07:00:00.000Z2023-10-14T22:36:20.135Z
<h2>CONNECTING THE CAMERA TO NETWORK</h2>
<p>Getting the camera to connect to the network is still a device-dependent task.
For my camera there's no ethernet port for configuration, so it relies on
onboard microphone as I/O:</p>
<ol>
<li>connect to iOS app</li>
<li>Specify WIFI connection info in the app</li>
<li>The app modulates the info into an audio wave and uses the phone's speaker to send
the info to the camera (which receives it through its onboard microphone)</li>
<li>Camera now connects to LAN</li>
</ol>
<h2>GETTING INFORMATION ABOUT THE CAMERA</h2>
<p>Now this is where ONVIF shines. It defines a set of standard web services. The
specs are <a href="https://www.onvif.org/profiles/specifications/" target="_blank" rel="noopener">here</a>. Those services are defined in WSDL, so using .NET Core to
generate client code is really easy:</p>
<h3>Finding the device IP</h3>
<p>I found the device IP by going into my router and searching by MAC (MAC is printed on a label on the camera). Alternatively, device IP can be obtained from the app.</p>
<h3>Finding service endpoints</h3>
<p>The ONVIF core spec states that the device management (<code>devicemgmt.wsdl</code>)
endpoint is fixed to <code>http://onvif_host/onvif/device_service</code> (section 5.1.1).</p>
<p>One useful tool is <a href="https://github.com/patrickmichalina/camera-probe" target="_blank" rel="noopener"><code>camera-probe</code></a>. It's an NPM CLI that discovers
devices on the LAN. It was able to detect my camera.</p>
<h3>Using WSDL clients</h3>
<p>.NET Core has tooling to generate a WSDL client given WSDL documents. The two web services that are particularly useful are <code>device.wsdl</code> (or, confusingly, <code>devicemgmt.wsdl</code>) and <code>media.wsdl</code>.</p>
<figure class="highlight plain"><table><tr><td class="gutter"><pre><span class="line">1</span><br><span class="line">2</span><br><span class="line">3</span><br><span class="line">4</span><br><span class="line">5</span><br></pre></td><td class="code"><pre><span class="line">dotnet tool install --global dotnet-svcutil</span><br><span class="line">mkdir onvif && cd onvif</span><br><span class="line">dotnet new console</span><br><span class="line">dotnet-svcutil https://www.onvif.org/ver10/device/wsdl/devicemgmt.wsdl -n '*,mgmt'</span><br><span class="line">dotnet-svcutil https://www.onvif.org/ver10/media/wsdl/media.wsdl -n '*,media'</span><br></pre></td></tr></table></figure>
<p>To use the WSDL clients, the most important namespace is
<code>System.ServiceModel.*</code>. Creating service clients requires specifying a binding
(<code>BasicHttpBinding</code>) and the service endpoint:</p>
<figure class="highlight plain"><table><tr><td class="gutter"><pre><span class="line">1</span><br><span class="line">2</span><br><span class="line">3</span><br><span class="line">4</span><br><span class="line">5</span><br><span class="line">6</span><br><span class="line">7</span><br><span class="line">8</span><br><span class="line">9</span><br></pre></td><td class="code"><pre><span class="line">var client = new DeviceClient(</span><br><span class="line"> new System.ServiceModel.BasicHttpBinding(),</span><br><span class="line"> new System.ServiceModel.EndpointAddress("http://192.168.0.19:80/onvif/device_service")</span><br><span class="line">);</span><br><span class="line"></span><br><span class="line">var mediaClient = new MediaClient(</span><br><span class="line"> new System.ServiceModel.BasicHttpBinding(),</span><br><span class="line"> new System.ServiceModel.EndpointAddress("http://192.168.0.19:80/onvif/device_service")</span><br><span class="line">);</span><br></pre></td></tr></table></figure>
<p>I used the code below to get various information about the device, the streams, and
the details for each stream:</p>
<figure class="highlight plain"><table><tr><td class="gutter"><pre><span class="line">1</span><br><span class="line">2</span><br><span class="line">3</span><br><span class="line">4</span><br><span class="line">5</span><br><span class="line">6</span><br><span class="line">7</span><br><span class="line">8</span><br><span class="line">9</span><br><span class="line">10</span><br><span class="line">11</span><br><span class="line">12</span><br><span class="line">13</span><br><span class="line">14</span><br><span class="line">15</span><br><span class="line">16</span><br><span class="line">17</span><br><span class="line">18</span><br><span class="line">19</span><br><span class="line">20</span><br><span class="line">21</span><br><span class="line">22</span><br><span class="line">23</span><br><span class="line">24</span><br><span class="line">25</span><br><span class="line">26</span><br><span class="line">27</span><br><span class="line">28</span><br><span class="line">29</span><br></pre></td><td class="code"><pre><span class="line">System.Console.WriteLine(js(</span><br><span class="line"> await client.GetCapabilitiesAsync(new CapabilityCategory[] {CapabilityCategory.All})</span><br><span class="line">));</span><br><span class="line"></span><br><span class="line">System.Console.WriteLine(js(</span><br><span class="line"> await mediaClient.GetServiceCapabilitiesAsync()</span><br><span class="line">));</span><br><span class="line"></span><br><span class="line">System.Console.WriteLine(js(</span><br><span class="line"> await mediaClient.GetProfilesAsync()</span><br><span class="line">));</span><br><span class="line"></span><br><span class="line">System.Console.WriteLine(js( </span><br><span class="line"> await mediaClient.GetVideoSourceConfigurationAsync("stream0")</span><br><span class="line">));</span><br><span class="line"></span><br><span class="line">System.Console.WriteLine(js(</span><br><span class="line"> await mediaClient.GetStreamUriAsync(</span><br><span 
class="line">        new StreamSetup()</span><br><span class="line">        {</span><br><span class="line">            Stream = StreamType.RTPUnicast,</span><br><span class="line">            Transport = new Transport</span><br><span class="line">            {</span><br><span class="line">                Protocol = TransportProtocol.HTTP,</span><br><span class="line">            },</span><br><span class="line">        },</span><br><span class="line">        "profile_VideoSource_1"</span><br><span class="line">    )</span><br><span class="line">));</span><br></pre></td></tr></table></figure>
<p>The <code>GetStreamUriAsync</code> call gives the RTSP stream address for stream profile
<code>profile_VideoSource_1</code>: <code>rtsp://192.168.0.xx:2600/stream0</code>.</p>
<p>Note that this stream requires authentication. The username and password are,
unfortunately, device-dependent. For my device, the instructions are on the
Amiccom website under the NVR section. They basically say the username is <code>admin</code>
and the password must be obtained from the app. I did find a generated password in the
app.</p>
<p>So I was able to connect to the stream in VLC using the following URI:</p>
<pre><code>rtsp://<username>:<password>@<device_addr>:2600/stream0
</code></pre>
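<p>For illustration (the credentials and address below are placeholders, not my device's actual ones), the URI can be assembled in Python, percent-encoding the credentials in case the app-generated password contains reserved characters:</p>

```python
from urllib.parse import quote

def rtsp_uri(user, password, host, port=2600, path="stream0"):
    # Percent-encode the credentials so characters like ':' or '@' in a
    # generated password don't break the URI.
    return (f"rtsp://{quote(user, safe='')}:{quote(password, safe='')}"
            f"@{host}:{port}/{path}")

print(rtsp_uri("admin", "p@ss:word", "192.168.0.10"))
# rtsp://admin:p%40ss%3Aword@192.168.0.10:2600/stream0
```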
<h1>References</h1>
<ul>
<li><a href="https://www.onvif.org/profiles/specifications/" target="_blank" rel="noopener">specs</a></li>
<li><a href="https://www.onvif.org/wp-content/uploads/2016/12/ONVIF_WG-APG-Application_Programmers_Guide-1.pdf" target="_blank" rel="noopener">programmers_guide</a></li>
<li><a href="https://www.onvif.org/resources/" target="_blank" rel="noopener">programming_resources</a></li>
<li><a href="https://github.com/patrickmichalina/camera-probe" target="_blank" rel="noopener">camera_probe</a></li>
<li><a href="https://amiccomcam.com/nvr-instructions/" target="_blank" rel="noopener">amiccom_nvr_instructions</a></li>
</ul>
<h1>Setting up Monogame Development on Mac</h1>
<p><a href="http://kflu.github.io/2020/01/12/2020-01-12-monogame-on-mac/" target="_blank" rel="noopener">http://kflu.github.io/2020/01/12/2020-01-12-monogame-on-mac/</a> (2020-01-12)</p>
<p>It is not easy to set up MonoGame development on a Mac. The officially supported setup is mono +
Visual Studio for Mac + the MonoGame extension. However, there were several issues:</p>
<ul>
<li>The MonoGame extension does not support the new VS for Mac 2019, only VS for Mac
2017. However, the 2017 download isn't available from Microsoft anymore (I
had to download it from <a href="https://dl.xamarin.com/VsMac/VisualStudioForMac-7.5.4.3.dmg" target="_blank" rel="noopener">here</a>).</li>
<li>I couldn't scaffold projects with "Monogame Mac Application" (error: "Exception
has been thrown by the target of an invocation.").</li>
<li>I was able to scaffold "Monogame Windows Application" or "iPad/iOS" projects, but I
couldn't build and run them.</li>
</ul>
<h2>Monogame on .Net core on Mac</h2>
<p>I was able to scaffold and run using this setup:</p>
<ul>
<li>Install .Net Core</li>
<li>Install vscode</li>
<li>Install <a href="http://community.monogame.net/t/monogame-3-7-1-release/11173" target="_blank" rel="noopener">the standalone monogame pipeline application</a> (follow the official
mac installation doc)</li>
</ul>
<p>Now, following <a href="http://blog.dylanwilson.net/posts/monogame-on-dotnet-core/" target="_blank" rel="noopener">this blog</a>, first install the MonoGame project template:</p>
<pre><code>dotnet new -i "MonoGame.Template.CSharp"
</code></pre>
<p>In a working directory:</p>
<pre><code>dotnet new mgdesktopgl
</code></pre>
<p>With the basic setup, you should be able to run the blank game with:</p>
<pre><code>dotnet run
</code></pre>
<p>Now let's follow the <a href="http://www.monogame.net/documentation/?page=adding_content" target="_blank" rel="noopener">official guide</a> to add a ball sprite to the canvas.</p>
<p>To add content, open the MonoGame Pipeline tool and add <code>ball.png</code> to the
<code>Content</code> project. Then make the code changes according to the guide.</p>
<hr>
<p><strong>Fix monogame freeimage issue:</strong></p>
<p>The Pipeline tool modifies <code>Content.mgcb</code> to include some MSBuild operations for
<code>ball.png</code> - specifically, the <code>TextureImporter</code>. Under the hood, it interops
with <code>libfreeimage.dylib</code>, which, due to a MonoGame bug, isn't shipped with the
<code>monogame.content.builder</code> nuget package. To fix this issue:</p>
<figure class="highlight plain"><table><tr><td class="gutter"><pre><span class="line">1</span><br><span class="line">2</span><br><span class="line">3</span><br><span class="line">4</span><br><span class="line">5</span><br><span class="line">6</span><br><span class="line">7</span><br><span class="line">8</span><br><span class="line">9</span><br></pre></td><td class="code"><pre><span class="line">brew install freeimage</span><br><span class="line"></span><br><span class="line"># find the installed `libfreeimage.dylib`:</span><br><span class="line">brew ls freeimage | grep -e 'libfreeimage\..*\.dylib'</span><br><span class="line"></span><br><span class="line"># link it into the MG builder build tool folder:</span><br><span class="line">ln -s \</span><br><span class="line"> /usr/local/Cellar/freeimage/3.18.0/lib/libfreeimage.3.18.0.dylib \</span><br><span class="line"> ~/.nuget/packages/monogame.content.builder/3.7.0.4/build/MGCB/build/libfreeimage.dylib</span><br></pre></td></tr></table></figure>
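<p>The same fix can be scripted. A sketch (the Cellar version and nuget package path in the usage comment are from my machine and will differ on yours):</p>

```python
import glob
import os

def link_freeimage(cellar_glob, dest):
    """Symlink a brew-installed libfreeimage dylib into the MGCB build folder."""
    candidates = sorted(glob.glob(cellar_glob))
    if not candidates:
        raise FileNotFoundError("no libfreeimage dylib found; run `brew install freeimage`")
    if not os.path.lexists(dest):
        os.symlink(candidates[0], dest)
    return candidates[0]

# Example (paths are machine-specific assumptions):
# link_freeimage(
#     "/usr/local/Cellar/freeimage/*/lib/libfreeimage.*.dylib",
#     os.path.expanduser("~/.nuget/packages/monogame.content.builder/"
#                        "3.7.0.4/build/MGCB/build/libfreeimage.dylib"))
```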
<hr>
<p>Now we are ready to build and run the game and see the ball image:</p>
<pre><code>dotnet run
</code></pre>
<p>The complete working code is at <a href="https://github.com/kflu/monogame-setup" target="_blank" rel="noopener">https://github.com/kflu/monogame-setup</a></p>
<h2>2020-01-31 Some more learnings (quick note)</h2>
<p>The Pipeline tool is just a GUI that helps you construct the <code>Content.mgcb</code> file.
The content building actually happens at build time; the <code>Content.mgcb</code> file is
really just the command-line arguments for invoking the <code>MGCB</code> command. In this sense,
the Pipeline tool is not strictly necessary - so why would I want to avoid it?</p>
<p><strong>Problem 1: Pipeline tool crashed when adding reference</strong></p>
<p>I tried to add a Tiled map to the content - it requires additional reference assemblies
for <code>MGCB</code> to build. The Pipeline tool can add references, but it has a bug that makes it
crash when adding them.</p>
<p><strong>Problem 2: Difficult to add dependencies of the reference manually in MGCB
file</strong></p>
<p>It's not easy to add a reference to the MGCB file manually, because you must add not only the
reference itself but also all of its dependencies. Something like this:</p>
<pre><code>/reference:PATH1/MonoGame.Extended.Content.Pipeline.dll
/reference:PATH2/MonoGame.Extended.dll
/reference:PATH3/MonoGame.Extended.Tiled.dll
/reference:PATH4/Newtonsoft.Json.dll
</code></pre>
<p>I don't know whether the Pipeline tool adds all the dependencies for you when you add a
reference assembly through it. But since it's broken, I don't get that help, and manually
figuring out all the dependencies and typing their physical paths is infeasible.</p>
<p>What I did instead was create another .NET Core class library (netstandard2.0), add the
top-level reference via <code>dotnet add package ...</code>, then run <code>dotnet publish</code>. That gave me
the reference assembly and all its dependencies, flattened in the <code>publish</code>
folder. Then I added all of them as references in the MGCB file:</p>
<pre><code>/reference:../../pipeline_libs/bin/Debug/netstandard2.0/publish/MonoGame.Extended.Content.Pipeline.dll
/reference:../../pipeline_libs/bin/Debug/netstandard2.0/publish/MonoGame.Extended.dll
/reference:../../pipeline_libs/bin/Debug/netstandard2.0/publish/MonoGame.Extended.Tiled.dll
/reference:../../pipeline_libs/bin/Debug/netstandard2.0/publish/Newtonsoft.Json.dll
</code></pre>
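<p>Generating those lines can be scripted. A small sketch (the <code>publish</code> directory path is whatever your class library published to):</p>

```python
import glob
import os

def mgcb_references(publish_dir):
    """Emit one /reference: line per assembly in a `dotnet publish` output folder."""
    dlls = sorted(glob.glob(os.path.join(publish_dir, "*.dll")))
    return [f"/reference:{dll}" for dll in dlls]

# Example (path is an assumption):
# print("\n".join(mgcb_references("pipeline_libs/bin/Debug/netstandard2.0/publish")))
```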
<p><strong>Problem 3: relative paths between content files</strong></p>
<p>The relative paths between the Tiled map (.tmx), the tilesets (.tsx), and the raw assets
(.png) become inconsistent when they are copied or imported into the game project
directory. So <code>dotnet build</code> (or just <code>mgcb</code>) often complains that a file could
not be found.</p>
<p>This is a process issue; I'd feel more comfortable once a pattern is found.</p>
<p><strong>Problem 4: <code>/build</code> command with source & destination doesn't work</strong></p>
<p>The <code>/build:<src>:<dest></code> form doesn't work, although according to the MGCB documentation it
should. It reports that file path <code><src>:<dest></code> was not found, as if it doesn't
understand this form.</p>
<h1>Writing PyCharm Plugin</h1>
<p><a href="http://kflu.github.io/2019/08/24/2019-08-24-writing-pycharm-plugin/" target="_blank" rel="noopener">http://kflu.github.io/2019/08/24/2019-08-24-writing-pycharm-plugin/</a> (2019-08-24)</p>
<p><strong>the complete plugin project is hosted <a href="https://github.com/kflu/pyprop3" target="_blank" rel="noopener">here</a></strong></p>
<p>I spent days getting this to work, so I'm documenting it here. I reported <a href="https://youtrack.jetbrains.com/issue/PY-35838" target="_blank" rel="noopener">a PyCharm
issue</a> asking for <code>__property__</code> to be recognized as <code>property</code>. The
developer recommended writing a plugin to fix it. PyCharm plugins must be
written in Java (my choice) or Kotlin, in IntelliJ. The official documentation
<a href="https://www.jetbrains.org/intellij/sdk/docs/basics/getting_started.html" target="_blank" rel="noopener">recommends using Gradle</a>. However, <a href="https://youtrack.jetbrains.com/issue/PY-35838" target="_blank" rel="noopener">I couldn't get it
working</a>. Besides what I reported there, I also ran into the
following difficulties:</p>
<ol>
<li>On my work laptop (macOS), IDEA struggled to find the right JDK to use; I had
to download and install JDK 12. This may be due to <code>JAVA_HOME</code> settings or some
such. Gradle also struggled to import and build.</li>
<li>On my personal laptop, IDEA prompted me to fix Windows Defender settings, and Gradle also hung at
importing.</li>
<li>In <code>plugin.xml</code>, <code><depends></code> complained <code>com.intellij.modules.python</code> cannot
be resolved.</li>
<li>Installed PyCharm locally and in IDEA set JDK to PyCharm's to solve
<code>com.intellij.modules.python</code>, but build fails to find
<code>PyKnownDecoratorProvider</code>.</li>
</ol>
<p>Here's what works for me and all the tricks.</p>
<p>First, install PyCharm locally, so its JDK (which contains
<code>PyKnownDecoratorProvider</code>) can be used later in IDEA.</p>
<p>Second, configure the project JDKs and libraries to add the PyCharm JDK.</p>
<p>In IDEA, create a new project. Select "IntelliJ Platform Plugin" from <strong>project
templates</strong> - <strong>NOT</strong> the one under "Gradle". I think this is what the official
documentation refers to as "DevKit", which JetBrains discourages (but it worked for
me).</p>
<p>In Project SDK, make sure the PyCharm JDK is selected.</p>
<p>Once project is opened, in <code>plugin.xml</code>, insert:</p>
<pre><code><depends>com.intellij.modules.python</depends>
</code></pre>
<p>This has auto-complete. If IDEA complains that <code>...modules.python</code> can't be found,
it means the JDK isn't set up correctly. Once <code><depends></code> is specified
correctly, add:</p>
<pre><code><extensions defaultExtensionNs="Pythonid">
<knownDecoratorProvider implementation="pyprop" />
</extensions>
</code></pre>
<p>The relevant official doc is <a href="https://www.jetbrains.org/intellij/sdk/docs/basics/plugin_structure/plugin_extensions_and_extension_points.html" target="_blank" rel="noopener">here</a>. However, pay attention to change
<code>defaultExtensionNs</code> to <code>Pythonid</code>. Otherwise you don't get autocomplete for
<code>knownDecoratorProvider</code> and IDEA complains; and although the build will still pass and PyCharm
will load the plugin, it will NOT instantiate your plugin implementation class.
When you get autocomplete for <code>knownDecoratorProvider</code>, you know you got it
right.</p>
<p><code>knownDecoratorProvider</code> is declared in
<a href="https://github.com/JetBrains/intellij-community/blob/master/python/src/META-INF/python-core-common.xml" target="_blank" rel="noopener"><code>python-core-common.xml</code></a>. Note its qualified name
begins with <code>Pythonid</code> - that's what we put in <code>defaultExtensionNs</code>.</p>
<p>Now create the class <code>pyprop</code> under <code>src</code>:</p>
<pre><code>import com.jetbrains.python.psi.PyKnownDecoratorProvider;
import org.jetbrains.annotations.Nullable;

public class pyprop implements PyKnownDecoratorProvider {
    @Nullable
    @Override
    public String toKnownDecorator(String decoratorName) {
        return decoratorName.equals("__property__") ? "property" : null;
    }
}
</code></pre>
<p>Build, and it should succeed.</p>
<h2>Debugging</h2>
<p>IntelliJ provides good PyCharm plugin debugging. Just hit "debug plugin", it'll
bring up a PyCharm instance with the plugin loaded. You can set breakpoints in
plugin implementation to make sure they got hit. If so, you know you get it
right.</p>
<h2>Deploy</h2>
<p>Hit "Build" -> "Prepare Plugin Module for Deployment". This will package the plugin
into a jar file which can be installed locally by PyCharm.</p>
<p>Finally, the complete plugin is <a href="https://github.com/kflu/pyprop3" target="_blank" rel="noopener">on GitHub</a>.</p>
<h1>Python await inside context manager</h1>
<p><a href="http://kflu.github.io/2019/06/10/2019-06-10-python-await-in-context-manager/" target="_blank" rel="noopener">http://kflu.github.io/2019/06/10/2019-06-10-python-await-in-context-manager/</a> (2019-06-10)</p>
<p>Question: for the code below, will the context get closed correctly when
<code>async_task()</code> finishes?</p>
<figure class="highlight python"><table><tr><td class="gutter"><pre><span class="line">1</span><br><span class="line">2</span><br></pre></td><td class="code"><pre><span class="line"><span class="keyword">with</span> get_context():</span><br><span class="line"> <span class="keyword">await</span> async_task()</span><br></pre></td></tr></table></figure>
<figure class="highlight python"><table><tr><td class="gutter"><pre><span class="line">1</span><br><span class="line">2</span><br><span class="line">3</span><br><span class="line">4</span><br><span class="line">5</span><br><span class="line">6</span><br><span class="line">7</span><br><span class="line">8</span><br><span class="line">9</span><br><span class="line">10</span><br><span class="line">11</span><br><span class="line">12</span><br><span class="line">13</span><br><span class="line">14</span><br><span class="line">15</span><br><span class="line">16</span><br><span class="line">17</span><br><span class="line">18</span><br><span class="line">19</span><br><span class="line">20</span><br><span class="line">21</span><br></pre></td><td class="code"><pre><span class="line"><span class="comment">#!/usr/bin/env python3</span></span><br><span class="line"></span><br><span class="line"><span class="keyword">import</span> asyncio <span class="keyword">as</span> aio</span><br><span class="line"></span><br><span class="line"></span><br><span class="line"><span class="class"><span class="keyword">class</span> <span class="title">Ctx</span>:</span></span><br><span class="line"> <span class="function"><span class="keyword">def</span> <span class="title">__enter__</span><span class="params">(self)</span>:</span></span><br><span class="line"> print(<span class="string">"enter"</span>)</span><br><span class="line"></span><br><span class="line"> <span class="function"><span class="keyword">def</span> <span class="title">__exit__</span><span class="params">(self, exc_type, exc_val, exc_tb)</span>:</span></span><br><span class="line"> print(<span class="string">"exit"</span>)</span><br><span class="line"></span><br><span class="line"></span><br><span class="line"><span class="keyword">async</span> <span class="function"><span class="keyword">def</span> <span class="title">main</span><span class="params">()</span>:</span></span><br><span class="line"> <span 
class="keyword">with</span> Ctx() <span class="keyword">as</span> ctx:</span><br><span class="line"> print(<span class="string">"sleeping"</span>)</span><br><span class="line"> <span class="keyword">await</span> aio.sleep(<span class="number">2</span>)</span><br><span class="line"></span><br><span class="line"></span><br><span class="line">loop = aio.get_event_loop()</span><br><span class="line">loop.run_until_complete(main())</span><br></pre></td></tr></table></figure>
<p>This test produces:</p>
<pre><code>enter
sleeping
exit
</code></pre>
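<p>A variant worth checking, which the test above doesn't cover (this is my addition, not part of the original experiment): <code>__exit__</code> still runs when the awaited code raises.</p>

```python
import asyncio

events = []

class Ctx:
    def __enter__(self):
        events.append("enter")

    def __exit__(self, exc_type, exc_val, exc_tb):
        events.append("exit")
        return False  # don't swallow the exception

async def main():
    with Ctx():
        await asyncio.sleep(0)
        raise RuntimeError("boom")

try:
    asyncio.run(main())
except RuntimeError:
    events.append("caught")

print(events)  # ['enter', 'exit', 'caught']
```

So the exception propagates through the <code>await</code> point, <code>__exit__</code> fires, and the caller still sees the error.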
<p><strong>So yes - a context manager does work with an await operation inside.</strong></p>
<h1>How to write a Scheme interpreter</h1>
<p><a href="http://kflu.github.io/2018/04/15/2018-04-15-implement-scheme/" target="_blank" rel="noopener">http://kflu.github.io/2018/04/15/2018-04-15-implement-scheme/</a> (2018-04-15)</p>
<p>This article describes the design and implementation of the <a href="https://github.com/Microsoft/schemy" target="_blank" rel="noopener">Schemy</a> interpreter.
Note that the design and implementation of schemy is heavily inspired by Peter Norvig's
<a href="http://norvig.com/lispy2.html" target="_blank" rel="noopener">article</a>.</p>
<p>tl;dr - here's a flowchart summarizing an implementation of a scheme interpreter:</p>
<p><img src="flowchart.png" alt="flowchart"></p>
<h2>S-Expression</h2>
<p>S-expression is the central construct of a Scheme program. An S-expression can be in the
form of any of the following:</p>
<ul>
<li>
<p>a value, e.g., <code>3.14</code>, <code>"some text"</code>, or any other values that your runtime supports
(in the case of schemy, this could be any .NET object we exposed).</p>
</li>
<li>
<p>a symbol, e.g., <code>count</code> (a variable), <code>sum</code> (a function).</p>
</li>
<li>
<p>a list of s-expressions, e.g., <code>(sum (+ 1 x) y (get-value "total"))</code></p>
</li>
</ul>
<p>Formally:</p>
<pre><code>Expression := Symbol | (Expression ...) | Value
</code></pre>
<h3>S-Expression representation</h3>
<p>In Schemy, we simply represent an expression as an <code>object</code>, which could be either:</p>
<ol>
<li>an instance of a <code>Symbol</code></li>
<li>a list of objects</li>
<li>any .NET object</li>
</ol>
<p>In a language that supports discriminated union, it could be more elegantly modeled. But
that's not in the scope of this discussion.</p>
<p>One may also note that in above representation, #2 and #3 could overlap - a .NET object
(#3) could be a list of objects that could be treated by the interpreter as an expression
(#2). This is as expected, and a powerful feature - in Scheme, a program (s-expression) can be treated as data - and
be processed, transformed! This is called <a href="https://en.wikipedia.org/wiki/Homoiconicity" target="_blank" rel="noopener">Homoiconicity</a>.</p>
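<p>This representation can be sketched in Python with a tiny reader: plain strings stand in for symbols and Python lists for list expressions (Schemy's actual types differ, and this reader handles only numbers and symbols):</p>

```python
def tokenize(src):
    # Surround parens with spaces so split() separates them.
    return src.replace("(", " ( ").replace(")", " ) ").split()

def parse(tokens):
    token = tokens.pop(0)
    if token == "(":
        expr = []
        while tokens[0] != ")":
            expr.append(parse(tokens))
        tokens.pop(0)  # consume ")"
        return expr
    for convert in (int, float):
        try:
            return convert(token)  # a value
        except ValueError:
            pass
    return token  # a symbol, represented here as a plain str

print(parse(tokenize("(sum (+ 1 x) y)")))  # ['sum', ['+', 1, 'x'], 'y']
```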
<h2>Evaluation</h2>
<p>Now, an S-expression alone is not very useful. For example, the symbol
s-expression <code>count</code> doesn't make much sense without knowing what information <code>count</code>
holds. This leads to the concept of <strong>evaluating</strong> an s-expression.</p>
<p>An s-expression is evaluated in a context, or "environment", which is nothing but a mapping
from symbols to values. Therefore we could define an <code>EvaluateExpression</code> function:</p>
<pre><code>eval(expr: Expression, env: Environment) -> object
</code></pre>
<ul>
<li>
<p>If <code>expr</code> is already a value, simply return it</p>
</li>
<li>
<p>If <code>expr</code> is a symbol, we just look up that symbol in <code>env</code> and return the value</p>
</li>
<li>
<p>If <code>expr</code> is a list of s-expressions - this could be a syntax form evaluation or
function invocation. In the simplest idea, we first evaluate each element expression
of the list recursively, then depending on the meaning of the first value (a function,
or a syntax form indicator, e.g, <code>if</code>), we handle them differently. Below gives some
examples on how to handle them naively, just for illustration, we'll cover
optimizations later.</p>
<ul>
<li>
<p>for <code>(if test conseq alt)</code>, we evaluate <code>test</code>, if true, we evaluate and return
<code>conseq</code>, otherwise, <code>alt</code>.</p>
</li>
<li>
<p>for <code>(define id expr)</code>, we evaluate <code>expr</code>, and update the environment to
associate symbol <code>id</code> to value of <code>expr</code>.</p>
</li>
<li>
<p>for <code>(func expr1 expr2 expr3)</code>, this is function invocation. We drill into the
detail in the following section.</p>
</li>
</ul>
</li>
</ul>
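<p>The cases above can be sketched naively in Python, with a flat dict standing in for the environment (no scoping chain yet; that comes next) and built-in functions supplied as Python callables. This is an illustration, not Schemy's actual evaluator:</p>

```python
def eval_expr(expr, env):
    if isinstance(expr, str):            # a symbol: look it up
        return env[expr]
    if not isinstance(expr, list):       # already a value
        return expr
    head = expr[0]
    if head == "if":                     # (if test conseq alt)
        _, test, conseq, alt = expr
        return eval_expr(conseq if eval_expr(test, env) else alt, env)
    if head == "define":                 # (define id sub_expr)
        _, name, sub_expr = expr
        env[name] = eval_expr(sub_expr, env)
        return None
    # Otherwise: function invocation (func arg1 arg2 ...)
    func = eval_expr(head, env)
    args = [eval_expr(arg, env) for arg in expr[1:]]
    return func(*args)

env = {"+": lambda a, b: a + b}
eval_expr(["define", "x", 2], env)
print(eval_expr(["+", "x", 3], env))  # 5
```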
<h2>Function</h2>
<p>What is a function, and how do we invoke one? A function is made of the following parts:</p>
<ol>
<li>a list of parameters - this is a list of symbols which should be bound to some value
at invocation time.</li>
<li>an environment under which the body expression should be evaluated</li>
<li>an s-expression representing the body (or implementation) of the function. This
s-expression usually references some symbols whose definitions reside either in the
parameters (defined at invocation time) or in the environment (defined at definition
time - lexical scoping (see below))</li>
</ol>
<p>Now for a function defined as:</p>
<figure class="highlight scheme"><table><tr><td class="gutter"><pre><span class="line">1</span><br></pre></td><td class="code"><pre><span class="line">(<span class="name"><span class="builtin-name">define</span></span> f (<span class="name"><span class="builtin-name">lambda</span></span> (x y) (<span class="name"><span class="builtin-name">+</span></span> x y)))</span><br></pre></td></tr></table></figure>
<p>When we evaluate <code>(f 1 2)</code>, we first make an environment containing the mappings <code>x=1, y=2</code>, and evaluate the body <code>(+ x y)</code> using that parameter environment.</p>
<p>But that's not the whole story. What if the body of <code>f</code> references symbols which are
not among the parameters, e.g.:</p>
<figure class="highlight scheme"><table><tr><td class="gutter"><pre><span class="line">1</span><br><span class="line">2</span><br></pre></td><td class="code"><pre><span class="line">(<span class="name"><span class="builtin-name">define</span></span> x <span class="number">2</span>)</span><br><span class="line">(<span class="name"><span class="builtin-name">define</span></span> f (<span class="name"><span class="builtin-name">lambda</span></span> (y) (<span class="name"><span class="builtin-name">+</span></span> x y)))</span><br></pre></td></tr></table></figure>
<p>When invoking <code>(f 3)</code>, we would bind <code>y=3</code>. But where do we get the value for <code>x</code>? When we
define <code>f</code>, its environment contains the definition for <code>x</code>. So when we construct the
parameter environment for the invocation, we <strong>link</strong> it to an outer environment that contains
<code>x=2</code>. The lookup logic for a key in an environment is:</p>
<ol>
<li>Try to look up the key in the current environment's symbol table. If found, return it.</li>
<li>If not found in the current environment, go to the parent (outer) environment and attempt the
lookup there.</li>
</ol>
<p>There can be many layers of environments. If none of the environments contains a mapping
for the key, an error is thrown.</p>
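<p>The lookup chain above can be sketched as a small class (again an illustration in Python, not Schemy's actual implementation):</p>

```python
class Environment:
    """A symbol table optionally linked to an outer (parent) environment."""

    def __init__(self, bindings=None, parent=None):
        self.bindings = dict(bindings or {})
        self.parent = parent

    def lookup(self, key):
        if key in self.bindings:
            return self.bindings[key]
        if self.parent is not None:
            return self.parent.lookup(key)  # walk outward
        raise KeyError(f"unbound symbol: {key}")

outer = Environment({"x": 2})
inner = Environment({"y": 3}, parent=outer)  # parameter env linked at invocation
print(inner.lookup("x"), inner.lookup("y"))  # 2 3
```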
<p>This is a core concept and language feature called <strong>lexical scoping</strong>, or <strong>closure</strong>.
Many more advanced language features can be implemented based up on this, including
classes, but we'll not go into the detail.</p>
<p>Wrapping up, we now know how to evaluate an s-expression or a function. An interesting
observation to make at this point:</p>
<blockquote>
<p>Evaluating an S-expression and evaluating a function are quite similar - both require an expression
and an environment, and we evaluate the expression using the symbol definitions in the
environment.</p>
</blockquote>
<h2>Tail call optimization</h2>
<p>With the above description, the function evaluation looks like the following in the <code>eval</code>
function:</p>
<figure class="highlight plain"><table><tr><td class="gutter"><pre><span class="line">1</span><br><span class="line">2</span><br><span class="line">3</span><br><span class="line">4</span><br><span class="line">5</span><br><span class="line">6</span><br><span class="line">7</span><br></pre></td><td class="code"><pre><span class="line">define eval(expr, env):</span><br><span class="line">    ...</span><br><span class="line">    if (is_invocation(expr)):  # function call (func x y z ...)</span><br><span class="line">        (func, args) = (expr[0], expr[1:])</span><br><span class="line">        func_env = make_env(func.params, args).link(func.env)</span><br><span class="line">        func_body = func.body</span><br><span class="line">        return eval(func_body, func_env)</span><br></pre></td></tr></table></figure>
<p>However, this implementation involves a recursive call (more specifically, a tail call)
into the <code>eval</code> function. And for implementation language like C# or Python which doesn't
support tail call optimization, that means if we evaluate a recursive function, the
evaluation itself is a recursion in the implementation language, and is subject to stack
overflow:</p>
<figure class="highlight scheme"><table><tr><td class="gutter"><pre><span class="line">1</span><br><span class="line">2</span><br><span class="line">3</span><br></pre></td><td class="code"><pre><span class="line">(<span class="name"><span class="builtin-name">define</span></span> (<span class="name">sum-to</span> n acc)</span><br><span class="line"> (<span class="name"><span class="builtin-name">if</span></span> (<span class="name"><span class="builtin-name">=</span></span> n <span class="number">0</span>) acc </span><br><span class="line"> (<span class="name">sum-to</span> (<span class="name"><span class="builtin-name">-</span></span> n <span class="number">1</span>) (<span class="name"><span class="builtin-name">+</span></span> acc n))))</span><br></pre></td></tr></table></figure>
<p>This would cause <code>eval</code> to be called recursively each time we encounter <code>(sum-to ...)</code>, so the stack
size is O(n).</p>
<p>How can we optimize this case? If, when evaluating <code>eval(expr, env)</code>, we know <code>expr</code> is a
function call <code>(f x y ...)</code>, then instead of calling <code>eval</code> recursively, we can <strong>swap
out</strong> <code>expr</code> with <code>f.body</code> (which is also an expression), and swap out <code>env</code> with
<code>make_env(func.params, args).link(func.env)</code>:</p>
<pre><code>define eval(expr, env):
    while True:
        ...
        if (is_invocation(expr)):  # function call (func x y z ...)
            (func, args) = (expr[0], expr[1:])
            # swap out expr/env and loop instead of recursing:
            env = make_env(func.params, args).link(func.env)
            expr = func.body
</code></pre>
<h1>Setup sshguard and pf to block brute-forcers</h1>
<p><a href="http://kflu.github.io/2018/02/11/2018-02-11-sshguard/" target="_blank" rel="noopener">http://kflu.github.io/2018/02/11/2018-02-11-sshguard/</a> (2018-02-11)</p>
<p>sshguard is much more reliable and easier to set up than the Python-based fail2ban.
The <a href="https://www.sshguard.net/docs/setup/" target="_blank" rel="noopener"><code>manpage</code></a> is very helpful and guides you through setup: <code>man 7 sshguard-setup</code>.</p>
<pre><code>$ pkg install sshguard
$ vim /usr/local/etc/sshguard.conf # conf file is self-explaining
$ cat >> /etc/pf.conf
table <sshguard> persist
block in proto tcp from <sshguard>
$ cat >> /etc/rc.conf
sshguard_enable="YES"
$ service pf restart
$ service sshguard restart
$ pfctl -t sshguard -T show # show sshguard table content
$ pfctl -vvsTables # show all pf tables
$ grep sshguard /var/log/auth # show sshguard blocking IPs in the log
</code></pre>
<h1>Log-Structured Merge (LSM) Tree and SSTable</h1>
<p><a href="http://kflu.github.io/2018/02/09/2018-02-09-lsm-tree/" target="_blank" rel="noopener">http://kflu.github.io/2018/02/09/2018-02-09-lsm-tree/</a> (2018-02-09)</p>
<p><img src="lsm.png" alt="lsm"></p>
<h1>Nginx as media server</h1>
<p><a href="http://kflu.github.io/2018/02/04/2018-02-04-nginx-as-media-server/" target="_blank" rel="noopener">http://kflu.github.io/2018/02/04/2018-02-04-nginx-as-media-server/</a> (2018-02-04)</p>
<p>This is based on this simple <a href="https://aaronhorler.com/articles/nginx-browser-media-server.html" target="_blank" rel="noopener">guide</a>. It is a much easier media solution
than <a href="2018-02-04-freebsd-minidlna.md">DLNA</a>. The jailed nginx configuration:</p>
<pre><code>server {
listen 80;
location / {
root /media;
autoindex on;
}
}
</code></pre>
<p>Add user <code>www</code> to the group that can access the media folder:</p>
<pre><code>pw groupmod media -m www
</code></pre>
<p>Preferably, the <code>media</code> group should have only read-only access to <code>media/</code>.</p>
<p>Now set up <code>pf</code> in host:</p>
<pre><code>rdr proto tcp from any to any port <external_access_port> -> <jail_ip> port <jail_port>
</code></pre>
<h1>Setting up miniDLNA on FreeBSD</h1>
<p><a href="http://kflu.github.io/2018/02/04/2018-02-04-freebsd-minidlna/" target="_blank" rel="noopener">http://kflu.github.io/2018/02/04/2018-02-04-freebsd-minidlna/</a> (2018-02-04)</p>
<p>I'm still having difficulty running it in jail, likely to be related to
how my jail networking is set up (<a href="https://forums.freebsd.org/threads/13530/" target="_blank" rel="noopener">1</a>).</p>
<p>But I can successfully set it up in the host. Steps:</p>
<ol>
<li><code>pkg install minidlna</code></li>
<li>Configure <code>minidlna.conf</code>, <code>network_interface=em0</code> (NOT <code>eth0</code>)</li>
<li><code>echo 'minidlna_enable="YES"' >> /etc/rc.conf</code></li>
<li><code>service minidlna start</code></li>
</ol>
<p>Some facts:</p>
<ul>
<li>uPNP protocol uses UDP 1900</li>
<li>miniDLNA uses TCP 8200 for status web page</li>
<li>miniDLNA on FreeBSD:
<ul>
<li>service is <code>minidlna</code></li>
<li>process is <code>minidlnad</code></li>
<li>configuration is <code>/usr/local/etc/minidlna.conf</code></li>
<li>log file (default) is <code>/var/log/minidlna.log</code></li>
<li>db directory (default) is <code>/var/db/minidlna/</code></li>
</ul>
</li>
<li><code>minidlnad -R</code> to rescan</li>
<li>VLC can be used as DLNA client</li>
</ul>
<p>In a jail, I tried config <code>pf</code> with:</p>
<pre><code>rdr proto tcp from any to any port 8200 -> <jail_IP>
rdr proto udp from any to any port 1900 -> <jail_IP>
</code></pre>
<p>I can access http://<jail_IP>:8200 from the LAN. I can also talk to
udp://<jail_IP>:1900 from the LAN using <code>ncat</code>. But VLC does not recognize the
DLNA server.</p>
<h1>Mounting NTFS on FreeBSD</h1>
<p><a href="http://kflu.github.io/2018/02/03/2018-02-03-freebsd-ntfs/" target="_blank" rel="noopener">http://kflu.github.io/2018/02/03/2018-02-03-freebsd-ntfs/</a> (2018-02-03)</p>
<p><a href="https://forums.freebsd.org/threads/62888/" target="_blank" rel="noopener">This post</a> helped me figure all this out.</p>
<p>I need to access a USB hard drive in NTFS on FreeBSD. In order to mount NTFS partitions, FreeBSD
uses <code>ntfs-3g</code> FUSE module.</p>
<p>First, make sure the fuse kernel module is loaded. This can be done adhoc with
<code>kldload fuse</code>. But to have it loaded at boot time, add the following line in
<code>/boot/loader.conf</code>:</p>
<pre><code>fuse_load="YES"
</code></pre>
<p>Then, install <code>fusefs-ntfs</code> package:</p>
<pre><code>pkg install fusefs-ntfs
</code></pre>
<p>Now that the OS supports NTFS, you can plug in the device. Use <code>dmesg</code> to figure out the device ID (<code>da0</code>):</p>
<pre><code>da0 at umass-sim0 bus 0 scbus4 target 0 lun 0
da0: <WD Ext HDD 1021 2021> Fixed Direct Access SPC-2 SCSI device
da0: Serial Number 574D415A4135333836313839
da0: 40.000MB/s transfers
da0: 1907727MB (3907024896 512 byte sectors)
da0: quirks=0x2<NO_6_BYTE>
</code></pre>
<p>Use <code>gpart</code> to show its partitions:</p>
<pre><code>> ~ gpart show /dev/da0
=> 63 3907024833 da0 MBR (1.8T)
63 1985 - free - (993K)
2048 3907022848 1 ntfs (1.8T)
</code></pre>
<p>You can also find the device node for the partition under <code>/dev</code>:</p>
<pre><code>➜ ~ ls -l /dev/da0*
crw-r----- 1 root operator 0x72 Feb 3 12:07 /dev/da0
crw-r----- 1 root operator 0x73 Feb 3 12:07 /dev/da0s1
</code></pre>
<p>Now we are ready to mount it:</p>
<pre><code>ntfs-3g /dev/da0s1 /mnt -o ro
</code></pre>
<p><code>-o ro</code> makes sure it's mounted read-only. You can remove it to mount it read-write.</p>
<p>Note that I tried to use <code>mount</code> hoping there is a consolidated command for mounting
different kinds of file systems. But it wasn't successful:</p>
<pre><code>➜ ~ mount -t ntfs-3g /dev/da0s1 /mnt
mount: /dev/da0s1: Operation not supported by device
➜ ~ mount -t ntfs /dev/da0s1 /mnt
mount: /dev/da0s1: Operation not supported by device
</code></pre>
<p>Also note that usually mounting a partition can only be done by <code>root</code> or
via <code>sudo</code>, which results in the mounted path being owned by <code>root:wheel</code>.
However, you can mount the partition as a specific user and group using the
<code>uid</code> and <code>gid</code> options.</p>
<p>First, find out the user and group IDs of the preferred user:</p>
<pre><code># id john
uid=1001(john) gid=1001(john) groups=1001(john),0(wheel)
</code></pre>
<p>Now run the following command to mount:</p>
<pre><code>ntfs-3g /dev/da0s1 /mnt -o ro,uid=1001,gid=1001
</code></pre>
<p>Now <code>/mnt</code> is owned by <code>john:john</code>.</p>
<h2><a href="http://kflu.github.io/2018/01/17/2018-01-17-net-stacksize-recursion/" target="_blank" rel="noopener">.NET stack size & recursion</a></h2>
<p>According to <a href="https://stackoverflow.com/a/28658130/695964" target="_blank" rel="noopener">here</a>, a Windows x64 program should have a default stack size of
4MB. So I wrote a program to see how many levels of recursion a typical
function can support before getting a stack overflow.</p>
<p>Here's the program:</p>
<pre><code>using System;

class Program
{
    static void Main(string[] args)
    {
        Rec(0);
    }

    static void Rec(int x)
    {
        int y0 = x;
        int y1 = y0 + x;
        int y2 = y1 + x;
        int y3 = y2 + x;
        int y4 = y3 + x;
        int y5 = y4 + x;
        int y6 = y5 + x;
        int y7 = y6 + x;
        int y8 = y7 + x;
        int y9 = y8 + x;
        int y10 = y9 + x;
        int y11 = y10 + x;
        int y12 = y11 + x;
        int y13 = y12 + x;
        int y14 = y13 + x;
        int y15 = y14 + x;
        int y16 = y15 + x;
        int y17 = y16 + x;
        int y18 = y17 + x;
        int y19 = y18 + x;
        int y20 = y19 + x;
        Console.WriteLine($"depth: {x}, local size: {22} >>>>>>>>>>>>>>> {y20}");
        Rec(x + 1);
        Console.WriteLine($"depth: {x}, local size: {22} <<<<<<<<<<<<<<< {y20}");
    }
}
</code></pre>
<p>I configured it to build in <code>release</code> and <code>x64</code>. The output is:</p>
<pre><code>depth: 17170, local size: 2 >>>>>>>>>>>>>>> 17170
depth: 16125, local size: 10 >>>>>>>>>>>>>>> 145125
depth: 16101, local size: 22 >>>>>>>>>>>>>>> 338121
</code></pre>
<p>With roughly 16,000 frames fitting on the stack, each frame costs on the
order of 65 bytes. I took a <code>dumpbin</code>, which gives the output below.
<code>size of stack reserve</code> is 100000; that's hexadecimal, so 0x100000 = 1MB, not 100KB. But basically,
<strong>a function with ~20 local variables + arguments can recurse about 16,000 times.</strong>
Not a lot.</p>
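<p>A quick shell check of those numbers (the stack reserve value from <code>dumpbin</code> is hex):</p>
<pre><code># stack reserve in decimal bytes:
echo $(( 0x100000 ))            # 1048576, i.e. 1MB
# rough bytes per frame at 16101 frames deep:
echo $(( 0x100000 / 16101 ))    # 65
</code></pre>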
<pre><code>PE signature found
File Type: EXECUTABLE IMAGE
FILE HEADER VALUES
14C machine (x86)
3 number of sections
5A5F9A97 time date stamp Wed Jan 17 10:48:55 2018
0 file pointer to symbol table
0 number of symbols
E0 size of optional header
22 characteristics
Executable
Application can handle large (>2GB) addresses
OPTIONAL HEADER VALUES
10B magic # (PE32)
48.00 linker version
A00 size of code
800 size of initialized data
0 size of uninitialized data
293E entry point (0040293E)
2000 base of code
4000 base of data
400000 image base (00400000 to 00407FFF)
2000 section alignment
200 file alignment
4.00 operating system version
0.00 image version
6.00 subsystem version
0 Win32 version
8000 size of image
200 size of headers
0 checksum
3 subsystem (Windows CUI)
8560 DLL characteristics
High Entropy Virtual Addresses
Dynamic base
NX compatible
No structured exception handler
Terminal Server Aware
100000 size of stack reserve
1000 size of stack commit
100000 size of heap reserve
1000 size of heap commit
0 loader flags
10 number of directories
0 [ 0] RVA [size] of Export Directory
28EC [ 4F] RVA [size] of Import Directory
4000 [ 5BC] RVA [size] of Resource Directory
0 [ 0] RVA [size] of Exception Directory
0 [ 0] RVA [size] of Certificates Directory
6000 [ C] RVA [size] of Base Relocation Directory
27B4 [ 1C] RVA [size] of Debug Directory
0 [ 0] RVA [size] of Architecture Directory
0 [ 0] RVA [size] of Global Pointer Directory
0 [ 0] RVA [size] of Thread Storage Directory
0 [ 0] RVA [size] of Load Configuration Directory
0 [ 0] RVA [size] of Bound Import Directory
2000 [ 8] RVA [size] of Import Address Table Directory
0 [ 0] RVA [size] of Delay Import Directory
2008 [ 48] RVA [size] of COM Descriptor Directory
0 [ 0] RVA [size] of Reserved Directory
SECTION HEADER #1
.text name
944 virtual size
2000 virtual address (00402000 to 00402943)
A00 size of raw data
200 file pointer to raw data (00000200 to 00000BFF)
0 file pointer to relocation table
0 file pointer to line numbers
0 number of relocations
0 number of line numbers
60000020 flags
Code
Execute Read
RAW DATA #1
Debug Directories
Time Type Size RVA Pointer
-------- ------- -------- -------- --------
5A5F9A97 cv 11C 000027D0 9D0 Format: RSDS, {8C1B2824-CBBA-483B-B4E1-18BBA4A2FEAC}, 1, c:\users\ConsoleApp2\ConsoleApp2\obj\Release\ConsoleApp2.pdb
clr Header:
48 cb
2.05 runtime version
20E4 [ 6D0] RVA [size] of MetaData Directory
20003 flags
IL Only
32-Bit Required
32-Bit Preferred
6000001 entry point token
0 [ 0] RVA [size] of Resources Directory
0 [ 0] RVA [size] of StrongNameSignature Directory
0 [ 0] RVA [size] of CodeManagerTable Directory
0 [ 0] RVA [size] of VTableFixups Directory
0 [ 0] RVA [size] of ExportAddressTableJumps Directory
0 [ 0] RVA [size] of ManagedNativeHeader Directory
Section contains the following imports:
mscoree.dll
402000 Import Address Table
402914 Import Name Table
0 time date stamp
0 Index of first forwarder reference
0 _CorExeMain
SECTION HEADER #2
.rsrc name
5BC virtual size
4000 virtual address (00404000 to 004045BB)
600 size of raw data
C00 file pointer to raw data (00000C00 to 000011FF)
0 file pointer to relocation table
0 file pointer to line numbers
0 number of relocations
0 number of line numbers
40000040 flags
Initialized Data
Read Only
RAW DATA #2
SECTION HEADER #3
.reloc name
C virtual size
6000 virtual address (00406000 to 0040600B)
200 size of raw data
1200 file pointer to raw data (00001200 to 000013FF)
0 file pointer to relocation table
0 file pointer to line numbers
0 number of relocations
0 number of line numbers
42000040 flags
Initialized Data
Discardable
Read Only
RAW DATA #3
BASE RELOCATIONS #3
2000 RVA, C SizeOfBlock
940 HIGHLOW 00402000
0 ABS
Summary
2000 .reloc
2000 .rsrc
2000
</code></pre>
<h2><a href="http://kflu.github.io/2017/12/25/2017-12-25-video-editing-tips/" target="_blank" rel="noopener">Video Editing Tips</a></h2>
<ul>
<li><a href="https://www.shotcut.org/" target="_blank" rel="noopener">Shortcut</a> for video editing. Awesome tool. Responsive and has most basic features I need. It crashes occationally, so rebemeber to save project whenever you can.</li>
<li>Gimp for image processing (it can read HEIC images from iOS)</li>
<li>OpenShot is buggy (2.4) and unresponsive. Nearly unusable.</li>
<li>Blender has a video editor, but it's not its primary focus.</li>
</ul>
<h2><a href="http://kflu.github.io/2017/11/28/2017-11-28-postgresql-freebsd/" target="_blank" rel="noopener">Setup PostgreSQL on FreeBSD Jail</a></h2>
<h2>Installation</h2>
<p>Install postgres:</p>
<pre><code>pkg install postgresql10-server-10.1
</code></pre>
<p>Enable service:</p>
<pre><code>sysrc postgresql_enable="YES"
</code></pre>
<p>If PostgreSQL is installed within a jail, enable SysV IPC for that jail
(as described <a href="http://www.clausconrad.com/blog/running-postgresql-9-3-in-an-ezjail" target="_blank" rel="noopener">here</a>):</p>
<pre><code>echo 'security.jail.sysvipc_allowed=1' >> /etc/sysctl.conf
echo 'jail_sysvipc_allow="YES"' >> /etc/rc.conf
# in /usr/local/etc/ezjail/JAILNAME, update:
# export jail_JAILNAME_parameters=”allow.sysvipc=1″
# restart jail
ezjail-admin restart JAILNAME
</code></pre>
<p>Initialize the DB (this creates the initial database cluster). This would <em>fail</em> if PostgreSQL
is installed in a jail and SysV IPC is not allowed for the jail.</p>
<pre><code>service postgresql initdb
</code></pre>
<p>Now start the service:</p>
<pre><code>service postgresql start
</code></pre>
<p>There was a failure regarding SysV IPC during <code>initdb</code>; it looked like this:</p>
<pre><code>~ service postgresql initdb
The files belonging to this database system will be owned by user "postgres".
This user must also own the server process.
The database cluster will be initialized with locale "C".
The default text search configuration will be set to "english".
Data page checksums are disabled.
creating directory /var/db/postgres/data10 ... ok
creating subdirectories ... ok
selecting default max_connections ... 10
selecting default shared_buffers ... 400kB
selecting dynamic shared memory implementation ... posix
creating configuration files ... ok
running bootstrap script ... 2017-11-28 18:23:03.857 UTC [72052] FATAL: could not create shared memory segment: Function not implemented
2017-11-28 18:23:03.857 UTC [72052] DETAIL: Failed system call was shmget(key=1, size=48, 03600).
child process exited with exit code 1
initdb: removing data directory "/var/db/postgres/data10"
</code></pre>
<h2>Create users and databases</h2>
<p>You would need to create a PostgreSQL <em>user</em> or <em>role</em> that matches the name
of the primary system account that will use the database. For example, I
would use it in a jail, with the root account. So I need to create a PSQL
role matching that account:</p>
<pre><code>> su postgres # switch to account `postgres`
> createuser root
> createdb test_db
> exit # quit being `postgres`
# now as `root`:
> psql test_db
psql (10.1)
Type "help" for
Use SmartCard over Remote Desktop Sessionhttp://kflu.github.io/2017/10/20/2017-10-20-smartcard-remotedesktop/2017-10-20T07:00:00.000Z2023-10-14T22:36:20.133Z
<p>I see this issue when trying to use smart card in remote desktop session:</p>
<blockquote>
<p>The smart card requires drivers that are not present on this system</p>
</blockquote>
<p>The fix is documented <a href="https://stackoverflow.com/a/22975986/695964" target="_blank" rel="noopener">here</a> - you need to install smart card (not reader) driver on the remote machine.</p>
<p>My smart card is a "Gemalto IDPrime .Net". So I grab the latest from <a href="http://www.catalog.update.microsoft.com/Search.aspx?q=.NET%20Gemalto" target="_blank" rel="noopener">here</a>. I grabbed both:</p>
<ul>
<li>Gemalto - Other hardware, Smart Cards - Gemalto IDPrime .NET Smart Card (win7, server 2008)</li>
<li>Gemalto - Input - Gemalto IDPrime .NET Smart Card (win server 2008)</li>
</ul>
<p>These are CAB files. Extract them, then right-click the <code>.inf</code> file and select "Install".</p>
<p>It worked for me after installing these drivers.</p>
<h2><a href="http://kflu.github.io/2017/10/19/2017-10-19-fixing-domain-trust-relationship/" target="_blank" rel="noopener">Fixing Domain Trust Relationship Failed Issue</a></h2>
<p>A Windows issue made me perform a System Restore to a 1.5-month-old
restore point. But that led to the issue below when I attempted to
log in with my latest domain credentials:</p>
<blockquote>
<p>the trust relationship between this workstation and the primary domain failed</p>
</blockquote>
<p>To fix this issue, you must first be able to log in:</p>
<ul>
<li>
<p>If you have a local administrator account, type in <code>.\administrator</code>
and the password.</p>
</li>
<li>
<p>If you don't have a local admin account enabled, or don't remember the
password, you can still log on by using the old domain credentials from
when the restore point was created. To use these credentials, first unplug
any network connection (Ethernet & Wifi) and restart.</p>
</li>
</ul>
<p>Once you're in, open an elevated PowerShell session and do the following:</p>
<pre><code>Reset-ComputerMachinePassword -Server <domain server> -Credential (get-credential)
</code></pre>
<p><code>get-credential</code> prompts for a credential; use your <strong>latest</strong> domain
credentials. You should be good now.</p>
<p>Lastly, always activate your local admin account and remember the
password!!!</p>
<h2>References</h2>
<ul>
<li><a href="https://community.spiceworks.com/how_to/108912-fix-the-trust-relationship-between-this-workstation-and-the-primary-domain-failed" target="_blank" rel="noopener">1</a></li>
<li><a href="http://implbits.com/active-directory/2012/04/13/dont-rejoin-to-fix.html" target="_blank"
Setting up latex on Windowshttp://kflu.github.io/2017/08/03/2017-08-03-latex-windows/2017-08-03T07:00:00.000Z2023-10-14T22:36:20.133Z
<p>There are two major LaTeX distributions: TeX Live and MiKTeX. The former is
cross-platform and "official" in that it comes from TUG. However, its Windows
support really sucks; it's hard to even get the basic installation working for a
power user like me. Instead, I found MiKTeX really user-friendly to work
with, maybe because it's built for Windows.</p>
<p>Download the <a href="https://miktex.org/portable" target="_blank" rel="noopener">portable version</a>. Unzip to <code>C:\miktex</code>. Follow the
instruction there to use it.</p>
<p>Alternatively, you can add the following to <code>PATH</code> and work in that command
line:</p>
<pre><code>PS> $env:path += ';C:\miktex\texmfs\install\miktex\bin\'
</code></pre>
<p>MiKTeX comes with an IDE called TeXworks, so that's nice.</p>
<h1>Working with Latex</h1>
<p><a href="http://www.maths.tcd.ie/~dwilkins/LaTeXPrimer/GSWLaTeX.pdf" target="_blank" rel="noopener">Getting started with LaTex</a>.</p>
<p>A minimal working LaTex document:</p>
<pre><code>\documentclass{article}
\title{Hello World!}
\begin{document}
\maketitle
\section{Section 1}
Hello
\end{document}
</code></pre>
<h2><a href="http://kflu.github.io/2017/08/03/2017-08-03-fsharp-freebsd/" target="_blank" rel="noopener">F# works on FreeBSD (jailed)</a></h2>
<p>Mono-based F# works on FreeBSD in a jail. There isn't
a port for .NET Core yet, so I haven't tried that.</p>
<pre><code>pkg install mono-4.8.1.0_1
pkg install fsharp-4.1.18
</code></pre>
<p>Here's a summary:</p>
<ol>
<li>run with <code>mono <app.exe></code></li>
<li>build with <code>xbuild</code></li>
<li><code>nuget</code> and <code>paket</code> work after <code>mozroot --import</code></li>
<li>there isn't an official port for .NET core yet</li>
<li>there isn't an official port for vscode yet</li>
</ol>
<h1>Building projects and running executables</h1>
<p>Use <code>xbuild</code> to build. Use <code>mono <app.exe></code> to run.</p>
<h1>Package Management</h1>
<p><code>nuget</code> and <code>paket</code> <em>binaries</em> work directly.</p>
<h2>Certificate Issue</h2>
<p>However, you need to import root certificates to avoid encryption/decryption
errors while downloading from SSL-enabled sites. See <a href="https://github.com/fsharp/fsharp/issues/616#issuecomment-320170277" target="_blank" rel="noopener">this issue</a>.</p>
<pre><code>mozroot --import # and type "yes" MANY MANY times
</code></pre>
<h2>Nuget</h2>
<p>For <code>nuget</code>:</p>
<pre><code>wget https://dist.nuget.org/win-x86-commandline/latest/nuget.exe
mono ./nuget install NewtonSoft.Json
</code></pre>
<h2>Paket</h2>
<p>For <code>paket</code>, follow <a href="https://fsprojects.github.io/Paket/getting-started.html" target="_blank" rel="noopener">paket guide</a>:</p>
<pre><code>cd <repo>/.paket
wget https://github.com/fsprojects/Paket/releases/download/3.31.0/paket.bootstrapper.exe
mv paket.bootstrapper.exe paket.exe # enable "magic" mode
cd ..
mono ./.paket/paket.exe install # install all dependencies
</code></pre>
<h1>Package Information</h1>
<pre><code>➜ ~ pkg info mono
mono-4.8.1.0_1
Name : mono
Version : 4.8.1.0_1
Installed on : Fri Aug 4 05:53:48 2017 UTC
Origin : lang/mono
Architecture : FreeBSD:11:amd64
Prefix : /usr/local
Categories : lang
Licenses : MIT
Maintainer : mono@FreeBSD.org
WWW : http://www.mono-project.com/
Comment : Open source implementation of .NET Development Framework
Options :
ACCEPTANCE_TESTS: off
Shared Libs required:
libinotify.so.0
Shared Libs provided:
libmonosgen-2.0.so.1
libmonoboehm-2.0.so.1
libikvm-native.so
libmono-profiler-iomap.so.0
libmono-profiler-aot.so.0
libmono-profiler-log.so.0
libMonoSupportW.so
libMonoPosixHelper.so
Annotations :
cpe : cpe:2.3:a:mono:mono:4.8.1.0:::::freebsd11:x64:1
repo_type : binary
repository : FreeBSD
Flat size : 183MiB
Description :
Mono is an open source implementation of .NET Development Framework. Its
objective is to enable UNIX developers to build and deploy cross-platform
.NET Applications. The project implements various technologies developed by
Microsoft that have now been submitted to the ECMA for standardization.
Mono provides the necessary software to develop and run .NET client and
server applications on BSD, Linux, Solaris, Mac OS X, Windows, and Unix.
WWW: http://www.mono-project.com/
➜ ~ pkg info fsharp
fsharp-4.1.18
Name : fsharp
Version : 4.1.18
Installed on : Fri Aug 4 05:56:19 2017 UTC
Origin : lang/fsharp
Architecture : FreeBSD:11:*
Prefix : /usr/local
Categories : lang
Licenses : APACHE20
Maintainer : mono@FreeBSD.org
WWW : http://fsharp.org/
Comment : Functional and object-oriented language for the .NET platform
Annotations :
repo_type : binary
repository : FreeBSD
Flat size : 92.3MiB
Description :
F# is an open-source, strongly typed, multi-paradigm programming
language encompassing functional, imperative and object-oriented
programming techniques. F# is most often used as a cross-platform CLI
language, but can also be used to generate JavaScript and GPU code.
F# is developed by The F# Software Foundation and Microsoft. An open
source, cross-platform edition of F# is available from the F# Software
Foundation. F# is also a fully supported language in Visual Studio.
Other tools supporting F# development include Mono, MonoDevelop,
SharpDevelop and the WebSharper tools for JavaScript and HTML5 web
programming.
F# originated as a variant of ML and has been influenced by OCaml, C#,
Python, Haskell, Scala and Erlang.
WWW: http://fsharp.org/
</code></pre>
<h2><a href="http://kflu.github.io/2017/07/30/2017-07-30-windbg/" target="_blank" rel="noopener">WinDBG Quick Reference</a></h2>
<h1>References</h1>
<ul>
<li><a href="https://docs.microsoft.com/en-us/dotnet/framework/tools/sos-dll-sos-debugging-extension" target="_blank" rel="noopener">SOS (.NET) debugging document</a></li>
<li><a href="https://docs.microsoft.com/en-us/windows-hardware/drivers/debugger/controlling-exceptions-and-events" target="_blank" rel="noopener">Controlling Exceptions and Events</a></li>
</ul>
<h1>Tasks</h1>
<p>Debuggee control:</p>
<ul>
<li>go: <code>g</code></li>
<li>detach: <code>.detach</code></li>
<li>break: press <code><ctrl-break/pause></code></li>
</ul>
<p>Load sos extension</p>
<pre><code>.loadby sos clr
</code></pre>
<p>For details about loading SOS, refer <a href="https://docs.microsoft.com/en-us/windows-hardware/drivers/debugger/debugging-managed-code" target="_blank" rel="noopener">here</a> and the following section on
failure to load SOS.</p>
<p>List CLR stack</p>
<pre><code>!CLRStack
</code></pre>
<p>Inspect objects (<a href="https://docs.microsoft.com/en-us/dotnet/framework/tools/sos-dll-sos-debugging-extension" target="_blank" rel="noopener">sos</a>)</p>
<ul>
<li><code>!DumpHeap</code></li>
<li><code>!DumpObj</code></li>
<li><code>!DumpArray</code></li>
<li><code>!DumpClass</code></li>
</ul>
<p>Break on (first chance) exceptions. See <a href="https://docs.microsoft.com/en-us/windows-hardware/drivers/debugger/controlling-exceptions-and-events" target="_blank" rel="noopener">sxe doc</a>, "Event Definitions and
Defaults".</p>
<pre><code>sxe <eh|clr|...>
</code></pre>
<p>Inspect an exception (<a href="https://docs.microsoft.com/en-us/dotnet/framework/tools/sos-dll-sos-debugging-extension" target="_blank" rel="noopener">sos</a>):</p>
<pre><code>!PrintException <exception_address>
</code></pre>
<p><a href="https://docs.microsoft.com/en-us/windows-hardware/drivers/debugger/-dump--create-dump-file-" target="_blank" rel="noopener">Collect a mini dump</a></p>
<pre><code>.dump /mA
</code></pre>
<p>Symbol debugging:</p>
<pre><code>!sym noisy
.reload
</code></pre>
<h1>Failure Loading SOS</h1>
<p>Sometimes loading SOS can fail when debugging dumps:</p>
<pre><code>0:009> .cordll -ve -u -l
CLRDLL: C:\Windows\Microsoft.NET\Framework64\v4.0.30319\mscordacwks.dll:4.7.2115.00 f:8
doesn't match desired version 4.7.2117.00 f:8
CLRDLL: Unable to find mscordacwks_AMD64_AMD64_4.7.2117.00.dll by mscorwks search
CLRDLL: Unable to find 'mscordacwks_AMD64_AMD64_4.7.2117.00.dll' on the path
CLRDLL: Unable to find clr.dll by search
Cannot Automatically load SOS
CLRDLL: ERROR: Unable to load DLL mscordacwks_AMD64_AMD64_4.7.2117.00.dll, Win32 error 0n2
CLRDLL: Consider using ".cordll -lp <path>" command to specify .NET runtime directory.
CLR DLL status: ERROR: Unable to load DLL mscordacwks_AMD64_AMD64_4.7.2117.00.dll, Win32 error 0n2
</code></pre>
<p>That's because the dump was collected from a machine that has a different CLR
version. The best way to solve this is to copy the following DLLs from the
target machine and use them to load SOS:</p>
<ul>
<li><code>C:\Windows\Microsoft.NET\Framework\v4.0.30319\clr.dll</code></li>
<li><code>C:\Windows\Microsoft.NET\Framework\v4.0.30319\mscordacwks.dll</code></li>
<li><code>C:\Windows\Microsoft.NET\Framework\v4.0.30319\sos.dll</code></li>
</ul>
<p>Put them under <code>d:\share</code>, and in WinDbg load with:</p>
<pre><code>0:009> .cordll -lp d:\share
CLRDLL: Loaded DLL d:\share\mscordacwks.dll
Automatically loaded SOS Extension
CLR DLL status: Loaded DLL d:\share\mscordacwks.dll
</code></pre>
<p>Here are some useful docs:</p>
<ul>
<li><a href="https://docs.microsoft.com/en-us/windows-hardware/drivers/debugger/debugging-managed-code" target="_blank" rel="noopener">loading SOS</a>: diagnostic loading SOS</li>
<li><a href="https://stackoverflow.com/a/10194213/695964" target="_blank" rel="noopener">loading SOS for different CLR version (SO)</a></li>
</ul>
<h2><a href="http://kflu.github.io/2017/07/30/2017-07-30-tunnel-only-ssh-user/" target="_blank" rel="noopener">SSH Reverse Port Forwarding With Untrusted Remote Host</a></h2>
<p>tl;dr: safety is provided by setting up a non-privileged, tunnel-only user</p>
<h1>References</h1>
<ul>
<li>Discussions
<ul>
<li><a href="https://forums.freebsd.org/threads/61682/#post-355591" target="_blank" rel="noopener">My post on FreeBSD forum: Best way to allow ssh connection just for
reverse port forwarding</a></li>
<li><a href="https://serverfault.com/a/56581/309638" target="_blank" rel="noopener">ssh tunneling only access (SO answer)</a></li>
<li><a href="https://serverfault.com/a/119381/309638" target="_blank" rel="noopener">sshd restriction per user basis (SO answer)</a></li>
<li><a href="https://therub.org/2011/08/24/minimal-ssh-chroot-in-freebsd/" target="_blank" rel="noopener">Minimal SSH Chroot in FreeBSD</a></li>
</ul>
</li>
<li>man pages
<ul>
<li><a href="https://www.freebsd.org/cgi/man.cgi?sshd_config(5)" target="_blank" rel="noopener">sshd_config</a></li>
<li><a href="https://www.freebsd.org/cgi/man.cgi?query=sshd&sektion=8&apropos=0&manpath=FreeBSD+11.1-RELEASE+and+Ports#AUTHORIZED_KEYS%09FILE_FORMAT" target="_blank" rel="noopener">authorized_keys</a></li>
</ul>
</li>
<li><a href="https://unix.stackexchange.com/q/46235/38968" target="_blank" rel="noopener">SSH reverse port forwarding explained</a></li>
<li><a href="http://www.harding.motd.ca/autossh/" target="_blank" rel="noopener">autossh - Automatically restart SSH sessions and tunnels</a></li>
<li><a href="https://therub.org/2011/08/24/minimal-ssh-chroot-in-freebsd/" target="_blank" rel="noopener">the use of <code>nologin</code></a></li>
</ul>
<h1>Problem</h1>
<p>Here's my scenario:</p>
<p><img src="diagram.png" alt="diagram"></p>
<ul>
<li>I have a home server (HostB) which is completely within my control.</li>
<li>I have an off-site machine that can potentially be physically accessed by
other people I don't trust (HostA).</li>
</ul>
<p>I want to do off-site backups (encrypted of course) via <code>duplicity</code> from
HostB to HostA. Because HostA is behind firewall, it can't provide direct
ssh access. So I'll have to do a <a href="https://unix.stackexchange.com/q/46235/38968" target="_blank" rel="noopener">reverse port
forwarding</a> to expose HostA:22. In order to
reliably do the reverse port forwarding without password, I will add HostA's
public key to HostB's authorized_keys file. Now that can potentially be bad,
because the pub key could be stolen.</p>
<p>However, since the ssh login from HostA -> HostB is <strong>only</strong> to establish
the port forwarding tunnel so HostB can access HostA:22, is there any good
way I can restrict the HostA -> HostB ssh connection to <strong>only</strong> provide the
tunnel and nothing else?</p>
<h1>Setting Up User For Tunnel Only</h1>
<p>After <a href="https://forums.freebsd.org/threads/61682/#post-355591" target="_blank" rel="noopener">discussing online</a>, I'm aware of the following
solution:</p>
<ol>
<li>
<p>Create a non-privileged user without a login shell
(<a href="https://therub.org/2011/08/24/minimal-ssh-chroot-in-freebsd/" target="_blank" rel="noopener">nologin</a>). Set the user home to <code>/var/...</code> and make it
readonly (suggested by obsigna & Jov):</p>
<pre><code>pw useradd -n tunnel -c "SSH Tunnel User" -u 9999 -d /var/tunnel -s /usr/sbin/nologin
mkdir -m 0500 -p /var/tunnel/.ssh
chown -R tunnel:nogroup /var/tunnel
chflags -R schg /var/tunnel
</code></pre>
</li>
<li>
<p>Do a <code>ChrootDirectory</code> using <code>MATCH USER</code> in
<a href="https://www.freebsd.org/cgi/man.cgi?sshd_config(5)" target="_blank" rel="noopener"><code>sshd_config</code></a> for extra safety. See <a href="https://therub.org/2011/08/24/minimal-ssh-chroot-in-freebsd/" target="_blank" rel="noopener">this
post</a>.</p>
</li>
<li>
<p>Use <a href="https://www.freebsd.org/cgi/man.cgi?query=sshd&sektion=8&apropos=0&manpath=FreeBSD+11.1-RELEASE+and+Ports#AUTHORIZED_KEYS%09FILE_FORMAT" target="_blank" rel="noopener">authorized_keys</a> to further restrict the public key:</p>
<ul>
<li><code>command="command"</code></li>
<li><code>no-X11-forwarding</code></li>
<li><code>permitopen="host:port"</code></li>
</ul>
</li>
</ol>
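<p>Putting the <code>authorized_keys</code> restrictions together, the entry for HostA's public key on HostB could look like the sketch below. The key material, comment, and forwarded port are placeholders, not values from my setup:</p>
<pre><code># one line in ~tunnel/.ssh/authorized_keys on HostB
command="/usr/sbin/nologin",no-X11-forwarding,no-agent-forwarding,no-pty,permitopen="localhost:2222" ssh-rsa AAAA... hosta-tunnel-key
</code></pre>
<p>With options like these, the key can only set up forwarding; attempts to run a command or allocate a terminal are refused.</p>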
<h1>Tricks & Tools for Port Forwarding</h1>
<ul>
<li><a href="http://www.harding.motd.ca/autossh/" target="_blank" rel="noopener">autossh</a> can be used to create reliable tunnel</li>
</ul>
<h2><a href="http://kflu.github.io/2017/07/18/2017-07-18-msys2/" target="_blank" rel="noopener">Msys2 - Cleanest Unix Subsystem on Windows</a></h2>
<p>msys2 is the easiest and cleanest unix sub-system on Windows. It's based on
Cygwin. Its tools are built on Mingw64, and it provides a build toolchain for
Mingw64. It also provides a nice package manager, <code>pacman</code>, from Arch Linux.</p>
<ul>
<li><a href="https://github.com/msys2/msys2/wiki/MSYS2-installation" target="_blank" rel="noopener">Installation</a></li>
<li><a href="https://github.com/msys2/msys2/wiki/Using-packages" target="_blank" rel="noopener">Using packages</a>
<ul>
<li>search package <code>pacman -Ss <partial_name></code></li>
<li>install package <code>pacman -S <pkg_name></code></li>
</ul>
</li>
</ul>
<p>In my experience so far, it makes an ideal Windows utility toolbox for
administration tasks. It's worth being a default installation on every
Windows machine.</p>
<h3>Setting Home Directory to <code>USERPROFILE</code></h3>
<p><em>2018-8-30 note: it seems using msys2.exe with the original nsswitch.conf
is enough, which has <code>db_home windows cygwin desc</code>. I guess the problem was
it didn't work with domain joined computer. If you're not domain joined, this
section isn't needed.</em></p>
<p>Do this through <code>etc/nsswitch.conf</code>. Set:</p>
<pre><code>db_home: windows /c/Users/%U
</code></pre>
<p>The <code>/c/Users/%U</code> is to <a href="https://github.com/Alexpux/MSYS2-packages/issues/1167#issuecomment-366485916" target="_blank" rel="noopener">workaround an issue</a> that <code>db_home: windows</code> has no effect on my domain joined machine. For detail, refer to
<a href="https://cygwin.com/cygwin-ug-net/ntsec.html" target="_blank" rel="noopener">Cygwin nsswitch.conf doc</a>.</p>
<h3>Making portable applications</h3>
<p>Note that <code>nsswitch.conf</code> is read relative to the executable path. So to
make a portable executable aware of <code>nsswitch.conf</code> settings, use a
structure like the one below. If you invoke <code>ssh.exe</code> from <em>anywhere</em>, the
<code>nsswitch.conf</code> is respected.</p>
<pre><code>ssh.bat # wraps ssh.exe
msys/
usr/
bin/
ssh.exe
<dependency DLLs>
etc/
nsswitch.conf
</code></pre>
<h3>Shell Initialization</h3>
<p><em>2018-8-30 note: it seems using msys2.exe is enough - but don't use the <code>.bat</code> shortcuts
from the start menu - for some reason they don't respect user <code>rc</code> scripts</em></p>
<p>This depends on the "launcher" you use. Nowadays there is <code>msys2.exe</code>, which reads <code>msys2.ini</code>
for env vars. But that's not user-specific. I built a <code>msys2.ps1</code> that sets
env vars and launches <code>msys2.exe</code>. Use this and leave the <code>ini</code> unchanged.</p>
<p>For example, you can set the default shell with <code>SHELL=/usr/bin/zsh</code>.</p>
<h3>Figuring Executable Dependencies</h3>
<p>To make a portable executable distribution, you need to copy not only
the executable, but all its dependencies. You can do so with <code>ldd</code>:</p>
<pre><code>$ ldd /usr/bin/rsync.exe
ntdll.dll => /c/WINDOWS/SYSTEM32/ntdll.dll (0x778c0000)
KERNEL32.DLL => /c/WINDOWS/System32/KERNEL32.DLL (0x769b0000)
KERNEL32.DLL => /c/WINDOWS/System32/KERNEL32.DLL (0x769b0000)
KERNELBASE.dll => /c/WINDOWS/System32/KERNELBASE.dll (0x77620000)
msys-gcc_s-1.dll => /usr/bin/msys-gcc_s-1.dll (0x6ac00000)
msys-iconv-2.dll => /usr/bin/msys-iconv-2.dll (0x6ee80000)
msys-2.0.dll => /usr/bin/msys-2.0.dll (0x61000000)
msys-z.dll => /usr/bin/msys-z.dll (0x644c0000)
</code></pre>
<p>Note that all the DLLs located inside the msys root <code>/usr/bin</code> are the
dependencies that need to be packed alongside the executable.</p>
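<p>Collecting the in-tree dependencies can be scripted. Below is a sketch that filters <code>ldd</code> output down to the DLLs under the msys root; here it is fed a sample of the output above, while in practice you would pipe <code>ldd /usr/bin/rsync.exe</code> into the same filter:</p>
<pre><code># keep only dependencies resolved under /usr/bin (the msys root)
awk '$3 ~ "^/usr/bin/" { print $3 }' <<'EOF'
ntdll.dll => /c/WINDOWS/SYSTEM32/ntdll.dll (0x778c0000)
msys-gcc_s-1.dll => /usr/bin/msys-gcc_s-1.dll (0x6ac00000)
msys-iconv-2.dll => /usr/bin/msys-iconv-2.dll (0x6ee80000)
msys-2.0.dll => /usr/bin/msys-2.0.dll (0x61000000)
msys-z.dll => /usr/bin/msys-z.dll (0x644c0000)
EOF
# prints the four /usr/bin/msys-*.dll paths, ready to cp next to the exe
</code></pre>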
<h1>References</h1>
<ul>
<li><a href="https://github.com/msys2/msys2/wiki" target="_blank" rel="noopener">Msys2 Wiki</a></li>
<li><a href="https://github.com/msys2/msys2/wiki/MSYS2-installation" target="_blank" rel="noopener">Installation</a></li>
<li><a href="https://github.com/msys2/msys2/wiki/Using-packages" target="_blank" rel="noopener">Using package</a></li>
<li><a href="https://cygwin.com/cygwin-ug-net/ntsec.html" target="_blank" rel="noopener">Cygwin
FreeBSD ZFShttp://kflu.github.io/2017/07/17/2017-07-17-freebsd-zfs/2017-07-17T07:00:00.000Z2023-10-14T22:36:20.132Z
<h1>References</h1>
<ul>
<li><a href="https://www.michaelwlucas.com/os/fmzfs" target="_blank" rel="noopener">FreeBSD Mastery: ZFS</a>
<ul>
<li>Chapter 3: Creating Pools and VDEVs</li>
<li>Chapter 4: Mounting ZFS Filesystems</li>
<li>Chapter 5: Replacing Mirror Providers</li>
<li>Chapter 8: Custom ZFS Installation Partitioning</li>
</ul>
</li>
<li><a href="https://www.michaelwlucas.com/os/fmse" target="_blank" rel="noopener">FreeBSD Mastery: Storage Essential</a></li>
<li>Relevant man pages
<ul>
<li><a href="https://www.freebsd.org/cgi/man.cgi?gpart(8)" target="_blank" rel="noopener">gpart</a>: this contains OS boot up logic</li>
<li><a href="https://www.freebsd.org/cgi/man.cgi?zpool(8)" target="_blank" rel="noopener">zpool</a></li>
<li><a href="https://www.freebsd.org/cgi/man.cgi?query=zfs&sektion=8" target="_blank" rel="noopener">zfs</a></li>
<li><a href="https://www.freebsd.org/cgi/man.cgi?query=geom&sektion=8" target="_blank" rel="noopener">geom</a></li>
</ul>
</li>
<li><a href="https://wiki.freebsd.org/RootOnZFS/GPTZFSBoot/Mirror" target="_blank" rel="noopener">Installing FreeBSD Root on ZFS (Mirror) using GPT (FreeBSD wiki)</a></li>
</ul>
<h1>Working with storage and file system</h1>
<p><em>See BSD repository <code>README.md</code> for updated version.</em></p>
<ul>
<li>lists all disks recognized by the OS: <code>geom disk list</code></li>
<li>working with partitions
<ul>
<li>lists all partitions: <code>gpart <show|list></code></li>
<li>create partition scheme for disk: <code>gpart create -s gpt <device></code></li>
<li>destroy partition for disk: <code>gpart destroy [-F] <device></code></li>
<li>add new partition to device: <code>gpart add -t <fs_type> -a 1m <device></code></li>
<li>write boot code to disk: <code>gpart bootcode -b boot/pmbr -p boot/gptzfsboot -i <part#> <device></code></li>
<li>devices and partitions are at <code>/dev/</code></li>
<li>GPT labels are at <code>/dev/gpt</code></li>
</ul>
</li>
</ul>
<h1>Understanding and Working with ZFS</h1>
<p>I had a very successful weekend learning and adopting ZFS on my home server.
As a result, I'm running a ZFS pool on a single mirror VDEV composed
of two disks.</p>
<p>At a high level, a ZFS system looks like this:</p>
<p><img src="arch.png" alt="zfs architecture"></p>
<p>There can be multiple ZFS pools present on the system. Each pool consists
of multiple VDEVs, and each VDEV consists of multiple disks. On top of
each pool, a tree of "datasets" can be created to organize file
systems. Datasets exist mainly for management purposes.</p>
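As a rough sketch of such a dataset tree (the dataset names below are hypothetical examples, not from an actual install):

```shell
# Create a small dataset hierarchy on a pool named "zroot".
# Each dataset can carry its own properties (compression, quota, mountpoint).
zfs create zroot/usr
zfs create zroot/usr/home
zfs create -o compression=on zroot/var
zfs list -r zroot   # show the resulting tree
```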
<p>A VDEV is a cluster of disks. VDEVs implement software RAID and manage
redundancy - if a portion of the disks inside a VDEV fail, that might be
OK. But if a VDEV fails inside a pool, the entire pool is broken. A
VDEV can take either entire raw disks or disk partitions. I opted
for disk partitions because this way you can make sure all providers in the
VDEV have the same sector size.</p>
<p>There're multiple types of VDEVs:</p>
<ul>
<li>stripe (single disks composed together)</li>
<li>mirror (N:1 mirroring)</li>
<li>RAIDZ-1, RAIDZ-2, RAIDZ-3 (more advanced types of RAID)</li>
</ul>
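As a hedged sketch of each VDEV type (the pool name <code>tank</code> and the provider names are hypothetical placeholders):

```shell
# Stripe: single disks composed together, no redundancy
zpool create tank ada0p3 ada1p3

# Mirror: the same data written to every provider
zpool create tank mirror ada0p3 ada1p3

# RAIDZ-1: single-parity RAID across three or more providers
zpool create tank raidz1 ada0p3 ada1p3 ada2p3
```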
<p>There's a quite general-purpose way to prepare (i.e., partition) a disk
for use in ZFS. The partitioning looks like:</p>
<ol>
<li>a freebsd-boot partition, 512k</li>
<li>a freebsd-swap partition, 2GB</li>
<li>a freebsd-zfs partition, the rest of the entire disk</li>
</ol>
<p>For a disk <code>ada1</code>:</p>
<pre><code># CREATE GPT PARTITION SCHEME:
gpart create -s gpt ada1
gpart add -t freebsd-boot -a 1m -b 40 -s 512k ada1
gpart add -t freebsd-swap -a 1m -s 2G ada1
gpart add -t freebsd-zfs -a 1m ada1
# WRITES BOOT CODE TO MBR AND TO BOOT PARTITION:
gpart bootcode -b /boot/pmbr -p /boot/gptzfsboot -i 1 ada1
</code></pre>
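It's worth verifying the resulting layout before going further (this check isn't from the original steps; the exact output shape varies):

```shell
# Expect three partitions on ada1: freebsd-boot, freebsd-swap,
# and freebsd-zfs, in that order, all 1m-aligned.
gpart show ada1
```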
<p>Note:</p>
<ol>
<li>
<p>Use <code>-a 1m</code> to make sure the sector size aligns regardless of disk
differences. See <a href="https://www.michaelwlucas.com/os/fmzfs" target="_blank" rel="noopener">fmzfs</a>, section "Pools Alignment and Disk Sector
Size".</p>
</li>
<li>
<p><code>-b 40</code> to begin the boot partition at block (sector) 40.</p>
</li>
<li>
<p>I don't use GPT labels, as opposed to <a href="https://www.michaelwlucas.com/os/fmzfs" target="_blank" rel="noopener">fmzfs</a>. Later pool
manipulations all use partition names directly (e.g., <code>ada1p3</code>).</p>
</li>
</ol>
<p>Now to add the disk to a mirror VDEV in the <code>zroot</code> pool:</p>
<pre><code>zpool attach zroot ada0p3 ada1p3
</code></pre>
<p>Here we attach <code>ada1p3</code> to pool <code>zroot</code>, and ask it to mirror the existing
provider <code>ada0p3</code>. Now <code>zpool status</code> should show the newly added disk
"resilvering".</p>
<p>The result of the above command is:</p>
<pre><code>Make sure to wait until resilver is done before rebooting.
If you boot from pool 'zroot', you may need to update
boot code on newly attached disk 'ada1p3'.
Assuming you use GPT partitioning and 'da0' is your new boot disk
you may use the following command:
gpart bootcode -b /boot/pmbr -p /boot/gptzfsboot -i 1 da0
</code></pre>
<h1>Replacing Disks in ZFS Pool</h1>
<p>Today (2021-01-25) one of the mirrored disks was failing so I had to replace it.
At a high level, it involved:</p>
<ol>
<li>remove the old disk</li>
<li>connect new disk</li>
<li>format new disk</li>
<li><code>zpool replace <pool> <old> <new_partition></code></li>
<li>write boot code to new disk</li>
<li>wait for resilvering to finish</li>
</ol>
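The steps above can be sketched as a command sequence (the device names <code>ada0</code>/<code>ada1</code> and the old-disk GUID are placeholders - read them off your own <code>zpool status</code>):

```shell
# 1-3. physically swap the disks, then format the new disk (ada1)
#      by copying the partition table from the surviving disk (ada0)
gpart backup ada0 | gpart restore -F ada1

# 4. replace the failed provider; the old disk is identified by the
#    GUID shown in `zpool status`, the new one by its ZFS partition
zpool replace zroot 372837423423409823 ada1p3

# 5. write boot code to the new disk
gpart bootcode -b /boot/pmbr -p /boot/gptzfsboot -i 1 ada1

# 6. watch resilvering progress until it's done
zpool status zroot
```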
<p><a href="https://farrokhi.net/posts/2020/05/replacing-a-faulty-disk-in-zfs/" target="_blank" rel="noopener">This article</a> describes almost the exact steps I followed.</p>
<p>To format the new disk, one can copy the partition table from the old disk:</p>
<figure class="highlight plain"><table><tr><td class="gutter"><pre><span class="line">1</span><br><span class="line">2</span><br></pre></td><td class="code"><pre><span class="line">gpart backup ada0 > ada0.parts</span><br><span class="line">cat ada0.parts | gpart restore -F ada1</span><br></pre></td></tr></table></figure>
<p>If ada1 is larger and the ZFS partition is at the end, one can resize (grow)
the ZFS partition as below. Note that leaving out <code>-s</code> sets the size to all
the remaining disk space.</p>
<figure class="highlight plain"><table><tr><td class="gutter"><pre><span class="line">1</span><br></pre></td><td class="code"><pre><span class="line">gpart resize -i 3 -a 1m ada1</span><br></pre></td></tr></table></figure>
<p>I had a hard time booting into the ZFS pool and getting the system to detect the new
disk. The troubles were:</p>
<ul>
<li>my failed disk was connected to the 1st boot SATA cable. If I connected the
new disk to it, the system insisted on booting from it, even though my 2nd (good old)
disk was bootable.</li>
<li>if I unplugged the 1st SATA cable the system booted successfully, but a hot-plugged new device
wouldn't get recognized - it didn't show in <code>geom disk list</code>, <code>dmesg</code> or
<code>/var/log/messages</code>; <code>camcontrol rescan</code> didn't help.</li>
<li>what worked was manually swapping the SATA cables so my good old disk is now
the 1st SATA disk, and the new disk is the 2nd. This way, the system boots,
and the new disk is recognized in <code>geom</code>.</li>
</ul>
<p>Writing boot code is as usual:</p>
<figure class="highlight plain"><table><tr><td class="gutter"><pre><span class="line">1</span><br></pre></td><td class="code"><pre><span class="line">gpart bootcode -b /boot/pmbr -p /boot/gptzfsboot -i 1 ada1</span><br></pre></td></tr></table></figure>
<h3>IMPORTANT NOTE ON ZPOOL REPLACE</h3>
<p><strong>NOTE</strong> when issuing:</p>
<figure class="highlight plain"><table><tr><td class="gutter"><pre><span class="line">1</span><br><span class="line">2</span><br></pre></td><td class="code"><pre><span class="line"># zpool replace <pool> <old_id> <new_partition></span><br><span class="line">zpool replace zroot 372837423423409823 ada1p3</span><br></pre></td></tr></table></figure>
<p>The last argument is the <strong>new partition</strong> (e.g. <code>ada1p3</code>), NOT the new disk. If you put the disk
(<code>ada1</code>) it will still work (the whole raw disk is used for mirroring) but you
lose the partition scheme on the disk, so you won't be able to boot from it
in case the other disk in the mirror fails.</p>
<h1>CREATING ZVOL, UFS PARTITION ON ZFS</h1>
<p><a href="https://docs.freebsd.org/en/books/handbook/zfs/#zfs-zfs-volume" target="_blank" rel="noopener">Main doc</a>. A zvol is like a ZFS dataset, but exposed as a raw block device. So
it is useful, e.g., for creating other filesystems on top.</p>
<figure class="highlight sh"><table><tr><td class="gutter"><pre><span class="line">1</span><br><span class="line">2</span><br><span class="line">3</span><br></pre></td><td class="code"><pre><span class="line">zfs create -V 2g -o compression=on zroot/ufs_part</span><br><span class="line">newfs -O2 /dev/zvol/zroot/ufs_part</span><br><span class="line">mount /dev/zvol/zroot/ufs_part</span><br></pre></td></tr></table></figure>
SOCKS proxy via SSHhttp://kflu.github.io/2017/07/15/2017-07-15-SOCKS-proxy/2017-07-15T07:00:00.000Z2023-10-14T22:36:20.131Z
<h2>References</h2>
<ul>
<li><a href="https://www.ocf.berkeley.edu/~xuanluo/sshproxywin.html" target="_blank" rel="noopener">How to tunnel Internet traffic over SSH in Windows (the guide that finally works for me)</a></li>
<li><a href="https://www.digitalocean.com/community/tutorials/how-to-route-web-traffic-securely-without-a-vpn-using-a-socks-tunnel" target="_blank" rel="noopener">How To Route Web Traffic Securely Without a VPN Using a SOCKS Tunnel</a></li>
<li><a href="https://en.wikipedia.org/wiki/Tunneling_protocol#Secure_Shell_tunneling" target="_blank" rel="noopener">Tunneling protocols Wikipedia</a></li>
</ul>
<h2>Overview</h2>
<p>For proxying/tunneling, there are several options:</p>
<ol>
<li>
<p>Per-application based, there's <a href="https://en.wikipedia.org/wiki/Tunneling_protocol#Secure_Shell_tunneling" target="_blank" rel="noopener">SSH port forwarding</a>.</p>
<p>Server setup is easy. You only need a regularly configured SSHD. This is
the easiest to set up and use. Just fire the proper SSH command and point
your application to the locally bound port.</p>
</li>
<li>
<p>Next is the SOCKS proxy - this works at the web application level so any
application (mostly browsers) that supports SOCKS can use it.</p>
<p>Server setup is still easy. You only need a regularly configured SSHD.
Client setup is also easy: fire up the command, but compared to port forwarding
you'll also need to configure proxy settings. This setting usually lives in the OS or the app.</p>
</li>
<li>
<p>VPN server</p>
<p>Hardest to set up on the server side. There are two types - OpenVPN, and SSH layer-3
tunneling.</p>
</li>
</ol>
<p>Here we are going to talk about #2.</p>
<h2>SOCKS Server Side</h2>
<p>Server side, you only need a regularly configured SSHD. Whenever SSH can connect, SOCKS should work.</p>
<h2>SOCKS Client Side - Enabling SOCKS</h2>
<p>On client side, to enable SOCKS proxy, issue this command:</p>
<pre><code>ssh -D <socks_port> <user@remote_host>
</code></pre>
<p>This gives you an SSH session to the remote host AND also enables SOCKS on the local machine. If you don't want to access the SSH
session interactively and would prefer it to stay in the background (i.e., hidden from the UI), use the following command,
which has extra arguments:</p>
<pre><code>ssh -D <socks_port> -f -C -q -N <user@remote_host>
</code></pre>
<p>where <code>socks_port</code> is the local port of the SOCKS proxy.</p>
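To verify the tunnel end to end from the command line, one quick check might be (port 1080 here is just a hypothetical choice of <code>socks_port</code>):

```shell
# Start the background tunnel, then route a request through it.
ssh -D 1080 -f -C -q -N user@remote_host
curl --socks5-hostname localhost:1080 https://example.com/
```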
<p>You can also use putty:</p>
<p><img src="putty.png" alt="putty"></p>
<h2>SOCKS Client Side - Configuring Applications</h2>
<p>Then, to configure an application to use SOCKS on Windows, go to
"internet options" -> "connection" tab -> "proxy server" -> "advanced". Here
it is:</p>
<p><img src="socks_win.png" alt="options"></p>
<p>Note that you need to <strong>uncheck "use the same proxy server for all protocols"</strong>, clear the HTTP, Secure, and FTP fields, and only fill in the Socks field.
<a href="https://www.ocf.berkeley.edu/~xuanluo/sshproxywin.html" target="_blank" rel="noopener">This</a> is the guide that works for me. <a href="https://www.digitalocean.com/community/tutorials/how-to-route-web-traffic-securely-without-a-vpn-using-a-socks-tunnel" target="_blank" rel="noopener">This</a> is thorough, but didn't mention the above critical step.</p>
strip BOM from fileshttp://kflu.github.io/2017/06/26/2017-06-26-strip-bom/2017-06-26T07:00:00.000Z2023-10-14T22:36:20.131Z
<p>The Byte Order Mark (BOM) is problematic in many programming languages - for
example, in <a href="https://groups.google.com/forum/#!topic/racket-users/yWxY2JjUles" target="_blank" rel="noopener">Racket</a>. Python has the same behavior. .NET IO functions
can handle the BOM correctly.</p>
<p>To ease the pain, one way is to ask your editor not to write the BOM. Another
is to use a command-line tool to strip it out. NPM has a <code>strip-bom-cli</code>
package. However, in PowerShell, both stdout redirection and <code>out-file</code> force
a BOM for UTF8 encoding.</p>
<p>It turns out <a href="https://stackoverflow.com/a/32951824/695964" target="_blank" rel="noopener">there's an easy way</a> to strip BOM directly in PowerShell,
without any 3rd party tools, by using <code>[IO.File]::WriteAllLines()</code>. For
example, to strip BOM for all files under a directory, do:</p>
<pre><code>PS> ls *.cs | %{ $ls = (Get-Content $_); [IO.File]::WriteAllLines($_, $ls) }
</code></pre>
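Outside PowerShell, a rough equivalent sketch (assuming GNU sed, whose <code>\xHH</code> escapes match the three UTF-8 BOM bytes) is:

```shell
# Create a file starting with a UTF-8 BOM (EF BB BF, written in octal),
# then strip the BOM in place with GNU sed.
printf '\357\273\277hello\n' > bomfile.txt
sed -i '1s/^\xEF\xBB\xBF//' bomfile.txt
head -c 5 bomfile.txt   # prints "hello"
```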
.NET CIL instructions performancehttp://kflu.github.io/2017/06/09/2017-06-09-NET-CIL-instructions-perf/2017-06-09T07:00:00.000Z2023-10-14T22:36:20.131Z
<p>I'm lucky to still be able to find this. <a href="https://msdn.microsoft.com/en-us/library/ms973852.aspx" target="_blank" rel="noopener">This link</a> lists the performance of each CIL instruction.</p>
Linux remote desktop via XMinghttp://kflu.github.io/2017/05/14/2017-05-14-linux-remote-desktop/2017-05-14T07:00:00.000Z2023-10-14T22:36:20.131Z
<p>In <a href="http://kflu.github.io/2017/01/24/2017-01-24-win-x11-forward/">this post</a>, I talked about setting up X11 forwarding using Xming and
SSH. Today I was looking into forwarding an entire window manager via X11, but
wasn't able to.</p>
<h1>XDMCP and Xming</h1>
<p>However, I found another cool solution - XDMCP. XDMCP is like remote desktop
for Linux. Xming is an XDMCP client. Once XDMCP is enabled on the remote
host's display manager, one can use Xming to connect to it. <strong>Note</strong> that,
with this approach, Xming is used in a completely different mode than in
X11 forwarding. Specifically, SSH doesn't play any part in XDMCP (I've heard XDMCP
is not a secure protocol).</p>
<h3>Remote host setup</h3>
<p>Ubuntu (and Xubuntu) 16.04 uses lightdm as its display manager. XDMCP isn't
enabled by default. To enable it, edit <code>/etc/lightdm/lightdm.conf</code> and add:</p>
<pre><code>[XDMCPServer]
enabled=true
</code></pre>
<p>Now restart lightdm service (this is for Ubuntu 16.04):</p>
<pre><code> sudo systemctl restart lightdm.service
</code></pre>
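To confirm the XDMCP server is actually up after the restart (this check is my own sketch, not from the lightdm docs - XDMCP listens on UDP port 177):

```shell
# An entry on UDP 177 means the display manager is accepting XDMCP queries.
ss -ulpn | grep ':177'
```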
<p>For more about Ubuntu and XDMCP, refer to <a href="https://wiki.ubuntu.com/xdmcp" target="_blank" rel="noopener">xdmcp ubuntu wiki</a>.</p>
<h3>Client connection</h3>
<p>In order to connect, Xming must be started from the command line. Run:</p>
<pre><code>PS> & 'C:\Program Files (x86)\Xming\Xming.exe' -rootless -keyhook -query <remote_host> -clipboard -nowinkill
</code></pre>
<p>Here the key part is <code>-query <remote_host></code>.</p>
<p><code>-rootless</code> can also be replaced with <code>-fullscreen</code>, or removed entirely (for
"single window mode") with a screen size specified: <code>-screen 0 1400 1000</code>. <strong>Note</strong>
that in <code>-fullscreen</code>, <code>alt-tab</code> is broken, and switching in/out of Xming in
that mode is very slow. On the contrary, <code>-rootless</code> mode is awesome -
switching in/out of Xming is fast, and <code>alt-tab</code> works perfectly.</p>
<p><code>-keyhook</code> is used to enable keyboard shortcuts. However, I found <code>alt-tab</code> to be
still problematic. Windows keys seem to work: <code>win-e</code> opened Mousepad on
Xubuntu. <code>-nowinkill</code> is used so Xming doesn't eat <code>alt-F4</code>.</p>
<p>Refer to <a href="http://www.straightrunning.com/xmingnotes/manual.php" target="_blank" rel="noopener">Xming man page</a>.</p>
<h3>Experience</h3>
<p>Initial testing seems pretty fast for a local Hyper-V Linux machine.</p>
<hr>
<p>I still want to investigate how to do X11 forwarding + window manager. See
<a href="https://opensource.com/article/16/12/yearbook-best-couple-2016-display-manager-and-window-manager" target="_blank" rel="noopener">Display manager and window manager</a>.</p>
<h3>References</h3>
<ul>
<li><a href="https://wiki.ubuntu.com/xdmcp" target="_blank" rel="noopener">xdmcp ubuntu wiki</a></li>
<li><a href="http://www.straightrunning.com/xmingnotes/manual.php" target="_blank" rel="noopener">xming man page</a></li>
<li><a href="https://opensource.com/article/16/12/yearbook-best-couple-2016-display-manager-and-window-manager" target="_blank" rel="noopener">Display manager and window manager</a></li>
</ul>
<h1>Window Manager via X11 forwarding</h1>
<p>I can successfully X11-forward an entire window manager to the local machine with
Ubuntu Server. I chose Ubuntu Server because it doesn't come with a desktop
environment pre-installed, so it's more tweakable.</p>
<p>This is inspired by <a href="http://x.sodpit.com/remotex.htm" target="_blank" rel="noopener">this blog post</a>.</p>
<h3>Xming setup</h3>
<p>Launch Xming in the single window or fullscreen mode. Multiwindow mode will
cause window managers to fail to start.</p>
<p>Single window mode with screen size:</p>
<pre><code>& 'C:\Program Files (x86)\Xming\Xming.exe' :0 -clipboard -screen 0 1400 1000
</code></pre>
<p>Fullscreen mode:</p>
<pre><code>& 'C:\Program Files (x86)\Xming\Xming.exe' :0 -clipboard -fullscreen
</code></pre>
<p>In fullscreen mode, escape with <code>ctrl-esc</code>.</p>
<p>As described in <a href="http://kflu.github.io/2017/01/24/2017-01-24-win-x11-forward/">x11 forwarding</a>, set the <code>DISPLAY</code> env var in local
machine terminal before SSH (or in putty UI):</p>
<pre><code>export DISPLAY=localhost:0.0
ssh -Y <user>@<remote_host>
</code></pre>
<h3>Remote host (Ubuntu Server) setup</h3>
<p>Refer to <a href="https://help.ubuntu.com/community/ServerGUI#X11_Server_Installation" target="_blank" rel="noopener">Ubuntu Server GUI wiki X11 Server Installation</a>:</p>
<pre><code>sudo apt-get install xorg
</code></pre>
<p>Then install the choice of window manager. E.g., fluxbox:</p>
<pre><code>sudo apt-get install fluxbox
</code></pre>
<p>Then start with <code>fluxbox</code>.</p>
<p>Not all window managers work this way. Tested and working are:</p>
<ul>
<li>Fluxbox, openbox</li>
<li>xfwm4 - WM for xfce - works but haven't figured out any app launcher to use</li>
<li>xfce4 - works, pretty, seem to run pretty smoothly (first session might be
slow)</li>
</ul>
<p>Those that don't work are:</p>
<ul>
<li>awesome, enlightenment,
Archlinux installation, partition, EFIhttp://kflu.github.io/2017/05/14/2017-05-14-archlinux-installation/2017-05-14T07:00:00.000Z2023-10-14T22:36:20.131Z
<p>This is not easy. Mostly followed <a href="https://wiki.archlinux.org/index.php/Installation_guide" target="_blank" rel="noopener">Installation guide</a>, on a Hyper-V v2 VM.
Hyper-V v2 has EFI enabled, so follow corresponding instructions.</p>
<h2>Disk partitioning and mounting</h2>
<p>This requires two partitions: an EFI partition and a main partition. The EFI partition
is a FAT32 partition. I used <code>parted</code>.</p>
<p>Follow <a href="https://wiki.archlinux.org/index.php/GNU_Parted#UEFI.2FGPT_examples" target="_blank" rel="noopener">UEFI/GPT example for parted</a>:</p>
<pre><code>(parted) mkpart ESP fat32 1MiB 513MiB
(parted) set 1 boot on
(parted) mkpart primary ext4 513MiB 100%
(parted) quit
</code></pre>
<p>See <a href="#parted"><code>parted</code> tips</a>.</p>
<p>From console:</p>
<pre><code>mkfs.ext4 /dev/sdxY
mkfs.fat -F32 /dev/sdxY
</code></pre>
<p>Then mount them:</p>
<pre><code>mount /dev/sdxY /mnt # this is the primary root
mkdir /mnt/boot
mount /dev/sdxY /mnt/boot # this is the EFI partition
</code></pre>
<h2>Install the base packages</h2>
<pre><code>pacstrap /mnt base
</code></pre>
<p><strong>Note</strong> this must be done after mounting <code>/mnt</code> and <code>/mnt/boot</code>, as it installs
essentials like <code>vmlinuz</code>, the <code>initramfs</code>, etc. into <code>/mnt/boot</code>.</p>
<h2>Boot Loader</h2>
<p>This is hard. I tried GRUB with EFI first, it failed on me
<sup><a href="#grub">1</a></sup>. I then used <a href="https://wiki.archlinux.org/index.php/Systemd-boot#EFI_boot" target="_blank" rel="noopener"><code>systemd-boot</code></a> which worked
eventually.</p>
<p><strong>Note</strong>: do all these <strong>before</strong> chroot to <code>/mnt</code>, otherwise you don't have
the necessary executables.</p>
<p>First install binaries into EFI partition (<code>/mnt/boot</code> folder):</p>
<pre><code>bootctl --path=esp install
</code></pre>
<p>Then add an entry to <code>/mnt/boot/loader/entries</code>, and configure
<code>/mnt/boot/loader/loader.conf</code> to point to the new entry.</p>
<p>Add a new entry at <code>/mnt/boot/loader/entries/arch.conf</code> with content:</p>
<pre><code>title Arch Linux
linux /vmlinuz-linux
initrd /initramfs-linux.img
options root=PARTUUID=14420948-2cea-4de7-b042-40f67c618660 rw
</code></pre>
<p>Replace the <code>PARTUUID</code> with the primary partition's GPT partition UUID. You
can find it with <code>blkid</code>:</p>
<pre><code># blkid
/dev/sda1: UUID="xxxx-xxxx" TYPE="vfat" PARTUUID="xxxxxxxx-xxxx-xxxx-xxxx-xxxx..."
/dev/sda2: UUID="333db32c-b91e-41da-86c7-801c88059660" TYPE="ext4" PARTUUID="xxxxxxxx-xxxx-xxxx-xxxx-xxxx..."
</code></pre>
<p><strong>Note</strong>: use <code>PARTUUID</code>, NOT <code>UUID</code>.</p>
<div id="parted">
<h2><code>parted</code> tips</h2>
<p>Run <code>parted</code> with <code>-a optimal</code> so <a href="https://wiki.archlinux.org/index.php/GNU_Parted#Alignment" target="_blank" rel="noopener">misalignments</a> are warned about. But I couldn't get
partitions to align properly and used "Ignore" when creating partitions <sup><a href="#percentage">2</a></sup>.</p>
<ul>
<li>
<p>at command line use <code>fdisk -l</code> to see all disks and partitions</p>
</li>
<li>
<p>use <code>h <command></code> for help</p>
</li>
<li>
<p>use <code>p</code> or <code>p all</code> to list partitions and disks</p>
</li>
<li>
<p>use <code>mklabel</code> to make a new partition table for a disk - this destroys the old
table. E.g., <code>mklabel gpt</code> creates a GPT partition table; <code>mklabel msdos</code> creates an msdos (i.e., MBR) partition table.</p>
</li>
<li>
<p>use <code>mkpart</code> to make new partition on a disk with partition table.</p>
<ul>
<li>the start/end supports unit postfix, like <code>MiB</code>, <code>MB</code>, etc. A negative
number counts from the end (<code>-120MiB</code>). Also percentage can be used
(<code>100%</code>).</li>
</ul>
</li>
<li>
<p>use <code>resizepart</code> to resize a partition</p>
</li>
</ul>
<p><strong>Note</strong> after running <code>parted</code>, you'll need to run <code>mkfs.xxx</code> to format the
newly created partitions.</p>
<hr>
<p><a name="grub">1</a>: <code>grub-install</code> failed with: "error: failed to get canonical path of 'airootfs'"</p>
<p><a name="percentage">2</a>: per <a href="https://wiki.archlinux.org/index.php/GNU_Parted#Alignment" target="_blank" rel="noopener">doc</a>, use percentage notation so it auto-aligns for
Making pets safer with Arduinohttp://kflu.github.io/2017/04/29/2017-04-29-arduino-vehicle-environment-monitor/2017-04-29T07:00:00.000Z2023-10-14T22:36:20.100Z
<p><em>All source code and related information can be found on the
<a href="https://github.com/kflu/vem" target="_blank" rel="noopener">project's github repo</a>.</em></p>
<p>This is Max, a 6-year-old Golden Retriever that I adopted when he was
a little puppy. He and I both love the outdoors, and we have such a close
bond that he begs me to take him along whenever I'm going out. I try to
take him wherever I go as much as possible - going on a hike,
paddle boarding on the water, shopping in the mall, etc. But the
reality is, even in an area as pet friendly as Seattle, there are a lot of
places that don't welcome dogs, mostly restaurants and grocery
stores.</p>
<p><img src="pup.jpg" alt="the pup"></p>
<p>If we go out for a day in an urban area, he most likely will be staying
in the car during lunch time. When it comes to pet safety, I'm a
little paranoid, as most pet owners would be. When leaving him in the
car, I make sure to park in the shade, have the sunroof open, wind the
windows down, etc. But no matter how much I do, when I sit in a
restaurant I always worry about the temperature in the car.</p>
<p><strong>Until one day I decided to do something about it - making an in-car,
wireless, real-time temperature monitor that transmits periodic
readings to a receiver that I carry with me. This gives me peace
of mind knowing that my pup is staying safe and comfortable in the
car!</strong></p>
<p><strong>Disclaimer</strong> <em>Please be responsible when leaving your pet in the car.
Refer to <a href="http://blog.gopetfriendly.com/is-it-illegal-to-leave-your-pet-alone-in-the-car/" target="_blank" rel="noopener">the laws</a> in different states that allow or prohibit leaving
an animal in a motor vehicle in certain situations.</em></p>
<p>Before going into the details, here's what I built over several
weekends and for less than $100:</p>
<p><img src="IMG_2523.JPG.small.jpg" alt="device"></p>
<p>On the right is the transmitter. It stays in the car and
sends temperature readings every 5 seconds. On the left is the
receiver, which I take when leaving my pup in the car. The receiver has a
little LED that blinks a certain number of times to indicate the
temperature in the car. Both devices run on battery and have a
transmission range of around 250m (820ft) in a very complex building
environment (a shopping mall), with a primitive (poor man's) wire antenna.</p>
<h1>High Level Design Decisions</h1>
<p>The overall system design is based on <a href="https://www.lora-alliance.org/What-Is-LoRa/Technology" target="_blank" rel="noopener">LoRa wireless technology</a>.
It uses spread-spectrum communication and has advanced technology to
combat channel fading and interference. In ideal situations it's
claimed to have miles of transmission range. Initially I was
considering the Family Radio Service (FRS) (used by walkie-talkies)
for its long communication range, but I couldn't find good
hardware and software support for it. <a href="https://www.sparkfun.com/pages/xbee_guide" target="_blank" rel="noopener">XBee</a> seems to be a good
alternative. With the "Pro" version, XBee claims to have miles of
transmission range, but the cost of XBee Pro chips seems to be much
higher than the LoRa modules.</p>
<p>Basically I need a system to perform measuring, transmitting,
receiving, and displaying the temperature data between 2 peers. I knew
I should choose Arduino boards over RaspberryPi, because the tasks are
simple and <em>the power consumption must be low</em> in order to make at
least the receiver mobile.</p>
<p>I had no previous experience with Arduino programming or soldering
circuits. My major was in wireless communication back in college, so I
had college-level knowledge of circuits. Programming was trivial as
I'm already a programmer :) And this was a pretty fun experience!</p>
<p><a href="https://www.adafruit.com/" target="_blank" rel="noopener">Adafruit</a> is an awesome website teaching how to do all sorts of
Arduino things and also sells the devices. What I like most is on each
product page it also offers the tutorial on how to assemble, wire, and
program it! It made my first hardware project much smoother than I'd
imagined.</p>
<p>Here's the list of things that I bought for this project.</p>
<table>
<thead>
<tr>
<th>Item</th>
<th>Price</th>
<th>Description</th>
</tr>
</thead>
<tbody>
<tr>
<td><a href="https://www.amazon.com/gp/product/B00PD92EJ8" target="_blank" rel="noopener">Arduino Mega 2560 R3 (OSOYOO)</a></td>
<td>$13.99</td>
<td>Transmitter board (awesome cheap "clone"<sup><a href="#clone">1</a></sup> board)</td>
</tr>
<tr>
<td><a href="https://www.adafruit.com/product/3072" target="_blank" rel="noopener">Adafruit RFM95W LoRa Radio Transceiver Breakout - 868 or 915 MHz</a></td>
<td>$19.95</td>
<td>Transmitter communication module</td>
</tr>
<tr>
<td><a href="https://www.adafruit.com/products/165" target="_blank" rel="noopener">TMP36 - Analog Temperature sensor - TMP36</a></td>
<td>$1.5</td>
<td>Transmitter temperature sensor</td>
</tr>
<tr>
<td><a href="https://www.amazon.com/gp/product/B005X1Y7I2" target="_blank" rel="noopener">Anker PowerCore+ mini 3350mAh</a></td>
<td>$13.99</td>
<td>Transmitter power supply (USB)</td>
</tr>
<tr>
<td><a href="https://www.adafruit.com/products/2771" target="_blank" rel="noopener">Adafruit Feather 32u4 Basic Proto</a></td>
<td>$19.95</td>
<td>Receiver board</td>
</tr>
<tr>
<td><a href="https://www.adafruit.com/products/3231" target="_blank" rel="noopener">Adafruit LoRa Radio FeatherWing - RFM95W 900 MHz</a></td>
<td>$19.95</td>
<td>Receiver communication module</td>
</tr>
<tr>
<td><a href="https://www.adafruit.com/product/727" target="_blank" rel="noopener">3 x AAA Battery Holder with On/Off Switch and 2-Pin JST</a></td>
<td>$1.95</td>
<td>Receiver power supply (3 x AAA)</td>
</tr>
</tbody>
</table>
<p><strong>Note that:</strong></p>
<ul>
<li>
<p>When buying LoRa modules, please make sure to buy the frequency
that matches the regulations of your country. In the US, LoRa uses 915MHz.</p>
</li>
<li>
<p>The cost can be cut down significantly from this list. My
choice of products was not solely about cutting down the price, but
more about a better learning experience, community support, and flexibility in
prototyping. For example, Adafruit has a <a href="https://www.adafruit.com/product/3178" target="_blank" rel="noopener">feather board integrated with a
LoRa chip</a> (or <a href="https://www.adafruit.com/product/3078" target="_blank" rel="noopener">this</a>). They are cheaper than buying the board
and the LoRa module separately, and this also eliminates almost all the
soldering work.</p>
</li>
</ul>
<p>The board and the LoRa module are wired together using the SPI interface.
I'm not going into the details. Please refer to the <a href="https://learn.adafruit.com/adafruit-rfm69hcw-and-rfm96-rfm95-rfm98-lora-packet-padio-breakouts/wiring" target="_blank" rel="noopener">Adafruit tutorial</a>.</p>
<h1>Measuring the temperature</h1>
<p>Using the TMP36 temperature sensor is extremely easy. It's done by
using the MCU to measure the output voltage of the sensor. Most
importantly, the input voltage to the sensor must be stable for the
measurement to be accurate. Since I use USB power on the
transmitter side, and the USB power bank is stable at 5V, I had no
difficulty here.</p>
<p>For more information about using TMP36, refer to Adafruit <a href="https://learn.adafruit.com/tmp36-temperature-sensor" target="_blank" rel="noopener">TMP36 product
page</a>.</p>
<h1>Displaying the temperature</h1>
<p>You may have noticed that the receiver doesn't have a display. Yes -
this was my first Arduino project and I didn't have time to learn and
investigate everything in one shot, so I decided to go simple and hacky.
On every Arduino board, there is a tiny cute built-in LED that can be
programmed (I guess they have it because almost every beginner's first
project is to program the LED to blink). Using the built-in LED has
two advantages - it's simple to program, and it uses less power than
almost any fancier display.</p>
<p>I quantize the temperature into 7 regions, each with a different
number of quick blinks every 5 seconds, so I can tell the temperature region
in the car by counting the quick blinks.</p>
<pre><code>// Map a temperature reading to a "comfort region" (= number of blinks)
int temp_to_comfort(float tempC) {
if ( tempC < 0) return 7;
if ( 0 <= tempC && tempC < 20) return 1;
if (20 <= tempC && tempC < 25) return 2;
if (25 <= tempC && tempC < 30) return 3;
if (30 <= tempC && tempC < 35) return 4;
if (35 <= tempC && tempC < 40) return 5;
return 6;  // 40 <= tempC; also ensures every path returns a value
}
</code></pre>
<p>I did take some time tweaking the speed of each blink - initially it
blinked a little too fast and I couldn't tell if it blinked 3 or 4 or
more times. Once I slowed it down a little, it worked pleasantly well!</p>
<p>I took this idea further and made my little LED a generic information
representation system. It can display all sorts of other information.
For example, a long, slow blink indicates no data is being received -
maybe the distance is too long, or the transmitter runs out of
battery.</p>
<p>The animation below shows it receiving a temperature in the 20°C ~ 25°C
range (2 blinks):</p>
<p><img src="LED_receiving.gif" alt="LED receiving"></p>
<p>The animation below shows it not receiving any data (1 long slow blink):</p>
<p><img src="LED_error.gif" alt="LED error"></p>
<h1>Communication System</h1>
<p><a href="http://www.airspayce.com/mikem/arduino/RadioHead/classRH__RF95.html" target="_blank" rel="noopener">RadioHead</a> has awesome support for driving the LoRa modules. LoRa
technology operates at the physical and MAC layer. Since I only need a
peer to peer channel, I only used the physical layer functionality.</p>
<p>LoRa physical layer can be configured to use different frequency,
bandwidth, spreading factor, channel coding rate. These all affect the
data rate and
More about synchronizationhttp://kflu.github.io/2017/04/11/2017-04-11-more-synchronozation/2017-04-11T07:00:00.000Z2023-10-14T22:36:20.099Z
<ul>
<li><a href="http://tomasp.net/blog/csharp-async-gotchas.aspx/" target="_blank" rel="noopener">Async in C# and F# Asynchronous gotchas in C#</a></li>
<li><a href="https://blogs.msdn.microsoft.com/pfxteam/2012/06/15/executioncontext-vs-synchronizationcontext/" target="_blank" rel="noopener">ExecutionContext vs SynchronizationContext</a></li>
</ul>
<h2>About async/await</h2>
<p>Difference between running a <code>Func<Task></code> and <code>Task.Run()</code>. See <a href="http://tomasp.net/blog/csharp-async-gotchas.aspx/" target="_blank" rel="noopener">this article</a>.</p>
<p><code>Action</code> vs <code>Func<Task></code>.</p>
<ul>
<li>Invoking a <code>Func<Task></code> runs the first part of its body <strong>synchronously</strong>, up
to the first <code>await</code> statement. The remainder runs as a continuation.</li>
<li><code>Task.Run()</code> dispatches its entire input to the thread pool to run.</li>
</ul>
<p>The two snippets below differ subtly: in the first, every iteration's <code>log("step1")</code> runs synchronously on the calling thread while <code>ToArray()</code> forces the enumeration, so all the <code>step1</code> logs appear before <code>step3</code>; in the second, <code>Task.Run</code> moves each body to the thread pool, so <code>step1</code> calls can interleave with <code>step3</code>.</p>
<figure class="highlight plain"><table><tr><td class="gutter"><pre><span class="line">1</span><br><span class="line">2</span><br><span class="line">3</span><br><span class="line">4</span><br><span class="line">5</span><br><span class="line">6</span><br><span class="line">7</span><br><span class="line">8</span><br><span class="line">9</span><br></pre></td><td class="code"><pre><span class="line">var tasks = Enumerable.Range(0, 100).Select(async x =></span><br><span class="line">{</span><br><span class="line"> log("step1");</span><br><span class="line"> await Task.Delay(1000);</span><br><span class="line"> log("step2");</span><br><span class="line">}).ToArray();</span><br><span class="line"></span><br><span class="line">log("step3");</span><br><span class="line">Task.WaitAll(tasks);</span><br></pre></td></tr></table></figure>
<p>compared to:</p>
<figure class="highlight plain"><table><tr><td class="gutter"><pre><span class="line">1</span><br><span class="line">2</span><br><span class="line">3</span><br><span class="line">4</span><br><span class="line">5</span><br><span class="line">6</span><br><span class="line">7</span><br><span class="line">8</span><br><span class="line">9</span><br></pre></td><td class="code"><pre><span class="line">var tasks = Enumerable.Range(0, 100).Select(x => Task.Run(async () =></span><br><span class="line">{</span><br><span class="line"> log("step1");</span><br><span class="line"> await Task.Delay(1000);</span><br><span class="line"> log("step2");</span><br><span class="line">})).ToArray();</span><br><span class="line"></span><br><span class="line">log("step3");</span><br><span class="line">Task.WaitAll(tasks);</span><br></pre></td></tr></table></figure>
<h2>About <code>SynchronizationContext</code></h2>
<p>It's an abstraction whose implementations dispatch asynchronous work in
specific ways (e.g., onto a UI thread). See <a href="https://blogs.msdn.microsoft.com/pfxteam/2012/06/15/executioncontext-vs-synchronizationcontext/" target="_blank" rel="noopener">this article</a>.</p>
<h1>A survey of C# synchronization primitives</h1>
<p><em><a href="http://kflu.github.io/2017/04/04/2017-04-04-csharp-synchronization/" target="_blank" rel="noopener">2017-04-04</a></em></p>
<ul>
<li><a href="https://msdn.microsoft.com/en-us/library/ms228964(v=vs.110).aspx" target="_blank" rel="noopener">This MSDN document</a> gives a good survey on various synchronization
primitives in .NET. This article will follow how it categorizes the
synchronization primitives.</li>
<li><a href="http://www.albahari.com/threading/part2.aspx" target="_blank" rel="noopener">Threading in C#</a> is a very good high-level overview of synchronization.
It categorizes the sync primitives slightly differently, and covers a few
that are not mentioned in the other document.</li>
<li><a href="http://download.microsoft.com/download/B/C/F/BCFD4868-1354-45E3-B71B-B851CD78733D/PerformanceCharacteristicsOfSyncPrimitives.pdf" target="_blank" rel="noopener">This paper</a> describes some .NET 4.0 new primitives and provides insight
into their implementation and performance consideration.</li>
</ul>
<p>Synchronization is mostly about the mechanisms provided by the language to
perform waiting (aka blocking). Those mechanisms vary depending on how the
waiting is achieved, and by what criteria the waiting should finish (aka
released, or unblocked).</p>
<h1>Exclusive Locks</h1>
<p><code>lock</code>, <code>Monitor</code>, and <code>Mutex</code> are exclusive locks. <code>lock</code> is the most convenient
to use. <code>Monitor</code> provides richer options when waiting on the lock (e.g.,
timeouts). <code>Mutex</code> provides inter-process locking but is more expensive to
use.</p>
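<p>For cross-language illustration, the same exclusive-lock idea sketched in Python, with <code>threading.Lock</code> playing the role of C#'s <code>lock</code> statement (an analogy, not the .NET API):</p>

```python
import threading

counter = 0
counter_lock = threading.Lock()  # analogous to the object given to C#'s lock

def add_many(n):
    global counter
    for _ in range(n):
        with counter_lock:       # enter/exit the exclusive lock
            counter += 1

threads = [threading.Thread(target=add_many, args=(10_000,)) for _ in range(4)]
for t in threads:
    t.start()
for t in threads:
    t.join()
print(counter)  # 40000 - no lost updates
```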
<p><a href="http://www.albahari.com/threading/part2.aspx" target="_blank" rel="noopener"><code>lock</code> vs <code>Mutex</code></a>:</p>
<blockquote>
<p>[...] Of the two (lock and Mutex), the lock construct is faster and more
convenient. Mutex, though, has a niche in that its lock can span applications
in different processes on the computer.</p>
</blockquote>
<h1>Reader Writer Lock</h1>
<p>The ReaderWriterLockSlim class addresses the case where a thread that changes
data, the writer, must have exclusive access to a resource. When the writer is
not active, any number of readers can access the resource (for example, by
calling the EnterReadLock method). When a thread requests exclusive access,
(for example, by calling the EnterWriteLock method), subsequent reader requests
block until all existing readers have exited the lock, and the writer has
entered and exited the lock.</p>
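<p>The many-readers-or-one-writer rule can be sketched with a condition variable. A toy Python version (illustrative only, and far simpler than <code>ReaderWriterLockSlim</code>: no recursion or lock-upgrade support):</p>

```python
import threading

class RWLock:
    """Toy reader-writer lock: many readers OR exactly one writer."""
    def __init__(self):
        self._cond = threading.Condition()
        self._readers = 0
        self._writer = False

    def enter_read(self):
        with self._cond:
            while self._writer:              # readers wait out an active writer
                self._cond.wait()
            self._readers += 1

    def exit_read(self):
        with self._cond:
            self._readers -= 1
            if self._readers == 0:
                self._cond.notify_all()      # last reader out wakes a writer

    def enter_write(self):
        with self._cond:
            while self._writer or self._readers:  # wait for readers to drain
                self._cond.wait()
            self._writer = True

    def exit_write(self):
        with self._cond:
            self._writer = False
            self._cond.notify_all()

lk = RWLock()
lk.enter_read(); lk.enter_read()   # two concurrent readers are fine
lk.exit_read(); lk.exit_read()
lk.enter_write()                   # writer now has exclusive access
lk.exit_write()
```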
<p>From <a href="https://msdn.microsoft.com/en-us/library/system.threading.readerwriterlockslim(v=vs.110).aspx" target="_blank" rel="noopener">MSDN</a>:</p>
<blockquote>
<p>ReaderWriterLockSlim is similar to ReaderWriterLock, but it has simplified
rules for recursion and for upgrading and downgrading lock state.
ReaderWriterLockSlim avoids many cases of potential deadlock. In addition, the
performance of ReaderWriterLockSlim is significantly better than
ReaderWriterLock. ReaderWriterLockSlim is recommended for all new
development.</p>
</blockquote>
<h1>Semaphore</h1>
<p>A semaphore is more general than a mutually exclusive lock: it allows a
specified number of threads to access a resource concurrently. Like <code>Mutex</code>, it
can be used across processes. <code>SemaphoreSlim</code> is its in-process, more efficient
counterpart.</p>
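<p>The counting behavior, sketched in Python (<code>threading.Semaphore</code> is an in-process analogue of <code>SemaphoreSlim</code>; an illustration, not the .NET API):</p>

```python
import threading, time

slots = threading.Semaphore(3)   # at most 3 threads inside the section at once
active = 0
peak = 0
guard = threading.Lock()

def worker():
    global active, peak
    with slots:                  # acquire one of the 3 slots
        with guard:
            active += 1
            peak = max(peak, active)
        time.sleep(0.01)         # simulate work while holding the slot
        with guard:
            active -= 1

threads = [threading.Thread(target=worker) for _ in range(10)]
for t in threads:
    t.start()
for t in threads:
    t.join()
print(peak <= 3)  # True: concurrency never exceeded the semaphore count
```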
<h1>Wait Handles</h1>
<p><a href="https://msdn.microsoft.com/en-us/library/ksb7zs2x(v=vs.110).aspx" target="_blank" rel="noopener">Conceptual overview: EventWaitHandle, AutoResetEvent, CountdownEvent,
ManualResetEvent</a>.</p>
<pre><code> WaitHandle (abstract)
|
+-----------+----------------+
| | |
| | |
v v v
EventWaitHandle Semaphore Mutex
+ +
| |
| |
v v
AutoResetEvent ManualResetEvent
</code></pre>
<p>Wait handles can be waited on, but what's more interesting are the events. An
event is a value with two states - "signaled" and "not-signaled". A thread
waiting on it is only unblocked once the event is "signaled". An event is
signaled by calling its <code>.Set()</code> method.</p>
<ul>
<li><code>AutoResetEvent</code> automatically resets to the "not-signaled" state when one
waiting thread is unblocked, so the remaining waiters stay blocked.
Effectively, it lets only one thread "pass through" per signal.</li>
<li><code>ManualResetEvent</code> does not auto-reset, so once signaled it lets all
waiting threads pass through.</li>
</ul>
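<p>The two behaviors have rough Python analogues that are useful as an illustration: <code>threading.Event</code> acts like <code>ManualResetEvent</code> (once set, every waiter passes), while a semaphore initialized to 0 acts like <code>AutoResetEvent</code> (each release wakes exactly one waiter). An analogy, not the .NET API:</p>

```python
import threading, time

# ManualResetEvent analogue: set() releases ALL current and future waiters.
manual = threading.Event()
passed = []

def wait_manual(i):
    manual.wait()
    passed.append(i)

manual_threads = [threading.Thread(target=wait_manual, args=(i,)) for i in range(3)]
for t in manual_threads:
    t.start()
manual.set()                      # all three waiters pass through
for t in manual_threads:
    t.join()

# AutoResetEvent analogue: a semaphore at 0; each release() admits ONE waiter.
auto = threading.Semaphore(0)
woken = []

def wait_auto(i):
    auto.acquire()
    woken.append(i)

auto_threads = [threading.Thread(target=wait_auto, args=(i,)) for i in range(3)]
for t in auto_threads:
    t.start()
auto.release()                    # signals exactly one waiter
deadline = time.monotonic() + 2
while not woken and time.monotonic() < deadline:
    time.sleep(0.01)
time.sleep(0.1)                   # give the others a chance to (wrongly) pass
woken_after_one_signal = len(woken)

auto.release(); auto.release()    # admit the rest so join() returns
for t in auto_threads:
    t.join()
print(len(passed), woken_after_one_signal)  # 3 1
```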
<p>There are lightweight counterparts of the above-mentioned primitives that are
faster but do not work across processes:</p>
<ul>
<li><code>System.Threading.SemaphoreSlim</code> is a lightweight version of
System.Threading.Semaphore.</li>
<li><code>System.Threading.ManualResetEventSlim</code> is a lightweight version of
System.Threading.ManualResetEvent.</li>
<li><code>System.Threading.CountdownEvent</code> represents an event that becomes signaled
when its count is zero.</li>
<li><code>System.Threading.Barrier</code> enables multiple threads to synchronize with one
another without requiring control by a master thread. A barrier prevents each
thread from continuing until all threads have reached a specified point.</li>
</ul>
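<p>Barrier has a direct Python counterpart, which illustrates the rendezvous behavior (every party blocks at the barrier until all have arrived):</p>

```python
import threading

barrier = threading.Barrier(3)    # all 3 threads must arrive before any proceeds
order = []                        # list.append is atomic enough for this demo

def phase_worker(name):
    order.append((name, "before"))
    barrier.wait()                # nobody passes until all three are here
    order.append((name, "after"))

threads = [threading.Thread(target=phase_worker, args=(i,)) for i in range(3)]
for t in threads:
    t.start()
for t in threads:
    t.join()

befores = [i for i, (_, tag) in enumerate(order) if tag == "before"]
afters = [i for i, (_, tag) in enumerate(order) if tag == "after"]
print(max(befores) < min(afters))  # True: every "before" precedes every "after"
```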
<h1>Spin based primitives</h1>
<p><code>SpinWait</code> can be used to wait for a condition <code>Func<bool></code> to be met. It uses a
good combination of spinning (initially) and blocking (yielding the thread
after excessive spinning).</p>
<p><code>SpinLock</code> is a lock based on spinning. It never falls back to blocking, so
care must be taken when holding such a lock. It is only for advanced and very
performance-critical uses.</p>
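<p>The spin-then-yield idea behind <code>SpinWait</code> can be sketched generically. A Python illustration (the real <code>SpinWait</code> uses CPU pause instructions and smarter back-off; this only shows the shape of the strategy):</p>

```python
import threading, time

def spin_wait(condition, spin_count=1000, timeout=5.0):
    """Spin briefly on condition(); past spin_count, yield the thread instead."""
    deadline = time.monotonic() + timeout
    spins = 0
    while not condition():
        if time.monotonic() > deadline:
            return False          # gave up waiting
        if spins < spin_count:
            spins += 1            # busy spin: cheapest when the wait is very short
        else:
            time.sleep(0)         # yield the rest of this time slice
    return True

flag = {"done": False}

def setter():
    time.sleep(0.05)
    flag["done"] = True

threading.Thread(target=setter).start()
ok = spin_wait(lambda: flag["done"])
print(ok)  # True
```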
<h1>Interlocked Operations</h1>
<p>Lastly, <code>Interlocked</code> is not blocking at all. From <a href="https://msdn.microsoft.com/en-us/library/ms228964(v=vs.110).aspx" target="_blank" rel="noopener">MSDN</a>:</p>
<blockquote>
<p>Interlocked operations are simple atomic operations performed on a memory
location by static methods of the Interlocked class. Those atomic operations
include addition, increment and decrement, exchange, conditional exchange
depending on a comparison, and read operations for 64-bit values on 32-bit
platforms.</p>
</blockquote>
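<p>Python has no <code>Interlocked</code> class, but the semantics can be illustrated with a small atomic-integer wrapper. Here a lock stands in for the hardware atomic instructions that <code>Interlocked</code> actually uses (an analogy only):</p>

```python
import threading

class AtomicInt:
    """Illustrates Interlocked-style operations; a lock emulates the atomicity."""
    def __init__(self, value=0):
        self._value = value
        self._lock = threading.Lock()

    def increment(self):                    # like Interlocked.Increment
        with self._lock:
            self._value += 1
            return self._value

    def compare_exchange(self, expected, new):
        """Like Interlocked.CompareExchange: swap only if current == expected."""
        with self._lock:
            old = self._value
            if old == expected:
                self._value = new
            return old

n = AtomicInt()

def bump():
    for _ in range(10_000):
        n.increment()

threads = [threading.Thread(target=bump) for _ in range(4)]
for t in threads:
    t.start()
for t in threads:
    t.join()
print(n._value)  # 40000
```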
<h1><code>TaskCompletionSource<T></code></h1>
<p>This can be used to synchronize tasks the way events are used for threads.
Tasks and async/await are asynchronous computing at a higher level than threads.
In addition to the built-in constructs that one can await on,
<code>TaskCompletionSource<T></code> can be used to signal and synchronize anything among
tasks.</p>
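<p>The closest Python analogue is a manually completed future: the consumer blocks on <code>.result()</code> while the producer completes it explicitly, much as one side awaits <code>TaskCompletionSource<T>.Task</code> while the other calls <code>SetResult</code>. An illustrative sketch:</p>

```python
import threading
from concurrent.futures import Future

promise = Future()                 # plays the role of TaskCompletionSource<T>

def consumer(out):
    out.append(promise.result())   # blocks until someone calls set_result()

results = []
t = threading.Thread(target=consumer, args=(results,))
t.start()

promise.set_result(42)             # the "SetResult" side unblocks the consumer
t.join()
print(results)  # [42]
```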
<h1>Thread Affinity and Its Issues When Used with Tasks</h1>
<p>Some sync mechanisms described above have thread affinity, which causes issues
when used with tasks. For example, tasks do not work well with monitors. Because
monitors have thread affinity, a task resumed onto a different thread than its
original one can release the wrong monitor; vice versa, a task resumed onto a
thread can acquire a monitor that has already been acquired by a previously
blocked task.</p>
<p>So it's important to think about thread affinity when picking a sync mechanism to work
with tasks. As a summary:</p>
<p>These are thread affine:</p>
<ul>
<li>Monitors (and locks)</li>
<li>Reader-writer locks</li>
<li>Mutex</li>
</ul>
<p>These do not have thread affinity:</p>
<ul>
<li>Semaphore (and SemaphoreSlim)</li>
<li>Event wait handles</li>
</ul>
<h1>Native library dependencies - how to debug</h1>
<p><em><a href="http://kflu.github.io/2017/03/02/2017-03-02-native-library-dependencies/" target="_blank" rel="noopener">2017-03-02</a></em></p>
<p>Suppose you have a managed library <code>Foo.dll</code>, which P/Invokes <code>DllImport(A.dll)</code>. <code>A.dll</code> in turn references <code>B.dll</code>, <code>C.dll</code>, and so on. There could be two types of errors.</p>
<p>check for:</p>
<ul>
<li>bitness mismatch?</li>
<li>missing dependent libraries (e.g., the MSVC redistributable)</li>
</ul>
<p>Tools that might help:</p>
<ul>
<li>dumpbin</li>
<li>dependency walker</li>
<li>corflags</li>
<li>gflags</li>
<li>procmon (sysinternals)</li>
</ul>
<p>Below are the details.</p>
<h2>Bitness mismatch</h2>
<p>First, there could be a bitness mismatch between the native libraries and <code>Foo.dll</code>. When this happens, you'll see</p>
<blockquote>
<p>Unhandled Exception: System.BadImageFormatException: An attempt was made to load a program with an incorrect format. (Exception from HRESULT: 0x8007000B)</p>
</blockquote>
<p>Usually it happens when the native libraries are built for amd64/x64, but <code>foo.dll</code> is built for <code>AnyCPU</code> or <code>x86</code>. The <code>csproj</code> file must add the following lines:</p>
<figure class="highlight plain"><table><tr><td class="gutter"><pre><span class="line">1</span><br><span class="line">2</span><br><span class="line">3</span><br></pre></td><td class="code"><pre><span class="line"><PropertyGroup Condition="'$(Platform)' == 'amd64'"></span><br><span class="line"> <PlatformTarget>x64</PlatformTarget></span><br><span class="line"></PropertyGroup></span><br></pre></td></tr></table></figure>
<p>Several tools can be used to inspect a library's bitness.</p>
<p>For managed library, use <code>corflags</code>. Look for <code>PE</code>, <code>ILONLY</code>, and <code>32BIT</code> fields:</p>
<p>An <code>amd64</code> assembly:</p>
<figure class="highlight plain"><table><tr><td class="gutter"><pre><span class="line">1</span><br><span class="line">2</span><br><span class="line">3</span><br><span class="line">4</span><br><span class="line">5</span><br><span class="line">6</span><br><span class="line">7</span><br><span class="line">8</span><br></pre></td><td class="code"><pre><span class="line">> CorFlags.exe /nologo .\CNTKLibraryManaged-2.0.dll</span><br><span class="line">Version : v4.0.30319</span><br><span class="line">CLR Header: 2.5</span><br><span class="line">PE : PE32+</span><br><span class="line">CorFlags : 9</span><br><span class="line">ILONLY : 1</span><br><span class="line">32BIT : 0</span><br><span class="line">Signed : 1</span><br></pre></td></tr></table></figure>
<p>An <code>AnyCPU</code> assembly:</p>
<figure class="highlight plain"><table><tr><td class="gutter"><pre><span class="line">1</span><br><span class="line">2</span><br><span class="line">3</span><br><span class="line">4</span><br><span class="line">5</span><br><span class="line">6</span><br><span class="line">7</span><br><span class="line">8</span><br></pre></td><td class="code"><pre><span class="line">> CorFlags.exe /nologo .\Newtonsoft.Json.dll</span><br><span class="line">Version : v2.0.50727</span><br><span class="line">CLR Header: 2.5</span><br><span class="line">PE : PE32</span><br><span class="line">CorFlags : 9</span><br><span class="line">ILONLY : 1</span><br><span class="line">32BIT : 0</span><br><span class="line">Signed : 1</span><br></pre></td></tr></table></figure>
<p>For managed <strong>and</strong> native library, use <code>dumpbin /headers</code>.</p>
<figure class="highlight plain"><table><tr><td class="gutter"><pre><span class="line">1</span><br><span class="line">2</span><br><span class="line">3</span><br><span class="line">4</span><br><span class="line">5</span><br><span class="line">6</span><br><span class="line">7</span><br><span class="line">8</span><br><span class="line">9</span><br><span class="line">10</span><br><span class="line">11</span><br><span class="line">12</span><br><span class="line">13</span><br><span class="line">14</span><br></pre></td><td class="code"><pre><span class="line">> dumpbin.exe /headers .\CNTKLibraryManaged-2.0.dll</span><br><span class="line">PE signature found</span><br><span class="line">File Type: DLL</span><br><span class="line">FILE HEADER VALUES</span><br><span class="line"> 8664 machine (x64)</span><br><span class="line"> ...</span><br><span class="line"> 2022 characteristics</span><br><span class="line"> Executable</span><br><span class="line"> Application can handle large (>2GB) addresses</span><br><span class="line"> DLL</span><br><span class="line"></span><br><span class="line">OPTIONAL HEADER VALUES</span><br><span class="line"> 20B magic # (PE32+)</span><br><span class="line">...</span><br></pre></td></tr></table></figure>
<figure class="highlight plain"><table><tr><td class="gutter"><pre><span class="line">1</span><br><span class="line">2</span><br><span class="line">3</span><br><span class="line">4</span><br><span class="line">5</span><br><span class="line">6</span><br><span class="line">7</span><br><span class="line">8</span><br><span class="line">9</span><br><span class="line">10</span><br><span class="line">11</span><br><span class="line">12</span><br><span class="line">13</span><br><span class="line">14</span><br><span class="line">15</span><br></pre></td><td class="code"><pre><span class="line">> dumpbin.exe /headers .\Newtonsoft.Json.dll</span><br><span class="line">...</span><br><span class="line">PE signature found</span><br><span class="line">File Type: DLL</span><br><span class="line">FILE HEADER VALUES</span><br><span class="line"> 14C machine (x86)</span><br><span class="line"> ...</span><br><span class="line"> 2102 characteristics</span><br><span class="line"> Executable</span><br><span class="line"> 32 bit word machine</span><br><span class="line"> DLL</span><br><span class="line"></span><br><span class="line">OPTIONAL HEADER VALUES</span><br><span class="line"> 10B magic # (PE32)</span><br><span class="line">...</span><br></pre></td></tr></table></figure>
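<p>The same fields that <code>corflags</code> and <code>dumpbin /headers</code> report can be read directly from the PE headers. A minimal Python sketch of the machine field and optional-header magic, demonstrated on synthetic header bytes rather than a real DLL:</p>

```python
import struct

MACHINES = {0x014C: "x86", 0x8664: "x64"}
MAGICS = {0x10B: "PE32", 0x20B: "PE32+"}

def pe_bitness(data):
    """Return (machine, magic) for a PE image, as dumpbin /headers would show."""
    if data[:2] != b"MZ":
        raise ValueError("not an MZ executable")
    (e_lfanew,) = struct.unpack_from("<I", data, 0x3C)    # offset of "PE\0\0"
    if data[e_lfanew:e_lfanew + 4] != b"PE\0\0":
        raise ValueError("PE signature not found")
    (machine,) = struct.unpack_from("<H", data, e_lfanew + 4)
    # The COFF header is 20 bytes; the optional header (and its magic) follows.
    (magic,) = struct.unpack_from("<H", data, e_lfanew + 24)
    return MACHINES.get(machine, hex(machine)), MAGICS.get(magic, hex(magic))

# Synthetic 64-bit header: MZ stub, e_lfanew -> "PE\0\0", machine 0x8664, PE32+
fake = bytearray(0x100)
fake[:2] = b"MZ"
struct.pack_into("<I", fake, 0x3C, 0x80)
fake[0x80:0x84] = b"PE\0\0"
struct.pack_into("<H", fake, 0x84, 0x8664)
struct.pack_into("<H", fake, 0x80 + 24, 0x20B)
print(pe_bitness(bytes(fake)))  # ('x64', 'PE32+')
```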
<h2>Missing Dependencies (somewhere deep down the tree)</h2>
<p>tldr; check if you're missing the <a href="http://stackoverflow.com/a/32998963/695964" target="_blank" rel="noopener">Visual C++ Redistributable for Visual Studio XXXX</a>.</p>
<p>This is much harder. You would get an exception:</p>
<figure class="highlight plain"><table><tr><td class="gutter"><pre><span class="line">1</span><br><span class="line">2</span><br><span class="line">3</span><br><span class="line">4</span><br><span class="line">5</span><br><span class="line">6</span><br></pre></td><td class="code"><pre><span class="line">"Message":"An error has occurred.",</span><br><span class="line">"ExceptionMessage":"Unable to load DLL 'CNTKLibraryCSBinding': The specified module could not be found. (Exception from HRESULT: 0x8007007E)",</span><br><span class="line">"ExceptionType":"System.DllNotFoundException",</span><br><span class="line">"StackTrace":" : </span><br><span class="line"> CNTK.CNTKLibPINVOKE.SWIGExceptionHelper.SWIGRegisterExceptionCallbacks_CNTKLib(...)</span><br><span class="line"> CNTK.CNTKLibPINVOKE.SWIGExceptionHelper..cctor()"</span><br></pre></td></tr></table></figure>
<p>It looks as if <code>CNTKLibraryCSBinding.dll</code> couldn't be found, but in fact it is some library that this native library depends on that is missing.</p>
<p>How do you find out what is missing? For a managed assembly loading issue, one could use <a href="https://msdn.microsoft.com/en-us/library/e74a18c4(v=vs.110).aspx" target="_blank" rel="noopener">Fuslogvw.exe</a>; however, it doesn't work for <code>DllImport()</code> issues. There are several solutions suggested on the web. You could try sysinternals <code>procmon</code>, or <code>gflags</code>. But I had luck with <a href="http://www.dependencywalker.com/" target="_blank" rel="noopener">DependencyWalker</a>. It can be used to <strong>statically</strong> analyze the dependency tree, but in the case of <code>DllImport()</code>, which loads at runtime, use its profiling feature: give it the application, start profiling, and run to the point where the exception would be thrown. Then look for libraries that are missing.</p>
<p>In my case, it happened to be the <a href="http://stackoverflow.com/a/32998963/695964" target="_blank" rel="noopener">Visual C++ Redistributable for Visual Studio 2015 (<code>msvcp140.dll</code>)</a> that was missing from my test server. That explains why I could run the same program on my dev box without a problem.</p>
<p>Here are all the online articles that helped:</p>
<ul>
<li><a href="http://stackoverflow.com/questions/3818482/dllimport-generates-system-dllnotfoundexception" target="_blank" rel="noopener">DllImport generates System.DllNotFoundException</a></li>
<li><a href="http://stackoverflow.com/questions/10774250/dllnotfoundexception-with-hresult-0x8007007e-when-loading-64-bit-dll" target="_blank" rel="noopener">DllNotFoundException with HRESULT 0x8007007E when loading 64-bit dll</a></li>
<li><a href="http://stackoverflow.com/questions/269181/when-a-dll-is-not-found-while-p-invoking-how-can-i-get-a-message-about-the-spec" target="_blank" rel="noopener">When a DLL is not found while P/Invoking, how can I get a message about the specific unmanaged DLL that is missing?</a></li>
<li><a href="http://stackoverflow.com/questions/2093485/system-dllnotfoundexception-unable-to-load-dll-on-window-2003" target="_blank" rel="noopener">System.DllNotFoundException: Unable to load DLL on window 2003</a></li>
<li><a href="http://stackoverflow.com/questions/39223976/dllnotfoundexception-pinvoke-issue" target="_blank" rel="noopener">DllNotFoundException PInvoke issue</a></li>
</ul>
<h1>Chicken Scheme Notes</h1>
<p><em><a href="http://kflu.github.io/2017/02/22/2017-02-22-chicken-scheme-notes/" target="_blank" rel="noopener">2017-02-22</a></em></p>
<p>There are many ways to install Chicken Scheme on Windows. I recommend compiling it from source, because down the road you may need to compile an egg (as I had to with the <code>openssl</code> egg), and you need a compiler that matches the one that compiled chicken. This avoids a lot of mysterious problems, and as of now it is the only way that worked for me. So here you go.</p>
<h1>Compiling Chicken in Mingw</h1>
<p>This guide shows how to compile chicken using msys. I recommend msys over vanilla mingw, since the same msys will most likely be needed later to compile egg dependencies.</p>
<p>Download chicken source from <a href="http://code.call-cc.org/" target="_blank" rel="noopener">here</a>. In msys console, unzip to <code>~/work/chicken</code>. Now determine where on the <strong>host</strong> system you want to install chicken. I want to install it at <code>c:\chicken</code>, so in msys, do</p>
<pre><code>mkdir /c/chicken
make PLATFORM=mingw-msys PREFIX=/c/chicken
make PLATFORM=mingw-msys PREFIX=/c/chicken install
</code></pre>
<p>Once installed, add <code><mingw>/bin</code> to PATH so the chicken executables know where to find the gcc runtime (<code>libgcc_*</code>). Otherwise, you'll see <a href="http://stackoverflow.com/q/4702732/695964" target="_blank" rel="noopener">this error</a>. I tried to compile chicken with the gcc option <code>-static-libgcc</code>, but the compilation would fail.</p>
<p>And you may need to add the installation path (<code>c:\chicken</code> in this case) to <code>CHICKEN_PREFIX</code>.</p>
<p><strong>tip</strong> if you encounter problems with <code>csc</code>, always check <code>CHICKEN_PREFIX</code>.</p>
<h2>Test it out</h2>
<p>You can run post-install test according to the <a href="https://code.call-cc.org/cgi-bin/gitweb.cgi?p=chicken-core.git;a=blob;f=README;h=97888053ca98704d761ac944ec555885391d7d9a;hb=6ea24b60e7ef93e2f7c668bdb0437c0189e47dcd" target="_blank" rel="noopener"><code>README</code></a>, with</p>
<pre><code>make PLATFORM=mingw-msys PREFIX=/c/chicken check
</code></pre>
<p>Also:</p>
<ul>
<li>run <code>csi</code> to test the interpreter</li>
<li>run <code>csc</code> to test the compiler</li>
<li>try installing extensions with <code>chicken-install <egg></code>.</li>
</ul>
<h1>Problem installing <code>openssl</code> egg</h1>
<p><code>openssl</code> requires an installation of openssl libraries and the header files accessible by chicken compiler. The easiest way I found is to compile openssl (straightforward) and drop the header files (<code>include/openssl/*</code>) and lib files (<code>libssl.a</code>, <code>libcrypto.a</code>) to chicken directories.</p>
<ul>
<li>headers are at <code><chicken>/include/chicken</code></li>
<li>libs are at <code><chicken>/lib</code></li>
</ul>
<h2>Compiling OpenSSL in Mingw</h2>
<p>Download openssl source (v1.0.2k) from <a href="https://www.openssl.org/source/" target="_blank" rel="noopener">here</a>. Extract it to msys home directory. From msys console, follow instruction in <code>INSTALL</code> file:</p>
<figure class="highlight plain"><table><tr><td class="gutter"><pre><span class="line">1</span><br><span class="line">2</span><br></pre></td><td class="code"><pre><span class="line">./config</span><br><span class="line">make</span><br></pre></td></tr></table></figure>
<p>This compiles <code>libssl.a</code> and <code>libcrypto.a</code> at source root. All header files are copied to <code>include/openssl</code>. They can then be copied to the corresponding chicken directories.</p>
<h1>Pending Issues</h1>
<p>Extensions cannot be statically linked, e.g., <code>http-client</code>. See this <a href="http://stackoverflow.com/q/38928186/695964" target="_blank" rel="noopener">SO question</a>. <a href="http://www.foldling.org/scheme.html#compiling-statically-linked-chicken-scheme-programs-with-extensions" target="_blank" rel="noopener">This blog post</a> offers a way to do it by manually compiling the dependencies with <code>csc</code>, which might be worth a try. <a href="https://lists.nongnu.org/archive/html/chicken-users/2014-04/msg00000.html" target="_blank" rel="noopener">This thread</a> reports a similar problem.</p>
<h1>Relevant notes</h1>
<p>The below are notes I took along fixing the issues, mostly building eggs. They were actions I took before fully compiling chicken from source. Might still offer some insights.</p>
<p>The chicken binaries (especially <code>csc</code>) are tightly tied to the location of the C compiler that compiled them. I got a working copy of chicken and <code>csc</code> by installing it and its dependency <code>mingw</code> from Chocolatey. Chicken is installed at <code>c:\chicken</code>; <code>mingw</code> is installed at <code>c:\tools\mingw</code>.</p>
<p>However, I had problems installing the <code>openssl</code> egg. Here I document the attempts to fix them.</p>
<h2>Problem: cannot find openssl/rand.h</h2>
<p>Initially solved by downloading the openssl developer library and extracting the headers from <a href="http://gnuwin32.sourceforge.net/packages/openssl.htm" target="_blank" rel="noopener">OpenSSL for Windows</a> into the chicken installation path.</p>
<p><strong>tip</strong> Chicken installation structure:</p>
<ul>
<li>headers are at <code><chicken>/include/chicken</code></li>
<li>libs are at <code><chicken>/lib</code></li>
</ul>
<p>This leads to the next problem. A proper fix for this would be to build openssl from mingw.</p>
<h2>Problem: header naming clash</h2>
<figure class="highlight plain"><table><tr><td class="gutter"><pre><span class="line">1</span><br><span class="line">2</span><br></pre></td><td class="code"><pre><span class="line">C:/chicken/include/chicken/openssl/ossl_typ.h:176:33: error: expected ')' before numeric constant</span><br><span class="line"> typedef struct ocsp_response_st OCSP_RESPONSE;</span><br></pre></td></tr></table></figure>
<p>There's a header macro name clash (<code>OCSP_RESPONSE</code>) between <code>wincrypt.h</code> and <code>ossl_typ.h</code>. Inspired by <a href="https://sourceforge.net/p/podofo/mailman/message/35158528/" target="_blank" rel="noopener">this issue on SourceForge</a> and its diff attachment, I came up with the following fix for <code>openssl.scm</code> in the egg:</p>
<figure class="highlight plain"><table><tr><td class="gutter"><pre><span class="line">1</span><br><span class="line">2</span><br><span class="line">3</span><br><span class="line">4</span><br><span class="line">5</span><br><span class="line">6</span><br><span class="line">7</span><br><span class="line">8</span><br><span class="line">9</span><br><span class="line">10</span><br><span class="line">11</span><br><span class="line">12</span><br><span class="line">13</span><br><span class="line">14</span><br><span class="line">15</span><br><span class="line">16</span><br><span class="line">17</span><br><span class="line">18</span><br><span class="line">19</span><br><span class="line">20</span><br><span class="line">21</span><br><span class="line">22</span><br><span class="line">23</span><br></pre></td><td class="code"><pre><span class="line">*** "openssl\\openssl.orig.scm" Wed Feb 22 01:21:45 2017</span><br><span class="line">--- "openssl\\openssl.scm" Wed Feb 22 01:21:12 2017</span><br><span class="line">***************</span><br><span class="line">*** 62,68 ****</span><br><span class="line">--- 62,79 ----</span><br><span class="line"> #ifdef _MSC_VER</span><br><span class="line"> #include <winsock2.h></span><br><span class="line"> #else</span><br><span class="line">+</span><br><span class="line">+ #ifndef NOCRYPT</span><br><span class="line">+ #define PODOFO_NO_WINCRYPT</span><br><span class="line">+ #define NOCRYPT</span><br><span class="line">+ #endif</span><br><span class="line">+</span><br><span class="line"> #include <ws2tcpip.h></span><br><span class="line">+</span><br><span class="line">+ #ifdef PODOFO_NO_WINCRYPT</span><br><span class="line">+ #undef NOCRYPT</span><br><span class="line">+ #endif</span><br><span class="line">+</span><br><span class="line"> #endif</span><br><span class="line"></span><br><span class="line"> #include <openssl/rand.h></span><br></pre></td></tr></table></figure>
<p><strong>Tip</strong> to debug chicken installation problem, use</p>
<pre><code>chicken-install -k -debug <egg>
</code></pre>
<p><code>-k</code> keeps the temporary working directory where the egg files are located. It is helpful for allowing subsequent installation from the locally fixed egg files.</p>
<p><strong>Tip</strong> to install egg from local directory:</p>
<pre><code>chicken-install -transport local -location C:\Users\username\AppData\Local\Temp\tempd5e8.3100\ openssl
</code></pre>
<p>See <a href="https://wiki.call-cc.org/man/4/Extensions#other-modes-of-installation" target="_blank" rel="noopener">"other modes of installation"</a>.</p>
<p>This leads to the next problem.</p>
<h2>Problem: library incompatibility</h2>
<p>Now that compilation succeeds, the linker couldn't locate <code>libssl</code> and <code>libcrypto</code>. I again solved this by dropping these two files from the <a href="http://gnuwin32.sourceforge.net/packages/openssl.htm" target="_blank" rel="noopener">openssl developer library</a> into <code><chicken>/lib</code>. Now I got a library incompatibility at the linker stage:</p>
<figure class="highlight plain"><table><tr><td class="gutter"><pre><span class="line">1</span><br><span class="line">2</span><br></pre></td><td class="code"><pre><span class="line">C:/tools/mingw64/bin/../lib/gcc/x86_64-w64-mingw32/5.3.0/../../../../x86_64-w64-mingw32/bin/ld.exe: skipping incompatible C:\chicken\lib\/libssl.a when searching for -lssl</span><br><span class="line">C:/tools/mingw64/bin/../lib/gcc/x86_64-w64-mingw32/5.3.0/../../../../x86_64-w64-mingw32/bin/ld.exe: cannot find [...]</span><br></pre></td></tr></table></figure>
<h1>How .NET runtime locates assemblies</h1>
<p><em><a href="http://kflu.github.io/2017/02/15/2017-02-15-dotnet-locating-assemblies/" target="_blank" rel="noopener">2017-02-15</a></em></p>
<p><a href="https://msdn.microsoft.com/en-us/library/yx7xezcf(v=vs.110).aspx" target="_blank" rel="noopener">This article</a> talks about this topic in detail.</p>
<p>Usually the assemblies are found in the same directory as the running process, but this can be extended by using the application config file or <code>web.config</code> (same schema). The main configuration point is the <a href="https://msdn.microsoft.com/en-us/library/twy1dw1e(v=vs.110).aspx" target="_blank" rel="noopener"><code><assemblyBinding></code></a> tag. There are two ways to use it:</p>
<p>In the first approach, you can specify each individual assembly with <code><dependentAssembly></code> tag. It even allows you to specify version redirection:</p>
<figure class="highlight plain"><table><tr><td class="gutter"><pre><span class="line">1</span><br><span class="line">2</span><br><span class="line">3</span><br><span class="line">4</span><br><span class="line">5</span><br></pre></td><td class="code"><pre><span class="line"><dependentAssembly></span><br><span class="line"> <assemblyIdentity name="Newtonsoft.Json" publicKeyToken="..." culture="neutral" /></span><br><span class="line"> <codeBase version="6.0.0.0" href="file:///C:\Program Files\foo\bar\Newtonsoft.Json.dll" /></span><br><span class="line"> <bindingRedirect oldVersion="0.0.0.0-6.0.0.0" newVersion="6.0.0.0" /></span><br><span class="line"></dependentAssembly></span><br></pre></td></tr></table></figure>
<p>In the second approach, you can specify "probing" sub-directories with the <code><probing></code> tag. The limitation is that the probing locations have to be sub-directories of the process directory.</p>
<figure class="highlight plain"><table><tr><td class="gutter"><pre><span class="line">1</span><br><span class="line">2</span><br><span class="line">3</span><br><span class="line">4</span><br><span class="line">5</span><br><span class="line">6</span><br><span class="line">7</span><br></pre></td><td class="code"><pre><span class="line"><configuration> </span><br><span class="line"> <runtime> </span><br><span class="line"> <assemblyBinding xmlns="urn:schemas-microsoft-com:asm.v1"> </span><br><span class="line"> <probing privatePath="bin;bin2\subbin;bin3"/> </span><br><span class="line"> </assemblyBinding> </span><br><span class="line"> </runtime> </span><br><span
<h1><a href="http://kflu.github.io/2017/02/14/2017-02-14-owin-katana-aspnet/">OWIN, Katana, ASP.NET Web API / MVC</a> (2017-02-14)</h1>
<p><a href="https://www.asp.net/media/4071077/aspnet-web-api-poster.pdf" target="_blank" rel="noopener">This ASP.NET Web API poster</a> is awesome!</p>
<p>tl;dr - It is very easy to write a self-hosted C#/F# ASP.NET MVC/Web API application without any Visual Studio nonsense (templates, plugins, etc.)</p>
<p>Reading <a href="https://docs.microsoft.com/en-us/aspnet/aspnet/overview/owin-and-katana/an-overview-of-project-katana" target="_blank" rel="noopener">An overview of Project Katana</a> (A HIGHLY RECOMMENDED READ) helped a lot in understanding how the pieces of the Microsoft web stack relate. OWIN is the spec laying out how to build middleware and web apps in a chainable fashion. This is very similar to the apps concept in Express.js. Anything from something as small as authentication to something as large as a full web framework can be defined as a middleware.</p>
<p>Katana is an implementation of OWIN and related tools.</p>
<p>On top of this, you can then write ASP.NET MVC/Web API and host them as OWIN apps.</p>
<p>To self-host a Web API app, there are two options:</p>
<ol>
<li>use OWIN selfhost (<code>Microsoft.Owin.SelfHost</code>)</li>
<li>use Web API's home-brew self-hosting (<code>System.Web.Http.SelfHost</code>)</li>
</ol>
<p>If #2 is chosen, it is completely separate from the OWIN stack.</p>
<ul>
<li>The core OWIN nuget package is <code>OWIN</code></li>
<li>The core Katana nuget package is <code>Microsoft.Owin</code></li>
<li>The OWIN self host package is <code>Microsoft.Owin.SelfHost</code></li>
<li>The core ASP.NET Web API package is <code>Microsoft.AspNet.WebApi.Core</code>, but the assembly name is <code>System.Web.Http</code> (maddeningly confusing)</li>
</ul>
<p>Useful documents for developing Web API and self hosting:</p>
<ul>
<li>To understand OWIN/Katana/ASP.NET: <a href="https://docs.microsoft.com/en-us/aspnet/aspnet/overview/owin-and-katana/an-overview-of-project-katana" target="_blank" rel="noopener">An overview of Project Katana</a></li>
<li><a href="https://docs.microsoft.com/en-us/aspnet/web-api/overview/web-api-routing-and-actions/routing-and-action-selection" target="_blank" rel="noopener">Routing and Action Selection in ASP.NET Web API</a></li>
<li><a href="https://docs.microsoft.com/en-us/aspnet/web-api/overview/advanced/configuring-aspnet-web-api" target="_blank" rel="noopener">Configuring ASP.NET Web API 2</a></li>
<li><a href="https://docs.microsoft.com/en-us/aspnet/web-api/overview/older-versions/self-host-a-web-api" target="_blank" rel="noopener">Self-Host ASP.NET Web API 1</a></li>
<li><a href="https://docs.microsoft.com/en-us/aspnet/web-api/overview/hosting-aspnet-web-api/use-owin-to-self-host-web-api" target="_blank" rel="noopener">Use OWIN to Self-Host ASP.NET Web API 2</a></li>
<li><a href="https://docs.microsoft.com/en-us/aspnet/web-api/overview/formats-and-model-binding/media-formatters" target="_blank" rel="noopener">Media Formatters in ASP.NET Web API 2</a> (<a href="https://docs.microsoft.com/en-us/aspnet/web-api/overview/formats-and-model-binding/json-and-xml-serialization" target="_blank" rel="noopener">JSON</a>)</li>
</ul>
<p>Here's <a href="https://gist.github.com/kflu/32e0ec23eb8a57294fa2ab1e5bd33869" target="_blank" rel="noopener">a vanilla Web API application</a> without any Visual Studio nonsense:</p>
<figure class="highlight plain"><table><tr><td class="gutter"><pre><span class="line">1</span><br><span class="line">2</span><br><span class="line">3</span><br><span class="line">4</span><br><span class="line">5</span><br><span class="line">6</span><br><span class="line">7</span><br><span class="line">8</span><br><span class="line">9</span><br><span class="line">10</span><br><span class="line">11</span><br><span class="line">12</span><br><span class="line">13</span><br><span class="line">14</span><br><span class="line">15</span><br><span class="line">16</span><br><span class="line">17</span><br><span class="line">18</span><br><span class="line">19</span><br><span class="line">20</span><br><span class="line">21</span><br><span class="line">22</span><br><span class="line">23</span><br><span class="line">24</span><br><span class="line">25</span><br><span class="line">26</span><br><span class="line">27</span><br><span class="line">28</span><br><span class="line">29</span><br><span class="line">30</span><br><span class="line">31</span><br><span class="line">32</span><br><span class="line">33</span><br><span class="line">34</span><br><span class="line">35</span><br><span class="line">36</span><br><span class="line">37</span><br><span class="line">38</span><br><span class="line">39</span><br><span class="line">40</span><br><span class="line">41</span><br><span class="line">42</span><br><span class="line">43</span><br><span class="line">44</span><br><span class="line">45</span><br><span class="line">46</span><br><span class="line">47</span><br><span class="line">48</span><br><span class="line">49</span><br><span class="line">50</span><br><span class="line">51</span><br><span class="line">52</span><br><span class="line">53</span><br><span class="line">54</span><br><span class="line">55</span><br></pre></td><td class="code"><pre><span class="line">namespace Foo</span><br><span class="line">{</span><br><span class="line"> using System;</span><br><span class="line"> using 
System.Net;</span><br><span class="line"> using System.Threading;</span><br><span class="line"> using System.Linq;</span><br><span class="line"> using System.Collections.Generic;</span><br><span class="line"> using System.Text;</span><br><span class="line"></span><br><span class="line"> using Newtonsoft.Json;</span><br><span class="line"> using System.Web.Http;</span><br><span class="line"> using System.Web.Http.SelfHost;</span><br><span class="line"></span><br><span class="line"> /// <summary></span><br><span class="line"> /// Main class</span><br><span class="line"> /// </summary></span><br><span class="line"> public static class MainClass</span><br><span class="line"> {</span><br><span class="line"> static void Main(string[] args)</span><br><span class="line"> {</span><br><span class="line"> var addr = new Uri("http://localhost:8001");</span><br><span class="line"> using (var config = new HttpSelfHostConfiguration(addr))</span><br><span class="line"> {</span><br><span class="line"> config.Routes.MapHttpRoute("default", "api/{controller}");</span><br><span class="line"> using (var srv = new HttpSelfHostServer(config))</span><br><span class="line"> {</span><br><span class="line"> srv.OpenAsync().Wait();</span><br><span class="line"> Console.WriteLine("Server started");</span><br><span class="line"></span><br><span class="line"> Console.ReadLine();</span><br><span class="line"> Console.WriteLine("done");</span><br><span class="line"> }</span><br><span class="line"> }</span><br><span class="line"> }</span><br><span class="line"> }</span><br><span class="line"></span><br><span class="line"> public class ProductController : ApiController</span><br><span class="line"> {</span><br><span class="line"> public class Result</span><br><span class="line"> {</span><br><span class="line"> public int Code;</span><br><span class="line"> public object Metadata;</span><br><span class="line"> }</span><br><span class="line"></span><br><span class="line"> [HttpPost]</span><br><span 
class="line"> public Result Post([FromBody]int[] data)</span><br><span class="line"> {</span><br><span class="line"> return new Result</span><br><span class="line"> {</span><br><span class="line"> Code = 1,</span><br><span class="line"> Metadata = data,</span><br><span class="line"> };</span><br><span class="line"> }</span><br><span class="line"> }</span><br><span class="line">}</span><br></pre></td></tr></table></figure>
<h1><a href="http://kflu.github.io/2017/02/09/2017-02-09-nginx-proxy-ssl/">Nginx Reverse Proxy, SSL</a> (2017-02-09)</h1>
<p>Nginx is awesome - awesomely simple! I set it up on my FreeBSD home server. I like the idea of a reverse proxy: for any application that serves HTTP on a local port, Nginx can expose it publicly by proxying. Here's the scenario:</p>
<p>I have a TiddlyWiki node.js app, which is ONLY capable of HTTP, not HTTPS. That's dangerous with basic auth (the only auth TiddlyWiki supports). But no problem: assuming TiddlyWiki serves on local port 8080, we can use Nginx to proxy it to a public-facing HTTPS port 443. Here's how.</p>
<h1>Install Nginx on FreeBSD</h1>
<p>This is as simple as</p>
<pre><code>pkg install nginx
</code></pre>
<p>That's only ~1MB.</p>
<p>Note that configurations are installed at <code>/usr/local/etc/</code>:</p>
<ul>
<li>Main configuration at <code>/usr/local/etc/nginx/nginx.conf</code></li>
<li>Service script at <code>/usr/local/etc/rc.d/nginx</code></li>
</ul>
<h1>Setup Nginx</h1>
<p>Enable the service in <code>/etc/rc.conf</code>:</p>
<pre><code>nginx_enable="YES"
</code></pre>
<p>Start the service manually:</p>
<pre><code>service nginx start
</code></pre>
<p>Generate an SSL certificate (refer to the <a href="https://www.freebsd.org/doc/handbook/openssl.html" target="_blank" rel="noopener">freebsd doc</a>), and copy the resulting files to <code>/usr/local/etc/nginx/</code>:</p>
<pre><code>openssl req -new -nodes -out cert.crt -keyout cert.key -sha256 -newkey rsa:2048
</code></pre>
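Note that without <code>-x509</code>, <code>openssl req</code> emits a certificate signing request rather than a certificate. For a self-signed certificate that Nginx can use directly, a non-interactive sketch (the subject, validity period, and file paths are my own choices, not from the post):

```shell
# Self-signed cert in one shot; -x509 makes it a certificate (not a CSR),
# -subj skips the interactive prompts, -nodes leaves the key unencrypted.
openssl req -x509 -new -nodes -sha256 -newkey rsa:2048 -days 365 \
    -subj "/CN=localhost" \
    -keyout /tmp/cert.key -out /tmp/cert.crt
```

Afterwards, point <code>ssl_certificate</code> and <code>ssl_certificate_key</code> at the two files.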
<p>Configure <code>/usr/local/etc/nginx/nginx.conf</code> according to the nginx document (<a href="http://nginx.org/en/docs/beginners_guide.html#control" target="_blank" rel="noopener">2</a>,<a href="https://www.nginx.com/resources/admin-guide/reverse-proxy/" target="_blank" rel="noopener">3</a>,<a href="http://nginx.org/en/docs/http/configuring_https_servers.html" target="_blank" rel="noopener">4</a>):</p>
<p>Disable the HTTP 80 section by commenting out the below section:</p>
<pre><code>server {
listen 80;
...
}
</code></pre>
<p>Enable the HTTPS section likewise.</p>
<pre><code># HTTPS server
#
server {
listen 443 ssl;
server_name localhost;
ssl_certificate cert.crt;
ssl_certificate_key cert.key;
...
location / {
proxy_pass http://localhost:8080;
}
}
</code></pre>
<p>The <code>proxy_pass</code> line tells Nginx to forward requests to the local endpoint on port 8080.</p>
<h1>References</h1>
<ul>
<li><a href="http://nginx.org/en/docs/beginners_guide.html#control" target="_blank" rel="noopener">nginx newbie doc</a></li>
<li><a href="https://www.nginx.com/resources/admin-guide/reverse-proxy/" target="_blank" rel="noopener">nginx reverse proxy</a></li>
<li><a href="http://nginx.org/en/docs/http/configuring_https_servers.html" target="_blank" rel="noopener">nginx https config</a></li>
<li><a href="https://www.freebsd.org/doc/handbook/openssl.html" target="_blank" rel="noopener">freebsd ssl</a></li>
<li><a href="http://security.stackexchange.com/questions/8110/what-are-the-risks-of-self-signing-a-certificate-for-ssl" target="_blank" rel="noopener">What are the risks of self signing a certificate for SSL?</a></li>
</ul>
<h1><a href="http://kflu.github.io/2017/02/09/2017-02-09-freebsd-daemon/">FreeBSD daemon</a> (2017-02-09)</h1>
<p>The canonical way to create system services is the <code>rc.d</code> system. While those scripts are easy to write, thanks to this <a href="https://www.freebsd.org/doc/en_US.ISO8859-1/articles/rc-scripting/" target="_blank" rel="noopener">guide</a>, I ran into several issues getting my script to run correctly. That is, although I can manually start the service (as root) with <code>/etc/rc.d/myservice start</code> (<em>maybe I should try <code>service myservice start</code> next time?</em>), when the system boots it fails to run, with errors like <code>bash</code> couldn't be found, or <code>node</code> couldn't be found.</p>
<p>I've had better luck with crontab, though. The nice thing is that I get to reuse the script I wrote for rc.d. I just install a crontab line <code>@reboot path/to/script start</code>:</p>
<pre><code>[root@bsd /usr/jails/tiddly/var/cron/tabs]# cat root
# DO NOT EDIT THIS FILE - edit the master and reinstall.
# (/tmp/crontab.3lLCXYWdq3 installed on Thu Feb 9 21:07:27 2017)
# (Cron version -- $FreeBSD: releng/11.0/usr.sbin/cron/crontab/crontab.c 305427 2016-09-05 16:43:57Z emaste $)
@reboot /root/wiki/wiki start
</code></pre>
<p>Normally a daemon can be created by putting this in a shell script:</p>
<pre><code>(nohup /path/to/daemon.sh >> log_file 2>&1 &)
</code></pre>
<p>Quoted from one of the comments:</p>
<blockquote>
<p>The parentheses in (nohup sleep 20 &) do make a difference. They specify a sub-shell. Inside the sub-shell, the nohup command executes the sleep command in the background. When it returns, the sub-shell exits, so the sleep is orphaned, no longer 'owned' by the current shell.</p>
</blockquote>
<p>Refer to this <a href="http://stackoverflow.com/questions/958249/whats-the-difference-between-nohup-and-a-daemon" target="_blank" rel="noopener">awesome SO post</a>.</p>
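As a quick sanity check of the sub-shell trick, here is a minimal sketch (the log path and the fake daemon command are illustrative):

```shell
# The parenthesized sub-shell exits immediately, orphaning the nohup'd
# process; its output still lands in the log file.
LOG=/tmp/daemon_demo.log
rm -f "$LOG"
(nohup sh -c 'echo daemon started; sleep 1' >> "$LOG" 2>&1 &)
sleep 2           # give the orphaned process time to write and exit
cat "$LOG"
```

After the sub-shell returns, the shell that launched it has no job-control relationship with the daemon, which is exactly the point.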
<p>Another way that's more awesome is to use <code>screen</code>:</p>
<pre><code>screen -d -m /path/to/daemon.sh
</code></pre>
<p><code>screen</code> runs the script and then <strong>detaches</strong>. What's cool about this approach is that at any time you can reattach to the <code>screen</code> session and interact with the daemon! I still need to figure out a way to make <code>daemon.sh</code> also produce log output when run this way.</p>
<h1><a href="http://kflu.github.io/2017/02/06/2017-02-06-freebsd-jails/">FreeBSD jails configuration</a> (2017-02-06)</h1>
<h1>Set up <code>ezjail</code></h1>
<p>Follow this <a href="https://www.freebsd.org/doc/en_US.ISO8859-1/books/handbook/jails-ezjail.html" target="_blank" rel="noopener">doc</a>:</p>
<p>clone <code>lo0</code> to <code>lo1</code> in <code>/etc/rc.conf</code>:</p>
<pre><code>cloned_interfaces="lo1"
</code></pre>
<p>To create it without reboot: <code>service netif cloneup</code>.</p>
<p>Install <code>ezjail</code></p>
<pre><code>cd /usr/ports/sysutils/ezjail & make install clean
</code></pre>
<p>Enable <code>ezjail</code> in <code>rc.conf</code>: <code>ezjail_enable="YES"</code></p>
<p>Start <code>ezjail</code>: <code>service ezjail start</code>.</p>
<p>To setup the base environment: <code>ezjail-admin install -p</code>.</p>
<p>Copy host's <code>resolv.conf</code> to jail's template so each newly created jail
is able to resolve domain names:</p>
<pre><code>host> cp /etc/resolv.conf /usr/jails/newjail/etc/
</code></pre>
<h1>Networking</h1>
<h2>Conventional way</h2>
<p>Each jail must be assigned an IP. Traditionally in <code>ezjail</code>, this is done
as part of jail creation:</p>
<pre><code>ezjail-admin create dnsjail 'lo1|127.0.1.1,em0|192.168.1.50'
</code></pre>
<p>This assigns <code>dnsjail</code> a private IP <code>127.0.1.1</code> on lo1, and an aliased IP
<code>192.168.1.50</code>. The latter is an alias IP the <strong>host</strong> OS creates; you can see
it with <code>ifconfig em0</code> on the host, and the host is reachable at this IP
on its LAN. For more information about IP aliasing, see <a href="https://www.freebsd.org/doc/handbook/configtuning-virtual-hosts.html" target="_blank" rel="noopener">virtual host</a>.</p>
<p>Then in the jail, edit <code>hosts</code> to change <code>127.0.0.1</code> to <code>127.0.1.1</code> and add
the jail's hostname to each entry. This is <strong>essential</strong> for it to access
the internet.</p>
<p>But this is hard to manage: each network-facing jail needs its own IP, and you have to
configure the router for each of them. It would be better if the jails
could do networking through the host's IP address. That's what the next section is about.</p>
<h2>Networking through host's IP</h2>
<p>Inspired by <a href="https://www.davd.eu/posts-freebsd-jails-with-a-single-public-ip-address/" target="_blank" rel="noopener">this post</a>, this is done through NAT. You still need a pool
of IPs, but they don't need to be aliases of the host's IP.</p>
<p>In <code>/etc/rc.conf</code>, add:</p>
<pre><code>cloned_interfaces="lo1"
ipv4_addrs_lo1="192.168.60.1-9/29"
</code></pre>
<p>Note the range <code>192.168.60.1 ~ 192.168.60.9</code>. I previously used <code>192.168.0.1-9</code> and
lost the network connection to my host.</p>
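As a side note on the <code>/29</code>: it leaves 3 host bits, i.e., 8 addresses per block, which is why a small pool like this fits. A quick arithmetic check:

```shell
# Number of addresses covered by a /29 prefix: 2^(32-29) = 8
prefix=29
addrs=$(( 1 << (32 - prefix) ))
echo "a /$prefix block spans $addrs addresses"
```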
<p>Now restart <code>netif</code>: <code>host> service netif restart</code>. And you should see the newly created
IPs.</p>
<pre><code>em0: flags=8843<UP,BROADCAST,RUNNING,SIMPLEX,MULTICAST> metric 0 mtu 1500
options=4219b<RXCSUM,TXCSUM,VLAN_MTU,VLAN_HWTAGGING,VLAN_HWCSUM,TSO4,WOL_MAGIC,VLAN_HWTSO>
ether ...
inet 192.168.0.7 netmask 0xffffff00 broadcast 192.168.0.255
nd6 options=29<PERFORMNUD,IFDISABLED,AUTO_LINKLOCAL>
media: Ethernet autoselect (100baseTX <full-duplex>)
status: active
lo0: flags=8049<UP,LOOPBACK,RUNNING,MULTICAST> metric 0 mtu 16384
options=600003<RXCSUM,TXCSUM,RXCSUM_IPV6,TXCSUM_IPV6>
inet6 ::1 prefixlen 128
inet6 fe80::1%lo0 prefixlen 64 scopeid 0x2
inet 127.0.0.1 netmask 0xff000000
nd6 options=21<PERFORMNUD,AUTO_LINKLOCAL>
groups: lo
lo1: flags=8049<UP,LOOPBACK,RUNNING,MULTICAST> metric 0 mtu 16384
options=600003<RXCSUM,TXCSUM,RXCSUM_IPV6,TXCSUM_IPV6>
inet 192.168.60.1 netmask 0xfffffff8
inet 192.168.60.2 netmask 0xffffffff
inet 192.168.60.3 netmask 0xffffffff
inet 192.168.60.4 netmask 0xffffffff
inet 192.168.60.5 netmask 0xffffffff
inet 192.168.60.6 netmask 0xffffffff
inet 192.168.60.7 netmask 0xffffffff
inet 192.168.60.8 netmask 0xffffffff
inet 192.168.60.9 netmask 0xffffffff
nd6 options=29<PERFORMNUD,IFDISABLED,AUTO_LINKLOCAL>
groups: lo
</code></pre>
<p>Now we use <code>pf</code> to map traffic to and from the jails.</p>
<p>Enable <code>pf</code> by adding to <code>rc.conf</code>: <code>pf_enable="YES"</code>. Edit <code>/etc/pf.conf</code>:</p>
<pre><code># Public IP address
IP_PUB="<host's public IP>"
# Packet normalization
scrub in all
# Allow outbound connections from within the jails
nat on em0 from lo1:network to any -> (em0)
# webserver jail at 192.168.60.2
rdr on em0 proto tcp from any to $IP_PUB port 443 -> 192.168.60.2
rdr on em0 proto tcp from any to $IP_PUB port 80 -> 192.168.60.2
# .. or map jail's host's 80 to jail's 8080:
# rdr on em0 proto tcp from any to $IP_PUB port 80 -> 192.168.60.2 port 8080
# mailserver jail at 192.168.60.3
rdr on em0 proto tcp from any to $IP_PUB port 25 -> 192.168.60.3
rdr on em0 proto tcp from any to $IP_PUB port 587 -> 192.168.60.3
rdr on em0 proto tcp from any to $IP_PUB port 143 -> 192.168.60.3
rdr on em0 proto tcp from any to $IP_PUB port 993 -> 192.168.60.3
</code></pre>
<p>Start <code>pf</code>: <code>host> service pf start</code></p>
<p>Now creating a jail is simply:</p>
<pre><code>ezjail-admin create <jail_name> <IP>
</code></pre>
<p>where <code>IP</code> is one of those of <code>lo1</code>'s newly created, e.g., <code>192.168.60.2</code>.</p>
<p>We can set up another interface <code>lo2</code> without configuring NAT for its network;
jails on it are then restricted to LAN-only access:</p>
<pre><code>cloned_interfaces="lo1 lo2"
ipv4_addrs_lo1="192.168.60.1-9/29" # Set up NAT for them
ipv4_addrs_lo2="192.168.70.1-9/29" # Don't set up NAT for them
</code></pre>
<p>I observed that jails with LAN-only access start more slowly, maybe because services
requiring internet access time out during startup.</p>
<h1>Jail accessing file system outside of jail</h1>
<p>This can be done with a <code>nullfs</code> mount - basically mounting part of the host file system
under the jail's root:</p>
<pre><code>mkdir /usr/jails/<jail_name>/data
mount -t nullfs -o ro /data /usr/jails/<jail_name>/data
</code></pre>
<p>Alternatively, add this to the jail-specific <code>fstab</code> at: <code>/etc/fstab.<jail_name></code>:</p>
<pre><code>/data /usr/jails/<jail_name>/data nullfs ro
</code></pre>
<p>However there's a bug I'm currently investigating, where the
mount yields inconsistent subfolders.</p>
<h1>Working with jails</h1>
<ul>
<li>List: <code>jls [-v]</code></li>
<li>Start/stop: <code>ezjail-admin <start|stop|restart> <jail_name></code></li>
<li>delete: <code>ezjail-admin delete [-w] <jail_name></code></li>
<li><code>ezjail</code>'s per-jail configuration is in directory <code>/usr/local/etc/ezjail</code></li>
<li><code>ezjail</code>'s per-jail root is the <code>/usr/jails/<jail_name></code> directory. Here you
can modify settings that were fixed at jail creation, e.g., the IP.</li>
<li>In jails you can't use <code>ping</code> to test network connectivity; instead, use
<code>telnet google.com 80</code>.</li>
<li>Jail-specific <code>fstab</code> are at <code>/etc/fstab.<jail_name></code></li>
</ul>
<h1>Accessing mounted file systems from inside jail</h1>
<p>The host file system can be mounted into a jail by modifying <code>/etc/fstab.<jail_name></code>.
Inside the jail, the mounted file system has the same ACLs as on the host, but
owners/groups show up as numeric IDs. If a user inside the jail (e.g., <code>www</code>) needs to
access the file system, first create a user/group inside the jail with the
corresponding owner/group ID, then grant the target jail user the corresponding
ACL inside the jail. Example:</p>
<pre><code># /tmp/foo/ has 333:8000
jail> pw groupadd foo_group -g 8000 # create foo_group with ID 8000 inside jail
jail> pw groupmod foo_group -m www # add jail user `www` to foo_group
</code></pre>
<h1>References</h1>
<ul>
<li><a href="https://www.freebsd.org/doc/en_US.ISO8859-1/books/handbook/jails-ezjail.html" target="_blank" rel="noopener">FreeBSD EZJail</a></li>
<li><a href="https://www.davd.eu/posts-freebsd-jails-with-a-single-public-ip-address/" target="_blank" rel="noopener">Single IP jails</a></li>
</ul>
<h1>Networking troubleshooting</h1>
<h2>Routing table</h2>
<pre><code>host> netstat -r
Routing tables
Internet:
Destination Gateway Flags Netif Expire
default 192.168.0.1 UGS em0
hostname-bsd link#2 UH lo0
192.168.0.0/24 link#1 U em0
192.168.0.7 link#1 UHS lo0
192.168.60.1 link#3 UH lo1
192.168.60.2 link#3 UH lo1
192.168.60.3 link#3 UH lo1
192.168.60.4 link#3 UH lo1
192.168.60.5 link#3 UH lo1
192.168.60.6 link#3 UH lo1
192.168.60.7 link#3 UH lo1
192.168.60.8 link#3 UH lo1
192.168.60.9 link#3 UH lo1
Internet6:
Destination Gateway Flags Netif Expire
::/96 hostname-bsd UGRS lo0
hostname-bsd link#2 UH lo0
::ffff:0.0.0.0/96 hostname-bsd UGRS lo0
fe80::/10 hostname-bsd UGRS lo0
fe80::%lo0/64 link#2 U lo0
fe80::1%lo0 link#2 UHS lo0
ff02::/16 hostname-bsd UGRS lo0
</code></pre>
<h1><a href="http://kflu.github.io/2017/02/02/2017-02-02-ascii-table/">ASCII table (re)explained</a> (2017-02-02)</h1>
<p>This totally blew my mind. Look at the ASCII table, organized in 4 columns:</p>
<pre><code> 00 01 10 11
00000 NUL Spc @ `
00001 SOH ! A a
00010 STX " B b
00011 ETX # C c
00100 EOT $ D d
00101 ENQ % E e
00110 ACK & F f
00111 BEL ' G g
01000 BS ( H h
01001 TAB ) I i
01010 LF * J j
01011 VT + K k
01100 FF , L l
01101 CR - M m
01110 SO . N n
01111 SI / O o
10000 DLE 0 P p
10001 DC1 1 Q q
10010 DC2 2 R r
10011 DC3 3 S s
10100 DC4 4 T t
10101 NAK 5 U u
10110 SYN 6 V v
10111 ETB 7 W w
11000 CAN 8 X x
11001 EM 9 Y y
11010 SUB : Z z
11011 ESC ; [ {
11100 FS < \ |
11101 GS = ] }
11110 RS > ^ ~
11111 US ? _ DEL
</code></pre>
<h1>Observations</h1>
<p><a href="http://worldpowersystems.com/archives/codes/X3.4-1963/index.html" target="_blank" rel="noopener">This original paper</a> lists the design considerations (gets
interesting at page 7). <a href="https://en.wikipedia.org/wiki/ASCII#Internal_organization" target="_blank" rel="noopener">Wikipedia</a> lists some interesting designs too.
Some are explained <a href="https://en.wikipedia.org/wiki/ASCII#Internal_organization" target="_blank" rel="noopener">here</a>. Some are explained in the HN comments
<a href="https://news.ycombinator.com/item?id=13539552" target="_blank" rel="noopener">here</a> and <a href="https://news.ycombinator.com/item?id=13499386" target="_blank" rel="noopener">here</a>. And this <a href="http://trafficways.org/ascii/ascii.pdf" target="_blank" rel="noopener">pdf paper</a>.</p>
<p>But here are my observations:</p>
<p>The table groups into 4 columns of 32 chars each, running from 00000 (0) to 11111 (31);
each column has a different 2-bit prefix (00, 01, 10, 11).</p>
<p>All control chars are in the first column, except <code>DEL</code>, which is 0111_1111.</p>
<p>A-Z and a-z all begin at 00001 (1) and end at 11010 (26). The difference is the column
they are in: upper case in column 10, lower case in column 11. That means:</p>
<p><strong>Converting to lower case means setting the 6th bit to 1 (<code>OR 0b00100000</code>).
Converting to upper case means clearing the 6th bit (<code>AND 0b11011111</code>).</strong></p>
<figure class="highlight fsharp"><table><tr><td class="gutter"><pre><span class="line">1</span><br><span class="line">2</span><br></pre></td><td class="code"><pre><span class="line"><span class="keyword">let</span> toLower (c:char) = byte c ||| byte <span class="number">0</span>b00100000 |> char</span><br><span class="line"><span class="keyword">let</span> toUpper (c:char) = byte c &&& byte <span class="number">0</span>b11011111 |> char</span><br></pre></td></tr></table></figure>
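The same bit trick works in plain POSIX shell; a small sketch (the <code>ord</code>/<code>chr</code> helper names are my own):

```shell
# ord: character -> ASCII code; chr: ASCII code -> character
ord() { printf '%d' "'$1"; }
chr() { printf "\\$(printf '%03o' "$1")"; }

upper=$(chr $(( $(ord a) & ~0x20 )))   # clear bit 6: 'a' -> 'A'
lower=$(chr $(( $(ord A) | 0x20 )))    # set bit 6:   'A' -> 'a'
echo "$upper $lower"                   # prints "A a"
```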
<p><strong>Characters</strong> 0-9 are their numeric values (0b0000 - 0b1001) prefixed
with 0b0011 (<code>OR 0b0011_0000</code>).</p>
<p>Converting a digit character to its number can be done efficiently by masking out the high 4 bits:</p>
<figure class="highlight fsharp"><table><tr><td class="gutter"><pre><span class="line">1</span><br></pre></td><td class="code"><pre><span class="line"><span class="keyword">let</span> char_to_num (digit: char) = byte digit &&& byte <span class="number">0</span>b00001111 |> int</span><br></pre></td></tr></table></figure>
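A quick shell check of the same masking (variable names are illustrative):

```shell
# '7' is 0b0011_0111 (ASCII 55); masking off the high nibble leaves 7
d=7
code=$(printf '%d' "'$d")      # ASCII code of the character: 55
val=$(( code & 0x0F ))
echo "$val"                    # prints "7"
```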
<p>Note how these characters align:</p>
<pre><code> 10 11
11011 [ {
11100 \ |
11101 ]  }
</code></pre>
<h1><a href="http://kflu.github.io/2017/01/24/2017-01-24-win-x11-forward/">X11 forwarding on Windows</a> (2017-01-24)</h1>
<p>This allows you to ssh from a Windows machine and get two major benefits:</p>
<ol>
<li>Make use of X11 apps on the ssh server</li>
<li>Let (primarily) remote vim access the system clipboard</li>
</ol>
<p>Here's how. This guide uses the following setup:</p>
<ul>
<li>No need to install full Cygwin or MSYS</li>
<li>Use Mintty/ssh that comes with <a href="https://git-scm.com/" target="_blank" rel="noopener">Git on Windows</a>, aka, git bash.</li>
</ul>
<p>I mainly followed <a href="https://ysgitdiary.blogspot.com/2014/04/how-to-configure-x11-port-forwarding.html" target="_blank" rel="noopener">this guide</a>.</p>
<h2>Server setup</h2>
<p>Ensure <code>/etc/ssh/sshd_config</code>:</p>
<pre><code>AllowAgentForwarding yes
AllowTcpForwarding yes
X11Forwarding yes
X11DisplayOffset 10
X11UseLocalhost no
</code></pre>
<p>Restart <code>sshd</code> with <code>service ssh restart</code> (Debian) or <code>service sshd restart</code> (FreeBSD)</p>
<p>Ensure <code>xauth</code> is installed. On Debian use <code>dpkg -l | grep xauth</code>. On
FreeBSD use <code>pkg info | grep xauth</code>.</p>
<h3>FreeBSD specific setup</h3>
<p>Install <code>xauth</code> with <code>pkg install xauth</code>. But this didn't properly set everything up. To complete the
configuration:</p>
<pre><code>touch ~/.Xauthority # xauth complaints if it's absent
</code></pre>
<p>Note down your <code>hostname</code> from <code>/etc/rc.conf</code>, add that to your <code>/etc/hosts</code>:</p>
<pre><code>::1 <YOUR_HOST_HERE> localhost localhost.my.domain
127.0.0.1 <YOUR_HOST_HERE> localhost localhost.my.domain
</code></pre>
<p><a href="https://forums.freebsd.org/threads/8003/" target="_blank" rel="noopener">This post</a> inspired me.</p>
<h2>Client setup</h2>
<p>Install <a href="https://sourceforge.net/projects/xming/files/latest/download" target="_blank" rel="noopener">xming x server</a> on Windows. Make sure the server is <code>:0.0</code>; you can tell
by hovering the mouse over the X icon in the taskbar.</p>
<p>Fire up mintty,</p>
<pre><code>export DISPLAY=localhost:0.0
ssh -Y <ssh server>
</code></pre>
<p><em>The original post omitted <code>localhost</code> and it didn't work for me.</em></p>
<p>In ssh session, test with <code>xclock</code>.</p>
<h2>Vim clipboard</h2>
<p>First <a href="http://vim.wikia.com/wiki/Accessing_the_system_clipboard#Checking_for_X11-clipboard_support_in_terminal" target="_blank" rel="noopener">check vim system clipboard support</a>:</p>
<pre><code>vim --version | grep clipboard
</code></pre>
<p>If <code>clipboard</code> and <code>xterm_clipboard</code> have a <code>-</code> in front, then you are <strong>NOT</strong> good. <a href="http://askubuntu.com/a/613173/259343" target="_blank" rel="noopener">For Ubuntu,
the base vim package is in this case</a>. You'll need a vim GUI package like <code>vim-gtk</code> for it to work:</p>
<pre><code>apt-get install vim-gtk
</code></pre>
<p>Now in the remote vim session, select some text and type <code>"+y</code>. Try to paste it into local notepad and make sure it works.</p>
<h2>X11 with <code>su</code></h2>
<p><code>x11</code> won't work after <code>su</code> - the cookies for X11 forwarding are stored in the user's <code>~/.Xauthority</code>. For X11
to continue working after <code>su</code>, make a symbolic link to the user who logs in remotely:</p>
<pre><code>ln -s /home/<user>/.Xauthority /root/.Xauthority
</code></pre>
<h2>X11 Clipboard with Tmux</h2>
<p>It's hard to select and copy text from tmux when there are vertical splits - the terminal's copy utility would
copy across panes. On the other hand, tmux's copy does not integrate well with xclip (I found it works only
intermittently).</p>
<p>The best solution I have so far is to rely on tmux's copy (not X11's). Then launch <code>xclip</code>; while it's waiting
for input on stdin, paste tmux's copy buffer by pressing <code>ctrl-b ]</code>, then press <code>ctrl-D</code> (EOF) to commit to
<code>xclip</code>.</p>
<p>This works for both <code>ctrl-b [</code>, and tmux's support for mouse selection.</p>
<h2>References</h2>
<ul>
<li><a href="https://ysgitdiary.blogspot.com/2014/04/how-to-configure-x11-port-forwarding.html" target="_blank" rel="noopener">Guide</a></li>
<li><a href="http://vim.wikia.com/wiki/Accessing_the_system_clipboard#Checking_for_X11-clipboard_support_in_terminal" target="_blank" rel="noopener">Vim support</a></li>
<li><a href="http://askubuntu.com/a/613173/259343" target="_blank" rel="noopener">Ubuntu vim clipboard</a></li>
</ul>
<h1><a href="http://kflu.github.io/2017/01/11/2017-01-11-strange-quote-escaping-cmd/">strange cmd/power quote escaping</a> (2017-01-11)</h1>
<p>With this program (<code>cs.exe</code>):</p>
<figure class="highlight plain"><table><tr><td class="gutter"><pre><span class="line">1</span><br><span class="line">2</span><br><span class="line">3</span><br><span class="line">4</span><br><span class="line">5</span><br><span class="line">6</span><br><span class="line">7</span><br><span class="line">8</span><br><span class="line">9</span><br><span class="line">10</span><br></pre></td><td class="code"><pre><span class="line">class Program</span><br><span class="line">{</span><br><span class="line"> static void Main(string[] args)</span><br><span class="line"> {</span><br><span class="line"> foreach (var item in args)</span><br><span class="line"> {</span><br><span class="line"> Console.WriteLine(item);</span><br><span class="line"> }</span><br><span class="line"> }</span><br><span class="line">}</span><br></pre></td></tr></table></figure>
<p>And these runs:</p>
<figure class="highlight plain"><table><tr><td class="gutter"><pre><span class="line">1</span><br><span class="line">2</span><br><span class="line">3</span><br><span class="line">4</span><br><span class="line">5</span><br><span class="line">6</span><br><span class="line">7</span><br><span class="line">8</span><br><span class="line">9</span><br><span class="line">10</span><br><span class="line">11</span><br><span class="line">12</span><br></pre></td><td class="code"><pre><span class="line">> cs.exe go\to\a_path</span><br><span class="line">go\to\a_path</span><br><span class="line"></span><br><span class="line">> cs.exe "go\to\a path"</span><br><span class="line">go\to\a path</span><br><span class="line"></span><br><span class="line">> cs.exe "go\to\a path\"</span><br><span class="line">go\to\a path"</span><br><span class="line"></span><br><span class="line">> cs.exe 'go\to\a path\'</span><br><span class="line">'go\to\a</span><br><span class="line">path\'</span><br></pre></td></tr></table></figure>
<p>That means if your path has a space and you quote it, be very careful NOT to put a trailing <code>\</code> before the closing quote; otherwise your program
receives an argument that incorrectly ends with a <code>"</code> and may fail to handle it. Single quotes are even
worse: cmd does not treat them as quoting characters at all, so the argument is split on the space, as the last run shows.</p>
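<p>As a cross-check of this rule, Python's <code>subprocess.list2cmdline</code> implements the same MS C runtime quoting convention that parses <code>Main</code>'s arguments: an argument containing a space gets wrapped in quotes, and backslashes immediately before the closing quote are doubled. A small sketch using the same hypothetical paths as the runs above:</p>

```python
import subprocess

# A quoted path with a trailing backslash needs that backslash doubled,
# otherwise the closing quote would be escaped (the bug shown above).
print(subprocess.list2cmdline(["go\\to\\a path\\"]))  # "go\to\a path\\"

# Without a space, no quoting is needed at all.
print(subprocess.list2cmdline(["go\\to\\a_path"]))    # go\to\a_path
```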
<h1><a href="http://kflu.github.io/2017/01/05/2017-01-05-working-with-ldap/">Working with LDAP/AD in .NET</a> (2017-01-05)</h1>
<p>Here's the code to access AD (latest at <a href="https://gist.github.com/kflu/ea18e097427f3d458322011025583384" target="_blank" rel="noopener">here</a>).</p>
<figure class="highlight fsharp"><table><tr><td class="gutter"><pre><span class="line">1</span><br><span class="line">2</span><br><span class="line">3</span><br><span class="line">4</span><br><span class="line">5</span><br><span class="line">6</span><br><span class="line">7</span><br><span class="line">8</span><br><span class="line">9</span><br><span class="line">10</span><br><span class="line">11</span><br><span class="line">12</span><br><span class="line">13</span><br><span class="line">14</span><br><span class="line">15</span><br><span class="line">16</span><br><span class="line">17</span><br><span class="line">18</span><br><span class="line">19</span><br><span class="line">20</span><br><span class="line">21</span><br><span class="line">22</span><br><span class="line">23</span><br><span class="line">24</span><br><span class="line">25</span><br><span class="line">26</span><br><span class="line">27</span><br><span class="line">28</span><br><span class="line">29</span><br><span class="line">30</span><br><span class="line">31</span><br></pre></td><td class="code"><pre><span class="line"><span class="comment">(* Accessing AD through LDAP</span></span><br><span class="line"><span class="comment">Inspired by http://stackoverflow.com/a/14814508/695964 </span></span><br><span class="line"><span class="comment"></span></span><br><span class="line"><span class="comment">Need nuget package System.DirectoryServices</span></span><br><span class="line"><span class="comment"></span></span><br><span class="line"><span class="comment">*)</span></span><br><span class="line"></span><br><span class="line">#r <span class="string">@"./packages/System.DirectoryServices/lib/System.DirectoryServices.dll"</span></span><br><span class="line"></span><br><span class="line"><span class="keyword">open</span> System</span><br><span class="line"><span class="keyword">open</span> System.Collections</span><br><span class="line"><span class="keyword">open</span> 
System.DirectoryServices</span><br><span class="line"></span><br><span class="line"><span class="keyword">let</span> de = <span class="keyword">new</span> DirectoryEntry() <span class="comment">// connects to the local domain controller</span></span><br><span class="line"></span><br><span class="line"><span class="comment">// these two are optional</span></span><br><span class="line">de.Path <- <span class="string">"LDAP://OU=UserAccounts,DC=foo,DC=bar,DC=baidu,DC=com"</span> <span class="comment">// This scopes the subsequence queries</span></span><br><span class="line">de.AuthenticationType <- AuthenticationTypes.Secure</span><br><span class="line"></span><br><span class="line"><span class="keyword">let</span> s = <span class="keyword">new</span> DirectorySearcher(de, Filter=<span class="string">"(name=John Smith)"</span>)</span><br><span class="line"></span><br><span class="line"><span class="keyword">let</span> res = s.FindOne()</span><br><span class="line"></span><br><span class="line">res.Properties.[<span class="string">"name"</span>] <span class="comment">// this is always a seq</span></span><br><span class="line">res.Properties.[<span class="string">"name"</span>].[<span class="number">0</span>] <span class="comment">// this is always a obj that needs to be casted at runtime</span></span><br><span class="line">res.Properties.[<span class="string">"name"</span>].[<span class="number">0</span>] :?> string <span class="comment">// I know it's a string</span></span><br><span class="line"></span><br><span class="line"><span class="keyword">let</span> myMailboxGuid = Guid(res.Properties.[<span class="string">"someBinaryField"</span>].[<span class="number">0</span>] :?> byte array)</span><br><span class="line"></span><br><span class="line"><span class="comment">// Display all fields (res.Properties implements IDictionary: http://stackoverflow.com/a/3267704/695964)</span></span><br><span class="line">res.Properties |> Seq.cast<DictionaryEntry> |> Seq.iter (<span 
class="keyword">fun</span> x -> printfn <span class="string">"%A"</span> (x.Key, x.Value))</span><br></pre></td></tr></table></figure>
<h2>References</h2>
<ul>
<li><a href="https://gist.github.com/kflu/ea18e097427f3d458322011025583384" target="_blank" rel="noopener">My gist to access AD in F#</a></li>
<li><a href="http://stackoverflow.com/a/14814508/695964" target="_blank" rel="noopener">SO post on connecting to LDAP</a></li>
<li><a href="https://www.codeproject.com/articles/18102/howto-almost-everything-in-active-directory-via-c" target="_blank" rel="noopener">Howto: (Almost) Everything In Active Directory via C#</a></li>
<li><a href="http://ianatkinson.net/computing/adcsharp.htm" target="_blank" rel="noopener">Active Directory With C#</a></li>
<li><a href="https://technet.microsoft.com/en-us/sysinternals/adexplorer.aspx" target="_blank" rel="noopener">GUI tool: AD explorer</a></li>
</ul>
<h1><a href="http://kflu.github.io/2017/01/03/2017-01-03-hexo-travis/">Auto Deploy Hexo.io to Github Pages With Travis CI</a> (2017-01-03)</h1>
<h2>References</h2>
<ul>
<li><a href="https://pages.github.com/" target="_blank" rel="noopener">Github pages</a></li>
<li><a href="https://github.com/settings/tokens" target="_blank" rel="noopener">Github Access tokens</a></li>
<li><a href="http://www.staticgen.com/" target="_blank" rel="noopener">Static site generators</a></li>
<li>Hexo
<ul>
<li><a href="https://hexo.io/docs/setup.html" target="_blank" rel="noopener">setup</a></li>
<li><a href="https://hexo.io/docs/configuration.html" target="_blank" rel="noopener">config</a></li>
<li><a href="https://hexo.io/themes/" target="_blank" rel="noopener">themes</a></li>
</ul>
</li>
<li>Travis
<ul>
<li><a href="https://docs.travis-ci.com/user/customizing-the-build/" target="_blank" rel="noopener">Configure the build</a></li>
<li><a href="https://docs.travis-ci.com/user/encryption-keys/" target="_blank" rel="noopener">Encrypting data in <code>travis.yml</code> (not used)</a></li>
</ul>
</li>
</ul>
<p><em>This article is inspired by <a href="http://www.tuicool.com/articles/AZf2Yzb" target="_blank" rel="noopener">this</a> and <a href="https://xin053.github.io/2016/06/05/Travis%20CI%E8%87%AA%E5%8A%A8%E9%83%A8%E7%BD%B2Hexo%E5%8D%9A%E5%AE%A2%E5%88%B0Github/" target="_blank" rel="noopener">this</a>.</em></p>
<p>With Travis CI, every time a new change is made to the site repo, a build kicks off
on Travis and deploys the updated site to Github pages. This is not a trivial process, so
this article explains the idea behind each piece and documents the details.</p>
<h2>Github Pages</h2>
<p><a href="https://pages.github.com/" target="_blank" rel="noopener">Github pages</a> is a GitHub service for hosting static web sites. It works by rendering static
files (HTML, etc.) checked into a GitHub repo at a special URL (e.g., https://&lt;username&gt;.github.io/project).</p>
<p>Nowadays people don't write static HTML manually, but rather write articles/posts
in Markdown (or other markup languages) and rely on tools to generate the HTML and stylesheets.</p>
<p>There are many of those static site generators. <a href="http://www.staticgen.com/" target="_blank" rel="noopener">Here's a nice list of the most popular ones</a>.</p>
<p>One problem with using these tools with Github pages is that you need a computer with the tool installed to generate
the site and publish it. If you're somewhere without access to the tool, you can't publish posts.</p>
<p>With the help of Travis CI, this scenario becomes possible:</p>
<ol>
<li>Create a post directly via the Github repo web UI.</li>
<li>Travis automatically invokes the build process.</li>
<li>Travis deploys the updated site to Github pages.</li>
</ol>
<h2>Setup github repo</h2>
<p>There are many ways to set up Github pages. I use the following setup:</p>
<ol>
<li>branch <code>master</code> contains the generated site, which will be rendered directly</li>
<li>branch <code>source</code> contains the raw articles and files necessary to generate the site</li>
</ol>
<p>Follow <a href="https://hexo.io/docs/setup.html" target="_blank" rel="noopener">instruction for Hexo.io</a> to scaffold the <code>source</code> branch. Your <code>source</code> branch should look like this:</p>
<pre><code>.
|   .gitignore
|   .travis.yml
|   db.json
|   package.json
|   _config.yml
|
+---scaffolds
|   \--- ...
|
+---source
|   \---_posts
|           hello-world.md
|
\---themes
    \--- ...
</code></pre>
<h3>Hexo themes</h3>
<p><a href="https://hexo.io/themes/" target="_blank" rel="noopener">Hexo themes</a> can be downloaded. But do not git clone a theme into the repo: it would be treated as a git submodule, so you couldn't update and commit the theme's <code>_config.yml</code>. Instead, download it and unzip it into your repo.</p>
<h3>Hexo configs</h3>
<p>Both the site-level and the theme-level <code>_config.yml</code> need to be updated. Refer to the <a href="https://hexo.io/docs/configuration.html" target="_blank" rel="noopener">Hexo doc</a> and the theme doc on how to update them.</p>
<h3>Hexo Workflow</h3>
<ol>
<li><code>hexo clean</code></li>
<li><code>hexo generate</code></li>
<li><code>hexo deploy</code></li>
</ol>
<p>Once that works locally, you can start working on enabling Travis.</p>
<h2>Setup Travis CI</h2>
<p>Travis listens for your repo's commit events and invokes the build process specified in the <code>.travis.yml</code> file of the triggering branch.
The Travis script runs in a Linux environment, so you can use shell commands.</p>
<p>I put the actual site generating commands in <code>package.json</code> so I can use npm to run them:</p>
<pre><code>"scripts": {
  "build": "hexo clean && hexo generate && hexo deploy"
},
</code></pre>
<p>Then in <code>.travis.yml</code>:</p>
<pre><code>language: node_js
node_js:
  - 6.0.0
branches:
  only:
    - source
install: npm install
before_script:
  - git config --global user.name "KL"
  - git config --global user.email "kfldev@outlook.com"
  - sed -i "s/__GITHUB_TOKEN__/${__GITHUB_TOKEN__}/" _config.yml
script: npm run build
</code></pre>
<p>Note that in order for Travis to deploy to the github repo, it needs access. I got a github access token from <a href="https://github.com/settings/tokens" target="_blank" rel="noopener">here</a>.
The repo can then be accessed via the URL <code>https://&lt;TOKEN&gt;@github.com/&lt;user&gt;/&lt;repo&gt;</code>. For security reasons this token should NOT be checked in; instead, specify it
in the Travis repo settings as an environment variable, and replace the placeholder in the hexo config with this variable at build time.</p>
<p>In hexo <code>_config.yml</code>:</p>
<pre><code>deploy:
  type: git
  repo: https://__GITHUB_TOKEN__@github.com/user/blog.git
  branch: master
</code></pre>
<p><code>__GITHUB_TOKEN__</code> is replaced by <code>sed</code> in the Travis build, as shown above.</p>
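<p>The substitution step can also be sketched in Python; this is a hypothetical equivalent of the <code>sed</code> one-liner, with the placeholder and file names as above:</p>

```python
import tempfile
from pathlib import Path

def inject_token(config_path, token):
    """Replace the __GITHUB_TOKEN__ placeholder in a hexo config file with
    the real token, which Travis supplies via an environment variable
    (the token itself is never checked in)."""
    path = Path(config_path)
    path.write_text(path.read_text().replace("__GITHUB_TOKEN__", token))

# Demo against a throwaway copy of the deploy section:
with tempfile.TemporaryDirectory() as d:
    cfg = Path(d) / "_config.yml"
    cfg.write_text("repo: https://__GITHUB_TOKEN__@github.com/user/blog.git\n")
    inject_token(cfg, "dummy-token")
    print(cfg.read_text())  # repo: https://dummy-token@github.com/user/blog.git
```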
<h1><a href="http://kflu.github.io/2016/12/20/2016-12-20-Linux-terminals-windows/">Linux, Terminals, Windows</a> (2016-12-20)</h1>
<h2>Mintty</h2>
<p>Mintty comes with Git for Windows and includes an SSH client, so you don't need PuTTY. I found mintty easier to work with.</p>
<h3>TERM</h3>
<p>Setting <code>TERM</code> to <code>xterm-256color</code> enables mouse support for vim, tmux, etc. Originally it was <code>xterm</code> and mouse support didn't work.</p>
<h2>Tmux</h2>
<p>Adding <code>set -g mouse on</code> to <code>~/.tmux.conf</code> enables mouse support for selecting panes, resizing, etc. See <a href="http://stackoverflow.com/a/33336609/695964" target="_blank" rel="noopener">this SO answer</a>.</p>
<h1><a href="http://kflu.github.io/2016/12/19/2016-12-19-Ubuntu-HyperV/">Ubuntu on Gen 2 Hyper-V</a> (2016-12-19)</h1>
<p>All setup should be straightforward except getting the network to work. After I spent a lot of time, <a href="http://help.yoyogames.com/hc/en-us/articles/216754468-Setup-An-Ubuntu-Virtual-Machine-Using-Hyper-V" target="_blank" rel="noopener">this guide worked</a>.
What's needed is an external virtual switch with "Allow management operating system to share this network" enabled. When creating the
switch, it asks to bind to a physical adapter. So I guess switching to a different physical adapter in the host (e.g., a laptop
switching from Ethernet to Wi-Fi) would break the network in the guest.</p>
<h1><a href="http://kflu.github.io/2016/10/30/2016-10-30-CNTK-cheatsheet-Iris-Dataset-classifier/">CNTK cheatsheet</a> (2016-10-30)</h1>
<h1><a href="http://kflu.github.io/2016/10/27/2016-10-27-GraphViz/">GraphViz notes</a> (2016-10-27)</h1>
<p>Windows download is <a href="http://www.graphviz.org/Download_windows.php" target="_blank" rel="noopener">here</a>. I cannot find a statically linked version (all-in-one). <a href="http://melp.nl/2013/08/flow-charts-in-code-enter-graphviz-and-the-dot-language/" target="_blank" rel="noopener">This</a> is a good short tutorial on how to draw flowchart. Note how common node attributes can be declared together:</p>
<figure class="highlight plain"><table><tr><td class="gutter"><pre><span class="line">1</span><br><span class="line">2</span><br><span class="line">3</span><br><span class="line">4</span><br><span class="line">5</span><br><span class="line">6</span><br><span class="line">7</span><br><span class="line">8</span><br><span class="line">9</span><br><span class="line">10</span><br><span class="line">11</span><br><span class="line">12</span><br><span class="line">13</span><br><span class="line">14</span><br><span class="line">15</span><br><span class="line">16</span><br><span class="line">17</span><br><span class="line">18</span><br></pre></td><td class="code"><pre><span class="line">digraph {</span><br><span class="line"> label="How to make sure 'input' is valid";</span><br><span class="line"> </span><br><span class="line"> node[shape="box", style="rounded"]</span><br><span class="line"> start; end;</span><br><span class="line"> node[shape="parallelogram", style=""]</span><br><span class="line"> message; input;</span><br><span class="line"> node[shape="diamond", style=""]</span><br><span class="line"> if_valid;</span><br><span class="line"> </span><br><span class="line"> start -> input;</span><br><span class="line"> input -> if_valid;</span><br><span class="line"> if_valid -> message[label="no"];</span><br><span class="line"> if_valid -> end[label="yes"];</span><br><span class="line"> message -> input; </span><br><span class="line"> </span><br><span class="line"> {rank=same; message input}</span><br><span class="line">}</span><br></pre></td></tr></table></figure>
<p>On commandline, use:</p>
<pre><code>dot -Tpng -o graph.png graph.dot
</code></pre>
<p>For newlines in labels, you can use <code>\n</code>. Alternatively, literal newlines inside the string are captured and preserved. Furthermore, a <code>\</code> at the end of a line inside a
string acts as a line continuation. So for better formatting, one can write:</p>
<figure class="highlight plain"><table><tr><td class="gutter"><pre><span class="line">1</span><br><span class="line">2</span><br><span class="line">3</span><br></pre></td><td class="code"><pre><span class="line">node[label="\</span><br><span class="line">this is a long description</span><br><span class="line">this is another line"];</span><br></pre></td></tr></table></figure>
<p><strong>References</strong></p>
<ul>
<li><a href="http://www.graphviz.org/doc/info/attrs.html" target="_blank" rel="noopener">Node, Edge and Graph Attributes</a></li>
<li><a href="https://github.com/wannesm/wmgraphviz.vim" target="_blank" rel="noopener">Nice Vim GraphViz
<h1><a href="http://kflu.github.io/2016/10/27/2016-10-27-cntk-installation/">CNTK installation and IDE setup</a> (2016-10-27)</h1>
<p>Followed <a href="https://github.com/Microsoft/CNTK/wiki/CNTK-Binary-Download-and-Manual-Installation" target="_blank" rel="noopener">this manual instruction</a> since I have Anaconda already installed. <strong>Be very careful</strong> to install all required binaries:</p>
<ul>
<li><a href="https://www.microsoft.com/en-ie/download/details.aspx?id=40784" target="_blank" rel="noopener">Visual C++ Redistributable Package for Visual Studio 2013</a></li>
<li><a href="https://www.microsoft.com/en-us/download/details.aspx?id=30679" target="_blank" rel="noopener">Visual C++ Redistributable Package for Visual Studio 2012</a></li>
<li><a href="https://www.microsoft.com/en-us/download/details.aspx?id=49926" target="_blank" rel="noopener">MPI</a></li>
<li>Visual C++ compiler - install this from visual studio, otherwise it complains "Microsoft Visual C++ 14.0 is required" during <code>pip install</code></li>
</ul>
<p>I forgot to install MPI, and later <code>import cntk</code> failed loading the <code>_cntk_py.pyd</code> module, costing me a lot of time. This error can also happen if you installed the wrong target version of CNTK, e.g., CNTK GPU on a host without a GPU.</p>
<p>Then create a conda environment for it:</p>
<figure class="highlight plain"><table><tr><td class="gutter"><pre><span class="line">1</span><br><span class="line">2</span><br><span class="line">3</span><br></pre></td><td class="code"><pre><span class="line">conda create --name cntk python=3.4.3 anaconda</span><br><span class="line">activate cntk</span><br><span class="line">python -m pip install --upgrade pip # This is really optional to me</span><br></pre></td></tr></table></figure>
<p>See <a href="https://conda.io/docs/py2or3.html" target="_blank" rel="noopener">Anaconda python management</a> for how to create conda environment.</p>
<p>Now <code>pip install</code> from <a href="https://github.com/Microsoft/CNTK/wiki/Setup-Windows-Python" target="_blank" rel="noopener">github here</a>. There are three choices:</p>
<ul>
<li>CPU only</li>
<li>GPU</li>
<li>GPU with 1bit SGD</li>
</ul>
<p>Note that the python version needs to match your python installation.</p>
<p>I tried CPU only and GPU with 1bit SGD; both seem to work on my Surface Book with a discrete GPU.</p>
<p>Now to verify it works, fire up a conda commandline and:</p>
<figure class="highlight plain"><table><tr><td class="gutter"><pre><span class="line">1</span><br><span class="line">2</span><br><span class="line">3</span><br><span class="line">4</span><br><span class="line">5</span><br></pre></td><td class="code"><pre><span class="line">activate cntk-py34</span><br><span class="line">python</span><br><span class="line"></span><br><span class="line"># now in python:</span><br><span class="line">>> import cntk # this should succeed!</span><br></pre></td></tr></table></figure>
<h2>Notes on Anaconda environment</h2>
<p><strong>to display conda environment information</strong></p>
<pre><code>conda info --all
</code></pre>
<h2>IDE setup</h2>
<p>The best autocompletion support for Python that I've seen so far is in PyCharm. Python Tools for Visual Studio (PTVS), for example, is not able to infer the function signature of <code>cntk.ops.input_variable</code> and others. Installing the free PyCharm Community edition is sufficient. To let
PyCharm use the Anaconda CNTK environment:</p>
<ol>
<li>find the python executable path by using <code>conda info --all</code>.</li>
<li>in PyCharm -> settings -> project -> project interpreter -> "add local", put in the desired python executable path</li>
<li>note that CNTK relies on a bunch of environment variables to work, e.g., <code>MSMPI_LIB32</code>, <code>MSMPI_LIB64</code>, so <strong>always launch PyCharm from the anaconda console</strong> to ensure the necessary environment variables are inherited by the IDE. If this is not set up properly, the interpreter throws on <code>import cntk</code>, complaining that the module cannot be loaded.</li>
</ol>
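<p>Whether those variables were actually inherited can be checked from inside the interpreter. A small hypothetical helper (the variable names are the MSMPI ones mentioned above; adjust as needed):</p>

```python
import os

def missing_env(environ, required=("MSMPI_LIB32", "MSMPI_LIB64")):
    """Return the names of required environment variables that are not set."""
    return [name for name in required if name not in environ]

# Run inside the interpreter launched from the Anaconda console;
# an empty list means the MPI variables were inherited correctly.
print(missing_env(os.environ))
```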
<p>For the convenience of launching PyCharm from the command line, I added PyCharm to <code>PATH</code>.</p>
<h2>CNTK development workflow</h2>
<ol>
<li>launch Anaconda console</li>
<li><code>activate cntk-py34</code></li>
<li>(optional) launch powershell</li>
<li>launch the IDE from the anaconda console</li>
</ol>
<h1><a href="http://kflu.github.io/2016/09/04/2016-09-04-computing-covariance/">Computing covariance</a> (2016-09-04)</h1>
<p>For a given $m \times n$ matrix $X = \{X_{ij}\}$, where each row is a sample and each column is a <strong>zero-mean</strong> feature, the usual way of computing the covariance matrix is</p>
<p>$$ \Sigma = \frac{1}{m} X^T \times X $$</p>
<p>This can be easily understood - $\Sigma_{ij}$ is the covariance between the i-th and j-th features of the dataset. The computation reflects that - $\Sigma_{ij}$ is computed as $\frac{1}{m} \langle X_i, X_j \rangle$, where $\langle X_i, X_j \rangle$ is the inner product between column $X_i$ and column $X_j$. Since all features (columns) are zero-mean, this is exactly the definition of the covariance between two random variables.</p>
<p>To my surprise, the other way of estimating the covariance is:</p>
<p>$$ \Sigma = \frac{1}{m} \sum_{i=1}^m { {X^{(i)}}^T \times X^{(i)} } $$</p>
<p>where $X^{(i)}$ is a $1 \times n$ row vector representing the i-th observed sample in the dataset. That means instead of estimating the covariance matrix feature-wise, i.e., computing covariance values one by one, we estimate the entire covariance matrix from each single observed sample and average those estimates ($\frac{1}{m}\sum$). This approach has the benefit that the covariance matrix can be built incrementally!</p>
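<p>The equivalence of the two estimators is easy to verify numerically. Here is a quick sketch in Python/NumPy (the post's own experiment below uses Octave):</p>

```python
import numpy as np

rng = np.random.default_rng(0)
m, n = 500, 8
X = rng.uniform(-1.5, 1.5, size=(m, n))
X = X - X.mean(axis=0)          # make every feature (column) zero-mean

# Feature-wise estimate: Sigma = (1/m) X^T X
sigma_batch = X.T @ X / m

# Sample-wise estimate: average the outer product of each sample with itself.
# Each term can be accumulated as samples arrive, hence "incremental".
sigma_incremental = np.zeros((n, n))
for x in X:                     # x is one observed sample (a row of X)
    sigma_incremental += np.outer(x, x)
sigma_incremental /= m

print(np.allclose(sigma_batch, sigma_incremental))  # True
```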
<p>The below graph demonstrates the squared estimation errors of these two methods compared with the <code>cov()</code> function.</p>
<p><img src="computing_covariance.png" alt="comparison"></p>
<p>The two lines overlap perfectly, implying the two methods are fundamentally equivalent. And as the sample size gets large, the estimation error gets small.</p>
<p>Here's the code for it (<a href="https://gist.github.com/kflu/c8dbb4a365302386109724faa2c15cbe#file-compute_covariance-m" target="_blank" rel="noopener">gist</a>):</p>
<figure class="highlight matlab"><table><tr><td class="gutter"><pre><span class="line">1</span><br><span class="line">2</span><br><span class="line">3</span><br><span class="line">4</span><br><span class="line">5</span><br><span class="line">6</span><br><span class="line">7</span><br><span class="line">8</span><br><span class="line">9</span><br><span class="line">10</span><br><span class="line">11</span><br><span class="line">12</span><br><span class="line">13</span><br><span class="line">14</span><br><span class="line">15</span><br><span class="line">16</span><br><span class="line">17</span><br><span class="line">18</span><br><span class="line">19</span><br><span class="line">20</span><br><span class="line">21</span><br><span class="line">22</span><br><span class="line">23</span><br><span class="line">24</span><br><span class="line">25</span><br><span class="line">26</span><br><span class="line">27</span><br><span class="line">28</span><br><span class="line">29</span><br><span class="line">30</span><br><span class="line">31</span><br><span class="line">32</span><br><span class="line">33</span><br><span class="line">34</span><br><span class="line">35</span><br><span class="line">36</span><br><span class="line">37</span><br><span class="line">38</span><br><span class="line">39</span><br><span class="line">40</span><br><span class="line">41</span><br><span class="line">42</span><br><span class="line">43</span><br><span class="line">44</span><br><span class="line">45</span><br></pre></td><td class="code"><pre><span class="line">clc;</span><br><span class="line">close all;</span><br><span class="line"></span><br><span class="line">mm = []</span><br><span class="line">E1 = []</span><br><span class="line">E2 = []</span><br><span class="line"></span><br><span class="line"><span class="keyword">for</span> m = <span class="number">10</span>:<span class="number">50</span>:<span class="number">1000</span></span><br><span class="line"></span><br><span class="line"> e1 = <span 
class="number">0.0</span>;</span><br><span class="line"> e2 = <span class="number">0.0</span>;</span><br><span class="line"> </span><br><span class="line"> mm = [mm m];</span><br><span class="line"> </span><br><span class="line"> # averaging the error performance</span><br><span class="line"> <span class="keyword">for</span> a = <span class="number">1</span>:<span class="number">10</span></span><br><span class="line"> n = <span class="number">20</span>;</span><br><span class="line"> X = <span class="number">3</span> * <span class="built_in">rand</span>(m,n) - <span class="number">1.5</span>; </span><br><span class="line"> C1 = cov(X);</span><br><span class="line"> </span><br><span class="line"> # computing covariance feature by feature (column wise inner product)</span><br><span class="line"> C2 = (<span class="number">1</span>/m) * X' * X;</span><br><span class="line"></span><br><span class="line"> # estimate covariance by computing covariance on each sample and then average</span><br><span class="line"> C = <span class="built_in">zeros</span>(n, n);</span><br><span class="line"> <span class="keyword">for</span> <span class="built_in">i</span> = <span class="number">1</span>:m</span><br><span class="line"> C += X(<span class="built_in">i</span>,:)' * X(<span class="built_in">i</span>, :);</span><br><span class="line"> <span class="keyword">end</span></span><br><span class="line"></span><br><span class="line"> C = (<span class="number">1</span>/m) * C;</span><br><span class="line"></span><br><span class="line"> e1 += (<span class="number">1</span>/m*n) * sum(sum((C1 - C).^<span class="number">2</span>, <span class="number">1</span>), <span class="number">2</span>);</span><br><span class="line"> e2 += (<span class="number">1</span>/m*n) * sum(sum((C2 - C1).^<span class="number">2</span>, <span class="number">1</span>), <span class="number">2</span>);</span><br><span class="line"> <span class="keyword">end</span></span><br><span class="line"> </span><br><span 
class="line"> E1 = [E1 e1/<span class="number">10</span>];</span><br><span class="line"> E2 = [E2 e2/<span class="number">10</span>];</span><br><span class="line"><span class="keyword">end</span></span><br><span class="line"></span><br><span class="line"><span class="built_in">hold</span> on;</span><br><span class="line">semilogy(mm, E1, <span class="string">'-k'</span>);</span><br><span class="line">semilogy(mm, E2, <span class="string">'-xr'</span>);</span><br><span class="line"><span class="built_in">legend</span>(<span class="string">"sq err cov = avg cov per sample"</span>, <span class="string">"sq err cov = (1/m) * X' * X"</span>);</span><br><span class="line">grid on;</span><br><span class="line"><span class="built_in">hold</span> off;</span><br></pre></td></tr></table></figure>
<h1><a href="http://kflu.github.io/2016/08/27/2016-08-27-csharp-parsing-evaluating-roslyn/">C# parsing and evaluating using Roslyn</a> (2016-08-27)</h1>
<p>Using Roslyn you can parse C# code into an AST, and a given C# code snippet can be evaluated. You need the following binaries:</p>
<ul>
<li>Microsoft.CodeAnalysis.CSharp</li>
<li>Microsoft.CodeAnalysis.CSharp.Scripting</li>
</ul>
<p><code>CSharpSyntaxTree.ParseText</code> converts C# code (a string) into a <code>SyntaxTree</code>. <code>CSharpScript.EvaluateAsync</code> can be used to evaluate a C# code snippet. There are other useful APIs for scripting, documented <a href="https://github.com/dotnet/roslyn/wiki/Scripting-API-Samples#prevstate" target="_blank" rel="noopener">here</a>, including inspecting defined variables, continuing from a previous state, etc.</p>
<p>Note that</p>
<pre><code>CSharpScript.EvaluateAsync("new DateTime(2016,12,1)");
</code></pre>
<p>throws an exception:</p>
<p><code>Microsoft.CodeAnalysis.Scripting.CompilationErrorException: (1,5): error CS0246: The type or namespace name 'DateTime' could not be found (are you missing a using directive or an assembly reference?)</code></p>
<p>Since the code snippet needs to be "self-contained", namespaces must be imported within the snippet itself. Below is a fully working example of parsing and evaluating.</p>
<figure class="highlight csharp"><table><tr><td class="gutter"><pre><span class="line">1</span><br><span class="line">2</span><br><span class="line">3</span><br><span class="line">4</span><br><span class="line">5</span><br><span class="line">6</span><br><span class="line">7</span><br><span class="line">8</span><br><span class="line">9</span><br><span class="line">10</span><br><span class="line">11</span><br><span class="line">12</span><br><span class="line">13</span><br><span class="line">14</span><br><span class="line">15</span><br><span class="line">16</span><br><span class="line">17</span><br><span class="line">18</span><br><span class="line">19</span><br><span class="line">20</span><br><span class="line">21</span><br><span class="line">22</span><br><span class="line">23</span><br><span class="line">24</span><br><span class="line">25</span><br><span class="line">26</span><br><span class="line">27</span><br><span class="line">28</span><br><span class="line">29</span><br><span class="line">30</span><br><span class="line">31</span><br><span class="line">32</span><br><span class="line">33</span><br><span class="line">34</span><br><span class="line">35</span><br><span class="line">36</span><br><span class="line">37</span><br><span class="line">38</span><br><span class="line">39</span><br><span class="line">40</span><br></pre></td><td class="code"><pre><span class="line"><span class="keyword">using</span> System;</span><br><span class="line"><span class="keyword">using</span> System.Threading.Tasks;</span><br><span class="line"><span class="keyword">using</span> Microsoft.CodeAnalysis;</span><br><span class="line"><span class="keyword">using</span> Microsoft.CodeAnalysis.CSharp;</span><br><span class="line"><span class="keyword">using</span> Microsoft.CodeAnalysis.CSharp.Scripting;</span><br><span class="line"></span><br><span class="line"><span class="keyword">namespace</span> <span class="title">GettingStartedCS</span></span><br><span class="line">{</span><br><span 
class="line"> <span class="keyword">class</span> <span class="title">Program</span></span><br><span class="line"> {</span><br><span class="line"> <span class="function"><span class="keyword">static</span> <span class="keyword">void</span> <span class="title">Main</span>(<span class="params"><span class="keyword">string</span>[] args</span>)</span></span><br><span class="line"><span class="function"></span> {</span><br><span class="line"> <span class="comment">// demonstrate parsing</span></span><br><span class="line"> SyntaxTree tree = CSharpSyntaxTree.ParseText(<span class="string">@"var x = new DateTime(2016,12,1);"</span>);</span><br><span class="line"> Console.WriteLine(tree.ToString()); <span class="comment">// new DateTime(2016,12,1)</span></span><br><span class="line"></span><br><span class="line"> <span class="keyword">var</span> result = Task.Run<<span class="keyword">object</span>>(<span class="keyword">async</span> () =></span><br><span class="line"> {</span><br><span class="line"> <span class="comment">// CSharpScript.RunAsync can also be generic with typed ReturnValue</span></span><br><span class="line"> <span class="keyword">var</span> s = <span class="keyword">await</span> CSharpScript.RunAsync(<span class="string">@"using System;"</span>);</span><br><span class="line"></span><br><span class="line"> <span class="comment">// continuing with previous evaluation state</span></span><br><span class="line"> s = <span class="keyword">await</span> s.ContinueWithAsync(<span class="string">@"var x = ""my/"" + string.Join(""_"", ""a"", ""b"", ""c"") + "".ss"";"</span>);</span><br><span class="line"> s = <span class="keyword">await</span> s.ContinueWithAsync(<span class="string">@"var y = ""my/"" + @x;"</span>);</span><br><span class="line"> s = <span class="keyword">await</span> s.ContinueWithAsync(<span class="string">@"y // this just returns y, note there is NOT trailing semicolon"</span>);</span><br><span class="line"></span><br><span class="line"> <span 
class="comment">// inspecting defined variables</span></span><br><span class="line"> Console.WriteLine(<span class="string">"inspecting defined variables:"</span>);</span><br><span class="line"> <span class="keyword">foreach</span> (<span class="keyword">var</span> variable <span class="keyword">in</span> s.Variables)</span><br><span class="line"> {</span><br><span class="line"> Console.WriteLine(<span class="string">"name: {0}, type: {1}, value: {2}"</span>, variable.Name, variable.Type.Name, variable.Value);</span><br><span class="line"> }</span><br><span class="line"> <span class="keyword">return</span> s.ReturnValue;</span><br><span class="line"> </span><br><span class="line"> }).Result;</span><br><span class="line"> </span><br><span class="line"> Console.WriteLine(<span class="string">"Result is: {0}"</span>, result);</span><br><span class="line"> }</span><br><span class="line"> }</span><br><span class="line">}</span><br></pre></td></tr></table></figure>
<p>The above code gives the following output:</p>
<figure class="highlight plain"><table><tr><td class="gutter"><pre><span class="line">1</span><br><span class="line">2</span><br><span class="line">3</span><br><span class="line">4</span><br><span class="line">5</span><br></pre></td><td class="code"><pre><span class="line">var x = new DateTime(2016,12,1);</span><br><span class="line">inspecting defined variables:</span><br><span class="line">name: x, type: String, value: my/a_b_c.ss</span><br><span class="line">name: y, type: String, value: my/my/a_b_c.ss</span><br><span class="line">Result is: my/my/a_b_c.ss</span><br></pre></td></tr></table></figure>
<h1>References</h1>
<ul>
<li><a href="https://github.com/dotnet/roslyn/wiki/Scripting-API-Samples#expr" target="_blank" rel="noopener">Roslyn scripting</a></li>
<li><a href="https://github.com/dotnet/roslyn/wiki/Getting-Started-C%23-Syntax-Analysis" target="_blank" rel="noopener">Roslyn syntax analysis aka parsing</a></li>
<li><a href="https://social.msdn.microsoft.com/Forums/vstudio/en-US/e6364fec-29c5-4f1d-95ce-796feb25a8a9/is-it-possible-to-convert-a-roslyn-ast-expression-tree-to-a-linq-expression-tree-is-there-a-roslyn?forum=roslyn" target="_blank" rel="noopener">Roslyn AST to Linq expression tree? This may not be necessary anymore since Roslyn can be fully functional</a></li>
<li><a href="https://github.com/dotnet/roslyn/wiki/Scripting-API-Samples#prevstate" target="_blank" rel="noopener">Roslyn scripting scenarios</a></li>
</ul>
<h1>Using it in F#</h1>
<p>The same approach works in F#. I was able to use it successfully in an F# console application with NuGet (via Visual Studio).</p>
<figure class="highlight fsharp"><table><tr><td class="gutter"><pre><span class="line">1</span><br><span class="line">2</span><br><span class="line">3</span><br><span class="line">4</span><br><span class="line">5</span><br><span class="line">6</span><br><span class="line">7</span><br><span class="line">8</span><br><span class="line">9</span><br><span class="line">10</span><br><span class="line">11</span><br><span class="line">12</span><br><span class="line">13</span><br><span class="line">14</span><br><span class="line">15</span><br><span class="line">16</span><br><span class="line">17</span><br><span class="line">18</span><br></pre></td><td class="code"><pre><span class="line"><span class="keyword">open</span> Microsoft.CodeAnalysis</span><br><span class="line"><span class="keyword">open</span> Microsoft.CodeAnalysis.CSharp</span><br><span class="line"><span class="keyword">open</span> Microsoft.CodeAnalysis.CSharp.Scripting</span><br><span class="line"></span><br><span class="line"><span class="keyword">let</span> ast = CSharpSyntaxTree.ParseText(<span class="string">"""var x = new DateTime(2016,12,1);"""</span>)</span><br><span class="line">printfn <span class="string">"%s"</span> (ast.ToString())</span><br><span class="line"></span><br><span class="line"><span class="keyword">let</span> result = </span><br><span class="line"> async {</span><br><span class="line"> <span class="keyword">let!</span> s = CSharpScript.RunAsync(<span class="string">"""using System;"""</span>) |> Async.AwaitTask</span><br><span class="line"> <span class="keyword">let!</span> s = s.ContinueWithAsync(<span class="string">"""var x = "my/" + string.Join("_", "a", "b", "c") + ".ss";"""</span>) |>
Visualizing Precision Recallhttp://kflu.github.io/2016/08/26/2016-08-26-visualizing-precision-recall/2016-08-26T07:00:00.000Z2023-10-14T22:36:20.097Z
<p>Andrew Ng's <a href="https://www.coursera.org/learn/machine-learning/lecture/tKMWX/error-metrics-for-skewed-classes" target="_blank" rel="noopener">lecture on error analysis of Machine Learning</a> gave a good explanation on Precision and Recall. Here I have a visualization of the concept.</p>
<p>Given the classification results on a test set, comparing the prediction result (P) and the actual label (L), each comparison can be categorized into four buckets:</p>
<ol>
<li>True positive (TP): P = L = 1</li>
<li>False positive (FP): P = 1, L = 0</li>
<li>False negative (FN): P = 0, L = 1</li>
<li>True negative (TN): P = L = 0</li>
</ol>
<p><img src="2016-08-26-visualizing-precision-recall-1.png" alt="PR"></p>
<p>Three metrics are often used:</p>
<p><strong>Accuracy</strong></p>
<p>Accuracy measures, among all test data, how many samples the algorithm classifies correctly. It is defined as the number of correct classifications divided by the total number of test samples:</p>
<p>$$ Accuracy = \frac{TP + TN}{total} $$</p>
<p><strong>Precision</strong></p>
<p>Precision measures, among all samples that the algorithm claims to be positive ($TP + FP$), how many are correct:</p>
<p>$$ Precision = \frac{TP}{TP + FP} $$</p>
<p><strong>Recall</strong></p>
<p>Recall measures, among all samples that are actually positive ($TP + FN$), how many the algorithm classified as positive:</p>
<p>$$ Recall = \frac{TP}{TP + FN} $$</p>
<p>When the data has <strong>"skewed classes"</strong>, accuracy is not a good performance metric. For example, this figure shows data with skewed classes:</p>
<p><img src="2016-08-26-visualizing-precision-recall-2.png" alt="skewed"></p>
<p>We can have a cheating algorithm that predicts everything as negative (imagine the horizontal threshold line being moved way up), and the accuracy will be high (due to the high TN count). But since there are no positive classifications, the recall is 0.</p>
<p>That's why precision and recall together are a more balanced measure of performance. An ideal algorithm should have high precision and high recall. But for a given algorithm, precision and recall are traded off against each other by setting the classification threshold higher or lower. For example, for the same data shown above, we could choose a threshold so that the algorithm only classifies a sample as positive when it's very confident:</p>
<p><img src="2016-08-26-visualizing-precision-recall-3.png" alt="confident"></p>
<p>This yields higher precision, but lower recall. If a single metric is desired in place of precision and recall, there is the F1 score, defined as:</p>
<p>$$ F_1 = 2 \cdot \frac{Precision \times Recall}{Precision + Recall} $$</p>
<p>When either recall or precision is small, the score will be small. The perfect score is 1, achieved when both precision and recall are 1.</p>
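<p>All four metrics follow directly from the four bucket counts. Here is a minimal Python sketch (the counts below are made up for illustration, not from the post) that computes them, including the degenerate "predict everything negative" cheater:</p>

```python
def metrics(tp, fp, fn, tn):
    """Return (accuracy, precision, recall, f1) for the given buckets."""
    total = tp + fp + fn + tn
    accuracy = (tp + tn) / total
    precision = tp / (tp + fp) if tp + fp else 0.0
    recall = tp / (tp + fn) if tp + fn else 0.0
    f1 = 2 * precision * recall / (precision + recall) if precision + recall else 0.0
    return accuracy, precision, recall, f1

# Skewed classes: a cheating "predict everything negative" classifier
# (tp = fp = 0) still scores high accuracy, but recall drops to 0.
acc, prec, rec, f1 = metrics(tp=0, fp=0, fn=5, tn=95)
print(acc, rec)  # 0.95 0.0
```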
Paket - using private nuget feedhttp://kflu.github.io/2016/08/16/2016-08-16-paket-using-private-nuget-feed/2016-08-16T07:00:00.000Z2023-10-14T22:36:20.097Z
<p>Paket can be set up (per project) to use private nuget feeds that require authentication. Here's how:</p>
<ol>
<li>In <code>paket.dependencies</code>, add a line <code>source &lt;feed_url&gt;</code>. Also add the dependencies you want to pull: <code>nuget &lt;library&gt;</code></li>
<li>Encode the credential for the feed by calling <code>paket.exe config add-credentials &lt;feed_url&gt;</code>. This stores the credential at paket's global config location <code>%appdata%/paket/paket.config</code>. (To get credentials for Visual Studio Online (VSO) feeds, see below.)</li>
</ol>
<p>Now you can run <code>paket.exe install</code>.</p>
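<p>Putting step 1 together, a minimal <code>paket.dependencies</code> might look like the following (the feed URL and package name are placeholders for illustration):</p>

```
source https://nuget.example.com/api/v2
nuget MyPrivateLibrary
```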
<p>Note that the NuGet credential provider is a better way to manage credentials, but currently Paket doesn't support it.</p>
<h1>Get access to private VSO feeds</h1>
<p>The ideal way would be to use the NuGet credential provider, but Paket doesn't support that yet. The alternative is to get a personal access token for the VSO feed:</p>
<ol>
<li>Go to the VSO site of the nuget feed and sign in</li>
<li>Click on your profile -> security. There you can manage all credentials like the nuget token or SSH public keys.</li>
<li>Create a new token for the project. Store it somewhere safe, as you can't retrieve it from VSO once the page is closed.</li>
</ol>
<h1>References</h1>
<ul>
<li><a href="https://fsprojects.github.io/Paket/paket-config.html" target="_blank" rel="noopener"><code>paket config</code> doc</a></li>
<li><a href="https://fsprojects.github.io/Paket/nuget-dependencies.html" target="_blank" rel="noopener">Paket's nuget dependencies
FsLab Journalhttp://kflu.github.io/2016/08/04/2016-08-04-FsLab-Journal/2016-08-04T07:00:00.000Z2023-10-14T22:36:20.097Z
<p>FsLab Journal is a literate programming tool based on FSharp.Formatting, somewhat like a Jupyter notebook. Its advantage over an IPython notebook is that it's statically typed and the IDE support is excellent. To use it:</p>
<ol>
<li>Download the template from <a href="https://github.com/fslaborg/FsLab.Templates/archive/journal.zip" target="_blank" rel="noopener">here</a></li>
<li>Unzip it and open the <code>.fsproj</code> file to start editing in Visual Studio</li>
<li>Run <code>build run</code> to automatically restore packages and start a live server</li>
<li>The web page is then updated automatically (with a delay of several seconds)</li>
</ol>
<p>To add a reference, add a line in <code>paket.dependencies</code>, and reference the assembly in the <code>.fsx</code> script file by the following. Then intellisense will work!</p>
<pre><code>#r "packages/Argu/lib/net40/Argu.dll"
</code></pre>
<p>FSharp.Formatting lets you register custom object output by <a href="https://tpetricek.github.io/FSharp.Formatting/evaluation.html#Custom-formatting-functions" target="_blank" rel="noopener"><code>RegisterTransformation</code></a>. Here's <a href="https://github.com/BlueMountainCapital/Deedle/blob/5d347cf9329d427e3872c1197303f20554e37a32/docs/tools/formatters.fsx#L288" target="_blank" rel="noopener">Deedle's implementation</a> (e.g., frame as table, etc.). But currently it doesn't let you do table cells conditional formatting.</p>
<h2>References</h2>
<ul>
<li><a href="http://fslab.org/download/" target="_blank" rel="noopener">FsLab Journal</a></li>
<li><a href="http://tpetricek.github.io/FSharp.Formatting/" target="_blank" rel="noopener">FSharp.Formatting</a></li>
<li><a href="http://bluemountaincapital.github.io/Deedle/tutorial.html" target="_blank" rel="noopener">Deedle
Pandas referencehttp://kflu.github.io/2016/08/03/2016-08-03-pandas-reference/2016-08-03T07:00:00.000Z2023-10-14T22:36:20.097Z
<p>See <a
Some F# Updatehttp://kflu.github.io/2016/06/17/2016-06-17-fsharp-update/2016-06-17T07:00:00.000Z2023-10-14T22:36:20.097Z
<p>I looked into <a href="http://ionide.io/" target="_blank" rel="noopener">Ionide</a>, which enhances Atom/VSCode into an F# IDE. Compared with the IDE tooling of other languages I've looked into recently, like Haskell and OCaml, the experience is pretty good. It has auto-complete and Paket and FAKE integration. It is still not as good as Visual Studio, though. One thing I miss a lot is "Go to definition" on a type in assemblies; Visual Studio brings you to the "ILDASM"-style metadata, which is greatly useful for exploration.</p>
<p>But I still appreciate the cross-platform capability the tooling brings: with Ionide, FAKE, and Paket/NuGet, there's no need for Visual Studio or a full-profile .NET development environment.</p>
<p>So the F# development pattern I'd like to establish from now on is to use the <a href="https://github.com/fsprojects/generator-fsharp" target="_blank" rel="noopener">fsharp yeoman generator</a> to scaffold the project (rather than letting Visual Studio do the job). When Visual Studio is available, use it with the <code>.fsproj</code> file; when it's not, I can fall back to Atom/VSCode.</p>
<p>P.S. just realized <a href="http://websharper.com/" target="_blank" rel="noopener">WebSharper</a>(<a href="http://websharper.com/docs" target="_blank" rel="noopener">document</a> looks awesome!) is
a fullstack web framework using F#, including frontend. Its <a href="http://websharper.com/tutorials/rest-api" target="_blank" rel="noopener">REST service</a>
support also looks
Two's complement noteshttp://kflu.github.io/2016/05/29/2016-05-29-twos-complement-notes/2016-05-29T07:00:00.000Z2023-10-14T22:36:20.096Z
<p>In the following discussion, I assume the cardinality of the number set is 2^32, i.e., 32-bit integers. But it generalizes to any size.</p>
<p>A visualization of the integers on the number domain.</p>
<p><img src="twos-complement-visualization.png" alt="visualization"></p>
<p>There's always one more negative number than there are positive numbers. That's because the total number of available values is even (2^32); 0 takes one spot, leaving the positive and negative numbers to split the remaining 2^32 - 1 spots, which is an odd number.</p>
<p>This further leads to the fact that</p>
<pre><code>abs(int.MinValue) = abs(int.MaxValue) + 1
</code></pre>
<p>So on any integer domain, negation should not cause overflow except for
<code>int.MinValue</code>. Interestingly,</p>
<pre><code>-int.MinValue == int.MinValue
</code></pre>
<p>For the two's complement representation $b_{31} b_{30} ... b_0$, the most significant bit $b_{31}$ represents the sign of the integer. This is also why there are more negative numbers than positive ones: 0 takes a spot in the $0 b_{30} ... b_0$ space.</p>
<p>Negating a number $x$ can be done by computing $0 - x$, or more commonly, by
inverting the bits and add 1.</p>
<pre><code>-x = ~x + 1
</code></pre>
<p>In terms of math operations, note that two's complement representation works intuitively with addition and subtraction: <code>-1 == 0 - 1 == 0xffffffff</code>.</p>
<p>Also note that for signed integers, the fill value of a right shift depends on the sign of the number: for negative numbers, right shifting shifts 1s instead of 0s into the most significant bit, to maintain the sign.</p>
<p>A note about integer shifting and two's complement: integer division rounds towards 0, so $\frac{5}{-2} = \frac{-5}{2} = -2$ and $\frac{-1}{2} = \frac{1}{-2} = 0$. <strong>This is different from right shifting</strong>: negative results of right shifts are rounded towards <strong>minus infinity</strong> $-\infty$. <strong>Dividing by two and right shifting are only equivalent when the result to be rounded is positive</strong>.</p>
<p>Taking the absolute value of a negative number in two's complement can be achieved by negating the number and casting to the unsigned counterpart of the integer type:</p>
<pre><code>abs(x) = (uint)(-x), when x < 0, x is integer
</code></pre>
<p>The cast is needed because the signed integer range can't represent every value in the range of the absolute value function - specifically <code>abs(int.MinValue)</code>. Casting to an unsigned integer works even for <code>int.MinValue</code>.</p>
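<p>The facts above can be checked with a short Python sketch. Python integers are arbitrary-precision, so 32-bit two's complement is modeled here by masking explicitly (this is a model of the representation, not native machine arithmetic):</p>

```python
# Model 32-bit two's complement with Python ints by masking explicitly.
MASK = 0xFFFFFFFF
INT_MIN, INT_MAX = -(1 << 31), (1 << 31) - 1

def to_signed(u):
    """Interpret a 32-bit pattern as a signed integer."""
    return u - (1 << 32) if u & 0x80000000 else u

def neg(x):
    """Negate via 'invert the bits and add 1' (-x == ~x + 1)."""
    return to_signed((~x + 1) & MASK)

print(neg(5))                   # -5
print(neg(INT_MIN) == INT_MIN)  # True: -int.MinValue == int.MinValue
print(to_signed(0xFFFFFFFF))    # -1: -1 == 0 - 1 == 0xffffffff
print(-5 >> 1, int(-5 / 2))     # -3 -2: shift rounds to -inf, division to 0
print((-INT_MIN) & MASK)        # 2147483648: abs(int.MinValue) as unsigned
```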
<h1>References</h1>
<ul>
<li><a href="https://www.cs.cornell.edu/~tomf/notes/cps104/twoscomp.html" target="_blank" rel="noopener">Cornell's two's complement course note</a></li>
<li><a href="https://en.wikipedia.org/wiki/Two%27s_complement" target="_blank" rel="noopener">Two's complement on
TypeScript, Type Definitions, and Promisificationhttp://kflu.github.io/2016/05/12/2016-05-12-TypeScript Type Definitions Promisification/2016-05-12T07:00:00.000Z2023-10-14T22:36:20.096Z
<p>I got disgusted by JS's dynamic typing (the same thing I hated about Python), so I gave TypeScript another try. The one thing I care about most is support for async/await syntax, which it already has.</p>
<h1>Bare minimum TypeScript Setup</h1>
<p>To setup a TypeScript project you need a <code>tsconfig.json</code>:</p>
<pre><code>{
  "compilerOptions": {
    "module": "commonjs",
    "target": "es2015",
    "noImplicitAny": false,
    "sourceMap": false
  },
  "exclude": [
    "node_modules",
    "typings/browser.d.ts",
    "typings/browser"
  ]
}
</code></pre>
<p>Compilation is then done with a simple <code>tsc</code> command. This would compile your
scripts into <code>.js</code> files.</p>
<h1>Working with Type Definitions</h1>
<p>To work with external libraries, you'll need to download a lot of <code>.d.ts</code> type
definition files. That's where <code>typings</code> comes into play. Install it with:</p>
<pre><code>npm i -g typings
</code></pre>
<p>Install type definitions with:</p>
<pre><code>typings install node --save --ambient
</code></pre>
<p>Installed defs are put under <code>typings</code> folder with the structure:</p>
<pre><code>typings
│ browser.d.ts
│ main.d.ts
│
├───browser
│ └───ambient
│ ├───bluebird
│ │ index.d.ts
│ ├───commander
│ │ index.d.ts
│ └───node
│ index.d.ts
└───main
└───ambient
├───bluebird
│ index.d.ts
├───commander
│ index.d.ts
└───node
index.d.ts
</code></pre>
<p>The <code>browser.d.ts</code> and <code>main.d.ts</code> are top level definition for browser and server use respectively.
They contain the same content, simply referencing each installed <code>.d.ts</code> files:</p>
<pre><code>/// <reference path="main/ambient/bluebird/index.d.ts" />
/// <reference path="main/ambient/commander/index.d.ts" />
/// <reference path="main/ambient/node/index.d.ts" />
</code></pre>
<p>Since they duplicate each other, they cause a lot of compilation warnings if left untreated. Therefore, in <code>tsconfig.json</code> you need to exclude the portion (either browser or server) you don't intend to include when TypeScript compiles the project. E.g., this project is a node project, so I excluded all the browser ones by specifying in <code>tsconfig.json</code>:</p>
<pre><code>"exclude": [
  "typings/browser.d.ts",
  "typings/browser"
]
</code></pre>
<h1>Working with Promisified Node Modules</h1>
<p>It's common to <code>promisifyAll</code> a node module. You want to use the <code>*Async()</code> function variants, but they don't have type definitions (they're generated at runtime by <code>bluebird</code>), so sadly all intellisense and type checking for them is gone.</p>
<pre><code>import Promise = require('bluebird');
import fs = require('fs');
import cp = require('child_process');
Promise.promisifyAll(fs);
Promise.promisifyAll(cp);
async function main() : Promise<string> {
    let content : string = await fs.readFileAsync('./package.json', 'utf8');
    console.log(content);
    let out : string = await cp.execAsync('cmd.exe /c dir');
    console.log(out);
    return content;
}
</code></pre>
<p>The pattern I adopted is to type <code>fs.readFile</code>, fill in the parameters except the callback, and then append <code>Async</code> to the function name. Note that this way <code>tsc</code> complains that no such functions exist. An alternative is:</p>
<pre><code>import Promise = require('bluebird');
import fs = require('fs');
import cp = require('child_process');
let fs2 = Promise.promisifyAll(fs);
let cp2 = Promise.promisifyAll(cp);
async function main() : Promise<string> {
    let content : string = await fs2.readFileAsync('./package.json', 'utf8');
    console.log(content);
    let out : string = await cp2.execAsync('cmd.exe /c dir');
    console.log(out);
    return content;
}
</code></pre>
<p>The benefit is that the compilation errors are now gone. When you need intellisense, you can still type <code>fs.readFile</code> first, then convert it to <code>fs2.readFileAsync</code> when you're ready.</p>
<p>Theoretically speaking, someone could hand-craft a <code>promisified-node.d.ts</code> for
all the <code>*Async()</code> functions. That takes more patience than I have, but would be
greatly joyful to
Debugging Yeoman generatorshttp://kflu.github.io/2016/05/11/2016-05-11-Debugging Yeoman generators/2016-05-11T07:00:00.000Z2023-10-14T22:36:20.096Z
<p>I found myself spending much more time than I wanted to get one of my Yeoman
generators to work. It's hard because, by default, Yeoman is so user-friendly
that it decides to swallow all error information. So if something fails, it
silently completes, leaving you an empty directory.</p>
<p>The most useful thing I found is to tell it to display error information. This can
be done by setting the environment variable <code>DEBUG=yeoman:generator</code> in the
shell and then running the generator. This time it's more developer-friendly:</p>
<pre><code>PS> yo kfl-node
yeoman:generator Queueing prompting in prompting +0ms
yeoman:generator Queueing writing in writing +4ms
yeoman:generator Queueing install in install +3ms
yeoman:generator Running prompting +20ms
? Your project name (serman)
? Your project name serman
yeoman:generator Running writing +8s
yeoman:generator An error occured while running writing +16ms { AssertionError: Trying to copy from a source that does not exist: C:\Users\kfl\AppData\Roaming\npm\node_modules\generator-kfl-node\generators\app\templates\.gitignore
at EditionInterface.exports._copySingle (C:\Users\kfl\AppData\Roaming\npm\node_modules\generator-kfl-node\node_modules\mem-fs-editor\lib\actions\copy.js:45:3)
at EditionInterface.exports.copy (C:\Users\kfl\AppData\Roaming\npm\node_modules\generator-kfl-node\node_modules\mem-fs-editor\lib\actions\copy.js:23:17)
at module.exports.yeoman.Base.extend.writing (C:\Users\kfl\AppData\Roaming\npm\node_modules\generator-kfl-node\generators\app\index.js:26:15)
at Object.<anonymous> (C:\Users\kfl\AppData\Roaming\npm\node_modules\generator-kfl-node\node_modules\yeoman-generator\lib\base.js:436:25)
at C:\Users\kfl\AppData\Roaming\npm\node_modules\generator-kfl-node\node_modules\run-async\index.js:26:25
at C:\Users\kfl\AppData\Roaming\npm\node_modules\generator-kfl-node\node_modules\run-async\index.js:25:19
at C:\Users\kfl\AppData\Roaming\npm\node_modules\generator-kfl-node\node_modules\yeoman-generator\lib\base.js:452:8
at tryOnImmediate (timers.js:543:15)
at processImmediate [as _immediateCallback] (timers.js:523:5)
name: 'AssertionError',
actual: false,
expected: true,
operator: '==',
message: 'Trying to copy from a source that does not exist: C:\\Users\\kfl\\AppData\\Roaming\\npm\\node_modules\\generator-kfl-node\\generators\\app\\templates\\.gitignore',
generatedMessage: false }
</code></pre>
<p>Some other tips (may also apply to general node.js developing):</p>
<ul>
<li>Always test it out locally (<code>npm link</code>) before publishing</li>
<li>Always <code>npm pack</code> and examine the package before publishing</li>
<li>Always <code>git clean -xd -n</code> before <code>npm pack</code> to eliminate unwanted files</li>
</ul>
<p>Another caveat that cost me an hour: my generator template contains a
<code>.gitignore</code>, but <code>npm publish</code> insists on leaving it out of the package, which in
turn caused a silent failure when I ran <code>yo</code> against the generator. <em>Note</em> that this error
cannot be caught with local testing via <code>npm link</code>. But you might spot it by manually
inspecting the package tarball if you are more careful than I was.</p>
<p><a href="http://yeoman.io/authoring/debugging.html" target="_blank" rel="noopener">Here</a> is the document for Yeoman
Sending email with raw SMTP (sending emails without an account)http://kflu.github.io/2016/05/10/2016-05-10-Sending email with raw SMTP/2016-05-10T07:00:00.000Z2023-10-14T22:36:20.096Z
<p>I often need to send notification emails to myself from applications. Authenticating
with an existing email account and sending through it isn't really reliable, as
authentication can randomly be audited by robot checks and fail. Recently I found out
how to send email with the raw SMTP protocol, without needing an account.</p>
<h3>Assumptions & Requirements</h3>
<ol>
<li>
<p>There's no firewall blocking you from contacting the recipient's mail
exchange (MX) server port 25</p>
<ul>
<li>For instance, at home I can't seem to contact Gmail's MX server on port 25.</li>
</ul>
</li>
<li>
<p>For Gmail at least, you must either force your program to use IPv4 when
communicating with the MX server, or have a consistent PTR record if
IPv6 must be used</p>
</li>
<li>
<p>The emails would usually be delivered right into the spam folder. So you
must be OK with it. You might create a rule to override it.</p>
</li>
</ol>
<p>The workflow (inspired by <a href="http://stackoverflow.com/a/12747310/695964" target="_blank" rel="noopener">this SO answer</a>) is:</p>
<ol>
<li>Look up the MX server for the recipient email address using <code>nslookup -type=mx</code>. I find <a href="http://mxtoolbox.com/" target="_blank" rel="noopener">this</a> online MX lookup service invaluable.</li>
<li>Talk with the MX server in SMTP to send the mail.</li>
</ol>
<p>Here's an example SMTP session I had with a Gmail server. Lines marked with <code>S></code>
are server responses; those marked with <code>C></code> are from the client:</p>
<pre><code>S> 220 mx.google.com ESMTP hj1si1000592pac.235 - gsmtp
C> HELO kfl.com
S> 250 mx.google.com at your service
C> MAIL FROM:<k@kfl.com>
S> 250 2.1.0 OK hj1si1000592pac.235 - gsmtp
C> RCPT TO:<kfl@gmail.com>
S> 250 2.1.5 OK hj1si1000592pac.235 - gsmtp
C> DATA
S> 354 Go ahead hj1si1000592pac.235 - gsmtp
C> From: "KL" <kfl@gmail.com>
C> To: "You" <kfl@gmail.com>
C> Subject: This is from me
C>
C> Hello there:)
C>
C> .
S> 250 2.0.0 OK 1462862782 hj1si1000592pac.235 - gsmtp
</code></pre>
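<p>The client side of a session like the one above can be scripted. The Python sketch below (addresses reused from the example; the helper name is mine) only builds the command sequence; actually delivering the mail would mean writing these lines to a socket opened to port 25 of the MX server:</p>

```python
def smtp_commands(helo, mail_from, rcpt_to, headers, body_lines):
    """Build the raw client-side SMTP command sequence for one message.

    `headers` are 'Name: value' lines; the DATA section is terminated by a
    line containing only '.' as in the session above.
    """
    cmds = [
        f"HELO {helo}",
        f"MAIL FROM:<{mail_from}>",
        f"RCPT TO:<{rcpt_to}>",
        "DATA",
    ]
    # A blank line separates the headers from the body.
    cmds += headers + [""] + body_lines + ["."]
    return cmds

cmds = smtp_commands(
    "kfl.com", "k@kfl.com", "kfl@gmail.com",
    ['From: "KL" <kfl@gmail.com>', 'To: "You" <kfl@gmail.com>',
     "Subject: This is from me"],
    ["Hello there:)"])
```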
<p>Note that if you're accessing the internet through a regular consumer ISP (e.g.,
I'm using the crappy Comcast), you are most likely blocked from accessing port 25
of the MX server. (I haven't tried SSL; maybe that works.) But from my work
environment, I can access port 25 just fine.</p>
<p>Secondly, if you have an IPv6 address and don't have a consistent PTR record (a
sort of DNS reverse-lookup record), you'll be blocked by Gmail at the end of
the session, like in the one below. The workaround is to force the client to use IPv4.
The Linux telnet client has a <code>-4</code> command line option; the Windows telnet client
doesn't, but PuTTY does have this option.</p>
<pre><code>telnet.exe gmail-smtp-in.l.google.com 25
220 mx.google.com ESMTP m90si995173pfj.201 - gsmtp
HELO kfl.com
250 mx.google.com at your service
MAIL FROM:<k@kfl.com>
250 2.1.0 OK m90si995173pfj.201 - gsmtp
RCPT TO:<kfl@gmail.com>
250 2.1.5 OK m90si995173pfj.201 - gsmtp
DATA
354 Go ahead m90si995173pfj.201 - gsmtp
From: "KL" <kfl@gmail.com>
To: "You" <kfl@gmail.com>
Subject: This is from me
Hello there:)
.
550-5.7.1 [2001:4898:80e8::436] Our system has detected that this message does
550-5.7.1 not meet IPv6 sending guidelines regarding PTR records and
550-5.7.1 authentication. Please review
550-5.7.1 https://support.google.com/mail/?p=ipv6_authentication_error for more
550 5.7.1 information. m90si995173pfj.201 - gsmtp
</code></pre>
<p>Finally, don't forget to check the spam folder for the
Scan Documents with ImageMagick and Canon MX922 Scannerhttp://kflu.github.io/2016/03/26/2016-03-26-document-scanning/2016-03-26T07:00:00.000Z2023-10-14T22:36:20.096Z
<p>I have a Canon MX922 WLAN scanner. It has a document feeder on top that can take multiple pages. The default Windows scanner application
flips every other page. Installing the <a href="https://www.usa.canon.com/internet/portal/us/home/support/details/printers/inkjet-multifunction/mx-series-inkjet/mx922" target="_blank" rel="noopener">MX920 series MP Drivers and software</a> from Canon solved the issue.</p>
<p>If the pages are scanned as JPEGs, they can be merged into PDF with ImageMagick:</p>
<pre><code>convert "*.jpg" output.pdf
</code></pre>
<p>In PowerShell, to convert a set of images whose paths were copied from Explorer ("Copy as path"; the copied paths are on separate lines), press Ctrl-V to paste the copied paths. They are pasted on separate lines, but they end up in a PowerShell array, and PowerShell knows to join them with spaces when invoking convert.exe.</p>
<pre><code>PS> convert @(<CTRL-V>) output.pdf
</code></pre>
<p>To avoid a naming conflict with the Windows built-in filesystem conversion tool (<code>convert.exe</code>), I renamed the ImageMagick tool to <code>convertimg.exe</code>.</p>
<p>Note that the Canon scan utility is able to output PDF directly. But I like separate image files, since I like to feed in several multi-page
documents at once and don't want them mixed into a single PDF.</p>
<p>If the images have their orientations adjusted in Windows File Explorer by setting EXIF info, the above command does not respect the
orientation setting. Instead, use the below command:</p>
<pre><code>convert "*.jpg" -auto-orient output.pdf
</code></pre>
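<p>If you script this, it's safer to pass the page files explicitly (in the exact order you want) than to rely on shell globbing. The hypothetical helper below just assembles the argv list for the invocation above, e.g. for use with Python's <code>subprocess</code>:</p>

```python
def convert_command(pages, output, auto_orient=True):
    """Return the argv list for merging scanned pages into one PDF."""
    cmd = ["convert"] + list(pages)
    if auto_orient:
        cmd.append("-auto-orient")  # respect EXIF orientation tags
    cmd.append(output)
    return cmd

# Usage (not run here): subprocess.run(convert_command(["p1.jpg", "p2.jpg"], "out.pdf"))
```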
<p>There is also a PDF utility command line tool called PDFtk, more specifically <a href="https://www.pdflabs.com/tools/pdftk-server/" target="_blank" rel="noopener">PDFtk server</a>. It's open source, seems to be of good quality, and has a large user base. I might add it to my toolkit in the future, but for this scanning scenario I don't need it yet.</p>
<p>Rotating every page in a PDF 90° counter-clockwise:</p>
<pre><code>pdftk old.pdf cat 1-endwest output new.pdf
</code></pre>
<p>Extracting page(s) from a PDF document:</p>
<pre><code>pdftk original.pdf cat <start_page>-<end_page> output new.pdf
</code></pre>
<p>Check out <a href="https://www.pdflabs.com/docs/pdftk-cli-examples/" target="_blank" rel="noopener">other pdftk
Setting up Jupyter on Windowshttp://kflu.github.io/2016/03/25/2016-03-25-setting-up-jupyter-windows/2016-03-25T07:00:00.000Z2023-10-14T22:36:20.096Z
<p>Installing any Python module on Windows is an adventure. Fuck Python. Jupyter is no exception.</p>
<p><code>pip install Jupyter</code> fails at the end. Running <code>jupyter nbconvert <notebook></code> then fails with <code>import failed for XXX, module XXX not found</code>.
So you <code>pip install XXX</code>, the same thing happens, and you follow the dependency chain down manually. Here's a list of deps you'll probably have to install
yourself:</p>
<figure class="highlight plain"><table><tr><td class="gutter"><pre><span class="line">1</span><br><span class="line">2</span><br><span class="line">3</span><br><span class="line">4</span><br></pre></td><td class="code"><pre><span class="line">pip install path.py</span><br><span class="line">pip install functools32</span><br><span class="line">pip install jsonschema</span><br><span class="line">pip install
Deploy Node.js web app with Enterprise Network Authenticationhttp://kflu.github.io/2016/03/24/2016-03-24-setup-iis-nodejs-enterprise/2016-03-24T07:00:00.000Z2023-10-14T22:36:20.096Z
<p>This article describes how to deploy a node.js web app so it's accessible in a Windows domain-controlled (Active Directory) network. For ease of
discussion, let's assume:</p>
<ul>
<li>the machine hosting the app is named <code>DOMAIN\MACHINE</code></li>
<li>the users access the website at <code>http://machine</code></li>
<li>the website should only be accessible by users within the ActiveDirectory security group <code>DOMAIN\SecurityGroup</code></li>
</ul>
<h2>Approach 1 - Plain Vanilla Node.js</h2>
<p>Now, to achieve the above three goals, we can do it the plain vanilla way:</p>
<ol>
<li>Host the app directly with Node.js http module, or anything built on top of that.</li>
<li>In the app, authenticate with NTLM/Kerberos (maybe with the <a href="https://github.com/einfallstoll/express-ntlm" target="_blank" rel="noopener">express-ntlm</a> module)</li>
<li>Roll your own AD code to check whether the authenticated user is a member of <code>DOMAIN\SecurityGroup</code>. This step is extremely easy, even doable in PowerShell; proof in the last section. To use .NET in node, <a href="http://tjanczuk.github.io/edge/" target="_blank" rel="noopener">edge.js</a> can be used.</li>
</ol>
<p>Totally doable. But is it necessary? I think not.</p>
<h2>Approach 2 - IIS + Node.js</h2>
<p>This approach delegates the entire authentication and authorization to the IIS. And uses <a href="https://github.com/tjanczuk/iisnode" target="_blank" rel="noopener"><code>iisnode</code></a> to integrate the node.js app into IIS. I'm going to talk about the steps in detail.</p>
<h3>Setup IIS for Security Group Authorization</h3>
<p>For this step I mostly followed <a href="http://serverfault.com/a/721855/309638" target="_blank" rel="noopener">this</a> article.</p>
<ol>
<li>Install IIS, ensure URL authorization and Windows Authentication are enabled (under IIS/WWW Server/Security)</li>
<li>Go to the desired web site in IIS manager</li>
<li>Enable Windows Authentication</li>
<li>Configure Authorization Rules to ONLY allow the security group. Specify it in the form of "DOMAIN\SecurityGroup"</li>
</ol>
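<p>Assuming the rules above, the relevant <code>web.config</code> fragment would look roughly like this. This is a sketch, not the exact configuration from the article; <code>DOMAIN\SecurityGroup</code> is a placeholder:</p>

```xml
<!-- Sketch: enable Windows Authentication and restrict access to one AD group. -->
<configuration>
  <system.webServer>
    <security>
      <authentication>
        <anonymousAuthentication enabled="false" />
        <windowsAuthentication enabled="true" />
      </authentication>
      <authorization>
        <!-- Remove the default allow-all rule, then allow only the group. -->
        <remove users="*" roles="" verbs="" />
        <add accessType="Allow" roles="DOMAIN\SecurityGroup" />
      </authorization>
    </security>
  </system.webServer>
</configuration>
```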
<h3>Setup <code>iisnode</code></h3>
<p>For this step I'm mainly following the guidance <a href="https://github.com/tjanczuk/iisnode" target="_blank" rel="noopener">here</a>.</p>
<ol>
<li>Enable ASP.NET 4.6 in IIS</li>
<li><a href="http://www.iis.net/download/URLRewrite" target="_blank" rel="noopener">Install URL rewrite module for IIS</a></li>
<li>Install node of course (matching OS bitness)</li>
<li>Install iisnode matching OS bitness</li>
<li>Install iisnode samples by running <code>%programfiles%\iisnode\setupsamples.bat</code> in admin cmd</li>
<li>Go to http://localhost/node for verification (make sure your authentication works in previous section!)</li>
</ol>
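<p>Once <code>iisnode</code> is installed, the Node.js app is wired into IIS through the site's <code>web.config</code>. A minimal sketch, assuming the app's entry point is <code>app.js</code> (adapted from the pattern used by the iisnode samples):</p>

```xml
<!-- Sketch: register app.js with the iisnode handler and rewrite all URLs to it. -->
<configuration>
  <system.webServer>
    <handlers>
      <add name="iisnode" path="app.js" verb="*" modules="iisnode" />
    </handlers>
    <rewrite>
      <rules>
        <rule name="node-app">
          <match url="/*" />
          <action type="Rewrite" url="app.js" />
        </rule>
      </rules>
    </rewrite>
  </system.webServer>
</configuration>
```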
<h2>Check if a domain user is a member of a security group</h2>
<p>This <a href="http://stackoverflow.com/a/12029478/695964" target="_blank" rel="noopener">SO answer</a> helped.</p>
<figure class="highlight powershell"><table><tr><td class="gutter"><pre><span class="line">1</span><br><span class="line">2</span><br><span class="line">3</span><br><span class="line">4</span><br><span class="line">5</span><br></pre></td><td class="code"><pre><span class="line"><span class="built_in">Add-Type</span> <span class="literal">-AssemblyName</span> System.DirectoryServices.AccountManagement</span><br><span class="line"><span class="variable">$ctx</span> = <span class="built_in">New-Object</span> <span class="literal">-TypeName</span> System.DirectoryServices.AccountManagement.PrincipalContext <span class="literal">-ArgumentList</span> ([<span class="type">System.DirectoryServices.AccountManagement.ContextType</span>]::Domain,<span class="string">"DOMAIN"</span>)</span><br><span class="line"><span class="variable">$user</span> = [<span class="type">System.DirectoryServices.AccountManagement.UserPrincipal</span>]::FindByIdentity(<span class="variable">$ctx</span>, <span class="string">"user"</span>)</span><br><span class="line"><span class="variable">$group</span> = [<span class="type">System.DirectoryServices.AccountManagement.GroupPrincipal</span>]::FindByIdentity(<span class="variable">$ctx</span>, <span class="string">"SecurityGroup"</span>)</span><br><span class="line"><span class="variable">$user</span>.IsMemberOf(<span class="variable">$group</span>)</span><br></pre></td></tr></table></figure>
New Javascript directionhttp://kflu.github.io/2016/03/23/2016-03-23-new-js-pattern/2016-03-23T07:00:00.000Z2023-10-14T22:36:20.096Z
<p>This is a quick summary of my recent thinking on using JS. There are many options: CoffeeScript, TypeScript, even LiveScript. I liked all of them, but in general I prefer something plain, so this suits me better:</p>
<ul>
<li>Babel to enable ES6 and ES7 (async-await)</li>
<li>bluebird to promisify Node APIs</li>
</ul>
<p>This generally solves the problem of callback hell. Promises were a step forward, but ultimately it comes down to async-await. I'm not too worried about type safety, which TypeScript provides, but the requirement of <code>.d.ts</code> declarations scared me away. No thank you. Maybe later.</p>
<p>In <code>.babelrc</code>:</p>
<figure class="highlight plain"><table><tr><td class="gutter"><pre><span class="line">1</span><br><span class="line">2</span><br><span class="line">3</span><br><span class="line">4</span><br><span class="line">5</span><br></pre></td><td class="code"><pre><span class="line">{</span><br><span class="line"> "presets": ["es2015"],</span><br><span class="line"> "plugins": ["syntax-async-functions","transform-regenerator"],</span><br><span class="line"> "sourceMaps": true</span><br><span class="line">}</span><br></pre></td></tr></table></figure>
<p>In <code>package.json</code>:</p>
<figure class="highlight plain"><table><tr><td class="gutter"><pre><span class="line">1</span><br><span class="line">2</span><br><span class="line">3</span><br><span class="line">4</span><br><span class="line">5</span><br><span class="line">6</span><br><span class="line">7</span><br><span class="line">8</span><br><span class="line">9</span><br><span class="line">10</span><br><span class="line">11</span><br><span class="line">12</span><br><span class="line">13</span><br><span class="line">14</span><br><span class="line">15</span><br><span class="line">16</span><br><span class="line">17</span><br><span class="line">18</span><br></pre></td><td class="code"><pre><span class="line">{</span><br><span class="line"> "name": "awesome-async",</span><br><span class="line"> "version": "1.0.0",</span><br><span class="line"> "description": "",</span><br><span class="line"> "main": "github.js",</span><br><span class="line"> "scripts": {</span><br><span class="line"> "test": "echo \"Error: no test specified\" && exit 1"</span><br><span class="line"> },</span><br><span class="line"> "author": "",</span><br><span class="line"> "license": "ISC",</span><br><span class="line"> "dependencies": {</span><br><span class="line"> "babel-plugin-syntax-async-functions": "^6.1.4",</span><br><span class="line"> "babel-plugin-transform-regenerator": "^6.1.4",</span><br><span class="line"> "babel-polyfill": "^6.1.4",</span><br><span class="line"> "babel-preset-es2015": "^6.1.4",</span><br><span class="line"> "bluebird": "^3.3.4"</span><br><span class="line"> }</span><br><span class="line">}</span><br></pre></td></tr></table></figure>
<p>In <code>test.es6</code>:</p>
<figure class="highlight plain"><table><tr><td class="gutter"><pre><span class="line">1</span><br><span class="line">2</span><br><span class="line">3</span><br><span class="line">4</span><br><span class="line">5</span><br><span class="line">6</span><br><span class="line">7</span><br><span class="line">8</span><br><span class="line">9</span><br><span class="line">10</span><br><span class="line">11</span><br><span class="line">12</span><br><span class="line">13</span><br><span class="line">14</span><br><span class="line">15</span><br><span class="line">16</span><br><span class="line">17</span><br><span class="line">18</span><br><span class="line">19</span><br><span class="line">20</span><br><span class="line">21</span><br><span class="line">22</span><br><span class="line">23</span><br><span class="line">24</span><br><span class="line">25</span><br></pre></td><td class="code"><pre><span class="line">require('babel-polyfill');</span><br><span class="line">var Promise = require('bluebird');</span><br><span class="line">var fs = Promise.promisifyAll(require("fs"));</span><br><span class="line"></span><br><span class="line">async function read(p) {</span><br><span class="line"> var data = await fs.readFileAsync(p, 'utf8');</span><br><span class="line"></span><br><span class="line"> /*</span><br><span class="line"> * data is the actual file content. 
You can print it:</span><br><span class="line"> *</span><br><span class="line"> * console.log(data);</span><br><span class="line"> *</span><br><span class="line"> * But if it's returned from this function, it's wrapped</span><br><span class="line"> * into a Promise</span><br><span class="line"> */</span><br><span class="line"> return data;</span><br><span class="line">}</span><br><span class="line"></span><br><span class="line">/*</span><br><span class="line"> * async functions return Promises!!!</span><br><span class="line"> *</span><br><span class="line"> * Note: you can't await outside of async:</span><br><span class="line"> var data = await read('package.json');</span><br><span class="line"> */</span><br><span class="line">read('package.json').then(console.log);</span><br></pre></td></tr></table></figure>
<p>Now run:</p>
<figure class="highlight plain"><table><tr><td class="gutter"><pre><span class="line">1</span><br><span class="line">2</span><br></pre></td><td class="code"><pre><span class="line">> babel test.es6 -o test.js --source-maps</span><br><span class="line">> node test.js</span><br></pre></td></tr></table></figure>
<p>Note that async-await requires polyfilling the app, hence the <code>require('babel-polyfill');</code> statement at the beginning of the script. Note that polyfilling only needs to be done once per app. So if you don't want to pollute the business logic code base, write a top-level wrapper that polyfills and then calls the actual app.</p>
Deploying Node App On Windowshttp://kflu.github.io/2016/03/23/2016-03-23-Deploying-Node-App-On-Windows/2016-03-23T07:00:00.000Z2023-10-14T22:36:20.096Z
<p>Requirements:</p>
<ol>
<li>app can auto restart when crashed (via pm2)</li>
<li>have clustering support (via pm2)</li>
<li>can start on boot (via scheduled task)</li>
<li>run as SYSTEM</li>
</ol>
<p>1-3 are straightforward. Step 4 was not: pm2 requires <code>HOMEPATH</code> to be set, which is not the case for the SYSTEM account. For a regular user,
it's set to <code>c:\users\<user></code>. So I have to set it properly in the batch file that starts the app.</p>
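<p>A sketch of such a startup batch file, suitable for a scheduled task running as SYSTEM. All paths here are illustrative placeholders, not from my actual setup:</p>

```bat
@echo off
rem pm2 expects HOMEPATH/HOMEDRIVE, which the SYSTEM account does not set.
rem The values and paths below are illustrative placeholders.
set HOMEDRIVE=C:
set HOMEPATH=\Users\Default
set PATH=%PATH%;C:\tools\node
cd /d C:\apps\myapp
rem -i max enables cluster mode with one worker per CPU core.
call pm2 start app.js -i max
```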
Data Analysis With Excel Pivot Tablehttp://kflu.github.io/2016/03/06/2016-03-06-Data-Analysis-With-Excel-Pivot-Table/2016-03-06T08:00:00.000Z2023-10-14T22:36:20.085Z
<p>In today's cloud computing industry, data is essential to understand your
business, your customer and your system. In my experience, it usually boils
down to analyzing a key performance indicator (KPI) in terms of a set of
aspects. For example, I might be interested in the number of activities
performed by a user, and I might want to know how that is correlated with the
user's gender, region s/he lives in, and the client used.</p>
<p>There are many ways I can use that data. I might be interested in comparing the
KPI by user gender first and then by client type, or by region first and then
by gender, etc. This could all be done by writing scripts, but Excel
PivotTable can be handy in such scenarios.</p>
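<p>For the scripting route, the drill-down is just a keyed aggregation. A minimal Python sketch (the column names and numbers are made up for illustration; PivotTable automates exactly this, plus the interactive reordering):</p>

```python
from collections import defaultdict

# Illustrative records: (gender, region, client, activity_count).
rows = [
    ("F", "US", "web",    10),
    ("F", "US", "mobile",  5),
    ("M", "EU", "web",     7),
    ("M", "US", "web",     3),
]

# Drill down by gender, then region, then client, summing the KPI.
totals = defaultdict(int)
for gender, region, client, count in rows:
    totals[(gender, region, client)] += count

for key in sorted(totals):
    print(key, totals[key])
```

Changing the drill-down order is just a matter of reordering the key tuple; changing the aggregation (e.g. averaging) means collecting counts alongside sums.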
<p>Suppose <a href="https://github.com/kflu/kflu.github.io/files/160868/stats.txt" target="_blank" rel="noopener">this is my data
source</a>. And
this can be the result if it's drilled down by Gender, Region, and finally
Client type:</p>
<p><img src="cca0312a-e3e9-11e5-8929-6f99035c9ff4.png" alt="gender-region-client"></p>
<p>If you need a different order of drill down, or different aggregation function
(e.g., averaging instead of summing), you can do so by adjusting the pivot
table options:</p>
<p><img src="29356eb4-e3ea-11e5-9d59-2f519279bb73.png" alt="pivot table options"></p>
<p><a href="6a3b0b1e-e3e8-11e5-8ff1-98a87e14db9b.gif">This screencast</a> illustrates the use of PivotTable to analyze it.</p>
<p>It is also important to note that the "source data" I used here is
already a digest of the raw data from the enormous amount of log files in our big
data system. The log files have many more columns than just the few "aspects"
mentioned earlier. It is very important to understand and plan ahead for the
aspects you will be interested in analyzing, so you can use big data
systems like map-reduce to cook them down into the digest. In summary, an
end-to-end workflow for analyzing big data usually looks like this:</p>
<p><img src="d50cb834-e3ed-11e5-8657-8ba115cf7290.png" alt="data workflow"></p>
Portable Node.Js Installationhttp://kflu.github.io/2015/12/07/2015-12-07-Portable-NodeJs/2015-12-07T08:00:00.000Z2023-10-14T22:36:20.085Z
<p>I've been looking for a way to deploy Node.js apps onto machines that don't have the Node.js runtime installed, particularly Windows machines. I've looked into various tools that "package" the app, the runtime, and all its dependencies into a single executable; none of them worked properly. Luckily, it is much easier than I thought to set up a portable Node.js environment, i.e. entirely through file copies and without registry changes. <a href="https://github.com/nodejs/node-v0.x-archive/issues/3978" target="_blank" rel="noopener">This is the original discussion that inspired me</a>.</p>
<p>For the runtime, all you need is a single executable <code>node.exe</code>, or <code>node.lib</code> if this is to be linked to an application. The latest runtime build is available at http://nodejs.org/dist/latest. If you need <code>npm</code>, download it from http://nodejs.org/dist/npm/ and unzip it to where <code>node.exe</code> is. Now your portable node installation will look like:</p>
<pre><code>node
|- node.exe
|- npm.cmd
`- node_modules\
</code></pre>
<p>Now you can use node and npm from anywhere on the system:</p>
<pre><code><path_to_node>\npm init
<path_to_node>\npm install express
<path_to_node>\node <your_script.js>
</code></pre>
<p>Armed with the portability of <code>Node.Js</code> runtime, deploying apps to systems without runtime is really straightforward:</p>
<ol>
<li>Prepare a portable <code>Node.Js</code> runtime on the dev machine</li>
<li>Install all dependencies using <code>npm</code> locally</li>
<li>Develop the application in this portable runtime locally</li>
<li>Zip the portable runtime and deploy it to the target machine</li>
</ol>
Using Vagrant on Windowshttp://kflu.github.io/2015/11/18/2015-11-18-Using-Vagrant-On-Windows/2015-11-18T08:00:00.000Z2023-10-14T22:36:20.085Z
<p>Vagrant is a virtualization technology for creating development environments. It is based on virtual machine technology and can be used with multiple VM providers. Not surprisingly for a technology that rose out of the Linux ecosystem, even though it claims to be cross-platform, setting it up on Windows isn't easy. This article documents the steps to use Vagrant on Windows, the issues found, and the ways to address them.</p>
<p>Before starting, make sure Hyper-V is enabled on the machine. This can be done via the "Add/Remove features" dialog in Control Panel. Don't forget to enable the Hyper-V PowerShell tools - Vagrant needs them to work with the Hyper-V provider. Why use Hyper-V rather than VirtualBox, etc.? Hyper-V comes standard with all Windows versions after 8.1. It might be the most feature-rich and efficient virtual machine technology on Windows - it powers Windows Azure!</p>
<p><img src="f1cf17c2-8e12-11e5-8b8a-60d8a1061ccd.png" alt="feature hyper-v"></p>
<p>The <strong>getting started guide</strong> is <a href="https://docs.vagrantup.com/v2/getting-started/index.html" target="_blank" rel="noopener">here</a>. Use <code>vagrant init <boxname></code> to initialize one. But not all "boxes" support all providers. The default one in the getting started doc, <code>hashicorp/precise32</code>, for example, does not support Hyper-V. All the boxes can be explored <a href="https://atlas.hashicorp.com/boxes/search" target="_blank" rel="noopener">here</a>, and it doesn't take long to find that <a href="https://atlas.hashicorp.com/hashicorp/boxes/precise64" target="_blank" rel="noopener"><code>hashicorp/precise64</code></a> supports Hyper-V. Once initialized, use <code>vagrant up</code> to download and set up the virtual machine.</p>
<pre><code>vagrant init hashicorp/precise64
vagrant up --provider hyperv
</code></pre>
<p>Once done, use <code>vagrant ssh</code> to log in. It uses <code>ssh</code> under the hood, so make sure <code>ssh</code> is on <code>PATH</code>. If <code>git</code> is installed, it usually comes with a set of Unix commands including <code>ssh</code>. So to bring <code>ssh</code> onto <code>PATH</code> in PowerShell:</p>
<pre><code>PS> $env:path += ";C:\Program Files\Git\bin"
</code></pre>
<p>In <code>cmd.exe</code> use</p>
<pre><code>SET PATH=%PATH%;C:\Program Files\Git\bin
</code></pre>
<p>With that done, you can log in and (finally!) be greeted with the Linux command line:</p>
<pre><code>PS> vagrant ssh
Welcome to Ubuntu 12.04.4 LTS (GNU/Linux 3.11.0-15-generic x86_64)
* Documentation: https://help.ubuntu.com/
Last login: Thu Mar 6 09:02:28 2014
vagrant@precise64:~$
vagrant@precise64:~$ uname -a
Linux precise64 3.11.0-15-generic #25~precise1-Ubuntu SMP Thu Jan 30 17:39:31 UTC 2014 x86_64 x86_64 x86_64 GNU/Linux
</code></pre>
<h2>References</h2>
<ul>
<li><a href="https://docs.vagrantup.com/v2/getting-started/index.html" target="_blank" rel="noopener">Vagrant getting started</a></li>
<li><a href="https://atlas.hashicorp.com/boxes/search" target="_blank" rel="noopener">Explore Vagrant boxes</a></li>
</ul>
PsExec to salvage a remote PC that can't connect tohttp://kflu.github.io/2015/11/18/2015-11-18-PsExec-to-salvage-a-remote-pc-that-cant-connect-to/2015-11-18T08:00:00.000Z2023-10-14T22:36:20.085Z
<p>My work PC is a VM that is only accessible through remote desktop. Today, after rebooting it, it couldn't be connected to via remote desktop anymore. The remote desktop connection dialog flashed the usual "setting up connection" messages and quit silently. I couldn't connect through Hyper-V manager either. I guessed that some process was in a bad state and needed to be restarted. To do this, I'd need to log on remotely to kill the process. So I connected via <code>PsExec</code>:</p>
<pre><code>PsExec -h -u <domain\user> \\<remote_machine> cmd.exe
</code></pre>
<p><code>-h</code> is for elevated access. Here's what it looked like once logged on:</p>
<pre><code>PsExec v2.11 - Execute processes remotely
Copyright (C) 2001-2014 Mark Russinovich
Sysinternals - www.sysinternals.com
Password:
Microsoft Windows [Version 10.0.10240]
(c) 2015 Microsoft Corporation. All rights reserved.
C:\WINDOWS\system32>
</code></pre>
<p>I killed the <a href="https://msdn.microsoft.com/en-us/library/windows/desktop/aa969540(v=vs.85).aspx" target="_blank" rel="noopener">desktop window manager</a> with:</p>
<pre><code>taskkill /im dwm.exe
</code></pre>
<p>The <code>PsExec</code> connection was lost (I guess <code>dwm.exe</code> is too important to be killed). And after a while, remote desktop worked again. I guess <code>dwm.exe</code> was restarted automatically.</p>
How to build SQLite on Windowshttp://kflu.github.io/2015/11/04/2015-11-04-How-to-build-sqlite-on-Windows/2015-11-04T08:00:00.000Z2023-10-14T22:36:20.085Z
<p>The official compiling document is <a href="https://www.sqlite.org/howtocompile.html" target="_blank" rel="noopener">here</a>. You'll need:</p>
<ul>
<li>MSVC compiler (<code>cl.exe</code>) and <code>nmake</code></li>
<li>TCL 8.5 (make sure <code>tclsh85</code> is on PATH)</li>
<li>Some build utilities that are common on Linux, like <code>gawk</code>. On Windows use
<a href="https://github.com/bmatzelle/gow" target="_blank" rel="noopener">Gow</a>. Add gow/bin to <code>PATH</code>.</li>
</ul>
<p>As a personal preference, I do not wish to install programs to pollute <code>PATH</code>.
So before compiling I need to make sure the necessary tools are on the <code>PATH</code>:</p>
<pre><code>set PATH=%PATH%;c:\gow\bin;c:\tcl\bin
</code></pre>
<p>SQLite source comes in two flavors. The simplest to compile is the
"<a href="https://www.sqlite.org/amalgamation.html" target="_blank" rel="noopener">amalgamation</a>" source, which is a preprocessed huge <code>sqlite3.c</code>.
To compile this file, simply do:</p>
<pre><code>cl shell.c sqlite3.c
</code></pre>
<p>It builds the interactive shell <code>shell.exe</code>. This approach doesn't require
<code>tcl</code> or <code>gow</code>.</p>
<p>The other flavor is the raw source, which contains 1000+ files. To build it, you
first build the amalgamation file, then follow the steps for the amalgamation
source.</p>
<p>Don't use any source from a git mirror like <a href="https://github.com/mackyle/sqlite" target="_blank" rel="noopener">this one</a>. When
not properly mirrored, the source lacks <code>manifest.uuid</code>, which is
critical to compilation. So make sure to use the official repository, or just
download the source from the official website.</p>
<pre><code>nmake /f Makefile.msc sqlite3.c
cl shell.c sqlite3.c
</code></pre>
<p>Compiling the shell requires some generated headers like <code>parse.h</code>. If there is an
error during <code>nmake</code>, make sure <code>parse.h</code> was correctly generated and is
non-empty. Otherwise run</p>
<pre><code>lemon.exe parse.y
</code></pre>
<p>to re-generate <code>parse.h</code>. Note that <code>lemon.exe</code> is itself built from <code>lemon.c</code>
during the build process.</p>
<p>It is also possible to build <code>sqlite3.dll</code> to be linked by applications:</p>
<pre><code>nmake /f Makefile.msc
</code></pre>
Enumerable and Disposablehttp://kflu.github.io/2015/10/02/2015-10-02-Enumerable-and-Disposable/2015-10-02T07:00:00.000Z2023-10-14T22:36:20.085Z
<p>Enumerables are deferred executions. This can be problematic when used with disposables, as the latter tend to be disposed prematurely. The example below shows the difference. <code>ildasm</code> shows that the compiler generates a class implementing <code>IEnumerable<int></code> for <code>Foo2</code> behind the scenes and returns an instance of it. Because of that, the <code>using</code> is embedded into that instance, so the construction and disposal of <code>Disposable</code> are carried out by the deferred execution. By contrast, <code>Foo</code> just returns the source enumerable as a pass-through; by the time it is enumerated, the disposal has already happened. This causes a problem: if the returned enumerable depends on the disposable, then at the time the enumerable is enumerated, the disposable is already in the disposed state.</p>
<figure class="highlight csharp"><table><tr><td class="gutter"><pre><span class="line">1</span><br><span class="line">2</span><br><span class="line">3</span><br><span class="line">4</span><br><span class="line">5</span><br><span class="line">6</span><br><span class="line">7</span><br><span class="line">8</span><br><span class="line">9</span><br><span class="line">10</span><br><span class="line">11</span><br><span class="line">12</span><br><span class="line">13</span><br><span class="line">14</span><br><span class="line">15</span><br><span class="line">16</span><br><span class="line">17</span><br><span class="line">18</span><br><span class="line">19</span><br><span class="line">20</span><br><span class="line">21</span><br><span class="line">22</span><br><span class="line">23</span><br><span class="line">24</span><br><span class="line">25</span><br><span class="line">26</span><br><span class="line">27</span><br><span class="line">28</span><br><span class="line">29</span><br><span class="line">30</span><br><span class="line">31</span><br><span class="line">32</span><br><span class="line">33</span><br><span class="line">34</span><br><span class="line">35</span><br><span class="line">36</span><br><span class="line">37</span><br><span class="line">38</span><br><span class="line">39</span><br><span class="line">40</span><br><span class="line">41</span><br><span class="line">42</span><br><span class="line">43</span><br><span class="line">44</span><br></pre></td><td class="code"><pre><span class="line"><span class="function"><span class="keyword">static</span> <span class="keyword">void</span> <span class="title">Main</span>(<span class="params"><span class="keyword">string</span>[] args</span>)</span></span><br><span class="line"><span class="function"></span>{</span><br><span class="line"> <span class="keyword">int</span>[] source = <span class="keyword">new</span>[] { <span class="number">1</span>, <span class="number">2</span>, <span class="number">3</span> 
};</span><br><span class="line"> Console.WriteLine(<span class="string">"Using Foo..."</span>);</span><br><span class="line"></span><br><span class="line"> <span class="comment">// Outputs:</span></span><br><span class="line"> <span class="comment">// Disposed</span></span><br><span class="line"> <span class="comment">// 1</span></span><br><span class="line"> <span class="comment">// 2</span></span><br><span class="line"> <span class="comment">// 3</span></span><br><span class="line"> <span class="keyword">foreach</span> (<span class="function"><span class="keyword">var</span> item <span class="keyword">in</span> <span class="title">Foo</span>(<span class="params">source</span>))</span></span><br><span class="line"><span class="function"> Console.<span class="title">WriteLine</span>(<span class="params">item</span>)</span>;</span><br><span class="line"></span><br><span class="line"> Console.WriteLine(<span class="string">"Using Foo2..."</span>);</span><br><span class="line"></span><br><span class="line"> <span class="comment">// Outputs:</span></span><br><span class="line"> <span class="comment">// 1</span></span><br><span class="line"> <span class="comment">// 2</span></span><br><span class="line"> <span class="comment">// 3</span></span><br><span class="line"> <span class="comment">// Disposed</span></span><br><span class="line"> <span class="keyword">foreach</span> (<span class="function"><span class="keyword">var</span> item <span class="keyword">in</span> <span class="title">Foo2</span>(<span class="params">source</span>))</span></span><br><span class="line"><span class="function"> Console.<span class="title">WriteLine</span>(<span class="params">item</span>)</span>;</span><br><span class="line">}</span><br><span class="line"></span><br><span class="line"><span class="function"><span class="keyword">static</span> IEnumerable<<span class="keyword">int</span>> <span class="title">Foo</span>(<span class="params">IEnumerable<<span class="keyword">int</span>> 
source</span>)</span></span><br><span class="line"><span class="function"></span>{</span><br><span class="line"> <span class="keyword">using</span> (<span class="keyword">var</span> disposable = <span class="keyword">new</span> Disposable())</span><br><span class="line"> <span class="keyword">return</span> source;</span><br><span class="line">}</span><br><span class="line"></span><br><span class="line"><span class="function"><span class="keyword">static</span> IEnumerable<<span class="keyword">int</span>> <span class="title">Foo2</span>(<span class="params">IEnumerable<<span class="keyword">int</span>> source</span>)</span></span><br><span class="line"><span class="function"></span>{</span><br><span class="line"> <span class="keyword">using</span> (<span class="keyword">var</span> disposable = <span class="keyword">new</span> Disposable())</span><br><span class="line"> <span class="keyword">foreach</span> (<span class="keyword">var</span> item <span class="keyword">in</span> source)</span><br><span class="line"> <span class="keyword">yield</span> <span class="keyword">return</span> item;</span><br><span class="line">}</span><br><span class="line"></span><br><span class="line"><span class="keyword">class</span> <span class="title">Disposable</span> : <span class="title">IDisposable</span></span><br><span class="line">{</span><br><span class="line"> <span class="function"><span class="keyword">public</span> <span class="keyword">void</span> <span class="title">Dispose</span>(<span class="params"></span>)</span></span><br><span class="line"><span class="function"></span> {</span><br><span class="line"> Console.WriteLine(<span class="string">"Disposed"</span>);</span><br><span class="line"> }</span><br><span class="line">}</span><br></pre></td></tr></table></figure>