To make maintaining true-to-the-default demos easier for developers, we present Theme-Space - a GitHub-based repository for Octopress theme previews.
Would it not be great if all Octopress themes had reliable true-to-the-default previews for users to peruse at their convenience? Would it not be great if making previews for themes required less manual updating? Would the second not facilitate the first?
Theme-Space thinks it would! Over on GitHub, TheChymera and Yous have set up a small organization, called Theme-Space, which aims to address much of the above.
A succinct how-to for publishing theme previews is available in the README file of the Theme-Space Octopress fork’s themespace branch.
This branch simply contains the standard Octopress core, the Theme-Space update scripts, and the aforementioned modified README, which helps developers set up their previews.
The theme previews are hosted on the gh-pages branches of their respective Theme-Space theme forks.
It is important to note here that the workflow does not require Theme-Space theme forks to be up-to-date, but rather just pulls the newest versions directly from upstream.
The previews are all available via themespace.github.io/THEME-NAME/. For a theme to have a themespace.github.io/THEME-NAME/ web address to publish to, a fork of the respective theme must first be created on the Theme-Space organization account. To ease the burden of the aforementioned caveats, Theme-Space tries to be as open (and open-handed with organization membership) as reasonably possible. In principle, any Octopress theme developer is welcome to join, and will be given all required permissions on a moment’s notice.
To make it even easier for theme developers to benefit from its scripts and concept, Theme-Space plans to become more distributed by designing its scripts to easily and safely be used by developers on their own repositories.
We assume that this incorrect appraisal is based on a series of misconceptions about what science news actually constitute, and we present a brief theoretical explanation of how one should best contextualize science journalism.
There are, of course, rare and commendable exceptions to the trends detailed below. Some scientists do in fact dedicate part of their time to personally report in an easily-understandable language on their own and their colleagues’ work. Also, some former science students or autodidacts may actually read, link to, and understand primary reports when writing articles. Lastly, a few top-tier scientific journals and some scientific communities do manage to maintain well-curated news sections (e.g. Nature News or physicsfocus, respectively).
Sadly, these are not the science news you routinely come across.
Contrary to many a layman’s impression, pop-science is seldom written by actual scientists or by people somehow affiliated to research. Admittedly, a very small number of science journalists may try to leverage extra credibility with some quasi-relevant degree obtained at some point in their lives, but alas, a BSc. does not a scientist make - and certainly not one who can report on topics across the board. It is entirely possible, if not common, to coast through Master’s or Bachelor’s courses without ever being up-to-date with primary scientific literature - and certainly without acquiring the skills needed to critically report on it.
To put science journalists’ personal credibility into perspective: as a rule of thumb, there is little reason to assume they are any more educated on the subject matter than the average college graduate. Consequently, their writing should never be taken at face value in the absence of corroborating references.
The chief claim of pop-science authors is that they want to educate the public on the wonders of science. To be sure, there are many science educators out there: working in schools, universities, and even on the internet (teaching basic principles of science on YouTube, or answering questions on StackExchange). What these science educators have in common is that they package information in a teaching-friendly format: starting with their target demographic’s basics and working up from there. Journalism, typically, does not fit this format. Knowingly or otherwise, science journalists undermine science education, and instead mangle research reporting to fit a very different set of requirements.
A more veracious motivational statement is in fact hidden in the claim mentioned above - “the wonders of science”. That is what many science journalists actually want to sell. In doing so, they reformat (or re-imagine) scientific discoveries according to a formula more closely resembling sci-fi than science education: high entertainment value, no prerequisites. Current research, however, seldom makes fitting material for bombastic headlines (aptly satirised in “Questions to which the Answer is No”); and sacrificing insight to better deliver awe-inspiring reports leaves readers oblivious to crucial limitations of primary/secondary research, research tools, and the scientific method. This incites both inappropriate expectations, and wild speculation as to why they are not being met - ranging from conspiracy theories to distrust in scientists’ capabilities, but somehow omitting distrust for the actual reporter.
Other science journalists may choose to also (or rather) market the credibility of science. Opinion pieces can easily become more authoritative if they are disguised as science news; and people are easier to rally to a cause if they can be convinced it is a logical response to objective evidence. This motivation is most clearly seen with politically controversial issues, such as global warming or GMO-related risks. The strong agreement of the scientific community on both these issues (plainly: global warming is happening, and GMOs are not dangerous - also see sources here for the latter) is obscured to consumers of pop-science, precisely because many science journalists use selectively reported research to license a political agenda.
Unsurprisingly, science news originate with scientists. Members of the scientific community publish new findings in specialized periodicals called scientific journals. The reasons why the public does not directly read these journals are twofold: the articles are often laden with difficult jargon, and most of these articles are in fact inaccessible to the public in the first place (the latter of these two phenomena being duly combated by the open access movement).
One of the many detrimental effects of such closed-circuit publishing is that both journalists and the broad public are certain to not be able to read the actual research. Thus forms an abstruse system that both precludes journalistic competence and transparency and hinders public review. In practice this translates into considerable license for pop-science journalists.
While cases where journalists completely fabricate a scientific finding are not unheard of, most often they base their reporting on a variety of second-hand sources of information.
So, while you might believe that science journalists translate dense information from primary sources while keeping things accurate for your convenience; the science news you actually read are but superficially (and perhaps repeatedly) rehashed summaries of reports simplified elsewhere by other people. Such repeated simplification by people other than the article’s author leaves both him and his reader in the dark on why and what information was left out. This in turn makes both author and reader soundly unable to draw any pertinent conclusions in the article, or based on the article.
It would be folly to propose that the sorry state of popular science arises solely from journalists selling sub-par content to people who want something else entirely. While it is debatable whether demand gave rise to supply - or vice-versa - the reasons to read pop-science fall remarkably well into the pattern given by the reasons to write it.
Readers do not seek out pop-science to increase their understanding; they seek it for the thrill of awe-inspiring factoids or for moral guidance (or worse, support) on heated issues. So perhaps, in the appraisal of science news, being critical of one’s own motivation to read is just as crucial as being critical of the journalist’s personal credibility and sources.
Readers who seek entertainment should bear in mind that, more often than not, that is exactly what pop-science offers them: brief entertainment - which is best not taken to heart as veritable fact after the reading is done.
As a reader in search of moral guidance on heated issues, it is best to understand that research is a process much different from debate. As hypotheses go, making sense is a very cheap quality, and rhetorical accomplishment counts for little in scientific discourse. Plainly put, if you want to cut a debate short by falling back on science: a breathtakingly well-written article is worth naught without supporting primary references or meta-analyses that the reader can verify.
Lastly, readers seeking to be informed on the developments of science might best look to obtain a clear picture of the respective research field by other means - e.g. via academic review articles and tertiary sources, including Wikipedia. Only with a clear picture of current research in mind can one understand and interpret others’ unclear writing (at which point that writing becomes less informative regarding the research, but more informative with regard to the author or publisher).
This is a consequence of the infrastructure requirements that static site generation places on a system (in the case of Octopress these being at least Ruby and a number of Ruby gems - possibly also Git), and the fact that most mobile platforms cannot yet meet them. The different paradigm of static site publishing thus also mandates a different approach to remote blogging. Here we present a 2-element (sync & inotify-triggered scripts) automatic solution for remote Octopress blogging, and a short section on remote content authoring (sans publication) via GitHub.
Sync (short for synchronization) means keeping your directories and files consistent over multiple machines, with updates spreading from the machine they were authored on to all others. There are many software solutions for syncing (commonly referred to as sync clients), with some of the more popular being: Dropbox, Google Drive, BitTorrent Sync, and Syncthing (the latter two being non-cloud-based, and the latter also being open source).
To set up syncing for static site generation, install a sync client of your choice on all your mobile devices and on one (just one - in order to avoid update conflicts) machine that is nigh-continuously online. This latter machine can either be a server or simply your desktop computer (if you never turn it off). To follow this guide’s inotify-specific sections you should also make sure that the respective server or desktop machine runs a Linux distribution.
Once your sync network is ready, simply enable syncing for your Octopress blog root - or just for your /source or /source/_posts directory.
In spite of its shortcomings (limited storage, closed-source, cloud-based) we currently recommend Dropbox over all other alternatives for remote static site generator blogging. This is chiefly due to Dropbox being supported by all major platforms (Windows, OSX, Linux, Android, and iOS), and being well integrated with text editors on operating systems which do not commonly allow apps to access the same files (e.g. iOS).
In Dropbox you cannot simply select which directories you want synced; rather, you have a single dedicated ~/Dropbox/ directory where you have to place all your content.
You will thus need to move your blog folder from the location you may have become accustomed to.
Still, this inconvenience can be mitigated by creating a symlink from your previously used path:
ln -s ~/Dropbox/blog ~/blog

(This assumes your blog directory now resides at ~/Dropbox/blog and was previously at ~/blog - adjust the paths to your setup.)
In short, inotify allows the Linux kernel to detect changes in the filesystem. Once you have successfully set up syncing you can use this to conveniently trigger site rebuild and deployment whenever a specific event happens in your synced blog directory. Our (and advisably your) tool of choice for this task is incron.
You can configure incron via incron tables (text files that tell it where to look for what events and what to do upon the specified events occurring).
These tables contain single-row instructions formatted as <path> <mask> <command>; where <path> is watched, <mask> specifies what events to look for, and <command> specifies the command to run upon occurrence of the aforementioned events. You can get a list of the incrontab mask tags (as well as a more in-depth explanation of incron tables) from the relevant man page - just run man 5 incrontab.
Additionally, incrontab provides dollar sign wildcards which you can pass as arguments to your command of choice (read more on this in the aforementioned man page).
Two things worth an explicit mention here, however, are:
Incrontab commands are not fully shell-compatible, meaning that you will not be able to reliably use - among other things - the && or ; operators.
This is especially relevant to our purposes, since you will want to change directory to your blog root, and then generate and deploy your blog.
The issue can be circumvented by calling a script, which you can either write yourself or clone from our remote-octopress-incron utilities repo on GitHub.
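Such a script only needs to change into the blog root and run the generation and deployment tasks. A minimal sketch (the ~/blog path and the script name are assumptions - adjust them to your setup):

```shell
#!/bin/sh
# deploy.sh - regenerate and deploy an Octopress blog after a sync event.
# Assumes an Octopress install at ~/blog with the standard rake tasks.
cd "$HOME/blog" || exit 1
rake generate && rake deploy
```

Remember to make the script executable (chmod +x deploy.sh) and to reference it by absolute path in your incron table.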
Incrontab does not recursively monitor your directories - meaning that you will have to directly specify your ../source/_posts/ path and manually add multiple entries for all the pages (i.e. ../source/<page-title> directories).
A sample incron table for the task at hand (just monitoring your posts) would look somewhat like this:
/home/user/blog/source/_posts IN_CLOSE_WRITE /home/user/deploy.sh

(The paths are examples - point the command at your own generate-and-deploy script.)
It is however advisable to check what inotify events your specific sync agent triggers (since these triggers may vary depending on how the sync client works - as seen in the Dropbox sub-section below).
If you decide to use Dropbox due to it being the most portable choice, there will be one additional quirk to account for.
Dropbox currently does not open and modify your files in place; instead, it moves them to a temporary directory, modifies them there, and moves them back (as tested by us, and also documented here).
It thus becomes mandatory to check for IN_MOVED_TO events rather than IN_CLOSE_WRITE:

/home/user/blog/source/_posts IN_MOVED_TO /home/user/deploy.sh

(The paths are examples - point the command at your own generate-and-deploy script.)
There are a number of published GitHub-based remote Octopress blogging solutions (the most noteworthy documented here, by Holger Frohloff). Such solutions - though feasible - generally require a bit more trial-and-error scripting and often assume you use not only GitHub, but also GitHub Pages.
For GitHub users, however, we recommend the increasingly powerful web interface for content creation and editing. Though it will not publish to your blog without additional scripting, it makes it easy to edit (and commit) your content directly via GitHub from any HTML5-compatible mobile platform.
The most basic steps needed to stack bracketed shots are alignment and fusion.
More advanced features (which are arguably better implemented further downstream in the image processing workflow) are also available.
The Hugin software package may in itself be geared towards graphical interface use, yet it ships with a number of functions that can be used from the command line to automate the basic photo processing steps:
align_image_stack - as the name says, this aligns the images
enfuse - this fuses the images

Hugin, sadly, cannot load RAW files, and therefore (in order to keep the 16-bit dynamic range) we need to convert the files to TIFF.
This is best done in batch with the ufraw-batch command from UFRaw.
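For reference, the three basic steps can be chained on the command line roughly as follows (the file names are placeholders, and the exact flags may need adjusting to your camera’s RAW format):

```shell
# RAW -> 16-bit TIFF conversion (UFRaw):
ufraw-batch --out-type=tiff --out-depth=16 IMG_*.NEF
# Align the converted images (Hugin), writing aligned_0000.tif etc.:
align_image_stack -a aligned_ IMG_*.tif
# Fuse the aligned images into a single output (enfuse):
enfuse -o fused.tif aligned_*.tif
```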
The stackHDR script (which is available via the aforementioned GitHub link) is based on earlier efforts by Edu Perez, which have also led to at least one other offshoot. Sadly, however, both these solutions are unmaintained at least as of 2011.
stackHDR brings a number of improvements over these earlier efforts.
Notable omissions of our script include:
.hdr format output

While we try to accommodate rule variations in the following instructions, please be aware that they are meant specifically for 2-player (1v1) games (optimal gameplay is very different for multiplayer games). We follow the most popular battle mechanism, where the attacker rolls up to 3 and the defender up to 2 dice (with the defender winning ties). Our notation for attack configurations is AvD, with A being the number of attackers (troops on the attacking territory minus one) and D being the number of defenders (total troops on the defending territory).
In Risk, single battle outcomes are decided by random events (roll of the dice) best thought of in terms of probability. While as much seems obvious, players should make sure they truly understand the odds involved.
Total probabilities for taking over a territory - based on the total number of attacking and defending troops - can be determined via a Markov chain (as for instance in this paper) or via a Monte Carlo simulation. That information, however, is mostly irrelevant in a 1v1 game, since you should never auto-attack. With every attack the state of the game changes, and you should always re-evaluate your next move in light of the new state.
Whilst sounding complicated, this actually makes the mathematics of your gameplay easier, as there are a very limited number of scenarios for every attack (6 scenarios in total - given the aforementioned battle cap of 2 defenders and 3 attackers). Based on that knowledge you can calculate the odds for each scenario and determine when it is advantageous to attack. This article demonstratively calculates the odds for all single-attack configurations and presents a summary table.
The single-battle odds table shows that the optimal attack configurations are those with the maximum number (3) of attacking units. The compound odds table allows us to compare the victory odds for alternative troop placements. Based on this information, you should:
Unless otherwise strategically mandated, reinforcing territories that contain only 1 or 2 troops is a poor move - as you will thus deprive yourself of using 1 or even 2 of your troops in an optimal attack configuration.
This might indeed sound counterintuitive, but based on data from the aforementioned compound odds table, two attack scenarios starting with 3v3 each are more advantageous than one attack scenario starting with 4v3. Keep in mind that as per our notation e.g. 3v3 means 4 troops on the attacking territory and 3 on the defending territory. Regarding the probability notation used in such demonstrative calculations, refer to this page.
Also, keep in mind that the added benefit of splitting troops instead of placing them on the same territory decreases as the number of placeable troops goes up.
Since you will want to attack in all cases where you have 3 available attackers (see next section), place your troops in a way that allows you to make optimal use of favorable outcomes - as for instance by allowing you to attack a further territory if you have enough troops left after conquering your initial target. Probability tells us that you can and will get “lucky” - it is important to put yourself in a position where you can make best use of good fortune.
The single-battle odds table shows that it is always advantageous to attack in 3v1, 2v1, and 3v2 configurations. It follows that you should:
Even if the adversary has more defenders than you have attackers, it is still beneficial to attack until your troop count in the attacking territory is close to 3. This is not necessarily in the hope of conquering his territory (though for troop counts exceeding 15v16, this might actually be the case - as shown here). It might seem like you are weakening your defense, but in fact your troops are more efficient at causing attrition in your opponent’s ranks if used for an attack rather than kept for “defense”.
The most favorable attack scenarios are 3v1 and 2v1. If you find yourself presented with such an attack opportunity, do not miss it! This is an excellent opportunity to erode your adversary’s total troop and territory count. An exception to this rule may be made for AR3:
Optimizing your attack opportunities is on par with containing the opponent’s opportunities. If for some reason your opponent has large troop stacks (more than 4) in a territory not adjacent to any of yours (e.g. to protect a bottleneck), avoid attacking the adjacent territories if you cannot reasonably expect to also neutralize the threat of the stack. By doing this you will deprive him of the use of a large number of troops come next round. Otherwise the next round will find you faced with an attack force you may not be able to defend yourself against.
As previously discussed, 3v1 and 2v1 attack configurations are very favorable to the attacker. So, in fact, is 3v2 - 3v2 battles, however, cannot be averted in the reinforcement phase (they can be averted in the attack phase, as seen in AR1). Key concepts for reinforcement include:
Troops left alone on a territory suffer high attrition during the adversary’s attacks - meaning that not only can you not hold the respective territory, you are also wasting the unit.
Bottlenecks (territories which control access into regions otherwise inaccessible) provide an excellent means to make sure as many of your troops as possible will be attacked in a 3v2 scenario - which is the most advantageous configuration you can have against an informed attacker. The advantages of reinforcing a bottleneck can outweigh RR1, though it is best to keep both these rules in mind during the attack phase - so that you will not be forced to choose.
Since you do not want to start the next round with unusable troops, it is advisable to place your troops where they will be able to attack from. This includes not only the enemy lines, but also territories next to other territories the enemy is likely to attack - for example a neutral territory which he needs to get a bonus.
Some regions, as well as a certain number (commonly 3) of controlled territories give troop bonuses. We shall be referring to these as regional and territorial bonuses respectively.
In a sense, a unit lost for the enemy is a unit gained for you. Breaking bonuses is the best way to weaken your enemy’s position, and you should let that high payoff come into play when deciding whether an attack is desirable. In general you should go beyond your usual attack configuration comfort zone when trying to break a bonus. However, breaking a bonus is not worth losing more troops than the value of the bonus.
You should definitely try to get bonuses, but always ask yourself whether you can afford them. Regional bonuses in particular introduce a potential (and obvious) weakness in your position. You should also only attack neutrals to get a bonus if you are sure you can hold it enough rounds to account for both the troops you lost attacking them and the troops the enemy did not lose while he was not being attacked.
Some Risk variations allow you to turn in cards (received for every taken turn where you conquered a territory) to get a fixed or incremental number of units. Cards can be of any of 3 different types (e.g. “colors”) or they can be “wildcard” (usable as any type) - and you need 3 cards of the same type or 1 of each type to turn them in.
This rationale also means that if you get the first turn in an incremental turn-in game, you can comfortably afford to fail to conquer a territory during one round. Obviously, you want as many troops as you can get. However, an exception should be made for cases where the damage you cause to the enemy by an immediate turn-in (e.g. breaking a 5-unit bonus) outweighs the number of additional units (e.g. 2) you would have received turning in later.
In fixed turn-in scenarios, there is no cost associated with turning in earlier; you do, however, still get the benefit of breaking territorial or regional bonuses. Additionally, you can take advantage of the adversary not (yet) having cards to turn in, and thus no means to undo the damage you have done.
There are already texts providing more wide-ranging whole-battle predictions (as for instance here), and closed-source (and in many cases, also inaccurate) battle simulators. Here we try to offer a transparent formulaic reference and odds table for all single-attack scenarios.
We use the following notation to represent the probability of the attacker obtaining a specific outcome given $a$ attacking units (ergo $\geq a+1$ troops on the attacking territory) and $d$ defending units:
Given a cap of 2 defenders and 3 attackers we can expect $2 \times 3$ attack scenarios, which we shall be listing according to the value of $(a,d)$:
Here calculations become easier since there is no possibility of a tie: $\Pr(T|(a,d)) = 0$. Further, we sum the probabilities of the attacker winning, contingent on the 6 possible and equally probable defender die outcomes:
Thus: $\Pr(V|(1,1)) = \sum_{n=1}^{6} \frac{1}{6} \cdot \frac{6-n}{6} = \frac{15}{36} \approx 41.67\%$, and $\Pr(D|(1,1)) = 1 - \Pr(V|(1,1)) \approx 58.33\%$.
As before, there is no possibility of a tie, and we account for the $\left(1-\frac{6-n}{6}\right)\frac{6-n}{6}$ probability that one die loses but the second one wins.
Thus: $\Pr(V|(2,1)) = \sum_{n=1}^{6} \frac{1}{6}\left[\frac{6-n}{6} + \left(1-\frac{6-n}{6}\right)\frac{6-n}{6}\right] = \frac{125}{216} \approx 57.87\%$.
Similarly we solve for $(a,d)=(3,1)$, where a tie is again impossible and there is an added probability that two dice lose but the third one wins.
Thus: $\Pr(V|(3,1)) = \sum_{n=1}^{6} \frac{1}{6}\left[1 - \left(\frac{n}{6}\right)^{3}\right] = \frac{855}{1296} \approx 65.97\%$.
Here we take one die result of the defender (of value $n$) as the minimum requirement for attacker victory and adjust the probability for the cases where the second defender die (value $m$) scores higher. Again, there can be no tie.
Thus: $\Pr(V|(1,2)) = \frac{55}{216} \approx 25.46\%$ and $\Pr(D|(1,2)) = \frac{161}{216} \approx 74.54\%$.
For more complex cases, determining the odds becomes a far less uniform probability problem. While we are still looking for a formulaic solution, we currently solve these cases via a (significantly slower) exhaustive lookup of all possible combinations.
Based on the above we have written a Python script (named Risky) that can be used to calculate the victory, tie, and defeat odds given $a$ attackers, $d$ defenders, and $s$ sides of the dice. For a more exhaustive documentation of how to use the script from the command line please consult its README document. The calculations are done preferentially based on the general formulae for the aforementioned cases:
For $a \in \mathbb{N}$ and $d=1$: $\Pr(V|(a,1)) = 1 - \frac{1}{s^{a+1}} \sum_{n=1}^{s} n^{a}$
For $a=1$ and $d=2$: $\Pr(V|(1,2)) = \frac{1}{s^{3}} \sum_{n=1}^{s} (n-1)^{2}$
In case no formula is defined for the specified number of attackers and defenders, the script defaults to a subroutine which populates an array with all the possible dice outcome combinations and looks up all combinations meeting the respective (victory, defeat, and tie) criteria. One should note that this method can also be used as a validation tool for formulaic calculations, but is significantly slower.
For ease of overview we have compiled a table with all the odds (victory, tie, defeat) of a single attack in the common Risk set-up (6-sided dice and a cap of 3 attackers and 2 defenders).
| a | 1 | 1 | 2 | 2 | 3 | 3 |
|---|---|---|---|---|---|---|
| d | 1 | 2 | 1 | 2 | 1 | 2 |
| V | 41.67% | 25.46% | 57.87% | 22.76% | 65.97% | 37.17% |
| T | 0% | 0% | 0% | 44.83% | 0% | 29.26% |
| D | 58.33% | 74.54% | 42.13% | 32.41% | 34.03% | 33.58% |
| A | 1.40 | 2.93 | 0.73 | 1.14 | 0.52 | 0.95 |
V stands for victory odds, T for tie odds, D for defeat odds, a for attacking units, and d for defending units. A stands for attrition and represents the number of units you can expect to lose for one unit lost by the defender - this value is given by $\frac{D+T}{V+T}$. We use A as an indicator of attack configuration desirability from the point of view of the attacker (though attack desirability is also contingent on strategic context, which is not accounted for here). Undesirable attack configurations are highlighted in pink.
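The attrition row follows directly from the percentages above; a quick check (the values are taken from the table):

```python
def attrition(v, t, d):
    """Expected attacker units lost per defender unit lost: (D + T) / (V + T)."""
    return (d + t) / (v + t)

# Odds (in percent) for the 1v1, 1v2, and 3v2 columns of the table:
a_1v1 = attrition(41.67, 0.0, 58.33)    # ~1.40
a_1v2 = attrition(25.46, 0.0, 74.54)    # ~2.93
a_3v2 = attrition(37.17, 29.26, 33.58)  # ~0.95
```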
Here we present a model release form concept based on encryption technology which you can use to transfer rights for single photographs.
This document does not address every circumstance or possibility, and might not fit all of your needs exactly. Further, it is not intended as legal advice. Horea Christian and other authors of this work are not lawyers and take no responsibility for the use of this form. Your use of this Model Release in no way creates an attorney-client relationship between you and Horea Christian, or any other person or entity. We suggest that you contact a lawyer to discuss your circumstances, and verify that this Release fits your needs.
A versioned repository of our release form is published on GitHub, and the document is available for direct download here.
The document is written in the LaTeX markup language, and the text is based on a number of other model release forms published on the internet. We have tried to be as inclusive as possible in the document, and we welcome law-aware contributions with different versions for different countries. We also provide check-boxes to accommodate not only per-photo model release, but also the more generic all-photos model release.
For per-photo model release we include a form structure with which you may specify and accurately identify up to 10 individual photos. This works via MD5 checksums, sequences of 32 hexadecimal digits which uniquely identify your photos. While explaining checksum functions exceeds the scope of this article, the gist of the concept is that two photos with a difference of even only one pixel will still have different MD5 checksums.
On most Unix-like operating systems (OS X, Linux, BSD, etc.) checksums can easily be computed in the command line via:
md5sum photo.jpg

(On OS X the equivalent command is md5. The file name is, of course, a placeholder.)
Windows users may use a number of free online services instead - such as this.
After obtaining the checksum of the photos for which the photographer desires release rights, these 32-digit sequences should be transcribed to the document. Due to the sequence length we provide 2 entries per photo, in which both photographer and model can transcribe the code to ensure it is correct.
With the picture accurately identified, and assuming the agreement is legally valid, you secure the rights over both the original image and all derivative works. All you need to do is keep the original file on record and evidence that all resulting pictures are indeed processed from it. You can always recalculate the checksum and it will always be the same. This is pretty much it!
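These properties are easy to verify on the command line; in the sketch below the file names and contents are stand-ins for actual image data:

```shell
# Two files differing in a single byte get entirely different checksums:
printf 'pixel-data-A' > photo_original.jpg
printf 'pixel-data-B' > photo_edited.jpg
md5sum photo_original.jpg photo_edited.jpg

# Recalculating the checksum of an unchanged file always yields the same sum:
md5sum photo_original.jpg
```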
Tarballs (.tar archives) are a common medium for installing the Gentoo Linux operating system.
The standard Gentoo installation starts with a non-bootable “stage 3” tarball, which includes only very limited software.
As discussed in a previous article, on the Raspberry Pi - and other embedded systems - it is in certain respects better to start off with a bootable (and wifi-capable) tarball.
For these purposes we are publishing a stage 4 tarball with all the basic software you need on the Raspberry Pi - including the sys-kernel/linux-firmware package for broad wifi-device support and a Git repository for customizing and deploying the newest Raspberry Pi kernel sources from upstream. Though the archives total under 1 GB in size, we recommend you use at least an 8 GB SD card for use with your Raspberry Pi.
The structure is fairly straightforward: our tarball distribution consists of 2 archives, which can be downloaded from this directory. The boot.tar.bz2 archive contains the files needed for the boot partition (namely the /boot directory), whereas the system.tar.bz2 archive contains the files for the system partition (the rest of the root directory).
It is important to partition the drive so that the tarball system will recognise everything properly. For this you will need a first partition formatted as FAT (this will become the /boot partition - the only filesystem the Raspberry Pi firmware can boot from) and a third partition formatted with a Linux filesystem such as ext4 for the system files (the second partition is commonly used for swap).

You may partition the drive with gparted, parted, or whatever else you are most comfortable with.
To extract the tarballs simply mount your first and third partitions (boot and system respectively) on a functioning system; navigate to them and extract the archives with the tar -xvjpf command. As an example (the device names and mount points are illustrative - they will vary with your card reader):

mount /dev/mmcblk0p1 /mnt/boot
tar -xvjpf boot.tar.bz2 -C /mnt/boot
mount /dev/mmcblk0p3 /mnt/system
tar -xvjpf system.tar.bz2 -C /mnt/system
We recommend you perform the above operations as root. Unmount, plug the SD card into your Raspberry Pi, and you are good to go!
As per our minimal tarball, your root user is passwordless. You are well advised to set a password immediately after your first log in (you can log in by simply entering the user root and pressing enter).
To set your root password, simply run the passwd command and enter the new password when prompted.
You should also add a new user for yourself and set a password for that user as well. To do this, run useradd -m -G users,wheel yourusername (the group choices are exemplary), followed by passwd yourusername.
Our tarball comes with a basic Connman installation, so that you can easily access your network whilst keeping system resource use at a minimum. For a more thorough explanation of how to use Connman we kindly refer you to the connman page on the Arch wiki.
We have added Connman to your default runlevel, and after you connect to your network once, everything should work automatically the next time you boot up - presuming the network is still reachable.
We have set up a repository which you can use to download the newest kernel sources.
The repository is located under /usr/src/linux-9999-rpi
and pulls the files from git://github.com/raspberrypi/linux.git
.
You can control which kernel your /usr/src/linux
symlink points to via eselect, though you should bear in mind that navigating to the Git repository via the symlink directory prevents you from using Git.
To update your kernel, enter the /usr/src/linux-9999-rpi repository, pull the newest sources with git pull, and then configure, build, and install the kernel and its modules as usual.
To instruct your Raspberry Pi boot loader to use the new kernel, edit your /boot/config.txt file; its first entry should point the kernel option at your new kernel image.
For ease of overview, here is a paste of our @world file, which specifies all the packages we have explicitly installed.
To speed up your Raspberry, we have enabled a few optimizations (medium overclocking and reduced GPU memory) in the /boot/config.txt
file.
For more information on these or other potential optimizations please consult the relevant section on the Gentoo wiki.
Installing Gentoo on a dedicated platform makes it easy for the user to strip down his system to suit his needs precisely - e.g. in the guise of an ultra-minimalist installation. Here we provide an overview of current means of installing Gentoo on your Raspberry Pi.
This is the standard “quick” way of installing Gentoo on the Raspberry Pi. It is widely recommended and features its own Gentoo Wiki guide, mainly because it is consistent with common ways of deploying Gentoo and more-or-less conveniently circumvents the need for cross-compiling. Sadly, this approach has a number of drawbacks, mainly revolving around the fact that it necessitates compiling everything with the limited resources of the Raspberry Pi.
Strengths:
Weaknesses:
This installation method allows you to offload all compilation work for the Raspberry Pi to the machine from which you are installing. As your desktop machine will have a different architecture than the Raspberry Pi, this requires cross-compiling. There are a number of Gentoo tools which help you set up cross-compilation toolchains - and some of them are covered specifically for our use case by unofficial blogs.
A few guides for a Raspberry Pi cross-compile installation with: crossdev, static QEMU, QEMU. Please note that static QEMU will not work if your system uses systemd (last checked May 2014); and crossdev-toolchains are known for being prone to compilation errors.
Strengths:
Weaknesses:
Stage 4 tarballs are bootable, fully working Gentoo systems. Installing them is as easy as extracting the archive to a partitioned disk, and as they ship with more software than stage 3 tarballs, they generally reduce the time needed for subsequent compilation on the Raspberry Pi. There are no official stage 4 tarballs for the Raspberry Pi; this owes to the fact that in setting up a stage 4 tarball, the developer invariably makes a few choices for the end-user. These changes are easily undone, but this is not considered the Gentoo way of doing things.
We have a detailed guide for this simple Gentoo installation, and the stage 4 tarballs are available for download, here (one archive for the boot partition, one for the system partition). We strive to update the tarball once every 3 months, and we last updated May 2014. There are a few other Gentoo stage 4 tarballs for the Raspberry Pi available for download, e.g. one by intelminer (tarball unmaintained as of July 2012).
Strengths:
Weaknesses:
NOOBS is the standard operating system install manager for the Raspberry Pi.
It provides you with a series of installation options, and while Gentoo is not supported by default, you can dd a Gentoo image to the NOOBS OS folder (/os/Gentoo
).
Note that while the linked image is updated daily, the dd image is unmaintained as of May 2013.
In principle, however, this is the same as doing a stage 4 installation (see above), with the added overhead of the NOOBS install manager. Overall we do not recommend this.
]]>Stage 4 tarballs are very well suited for system backups or use cases where chrooting and emerging your basic system requirements can become very tedious. Situations in which stage 3 installation is difficult include:
Making a stage 4 tarball - while in principle as simple as tar
-ing a Gentoo system - requires you to remember a long list of directories to exclude and a number of tar
options.
The Gentoo Wiki used to have a guide for this process, which has since been removed from the official webpage (but is still archived on a mirror).
Especially for users who are looking to stage 4 tarballs for backup solutions, this process is excruciatingly tedious and repetitive, and invites slips of the pen - so to speak.
Not surprisingly, a bash script to keep all the details in place has been around since at least 2005: “mkstage4”. This script became unmaintained in 2009, but was later re-edited - though this edition also became unmaintained by 2012.
Here we present a new, maintained version of mkstage4, with broader functionality, and a more flexible command line interface.
The script is hosted on GitHub, and provides a basic installation and usage guide on its README page.
It can be installed system-wide (via Portage) or called as root from the directory it is located in.
Here we will assume you are running it from the parent directory, and thus use the ./mkstage4.sh
command.
You can create a stage 4 tarball of the current system - by specifying the -s
(system) flag - under the name archive_name.tar.bz2
, for instance by running ./mkstage4.sh -s archive_name
If you would like to use mkstage4 to create a tarball of another mounted system, you can point it to the respective mount point with the -t
(target) flag, e.g. ./mkstage4.sh -t /mnt/custom archive_name
The folders which are excluded by the script can be seen in the EXCLUDES
variable in the script file.
Note that the exclude list is adaptive: the script automatically excludes the archive itself when it is used for a current-system backup, and it can optionally exclude further folders (e.g. the /boot
folder, with the -b
flag).
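At its heart the script automates a tar call with an exclude list; here is a minimal sketch of that mechanism (the exclude patterns are illustrative, not the script's actual EXCLUDES):

```shell
root=$(mktemp -d)                        # stand-in for the system root
mkdir -p "$root/etc" "$root/proc"
echo "keep" > "$root/etc/keep.conf"
echo "skip" > "$root/proc/runtime"       # pseudo-filesystems must not be archived
tar -cjpf "$root/stage4.tar.bz2" \
    --exclude=./proc \
    --exclude=./stage4.tar.bz2 \
    -C "$root" .                         # the archive excludes itself, as mkstage4 does
tar -tjf "$root/stage4.tar.bz2"          # list the archive contents
```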
Using the results of mkstage4 is equally simple. Broadly speaking you only need to partition and mount your disk, and extract the archives.
Partitioning is simple and can be done via a graphical interface with GParted; but if you have never done this before you might want to consult the respective in-depth walk-through from the Gentoo handbook. It is important to note that the disks should be partitioned similarly to those of the system from which you created the stage 4 tarballs.
The archives should be extracted with tar xvjpf
.
Assuming that you have all of your system in one partition, extraction is as simple as mounting that partition and running tar xvjpf on the stage 4 archive from the mount point.
If you use the more sensible approach and keep your /boot
partition separate, you would be dealing with two archives; so, in addition to the former, you would mount the boot partition and extract the boot archive into it in the same fashion.
With so much talk about FOS going on - and even spilling over into high-profile editorials about publication review - it is distressing how little FOS is actually taking place. Talk about the next big thing is indeed picking up pace, but we would rather show you how to become a part of it. And what better place to start your voyage into both academia and the FOS world than your thesis or dissertation?
To aid in your comprehension of the following instructions we would like to point you to the following Master’s thesis: “Neuronal Correlates of Occulometric Parameters in Face Recognition”. This thesis is written and programmed in accordance with many FOS concepts; and we will be discussing excerpts from it to showcase individual features of FOS.
Those of you already accustomed to Git and LaTeX are also very welcome to just fork the said GitHub thesis repository and prune the code so that it can compile locally on your machine. We’re happy to guide you along, but hacking is a great way to learn too!
To set up a thesis similar to our example you need:
If you are just now setting out into the world of FOS, we recommend publishing your thesis (or any other open research projects) via GitHub. Additionally - and for the thesis format specifically - we encourage you to use the LaTeX document preparation system.
GitHub (register here) is a social coding website which helps you make your work openly available on the web, and which allows others to easily contribute. GitHub uses the popular Git revision control system - meaning that on GitHub you will also benefit from all the features of Git, of which the following are of particular relevance to FOS:
After installing Git (available in Portage as dev-vcs/git), you can clone our example thesis and all its history to your local machine by running git clone followed by the repository's GitHub URL.
LaTeX helps you create reproducible and high-quality typeset documents. It is the standard markup language for most high-profile publishers and a great number of scientific journals.
One drawback of LaTeX in the context of FOS is that it is geared towards static, for-print output formats (as for instance .pdf
) and integrates less well with web-based publishing than Markdown or IPython would.
However, as typography is of greater concern to your thesis than portability, we would still recommend LaTeX over other alternatives here.
Analyzing data on the fly removes the uncertainty and opacity inherent in off-line data analysis. Additionally, live data analysis saves you a great deal of time in the long run: whenever you extend your data with additional samples, or whenever you update your processing scripts, you will no longer need to re-export any graphics. All the new data and processing will pipe through your document without any expenditure on your part.
PythonTeX is a library that provides integration between programming and markup languages. Its strongest suit (as the name suggests) is Python integration in LaTeX.
PythonTeX can be used to produce figures or text (including tables) from Python functions and place the results into your .tex
document.
A powerful feature of PythonTeX figure piping is that it can use the .pgf
graphics format which natively draws the figure in the LaTeX environment - meaning that text elements in your figure can be selected and can contain clickable links in the final document.
To call functions from the .tex
source of your document, you need to place them in an appropriate environment (the following excerpt is sourced from the dedicated environment listing file of our example thesis):
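A minimal sketch of what such a PythonTeX environment looks like (the session name follows the pe_ss1 ID mentioned in the text; the function and variable names are hypothetical):

```latex
% A named PythonTeX session; helpers like latex_figure() would be
% inherited from a shared {pycode} environment elsewhere in the document.
\begin{pycode}[pe_ss1]
data = load_eyetracking_data()                    # hypothetical helper
fig_pe_ss1 = latex_figure(plot_parameters(data))  # hypothetical helper
\end{pycode}
```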
As you can see, there are a few functions called here which are not defined in the snippet - they are inherited from the {pycode}
environment.
This is done via another dedicated file, which is very handy for writing down functions and imports which you will need in all or most of your Python snippets.
Upstream has nicely elaborated on this topic here.
Finally, to actually get some output from the above, you have to call the snippet via its ID (for the aforementioned example, pe_ss1).
These excerpts are taken from a chapter of our example thesis.
The first call prints a figure (as seen above, fig_pe_ss1
is the result of the latex figure function), and the second call uses some of the functions loaded here to perform a t-test and format the output to obtain the best typographical result.
Needless to say, the result of this t-test will be recalculated and the figure re-plotted whenever you update any of the script's dependencies.
Live data analysis on your machine is awesome in itself - however, it is even more awesome to afford others the benefit of seeing your live data analysis work on their machines as well. This increases trust and transparency, and can greatly ease collaborative research, publishing, and review.
A fresh new approach to sharing data is Academic Torrents. This initiative has many merits, including being decentralized and providing feasible high-speed downloads. As torrents do not serve data directly, however, piping data from Academic Torrents to your Python scripts would require additional scripting.
In our example thesis we found it far more convenient to serve the data via http
and have python source it from there - without even requiring a formal download.
We do this via the HTML parser in the data acquisition file of an example analysis script:
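A generic, self-contained sketch of the idea - sourcing tabular data from a URL without a formal download. A local file:// URL stands in for the http:// address of the real data server, and the field names are illustrative:

```python
import csv
import tempfile
from urllib.request import urlopen

# Stand-in data file; in the real workflow this would live on an http server.
tmp = tempfile.NamedTemporaryFile(mode="w", suffix=".csv", delete=False)
tmp.write("subject,rt\n1,0.43\n2,0.51\n")
tmp.close()

url = "file://" + tmp.name  # stand-in for the http:// address of the data
with urlopen(url) as response:
    text = response.read().decode("utf-8")

# Parse the fetched text directly, without ever saving a local copy by hand:
rows = list(csv.DictReader(text.splitlines()))
print(rows[0]["rt"])  # prints 0.43
```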
A good tool to put data online is rsync
.
We use it to upload recursively from our data root, with an exclude pattern that omits files and folders of the */.* format.
You can use this format to store crash logs, botched output, or other (meta)data from your experiments which you may be interested in keeping but which is irrelevant for the data analysis.
Of course for the command to work you need to set up ssh
for your server.
This is totally easy, and most hosting companies have short how-tos for this online (we use Dreamhost, and here’s ours).
Especially when coding you may find yourself changing a variable name or some library path, which is referenced in multiple places across multiple files. sed and grep come in handy here, and can help you do all that menial name changing in one simple line from your terminal:
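One common form of such a one-liner, sketched self-containedly (the file names and the old_name/new_name identifiers are illustrative):

```shell
work=$(mktemp -d)
printf 'x = old_name + 1\n' > "$work/a.py"
printf 'print(old_name)\n'  > "$work/b.py"
# Find every file that mentions old_name and replace it with new_name in place:
grep -rl "old_name" "$work" | xargs sed -i "s/old_name/new_name/g"
cat "$work/a.py"
```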
Please note that the slashes (/
) in the above code are part of the sed syntax and should stay the same independently of your text change.
Also note that some characters need escaping in sed.
Say you are keeping your photography files in a single directory and have them incrementally numerated.
And say you would like to check if there is any index number wherefore neither a .JPG
nor a .NEF
file is present.
The following script would help you find any such indices, starting from DSC_a0000
and up to DSC_a8888
.
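A self-contained sketch of the logic over a toy range (the real command would loop from 0000 up to 8888):

```shell
photos=$(mktemp -d)
touch "$photos/DSC_a0000.JPG" "$photos/DSC_a0000.NEF" "$photos/DSC_a0002.JPG"
# Report every index for which neither a .JPG nor a .NEF file is present:
missing=$(for i in 0000 0001 0002 0003; do
  if [ ! -e "$photos/DSC_a$i.JPG" ] && [ ! -e "$photos/DSC_a$i.NEF" ]; then
    echo "DSC_a$i"
  fi
done)
echo "$missing"
```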
This command can also be easily modified to suit slightly different needs - say, if you are only interested in indices for which .JPG
files are missing.
Or specifically in indices for which a .NEF
file is present but a .JPG
file is missing.
Quotation marks are optional here and come in handy only if your code snippet contains many special characters (;
, \
, etc.).
You can escape single characters by prefixing them with a backslash (\
).
emerge
needs to be run as root.
Other commands such as equery
can be run as user.
The Gentoo Wiki hosts a longer (though different) Portage/Gentoo cheat sheet on this page.
The hackish way (re-emerges all packages versioned 9999).
The smart way (re-emerges live packages only if the upstream checksum has changed) is to use a tool such as app-portage/smart-live-rebuild.
This is useful if the package is prone to breakage when using parallel processing (some things can become required before they are compiled); in such cases you can restrict the build to a single job, e.g. MAKEOPTS="-j1" emerge <package>.
Some scripts (like revdep-rebuild
or perl-cleaner
) check the portage tree and the packages on your machine, and then pipe an emerge
command for Portage.
Mostly they run emerge -vD1 on the affected packages; you can usually add more emerge arguments via --
.
Like so, for example: revdep-rebuild -- --ask
Additionally, this also works if the script takes arguments of its own, e.g.: perl-cleaner --all -- --ask
Many git hosts offer you automatic summaries of pull requests in the form of diffs or patches (simply append .diff
or .patch
to your commit link).
To apply such patches directly (without having to manually download the files) you can run the following command:
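A self-contained sketch of applying a patch with these flags; the diff here is generated locally, whereas in practice you would pipe it in from the .diff URL, e.g. via wget -qO - (the exact fetch command is an assumption):

```shell
repo=$(mktemp -d)
cd "$repo"
git init -q
git config user.email demo@example.com && git config user.name demo
echo "hello world" > file.txt
git add file.txt && git commit -qm initial
# Create a patch - a stand-in for one fetched from a git host:
sed -i 's/world/there/' file.txt
git diff > change.diff
git checkout -q -- file.txt      # undo, so the patch has something to apply to
# Apply it, tolerating whitespace drift:
git apply --ignore-space-change --ignore-whitespace change.diff
cat file.txt
```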
The --ignore-space-change
and --ignore-whitespace
flags are not strictly needed, but they save you the pain of patches failing due to mismatched whitespace.
There are a number of commands which you can use to transfer files and folders remotely via the command line interface (CLI).
One of them is scp
(which stands for secure copy), which uses the SSH protocol.
Keep in mind that /path/to/your/files
is an absolute path.
If you do not have root access on your server, use a home/directory/relative/path
(without the initial slash).
To get a list of all the files in your git repository (excluding untracked files), run git ls-files from within the repository directory.
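A self-contained illustration - git ls-files reports the tracked file but not the untracked one:

```shell
repo=$(mktemp -d)
cd "$repo"
git init -q
git config user.email demo@example.com && git config user.name demo
touch tracked.txt untracked.txt
git add tracked.txt && git commit -qm initial
git ls-files   # lists only tracked.txt
```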
There are a number of ways to do this. One would be via the GIMP script-fu, which is awfully complicated to use (as an example of how you would batch rotate files with script-fu, you can see this thread on StackOverflow).
If your image happens to be in the JPEG file format, however, you can easily rotate it with a tool called jpegtran
.
This tool ships with the libjpeg-turbo
package, which you already have installed if your system is capable of viewing JPEG format files.
To use it, simply run e.g. jpegtran -rotate 90 input.jpg > output.jpg (the rotation is lossless; the angle must be a multiple of 90).
Another option is via ImageMagick, which is easy, but which would require you to download software you may otherwise not need. The respective command would be along the lines of convert input.jpg -rotate 90 output.jpg
Sometimes your output is too large to paste in whole.
If the part of the output you are interested in is located close to the end, tail
can be of good use.
If you want to paste from a file, run tail on the file and pipe the result to a command-line pastebin client such as wgetpaste.
And if you want to pipe some output directly from a command (e.g. dmesg
) to a pastebin, pipe it through tail and on to the pastebin client in the same fashion.
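A minimal illustration of the tail part of the pipeline (wgetpaste as the pastebin client is an assumption):

```shell
# Keep only the last 5 of 100 lines of mock output:
seq 100 | tail -n 5
# In the real pipeline, the trimmed output would be piped on to the
# pastebin client, along the lines of:  dmesg | tail -n 50 | wgetpaste
```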
The chmod command with the -R a+rX arguments grants all users read permissions to the folder, subfolders, and all files therein - in addition to whatever permissions are already set.
Note that chmod
may need to be run as root.
Or better yet, use octal notation to accurately define what permissions the owner, the group, and everyone else get - e.g. chmod -R 755 /path/to/folder for owner read/write/execute and read/execute for everyone else.
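A self-contained sketch of both variants, on a throwaway directory:

```shell
dir=$(mktemp -d)
mkdir "$dir/sub" && echo data > "$dir/sub/file"
# Symbolic form: add read for everyone, plus execute (search) only where it
# makes sense - on directories and already-executable files:
chmod -R a+rX "$dir"
# Octal form: owner rwx, group r-x, others r-x, set explicitly:
chmod -R 755 "$dir"
stat -c '%a' "$dir/sub"   # prints 755
```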
Occasionally Git merges may leave you with residual .orig
files which clutter your repository.
This question on StackOverflow exemplifies how the issue may arise.
To solve the issue run the following command from your repository root.
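One common form of the command uses find, here sketched on a throwaway directory (the exact original invocation may have differed):

```shell
repo=$(mktemp -d)
touch "$repo/file.txt" "$repo/file.txt.orig"
mkdir "$repo/sub" && touch "$repo/sub/other.orig"
# Delete every residual .orig file below the given root:
find "$repo" -name '*.orig' -delete
ls -R "$repo"
```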
The grep
command with the -r
option lets you find the occurrences of a string inside all files within a folder and all its subfolders.
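A quick self-contained illustration:

```shell
docs=$(mktemp -d)
echo "a needle in text" > "$docs/a.txt"
mkdir "$docs/deep" && echo "another needle" > "$docs/deep/b.txt"
# Recursively list every occurrence, with the file it was found in:
grep -r "needle" "$docs"
```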
Use the following to get a bold (or italic, etc.) link via Markdown: **[link text](http://example.com)**
In reStructured Text (reST) this feature is on the to do list, but not yet available.
This assumes you are using OpenRC and will not work on systems set up with systemd.
Services are added to the default runlevel via rc-update add <service> default; note that rc-update
may need to be run as root.
Here you will need ffmpeg - a command line program which ships with the FFmpeg libraries, and comes together with media playback dependencies on many linux distros (meaning that you probably have it installed already).
Run rsync from the directory containing DIR
. DIR
is the directory to be synced, and will be created if needed on the remote host.
Do not use trailing slashes after the DIR
directory name, or all of its contents will get dumped directly into your/remote/path/
. The command takes the general form rsync -av DIR user@host:your/remote/path/ (the -av flags are a common choice; adapt to taste).
Sometimes you want to copy all files in one (large) directory to another - which already contains some of these files.
Usually, using a file manager or cp
(without arguments) for such a task can prove quite tedious.
Here is a variant using rsync
(recommended), via its --ignore-existing option: rsync -a --ignore-existing source_directory/ destination_directory/
An alternative uses the --no-clobber
argument for cp
: cp -r --no-clobber source_directory/. destination_directory/
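A self-contained demonstration of the cp variant - pre-existing files in the destination survive untouched:

```shell
src=$(mktemp -d); dst=$(mktemp -d)
echo "new" > "$src/shared.txt"
echo "new" > "$src/only_in_src.txt"
echo "old" > "$dst/shared.txt"
# Copy everything from src into dst without overwriting existing files:
cp -r --no-clobber "$src/." "$dst/"
cat "$dst/shared.txt"   # still the pre-existing version: old
```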
Youtube-dl is a FOSS Python script which allows you to download flash videos not only from YouTube, but from over 150 websites. You can download it directly from the official website or through your package manager of choice (it is provided by Portage and many others).
In case you foresee running into DRM restrictions, you may also want to get RTMPDump. Youtube-dl calls RTMPDump automatically if it encounters Adobe’s proprietary RTMP protocol and the software is installed.
chrome://...
part at its start). Then run youtube-dl followed by the video address.
Note that chown
may need to be run as root.
This documentation assumes that you are somewhat aware of how Octopress generates your site.
The key concepts here are that all theming should be done exclusively in the /sass
(mainly for CSS) and /source
(mainly for HTML) directories (relative to your blog root);
and that generating your blog via $ bundle exec rake generate
processes styles and layouts defined in those directories and creates a slightly differently formatted static output exported to /public
.
To hack beyond the scope of documented use cases it’s best to have a tool which quickly matches parts of the visual output to code snippets. You can do this via an element inspection function in your browser (as for example the Chrome DevTools). These functions allow you to either highlight parts of the website with your cursor and see the code - or browse through the code and see individual sections (e.g. DIVs).
Once you have identified a section and a style specification which interests you (e.g. margin-left: 1.3em;
), you can try to locate it in the theme directories via grep.
Grep is part of the GNU coreutils, so any linux user should have it out of the box.
It is also easy to get grep for Windows.
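A sketch of the idea (the directory layout mimics an Octopress checkout; the declaration is the one from the example above):

```shell
theme=$(mktemp -d)
mkdir -p "$theme/sass/partials" "$theme/source"
echo 'margin-left: 1.3em;' > "$theme/sass/partials/_blog.scss"
# Search both theming directories for the declaration spotted in the inspector:
grep -r "margin-left: 1.3em;" "$theme/sass" "$theme/source"
```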
This should get you started wherever documented use cases fail you.
You can customize the fonts of your Octopress website via Google Fonts - a framework which allows you to choose between hundreds of free (mostly SIL Open Font or Apache licensed) fonts, all with large character sets.
The fonts are loaded in /source/_includes/custom/head.html
and selected in /sass/base/_typography.scss
.
You can add the following snippets to your theme to load and use - for instance - the Lato font from Google Fonts: a link tag of the form <link href="http://fonts.googleapis.com/css?family=Lato" rel="stylesheet" type="text/css"> in head.html, and a font-family specification along the lines of $sans: "Lato", sans-serif; in _typography.scss (the exact variable name depends on the theme).
For the creation of a new theme, colors are best edited under sass/base/_theme.scss
.
If you would also like to edit the console (code-block) colors provided by the default Solarized palette by Ethan Schoonover - you can find the relevant color specifications under sass/base/_solarized.scss
.
This is a bit tricky because it depends quite a bit on what elements you are trying to center.
For center-alignment of text inside any element, you should add the text-align: center
specification to the style sheet of the element containing your text.
Additionally, you may want to justify your text (most Octopress themes do not do this) - do this by adding text-align: justify
to the CSS instead of text-align: center
.
This alone may not always suffice to center the text on the page. Often the element containing the text is not centered, meaning that center-aligning the text will only center it inside its container (which may be placed anywhere and aligned anyhow on the page).
The most common (and easiest) way to center-align an element via CSS is to set both the left and right margins to auto
.
Now, as you will notice if you actually try this - it won’t work.
Well, at least not just like that.
Many elements in the default Octopress theme (and in many web designs generally) are floats.
The problem with floats is that they cannot be center-aligned by design -
they are created to float as far in one direction as possible.
So, to actually center many of the default Octopress theme elements (as for instance the article container), you will need to change the type of the element to block
, or inline-block
.
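A minimal sketch of the resulting rule (the selector is hypothetical - adapt it to the element you are centering):

```css
/* hypothetical selector - e.g. the article container of your theme */
#content article {
  float: none;        /* floats cannot be center-aligned by design... */
  display: block;     /* ...so turn the element into a block (or inline-block) */
  margin-left: auto;  /* auto side margins then center the block */
  margin-right: auto;
}
```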
Most elements’ style sheets are found either in /sass/base/_theme.scss
, /sass/base/_layout.scss
, or /sass/partials/_blog.scss
.
More specific elements (such as the website header or the navigation section) have additional style sheet specifications under /sass/partials/
- for instance /sass/partials/_header.scss
or /sass/partials/_navigation.scss
.
The default Octopress theme scales quite nicely on mobile phones and tablets by being semi-fixed width. What semi-fixed means is that the elements of the theme scale in width along with the display, but do so non-linearly - which creates a consistent, seemingly “fixed” visual experience.
The website does no complex math for continuous non-linear scaling (though this could be interesting to implement!). Instead it detects the display width and sets progressively smaller padding sizes (in px) based on 4 discrete monitor width cut-off values:
below 480px it uses $pad-min; from 480px it uses $pad-narrow; from 768px it uses $pad-medium; and from 992px it uses $pad-wide.
These values are set at the start of /sass/base/_layout.scss
and you should tweak them to best complement your design.
Also, to maintain quality scaling for smaller-width mobile devices, you should always use the $pad-*
variables for padding spaces which should make way for your content whenever a width constraint is present.
An example of how the code for scaling a padding variable should look can also be seen in /sass/base/_layout.scss.
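A sketch of the pattern (the selector is hypothetical; the breakpoint values are the cut-offs listed above):

```scss
// hypothetical element; padding steps up at each width cut-off
#main {
  padding: $pad-min;
  @media only screen and (min-width: 480px) { padding: $pad-narrow; }
  @media only screen and (min-width: 768px) { padding: $pad-medium; }
  @media only screen and (min-width: 992px) { padding: $pad-wide; }
}
```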
Many Octopress themes use the provided customization directories, namely /sass/custom
and /source/_includes/custom
, to make changes.
You should generally not do this.
These directories are intended for end-user customization and should not be edited when creating a new theme.
Adding any code to (or uncommenting any code from) /sass/custom/*.scss
or /source/_includes/custom
will make your theme slightly less responsive, harder to hack by the end-user, and harder to build any further themes on.
Btsync is a peer-to-peer (p2p) file sharing client. Btsync uses the BitTorrent protocol, and is formally named “BitTorrent Sync”. The p2p file sharing protocol allows users to keep folders in sync without actually having to upload data into the cloud (meaning without having to depend on servers - which may not be theirs - for storage). This is highly beneficial as it allows for better security, more privacy (should you care), and - possibly most importantly - very fast sync over local networks.
Some users have voiced concerns over the fact that Btsync is closed source. While this is regrettable, we have not come across any actual instances of negative repercussions.
Here we present an ebuild and associated files which enable you to easily get Btsync up and running on your Gentoo system (please pay attention to the einfo message printed after emerge). Our ebuild is currently mainly optimized to run with systemd, though we provide some (untested) init.d and conf.d scripts. We are currently trying to get Btsync into the Portage main tree (see our feature request), but need udev/init.d testers.
You can check whether or not you are using systemd with eix -I systemd
- if anything comes out, you most probably are.
You can get the ebuild from our very compact chymeric overlay. To enable the overlay we suggest you follow the “Manually setting overlay locations” instructions from the Gentoo overlay guide. In short, the procedure is:
1. Add PORTDIR_OVERLAY="/usr/local/portage/chymeric" (or whatever directory you prefer) to your /etc/portage/make.conf file.
2. Run git clone https://github.com/TheChymera/chymeric.git /usr/local/portage/chymeric (or whatever other directory you previously chose).
Then simply go ahead and run emerge btsync as root from your terminal.
There - wasn’t that easy?
Based on suggestions found mostly on Arch Linux wiki or forum pages (such as these instructions), we have put together a Btsync unit to let you run Btsync as your user.
You may see our btsync_at.service
file here.
This script affords you a per-user btsync@<username> systemd unit, which you can manage like any other service - e.g. by running systemctl enable btsync@yourusername and then systemctl start btsync@yourusername (user name illustrative).
With this set-up your files will be written to your synced directory by your user (not by root or btsync as you may see elsewhere). The main benefit of this is that permissions will never change within your synced directory, and you will always have read, write, and execute access to your files.
Which users are allowed to run btsync is managed by the btsync group (which our ebuild automatically creates). Without belonging to that group, users will be unable to write to the PID file (meaning the service cannot be launched), and unable to write to the storage path (meaning - independently - that the web-GUI cannot be viewed).
The Btsync binary blob is installed to /opt/btsync
.
The config, systemd, and init.d files are located in the respective system directories.
The btsync.pid
and btsync.conf
files as well as the storage path are located either under the relevant system directories, or under ~/.btsync/
if you run Btsync as user
(see the exact locations in the setup file).
Btsync can be used with or without a config file. A sample config file (containing the default settings) can be generated by running btsync --dump-sample-config
Our ebuild uses a custom config file, which is edited to remove the "login"
and "password"
fields of the webUI, and to make some other modifications (see the respective setup file).
The ebuilds are brought to you by Robert Walker and Horea Christian.
]]>Gentoo Linux is a modern, extremely flexible, and very transparent Linux distribution. Among many other things it provides:
Cutting the Gentoo publicity short, and getting to the point: Gentoo is awesome for science.
Sadly, until July 2013 Gentoo provided almost no neuroscience software. In response, we started writing up some ebuilds for popular neuroscience (mainly neuropsychology, to be precise) software packages. With the help of a handful of enthusiastic Gentoo-Science overlay maintainers we have managed to help Portage bring you up-to-date and development versions of the following software packages (in order of ebuild pull):
You can conveniently access these packages over the popular and stable gentoo-science overlay. To enable the overlay we suggest you follow the “Manually setting overlay locations” instructions from the Gentoo overlay guide. In short, the procedure is:
1. Add PORTDIR_OVERLAY="/usr/local/portage/sci" (or whatever directory you prefer) to your /etc/portage/make.conf file.
2. Run git clone https://github.com/gentoo-science/sci.git /usr/local/portage/sci (or whatever other directory you previously chose).
There - wasn’t that easy?
But all is not always fun and games in the world of NeuroGentoo. Arguably the most important software packages for neuropsychology and brain imaging - AFNI, FSL, and SPM - got stuck in the pull phase. Apparently the packages do not really meet Gentoo security, build, and file management exigencies and need to be patched - quite a bit. The project is community-led, and help would be much appreciated!
But the good news is: The packages kind of work! Not yet well enough for the gentoo-science overlay, but perhaps well enough for you and me. So, these are the packages we are still working on (and which you can already use):
While officially unsupported, these packages are just as easy to get as the supported ones. You can simply merge the NeuroGentoo branch from our gentoo-science fork into your local gentoo-science repository. After following the gentoo-science overlay instructions from the previous section, run:
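A self-contained sketch of the merge workflow, using local stand-in repositories; in practice the remote you add would be the GitHub URL of our gentoo-science fork, and the branch the NeuroGentoo branch (names here are illustrative):

```shell
base=$(mktemp -d)
# Stand-in for your local gentoo-science overlay checkout:
git init -q "$base/sci" && cd "$base/sci"
git config user.email demo@example.com && git config user.name demo
echo "sci ebuilds" > README && git add README && git commit -qm initial
# Stand-in for the NeuroGentoo fork, carrying an extra branch:
git clone -q "$base/sci" "$base/fork" && cd "$base/fork"
git config user.email demo@example.com && git config user.name demo
git checkout -q -b neurogentoo
echo "afni ebuild" > afni && git add afni && git commit -qm "add afni"
# In the overlay checkout: add the fork as a remote and merge its branch.
cd "$base/sci"
git remote add neurogentoo "$base/fork"
git fetch -q neurogentoo
git merge -q neurogentoo/neurogentoo
ls   # the merged ebuild is now present
```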
Now that you have read, understood, and followed the instructions above - How-To: NeuroGentoo boils down to the following:
The stable versions are at your fingertips - and if you want the cutting-edge development versions, you can just tell Portage by accepting the ** keyword for the respective packages in /etc/portage/package.accept_keywords.
Yes, NeuroGentoo is supported by Gentoo users and neuroscientists (if you are here you might well be at least one of those) - we do not have paid employees nor do we make a direct profit from this. We contribute because neuroscience is important and Gentoo is awesome!
Please submit patches and contribute to the pull requests for AFNI, FSL, and SPM!
Additional packages are welcome, and we would recommend you submit pull requests directly to gentoo-science. We will however gladly include any working ebuilds in our overlay - if they take too long to get into gentoo-science.
The Neurogentoo initiative is coordinated by Horea Christian, and contributors include François Bissey and Martin Luessi.
]]>