1+1=2

OneAndOneIs2

Sun, Jan 15, 2017

The joy of Just Works

• Post categories: Omni, FOSS, Technology, Helpful

I've been making do with Linux running from cheap laptops or in a VM for a fair number of years now. I finally had enough of it: I wanted a proper desktop again.

It's been years since I last built a PC. Possibly over a decade. An IDE hard disk, an Athlon XP CPU, 512MB of RAM, powered by a 350W PSU. It was a very capable machine for its time. But it had its faults: An Nvidia graphics card that would break regularly courtesy of the binary blob nonsense. Wifi only via a USB dongle. And it no longer booted, or even POSTed.

So, I could have built a machine that would meet my meagre needs very cheaply. But after so long making do, I was willing to go over-spec to make sure I ended up with a machine that could do what I needed comfortably, quietly, and would be easy to upgrade if necessary.

So, my requirements: a multi-core CPU, mostly because I want to play with Erlang; a solid-state drive for the system files to live on, for performance; a big hard drive for the home partition, since that can cheaply be in the TB range; dual monitor output; decent wifi. Most importantly of all, everything had to Just Work with Linux - no binary blob, third-party nonsense.

That actually simplified a lot of things. The easiest way to have Linux-friendly graphics is to stick with Intel - given that I have no need of gaming or whatnot, this is no hardship and a big cost saving. Intel is *the* way to have hassle-free graphics under Linux.

Sadly, the motherboard I settled on didn't have wifi onboard, so I did have to rummage around for a wifi card, but a bit of poking around found me what seemed a suitable option. RAM was fairly easy - I always buy Crucial, and I went for a single 16GB stick since it would make it easy, should the need arise, to add more RAM up to a 64GB max. The case and PSU were also pretty simple: I like Antec, and soon found a nice roomy case. I splashed out a bit on a PSU that's way more than I need, because it added so little to the final cost. The CPU I ultimately went for seemed to be the best bang-for-buck processor available at the time, and again I went with a CPU that was way more than I needed since it wasn't really that big a cost. A new HDMI monitor completed the list, to go along with my existing VGA one.

Building it took up most of my free time on a Saturday, and I was struck by a few changes since my last build:

  • I remember when CPU heatsinks were optional. My first PC had a heatsink that was barely more than an aluminium plate with a little fan on it. The mass of copper pipes and monster fan on this thing's heatsink seem way over the top to my aging brain :)
  • The PSU had detachable cables, instead of the rat's nest of the past. So I only needed as many cables as it took to power the hardware. So much nicer!
  • SATA has triumphed over IDE everywhere, even on optical drives. This is awesome, IDE cables were a PITA!
  • Those silver plates that come with your motherboard to be fitted to the case where all the output sockets go: WHAT is the point of them? All they ever do is get in the bloody way and make alignment awkward! Argh!
  • Everything feels just a little more polished and thought-out. Like the little doohickey for the HDD lights and reset/power buttons etc. - on my old build, this was just a lot of faffing around directly on the motherboard. This mobo came with a little separate part that you plugged all the case switches and lights into, which then in turn went into the board - so much easier!

On Sunday, the moment of truth came: The first time I turned it on. To my surprise and joy, it Just Worked - no wires to adjust, no settings to change. On it came, up came the UEFI screen. And that's a big change, given my last build had a BIOS.

It was all very nice, and well-featured: Told me all about my case fan speed and CPU temperature and the like. It seemed happy it had detected all my hardware: Two HDs and a DVD; 16GB RAM, etc. All the temperatures seemed to be holding at pleasantly-low numbers. I plugged in my Ubuntu USB stick and attempted to boot off it.

Once again, joy and surprise! It just booted, straight into the installer menu. No playing around with boot order or anything. How nice. The only place I had to do anything manually was when it came to partitions: I didn't want the swap or /home partition on the solid state drive. Otherwise, it was just a case of confirming I wanted UK settings throughout, and all went well. It even (and this was the truly unprecedented part) brought up a list of available wifi networks and asked me which to connect to!

Unless I was running the install on something plugged into ethernet, I have never had working internet during an Ubuntu install - it was always something I had to meddle around with afterwards to get working. Even though I had bought all my hardware specifically so this kind of thing would happen, it was starting to feel spooky that everything was working so well. Installations NEVER go this smoothly!

I left it to get on with it, and when I came back it was done. I shut it down, removed the USB drive, and gave it its first-ever boot off the hard drive. Half-expecting the UEFI stuff to break it, as it did when I installed Ubuntu on my last laptop...

But no. Straight to the Ubuntu login screen, in seconds. I logged in, and there was my Unity desktop. Wifi was already established, the web was right there. I copied my Firefox profile across, and after a couple of attempts at entering my somewhat-complicated password for my password manager and verifying it with my phone, it was up and running. A quick download of the git repo with all my dotfiles and the like, and the installation of a few needed packages, and everything was feeling very homey - my usual ZSH prompt, latest Git release, vim all ready to go...

Seriously, the smoothest Linux install I ever had. Absolutely nothing went wrong. The memory use is barely registering on the Gkrellm graph; the CPU likewise; and the fans are keeping it room-temperature cool with barely a sound. Everything I could have asked for from a machine.

I'm still waiting for the new monitor - it got held up for some reason - but in case you're wanting to build a new Linux machine that's easy to set up, the component list might be useful to you, so here it is:

  • Mobo: ASUS Z170-A Intel ATX
  • CPU: Intel Core i5
  • Heatsink: Cooler Master Hyper 212 Evo
  • RAM: Crucial 16 GB DDR4
  • Wifi: TP-LINK TL-WN881ND 300 Mbps
  • Case: Antec Three Hundred Two Midi Tower with Antec HCG-850M 850W PSU
  • Drives: 500GB Samsung SSD; 2TB WD hard disk; DVD writer

The only thing I haven't confirmed for sure yet is that it'll work properly with the second monitor. Since there are three graphics output ports built in to the motherboard, I anticipate no problems, but I'll update when the HDMI monitor gets here.

 

Fri, Nov 04, 2016

Thinking in functions

• Post categories: Omni, Technology, My Life, Programming

AKA "When you're been reading so much Lisp that it starts to affect your brain"

A while ago I did a talk on functional programming. As my example, I showed how to implement linked lists using closures instead of the built-in data structures. Recently, as I was reading some Lispy code, it suddenly dawned on me that there was one step further I could have taken. So I sat and had a play, and sure enough it worked.

So here's the process, for anyone who wants it :) I've put it in JavaScript, since that makes it easy for anyone to try it in the browser.

Firstly, the specified task: We want a new data structure, "pair", which can hold two values. From this, we want to be able to build linked lists. We want all access to be handled via functions: Create a pair with pair(), get one value with head() and the other with tail().

The quick, easy, obvious approach is to use a built-in data structure under the hood. Such as an array. So:

function pair (head, tail) {return [head, tail] }
function head (pair) { return pair[0] }
function tail (pair) { return pair[1] }

This is a full implementation of the desired "pair" functionality. A quick test will bear this out:

> var test_pair = pair("the_head","the_tail")
undefined
> test_pair
[ 'the_head', 'the_tail' ]
> head(test_pair)
'the_head'
> tail(test_pair)
'the_tail'

All good! Works perfectly. So.. we're done?

Not quite. This obeys the letter of the law, but falls foul of the spirit. Firstly, it's too easy to bypass the abstraction and access the data directly, instead of through our accessor functions. Secondly, it's all too easy for somebody to shunt more data into the array and make our "pair" hold more than the desired two values.

> test_pair[0]
'the_head'
> test_pair[2] = "sneaky misuse of pairs!"
'sneaky misuse of pairs!'
> test_pair
[ 'the_head','the_tail', 'sneaky misuse of pairs!' ]

Not good. We must fix this! Let's switch away from arrays, and go instead to closures:

function pair (head, tail) {
 return function (fn) {
  return fn(head, tail)
 }
}

function head (pair) {
 return pair(
  function (head,tail) { return head }
 )
}

function tail (pair) {
 return pair(
  function (h,t) { return t }
 )
}

If you're not used to closures, this may seem a little abstract. Let's break it down!

pair() now returns a function. That shouldn't be a surprise in this day & age: functions are first-class objects in most languages. Let's call it the p1 function. The p1 function is very simple: all it does is accept another function (call it p2) as its argument and call it.

The important thing is that p1 still has access to the original arguments pair() was called with, via the magic of closures. So both the head and tail are available to p1, which means that when you pass a p2 function into p1, p2 is called with the original head and tail. So the p2 function can decide what to do with them. In the case of head(), it passes in a function that accepts the head and tail, and returns the head. tail() is identical, except for which value it returns.
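To make that concrete, here's the same flow spelled out step by step (my own annotated walk-through, using the p1 and p2 names from above):

var p1 = pair("the_head", "the_tail")
// p1 is now: function (fn) { return fn("the_head", "the_tail") }

var p2 = function (head, tail) { return head }

p1(p2)
// p1 calls p2("the_head", "the_tail"), which returns 'the_head' -
// which is exactly what head(p1) does under the hood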

Once you get used to closures and functional programming in this style, it all makes perfect sense. I appreciate it can make the eyes glaze a little at first though. Let's test that it works:

> var test_pair = pair("the_head","the_tail")
undefined
> test_pair
[Function]
> head(test_pair)
'the_head'
> tail(test_pair)
'the_tail'

So, we have continued to fulfill the specification as stated, and now we have a data structure that can ONLY be used to hold our head and tail values - no sneaky extras, and no way to view the data directly.
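Just to prove the point, we can retry the earlier mischief. JavaScript will happily let you assign a property to the function object, but that never touches the values held in the closure (a quick check in the Node REPL):

> test_pair[2] = "sneaky misuse of pairs!"
'sneaky misuse of pairs!'
> head(test_pair)
'the_head'
> tail(test_pair)
'the_tail'

The assignment succeeds, but it's just a property hanging off the function - the pair's actual values remain unreachable except via head() and tail().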

Onwards, then! We want to use pairs to implement a linked list. In this scheme, a list of, say, (1,2,3) could be represented by pair(1,pair(2,pair(3))). The head of a pair contains a value, the tail contains the rest of the list. Let's see how that works:

> var list = pair(1,pair(2,pair(3)))
undefined
> list
[Function]
> head(list)
1
> head(tail(list))
2
> head(tail(tail(list)))
3

Okay, simple enough. But a long list calls for a lot of nested pair() calls! Let's automate that.. we need a list() function! And whilst we're at it, let's write a few list accessors to cut down on all that head-tail-tail stuff.

function list () {
 var list;

 // Work backwards through the arguments, so each value gets
 // paired with the list built so far (initially undefined,
 // which marks the end of the list)
 for (var i = arguments.length; i > 0; i--) {
  list = pair(arguments[i-1], list)
 }

 return list
}

function first (l) { return head(l) }
function second (l) { return head(tail(l)) }
function third (l) { return head(tail(tail(l))) }

And make sure it works as expected:

> var lst = list('a','b','c')
undefined
> first(lst)
'a'
> second(lst)
'b'
> third(lst)
'c'

So far, so good. Now, a common desire with lists is an ability to apply a function that expects a single value to a list of values instead: Say we have a triple() function that expects a number, but we want to be able to apply it to the list (1,2,3) to get (3,6,9).

This is typically done with a recursive function called map(), which builds and returns a new list: it applies the desired function to the head of the original list, and pairs the result with the outcome of calling itself on the tail of the original list.

function undef (x) { return typeof x === 'undefined' }

function map(fn, list) {
 if ( undef(list) ) {
  // If the list is undefined, we've reached the end - stop recursing
  return;
 }
 // We're still going: Call the function on the head, recurse over the tail
 var h = head(list);
 var t = tail(list);
 return pair( fn(h), map(fn, t) )
}

function triple (x) { return 3 * x }

And let's see if it works:

> var small_list = list(1,2,3)
undefined
> var big_list = map(triple, small_list)
undefined
> first(big_list)
3
> second(big_list)
6
> third(big_list)
9

Looks good! We are making linked lists and mapping over them to make new lists. Very Lispy!

Now, this is as far as I ever got just from generic reading about closures, functions, and Church numerals. It does the job, and does it using nothing but functions. All very clever.

But one day as I was reading up on Lisp, I suddenly realised there was another way. Instead of map() navigating the list via head() and tail(), I could skip the overhead of those extra function calls. In the original approach, pair() creates p1, and then head() and tail() both call p1, passing it a function to return either the head or the tail. Map uses head() and tail() and is thus implementation-agnostic - the original array-based approach would still work with map as defined.

But if we throw that agnosticism out and write something directly for the function-based approach, then we can avoid the overhead of calling head() and tail(). Instead, we can pass p1 the recursive function that does the mapping. Because p1 calls that function with the head and tail as arguments, the new function gets the values it needs directly instead of through the accessor functions, thus:

function fmap(fn, list) {
 var rec;
 // rec gets passed straight into each pair, which calls it
 // with that pair's head and tail - no accessor functions needed
 rec = function (h,t) {
  if (undef(t)) { return pair( fn(h) ); }     // last pair: map the head, no tail left
  else { return pair( fn(h), t(rec) ); }      // map the head, recurse into the tail
 }
 return list(rec);
}

And sure enough:

> var big_list2 = fmap(triple, small_list)
undefined
> first(big_list2)
3
> second(big_list2)
6
> third(big_list2)
9
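If it's not obvious quite why that works, here's the recursion expanded by hand (my own annotation, not REPL output - read "->" as "evaluates to"):

// fmap(triple, small_list) unfolds like so:
// small_list(rec)                -> rec(1, pair(2, pair(3)))
// rec(1, tail1)                  -> pair(triple(1), tail1(rec))
// tail1(rec) = rec(2, pair(3))   -> pair(triple(2), tail2(rec))
// tail2(rec) = rec(3, undefined) -> pair(triple(3))
// ...giving pair(3, pair(6, pair(9))) - the same list map() would have built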

I know, I know. It's evil. It's over-engineered. It makes far too much use of passing around recursive functions. And I'd probably be very upset to find anything like it in production code.

But.. it was fun to work it all out :)

 

Sat, Jun 25, 2016

The perfect compromise

• Post categories: Omni, In The News

So, the biggest vote of a generation has been and gone. It came after a campaign painfully lacking in useful information on either side, and it departs leaving nobody really looking good.

The Prime Minister, David Cameron, whose best answer to the huge responsibility of deciding how to handle an enormously complicated political decision was to say "Fuck it, you decide, I'm not going to", has made it clear he'll be continuing his abdication of responsibility by handing in his notice rather than deal with the results himself. Everybody is therefore asking "What happens next?" with a certain amount of trepidation.

Well, there's a lot of uncertainty, of course. But I think I have the answer. It's based on something once said by the great philosopher, Calvin:

A good compromise leaves everybody mad

One of the most notable gaps in the Leave campaign was its answer to "What's the plan for after the vote?" - a question famously answered with "lol, dunno!" by the head of UKIP. Slightly more useful answers cited examples of other non-EU countries that we could emulate.

My argument is that the proposal of the Norwegian model is not just a good answer, but the best possible one, and here is why:

The original referendum back in the 70s was about staying in the common market. A persistent complaint from Leave was that the EU was "not what we voted for", and that's perfectly true. So we could satisfy both referendums by leaving the EU and staying in the single market.

So far so good. But there's more!

The Norway model gives access to the single market via the EEA - the European Economic Area. The UK could continue to trade with Europe pretty much as it always has. This would settle the markets and ease the fears of the international corporations that are suddenly finding London a less-attractive base of operations.

Even better, as all informed voters will be aware, the House of Commons indicated long before the vote took place that it would use its (roughly) 3:1 pro-Europe majority to block any attempt to take us out of the EEA. So it would actually be very difficult for our leader (whoever that turns out to be) NOT to go with the Norway model or something very close to it.

But what makes it the ideal solution is this: Membership of the EEA not only gives unrestricted access to the single market. It also requires that members: abide by EU regulations regarding the market; allow free movement of workers; and pay the EU for membership - to the tune of something like £200 million a week.

The Leave campaign never ceased to talk about reclaiming control of our borders; and the (wrong) amount of money we pay into the EU each week was even emblazoned on the side of Boris Johnson's Leave campaign bus.

The Norway model would mean nothing changes on either front. And that's what makes it so perfect!

Everybody who voted Remain - the 48% - gets to feel pissed off because they didn't get what they voted for.

Everybody who voted Leave - the 52% - gets to feel pissed off because they got EXACTLY what they voted for. Just not what they actually wanted.

The entire population thus gets to come together, unhappy but united again in our common hatred of our politicians, who always, ALWAYS get it wrong.

And the politicians? It's ideal for them, too: They get to answer every problem with their tried-and-tested approach of blaming everything on the immigrants they can't stop from coming here; and the lack of cash they have to work with because it all goes to Europe.

Life would quickly settle back to normal. The Europeans already here could stop worrying about being thrown out, the Brits abroad likewise. Trade would continue much as it always did before. The UK stops having to worry about EU policymaking because it no longer has a say in it. The entire population settles down and resigns itself to not having got what it wanted, as usual, and the whole thing is over.

 

Mon, Dec 14, 2015

Slides for my talk

• Post categories: Omni, FOSS, Technology, My Life, Programming, Helpful

On Saturday, I attended the London Perl Workshop - an annual event that's totally free to attend and has plenty of useful stuff even for people who don't know Perl.

I even did a talk myself - an introduction to Functional Programming for beginners. Nobody fell asleep, so I called it a success. As requested, I'm also making the slides available. I'm using Dropbox because meh, it's convenient.

It's PowerPoint rather than Open/LibreOffice because, basically, that's what happens to be installed on my laptop already. If you weren't at the presentation and want to read the slides to get the content, I advise "notes view", because there were a few worthwhile points that didn't get turned into bullet points on the slides.

Download here

If you have any problems getting the file, tweet me or something and I'll see what I can do about it. I'm not turning comments on here because I get spammed to hell if I do that, sadly.

 

Mon, Aug 10, 2015

Tablet keyboards

• Post categories: Omni, FOSS, Technology, Programming, Helpful

So I got sick of some things on my Android tablet: the crappy privacy settings, the unrelenting stream of adverts in YouTube, and so on.

So I figured it was time to try out the alternative: CyanogenMod, an open-source fork of Android.

A lot went well. Things that had been broken by the latest Android switching away from the Dalvik runtime sprang back into life once I'd switched to CM and left it set to Dalvik - such as YouTube AdAway (see previous post for more on that).

Also, CM has privacy settings that let me say "Yes, install the Facebook app, but NO, don't let it know where I am, thankyouverymuch" and profiles so I can have it automatically mute when I'm at work and regain sound when I get home. And I lost far fewer files in the transition than I expected, which was good.

There was just one little niggle. And it's to do with keyboards.

For most tasks, I like to use the standard Android keyboard. It's a pretty good kb. But when I use the ConnectBot ssh client, I *need* a full keyboard, with keys like Ctrl, Alt, and Esc. No problem, Hacker's Keyboard to the rescue!

But when I'm in Firefox, I want the LastPass keyboard so it can enter my username & passwords for me.

So depending on what app I'm using, I might be desiring one of three keyboards. And it's a pain to have to switch them manually.

No problem: Tasker to the rescue! It's an app that can watch for context changes, like switching to an app, and automatically apply specified actions. The UI isn't the easiest to get your head around, but eventually I was pretty confident that it wasn't me being an idiot: The option to change keyboard just wasn't there.

But it had to be, because I'd used it before...

A bit of Google-fu, and the answer came to light: Keyboard-switching power isn't exposed in the normal course of things. You can only get at it if you install the Secure Settings app.

Which I did. But that didn't work either. A bit more poking around, and I found that it needed me to install the System+ Module to allow it to expose the functionality I wanted. No problem, it has a button that does exactly that.

But I clicked on "Enable" and it failed to grant the permissions.

I checked the SuperSU logs, and was told that I'd need SuperSU *Pro* to access that feature.

Motherfucker!

So I threw a few quid their way, and got SuperSU Pro installed & running. (You have no idea how many reboots I'm leaving out of the story here, by the way :(

Tried again, saw a failure in the logs but needed to enable expanded logging for a more useful answer. Enabled it, tried again, finally got a useful error: <stdin>[1]: pm: not found

But *why* isn't pm found? It's in the /system/bin/ directory, with correct 755 permissions. Fscking hell. Maybe I'm missing something?

Switch away from my Linux VM and back into OS X because that's where all the Android SDK stuff is installed. Open up an iTerm, and fire up adb. Which fails to find my tablet because (a) it's not plugged in at the moment, and (b) I turned off USB debugging.

Fix those two problems, and *now* I can get a shell. One quick confirm on SuperSU Pro later, and I have a root shell. Can I run the pm binary? Yes.

Okay, clearly there's a problem in the installer preventing it from using the correct $PATH to find the binary that's right where it fucking should be. It *can't* be that many commands to run to install this fecking module, right..? And I already know the first one from the error logs. It's pm grant com.intangibleobject.securesettings.plugin something something something

So, let's dump that command into google and see if anyone has been helpful enough to list the *other* commands that are needed..?

Yes!

pm grant com.intangibleobject.securesettings.plugin android.permission.WRITE_SECURE_SETTINGS
pm grant com.intangibleobject.securesettings.plugin android.permission.CHANGE_CONFIGURATION

That's all it needs! Just those two commands!

So I run those from adb where pm is actually working, and the module is *still* not enabled. So I reboot just for good measure.

AND IT WORKS!!!

System+ Module is enabled, and Secure Settings is now allowing me to open the System+ Actions in Options and activate the "Input Method" entry.

Finally! I have reached the point where Tasker should be able to change my keyboard settings.

So now it's just the Tasker UI to worry about...

This time, I remember the lessons from last time: Do *not* start with the profile. First define the tasks you want. That means Tasks -> New -> Name it -> Add Action -> Select Plugin -> Secure Settings -> Edit configuration -> System+ Actions -> Input method -> Select desired keyboard from dropdown -> Save -> Leave all other conditions in the Tasker screen alone and hit the top-left button to say "I'm done here"

Repeat this for every keyboard I'll be wanting - in my case, LastPass, Hacker's, and stock.

Now that I *have* the tasks, create a profile to use them. Profiles -> New -> Application -> Select desired app(s) -> Top-left button -> From drop-down select desired keyboard task

Halfway there, I will now get the keyboard I want when I switch to a target app. But it won't go away when I leave. How do I add an "exit" task? I see nothing obvious.

A bit more poking around, and I get it: Long-press the green arrow, and it'll pop up a dropdown that allows you to add an exit task. Select that option, choose the stock keyboard task, and finally, after all that work, you're done.

Yes, I'm aware, it's an insane amount of work to have to go to just to establish a link between apps and keyboards. I only stuck with it so long because at every step of the way, it seemed like I was just *one* step away from success.

If somebody else is stuck with trying to get their CM tablet/phone to change keyboards based on app, I hope something in here helps you to save your sanity. Mine is, clearly, gone already :)


 
