Compsoft Flexible Specialists

Compsoft plc


Saturday, November 13, 2010

Øredev 2010 - Day 5 - Tristan Smith

Ten Big Holes - Software for the Next 20 Years - Nolan Bushnell (Founder of Atari)

Nolan covered some really interesting visions of the future, such as autonomous cars, a future without credentials, nanobots, augmented realities and swarms.

My last 30 failures and what I've learned so far - Ted Valentin

Ted Valentin gave us a decent talk covering his path into software development and some of the lessons he'd learned along the way.
He talked about how success and failure are two sides of the same coin.
He cited some indie successes: Plenty of Fish, the Million Dollar Homepage and Chatroulette.
The success criteria he covered:
  • Quick and easy to build
Make use of Facebook login, Twitter integration, Flickr images, RSS, APIs and open data to make it easy to build.
  • Simple to maintain
  • Easy to make money
Using Google AdSense, affiliates (Plenty of Fish, Tradedoubler), freemium, selling (other people's) things, partners, or selling the site.
  • Easy to get visitors
Use Google: make your site good link bait, use good keywords, give it a long tail, prefer low keyword competition, and aim for low bounce rates. Make use of Google Trends to find the right keywords. Other channels: social networks, piggybacking on popular services, returning visitors (email), blog widgets, iPhone, referrals and PR.
  • Fills a need
He gave some examples where not following these resulted in fails:
  • Making a site focused on a keyword nobody searches for.
  • Making it too hard for visitors (making them work when they reach your site), leading to a high bounce rate.
  • Not making the site monetisable, which leads to less investment (his geographic site for used books, for example, just wasn't needed).
  • Not having a deadline: either throw it out GTD-style, or set a launch party so social pressure makes you finish.
The bottom line was that it's fine to fail; it's better to fail fast and cheap.

97 Things Every Programmer Should Know - Kevlin Henney

An open-source, crowd-sourced compilation of collective wisdom from the experts. Why 97? A strong prime? Nope: because it's not quite 100, it's not trying too hard.
Deliberate practice: the aim is to master the task, not to complete the task (katas) - Jon Jagger.
Estimates get negotiated into targets, which are then made into commitments - a promise to deliver specific functionality at a certain level of quality by a certain date.
Kevlin only managed to cover a few of the points in the book, which is well worth getting.

Social Media and Personal Branding as Project Leadership Tools - Dave Prior

In 15 words or less, describe your brand.
What does this say about your promise of value?
Think about your persona - the mask you wear to portray a definite impression and conceal your true nature.
Putin is a good example of this: his holiday photos all show a beefy, masculine character, projecting strength.
Define a brand that will enable you to stand out, and make sure you can live up to it. Be consistent with it or people will lose trust in it.
Sender Receiver Model
Sender > Encoding > Message > Noise > Receiver (the encoding can go wrong: a bad mood can change the message you send).
How are you encoding the message of your brand?
Be interested and be interesting, this is key to social media.
4 types of social media users
1. Carefree / Careless (detracts from your brand if you say inane things)
2. Noise Makers (Look at me)
3. Barkers (associative or braggers - not engaging)
4. Strategic / Tactical users
Your digital footprint is your new permanent record. You have to consider this when you're posting online. Would I want anyone to be able to see this in 10 years? Make sure it's something you can wear, use and maintain without being excessively protective about it.
Social Tokens - the Fawn Liebowitz Experiment - using core references to find people on the same wavelength and skip straight to trust. Something people have an emotional tie to eases communication flow, establishing more than just a common interest and forming a deeper bond through shared membership of tribes.
There's no award for volume, in fact too much can dull your message.

The 9 Reasons you're not apprenticing anyone - Dave Hoover

"Today we have more developers than needed, but we have a shortage of good developers." - Pete McBreen
  1. Your company won't let you - It can't stop you; it doesn't have to be a full apprenticeship 'program'. It could just be grabbing lunch together.
  2. You don't like your job - Change your job or change jobs: work on it, introduce change, challenge people.
  3. You're too busy - You've got things to take care of before you can build in time for apprenticeship. Take control!
  4. You're a road warrior - Leave assignments; just because it's not ideal doesn't stop it being possible.
  5. You're independent - It feels like way too much responsibility. A contract with options to hire and defined milestones makes it easier in this case. Leave a legacy.
  6. You're not good enough - Yes you are; you don't need to be the ultimate master of a subject to get started down the path.
  7. You can't find any apprentices - Keep looking! Check the user groups: they're full of passionate people.
  8. You don't like to mentor - Even if you don't want to do it yourself, be gentle and don't stop others from trying.
  9. You don't believe in it - He gave some examples where it really worked.
Start small, incrementally, do retrospectives.
Pair programming, pet projects with milestones and code reviews

True Tales of the App Store - Making iPhone Apps for Profit - Jack Nutting

Some points from his talk:
Cheaper apps attract more haters.
Reducing price can often decrease ratings.
The structure of the App store means it's a hit driven economy, you can see flocking behaviour as a result.
Apple has a number of methods of raising the focus on an app (Staff Picks, Themes etc).
Ways of making money
In app purchases, lite versions, iAds + competitors.
Build free apps that help people use your own business.
Gamify dull activities.

If you're going to be a copy cat, add something extra.
People love stories, no matter how stupid, work them into your games.
Updates can increase popularity and increase value (Doodle Bug, Pocket God)
Eliminate choice - make your app as simple as possible but not simpler (a modified Einstein quote).

Market while coding: find enthusiasts and give them sneak peeks.
Work your social networks.
Issue a press release.

Øredev Conclusions

Having attended Tech Ed in previous years due to our focus on .NET technologies, we've changed our focus more towards mobile development in general. This means we've hit Android, BlackBerry, iPhone and Windows Phone 7.
Øredev has up to 8 tracks going on simultaneously, drawn from the following:
Java | .NET | Smart Phones | Patterns | Social Media | Agile | Software Craftsmanship | Xtra (non-coding-related extras) | Architecture | Cloud & NoSQL | Realizing Business Ideas
The sheer number of options is quite daunting at first, but it really does expose you to a lot of useful content that you just wouldn't find at a domain- or technology-focused conference.
One of the biggest takeaways for us has been around process.
Working across so many different technologies and projects at the same time has a management overhead, one that we've handled so far using Scrum. Scrum's a great methodology, but it isn't quite as flexible as we sometimes need it to be: in Scrum you choose a number of tasks from the backlog for a sprint and don't deviate from that. We do sometimes have emergency work and rush jobs that can't wait until the next sprint, and it's a real pain having to keep half-heartedly adapting the methodology.
Kanban is a more flexible methodology that both Tim and I have mentioned in the blog so far, we saw a lot of sessions on it. We're both convinced it has the right managerial qualities as well as psychological benefits to suit us really well.
I really hope you've enjoyed hearing about the sessions we've been to. Øredev is a great conference, and one I can wholeheartedly recommend. Bring on Øredev 2011!

Øredev 2010 - Day 4 - Tristan Smith

Patterns of Parallel Programming - Ade Miller (MS Patterns and Programming)

We've reached the end of the free lunch: clock speeds have stopped climbing, so individual cores aren't actually getting faster - there are just more of them.
This makes parallel programming all the more important, as serial programs simply leave the extra cores unused. It's also what makes the work Microsoft has done to ease parallel programming with the Task Parallel Library (TPL) in .NET 4 so valuable.
Ade illustrated the problem with a Stock market analysis program.
He pointed out that there are a number of things that block parallelism in your code.
You have to consider IO read/write constraints, actions with fixed dependencies that impose a required order, and whether you're targeting the items that take the most time.
The TPL gives you Task.Factory.StartNew; ContinueWith (to pass data from another task); ContinueWhenAll, which lets you wait on a number of other tasks; and Parallel.For, which lets you iterate in parallel (though you lose any guaranteed ordering).
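To make the start-then-continue shape concrete, here's a rough analogue in Python (using the standard concurrent.futures module rather than the TPL itself - the lambdas and the "raw data" value are made up for illustration):

```python
from concurrent.futures import ThreadPoolExecutor

# Sketch of start-then-continue: 'download' stands in for
# Task.Factory.StartNew, and 'parse' plays the ContinueWith role by
# blocking on the first future's result before doing its own work.
with ThreadPoolExecutor() as pool:
    download = pool.submit(lambda: "raw data")
    parse = pool.submit(lambda: download.result().upper())
    print(parse.result())  # prints "RAW DATA"
```

In the TPL, ContinueWith wires the dependency up for you; in this sketch the second task simply blocks a pool thread waiting, which is exactly why first-class continuation support matters.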
Think about tasks vs data, control flow, and control and data flow.
When dividing up parallelisable work, chunk size matters: make the chunks too big and you under-utilise; too small and you thrash.
Using the ThreadPool means handing over to a scheduler that focuses on making the best use of the available resources; the TPL uses such a scheduler itself, as seen with ContinueWhenAll.
It's important to check that the tasks taking the time are the ones you're targeting to parallelise. Make sure there's business value in it; don't do it for the sake of it.
Because Parallel.For can't run index-incrementing code (iterations don't execute in order), you have to factor that into the code you run.
Calculating totals, for example, requires an overloaded Parallel.For where each worker calculates a subtotal and the grand total is combined afterwards, with a lock around the value being added to.
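The subtotal-plus-lock idea looks roughly like this in Python (a sketch with made-up numbers, not the TPL overload itself):

```python
import threading
from concurrent.futures import ThreadPoolExecutor

# Each worker computes a subtotal over its own chunk lock-free; only
# the final combine step takes the lock, mirroring the pattern above.
data = list(range(1_000))
grand_total = 0
total_lock = threading.Lock()

def sum_chunk(chunk):
    global grand_total
    subtotal = sum(chunk)      # private work, no lock needed
    with total_lock:           # brief lock only to combine
        grand_total += subtotal

chunks = [data[i:i + 250] for i in range(0, len(data), 250)]
with ThreadPoolExecutor() as pool:
    list(pool.map(sum_chunk, chunks))

print(grand_total)  # 499500
```

The point of the shape is that the lock is held for a tiny fraction of the total work, so contention stays low.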
Parallel tasks can't really use shared state - you get locking issues - so when it comes to sharing state: if possible don't share at all, use read-only data, or synchronise afterwards.
You can consider running tasks in parallel to see which returns a result first, then using Cancel and Task.WaitAll to tidy up the other running tasks.
BlockingCollection - read a file in line by line, adding the lines to the blocking collection, so that processing each item can happen in parallel even though reading the file can't.
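A minimal sketch of that producer/consumer shape, using Python's queue.Queue in place of BlockingCollection (the "file" is a hard-coded list here, and the worker count is arbitrary):

```python
import queue
import threading

lines = queue.Queue(maxsize=10)   # bounded, like a BlockingCollection
results = queue.Queue()
SENTINEL = None

def producer():
    for line in ["alpha", "beta", "gamma"]:  # stands in for file reading
        lines.put(line)
    for _ in range(2):                       # one sentinel per consumer
        lines.put(SENTINEL)

def consumer():
    while True:
        line = lines.get()
        if line is SENTINEL:
            break
        results.put(line.upper())            # the parallelisable work

workers = [threading.Thread(target=consumer) for _ in range(2)]
for w in workers:
    w.start()
producer()
for w in workers:
    w.join()

print(sorted(results.queue))  # ['ALPHA', 'BETA', 'GAMMA']
```

The bounded queue is what gives you the "blocking" behaviour: a fast producer stalls rather than flooding memory.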
Don't just rewrite your loops, figure out what your users want and what's actually taking the time.
Architect for Parallelism.

In the clouds with the RX framework - Glenn Block

We're increasingly designing our systems to accommodate latency, a trend that's growing now our code is running in the cloud. RX is a handy tool for this.
RX is a library for composing asynchronous and event-based programs using observable collections. Systems where data is constantly coming in and it needs to be reacted to.
You can do this already, but it's currently really hard. We tend to work with collections in a pull-based model - iterating, "give me the next" - which is blocking. This can mean locked programs that become unresponsive.
The key interfaces are IObservable and IObserver, which give you Subscribe and OnNext, OnError and OnCompleted - meaning you're reacting to the flow rather than controlling it.
It's reactive rather than interactive.
Perception is reality, users faced with hangs and slow updating data, believe that's what the app is.
You need to be aware that if you're not using a true observable, you're not going to see asynchronous behaviour. For example, Enumerable.Range(0, 3).ToObservable() runs synchronously.
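A toy illustration of the push-based shape (a handful of lines of plain Python, not the actual Rx library - and, like the Enumerable.Range example above, it runs synchronously):

```python
# The source calls the observer back via on_next/on_completed instead
# of the observer pulling items with a loop.
class Observable:
    def __init__(self, source):
        self._source = source

    def subscribe(self, on_next, on_completed=None, on_error=None):
        try:
            for item in self._source:
                on_next(item)
        except Exception as exc:
            if on_error:
                on_error(exc)
            return
        if on_completed:
            on_completed()

received = []
Observable(range(3)).subscribe(
    on_next=received.append,
    on_completed=lambda: received.append("done"),
)
print(received)  # [0, 1, 2, 'done']
```

What real Rx adds on top of this shape is composition (filtering, throttling) and schedulers that decide which thread the callbacks arrive on.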
RX is available in WPF, Silverlight Toolkit and WP7.
You can choose what thread to observe on, for example observing a textbox keyup event.
Throttling by time, filtering of which events to observe. Google Translate example.
They're working on new async features that work with the RX framework and, interestingly, LINQ to Objects in JavaScript.

ASP.NET MVC 3 - Brad Wilson

A whistle-stop tour of the new features of MVC 3: new pluggable view engines like Razor, HTML5 default templates, integration with jQuery validation, NuGet (automatic installation and dependency resolution), dependency injection/IoC hooks, global filters, more granular request validation (so individual properties can be marked ValidateInput(false) to allow HTML content through) and a bunch more.
A search for MVC 3 will give the same info as I would, so I won't enumerate the features in any detail here. There will certainly be some useful additions to our processes though.

Twitter's Real-Time Architecture

An unfortunate choice for me: the speaker, Kyle Maxwell, was fighting his nerves the whole way through, which made it quite excruciating to watch (and frankly dull).
He talked briefly and somewhat incoherently about various subsystems they use: a queuing system, and daemons used to spray messages to data shards. The different parts seem to be written in entirely different languages (Scala, Ruby and some others I'd never heard of).
Altogether a disappointing session: I'd hoped to get some takeaways from it and didn't take anything away.

Kanban and Scrum - making the most of both

Henrik made a bunch of points which are covered in the free book he released [Insert url for PDF] (it requires registration but nothing more). As a result, I'll just describe one example he used to show how limiting work in progress is a productivity win.
After getting a volunteer from the audience and timing how long it took him to write a name (4 seconds), he asked for 4 more people to come up on stage.
He then asked for a time estimate for writing all 4 names (20 seconds).
With that he introduced the obstacle of not limiting work in progress: the names had to be written one letter at a time, cycling between them. It took almost 2 minutes! (Being a German being told Swedish names added to the time taken.)
This was a somewhat contrived example, but it still illustrates that context switching and unlimited work in progress can be a real issue.
Scale that up to normal jobs, where in a single day you might context switch 3-4 times, and you can see that this example bears out in real life.
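A toy model of the demonstration, with made-up costs (1 unit per letter, 2 units to refocus on a different name), shows the same effect:

```python
names = ["Henrik", "Anna", "Johan", "Maria"]  # hypothetical names

def serial_cost(names, letter=1, switch=2):
    # Finish one name before starting the next: one switch per name.
    return sum(len(n) * letter + switch for n in names)

def round_robin_cost(names, letter=1, switch=2):
    # One letter of each name in turn: a switch before every letter.
    return sum(len(n) * (letter + switch) for n in names)

print(serial_cost(names), round_robin_cost(names))  # 28 60
```

Same total letters either way; the unlimited-WIP version pays the refocusing cost over and over.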

Øredev 2010 - Day 5 - Tim Jeanes


Omg omg omg! It's Nolan Bushnell, the founder of Atari!

Session 1 - The Technical Debt Trap - Michael "Doc" Norton

The term technical debt refers to problems or bad code that you decide you'll fix later but never really do. A little debt early on speeds development. It's fine so long as you pay the debt back soon, but left unattended is like accumulating interest on that debt as the bad code compounds.

Though developers generally hate technical debt in any form, it can be a good thing provided it's managed correctly. It allows for rapid delivery to elicit quick feedback and correct the design.

This isn't an excuse to write code poorly now: if it's debt, you intend to pay it back. You can only do this if you write the code in a way that you can refactor it into a better shape later on. Dirty code is to technical debt as the pawnbroker is to financial debt.

If you don't have a plan to get out of the debt you're in, you don't have technical debt, you just have a mess; cruft.

Much technical debt can be avoided; however, much is unintentional. When you look back at your old code you find it full of cruft that needs cleaning up. Unfortunately, how you got into debt doesn't excuse you from paying it back.

A lot of handling technical debt comes down to handling expectations. We know we're going to incur it, so we need to balance that against the pressure for speed over quality. Technical debt then compounds as mess is added to mess in order to maintain velocity. Adding clean-up sprints to the dev cycle generally doesn't work. Studies show that the cruft just re-accumulates rapidly afterwards. A better pattern is to clean continuously: try to leave the code cleaner than you found it.

Session 2 - C#'s Greatest Mistakes - Jon Skeet

We love C#, so it's great to hear such an expert on the subject (and someone who loves it even more passionately) talking about what's wrong with it.

This was almost entirely a code-based session, so it's hard to write about without duplicating reams of code.

Ah well.

Session 3 - Fostering Software Craftsmanship - Cory Foy

We saw a few case studies of people who have opted out of the traditional software development career structure to follow instead a more journeyman-type model whereby they spend their time travelling and pairing with people and exchanging ideas.

Historically, software development became increasingly prescriptive from the 1960s onwards. The Agile Manifesto was a massive backlash against this that reoriented programmers' values much more towards being flexible and adaptive. Though the older models were more heavyweight, they did lead to stricter practices.

Empowerment can be defined as sharing information with everyone, creating autonomy through boundaries, and replacing the old hierarchy with self-managed teams. Agile practices often fail when management still works from a carrot-and-stick way of rewarding goals or punishing failures. If we're to work as a team, individual rewards and bonuses are always counter-productive.

Session 4 - Real Options - Olav Maassen

Similar to stock options - where you purchase an option now to buy later at a price set now - almost everything can be considered an option by weighing the benefit against the loss incurred by deciding against it, whether that cost/benefit is financial, emotional or moral.

Options have value; options expire; never commit early unless you know why.

Similarly, buying a plane ticket far in advance is much cheaper than buying it at the last moment. The earlier ticket is non-refundable, but the flight is not mandatory: all you've bought is the option to fly then. If you choose not to take the option, you've not lost much.

Being uncertain can often be better than being wrong.

In agile practices, pair programming provides options: two people think of different ways of tackling the same problem.

Scrum backlog provides options: every idea is there. Action is postponed until the time the priority decision needs to be made.

Session 5 - True Tales of the App Store - Making iPhone Apps for (Fun and) Profit - Jack Nutting

Developing apps for any platform's app store lets you skip most of the hard work of selling software - you have no inventory management, payment handling or shipping. However, this has led to a crowded market, so marketing becomes a bigger priority. You can basically run your own software firm from your desk, but marketing isn't something developers normally do that well.

The app store has changed the public's relationship to software - they buy and discard software much more freely now, many of them for the first time.

The lower the price of your app, the more downloads you'll get, but also the more haters you'll get - more low ratings and more negative comments.

The app store is a hit-driven economy: the better you're doing, the better you continue to do. If you're outside the top fifty or so, you can sink without trace. If there were ever a bug in the app store that showed a non-entity app as number one, within a day it would probably actually be number one.

Releasing a "lite" version of your app can help to gain publicity, but Apple have strict rules against non-functional buttons that only tell the user to buy the full version to activate that feature.

Other than making money directly through the app store, advertising can be a revenue stream, but it's generally a slow one. For games, in-game purchasing of additional levels can also work.

An alternative is to build free apps that help people use your business: the app doesn't make any money in itself, but acts as direct or indirect advertising to the existing business model.

Before you even write a line of code, be smart: don't reinvent the wheel. Or if you must, at least add something new.

If you make an app that people keep using, then even if they don't spend any more money in it, you lengthen its word-of-mouth viral capacity. People love stories - even (or perhaps especially) stupid ones. This gives you the excuse to continue to add new content. You can add variety by adding mini games (even in games).

Make your app as simple as possible, but not simpler. Unless it's a game or a book, people will want to be in and out of your app as quickly as possible. Eliminate settings wherever possible; if you can make a choice on behalf of your users, do so.

Thursday, November 11, 2010

Øredev 2010 - Day 4 - Tim Jeanes

Session 1 - Patterns of Parallel Programming - Ade Miller

[The source code from this session is available at:]

With massively multi-processor PCs becoming increasingly mainstream, it's important to utilise the full scope of processing power available wherever possible. .NET 4 implements a bunch of features to make this relatively painless. Unless we take advantage of parallelism, our software may actually run more slowly, as newer machines have more, slower cores rather than a single faster one.

We saw the Visual Studio 2010 profiler. This is baked into VS2010 and shows clearly where the CPU is being used, together with disk access, etc, on a timeline. This looks really handy for identifying where the bottlenecks really lie. Using profiling is critical - understanding the application and where the problems are first is vital, rather than just wildly parallelising unnecessarily.

There are a couple of models for parallelism: one is task-based parallelism where we consider what tasks need to be done and run them in parallel. The other is data parallelism: for example in image processing you could split the image into pieces, process them in parallel and then stitch them together in the end.

In data parallelism, it's important to get the data chunk size right: too big and you're under-utilised; too small and you waste too much time thrashing.

You also have to take into account at runtime what degree of parallelism is appropriate: your software may end up running on a machine in a few years that has far more processors than were available when you wrote the software.

Rather than counting processors yourself and manually creating threads, it's better if we can hand this responsibility to the .NET framework and allow it to take care of the degree of parallelism itself. Ideally we just express where parallelism can take place.

In .NET we can do this using the Task<> class. We specify a task that needs to be performed, but we don't say when it starts. We only request a result from it. You can specify dependencies between tasks.

There are a couple of standard patterns that are addressed for data parallelism: loops where items can be handled independently, and loops where the required result is some kind of aggregation of all items in the set.

The first of these is trivial: replace for() with Parallel.For() and you're done. Bear in mind though that you can never assume that the items will be processed in any kind of order at all. Parallel.ForEach can even be used on collections where you don't know the collection size up front.

There's also an overload of Parallel.For that allows for data aggregation between threads. The only gotcha is to ensure you do your own locking in the step that combines the sub-aggregations from each parallel section. Locks are pretty bad in terms of performance though, so if you find you're getting a lot of them in your parallel tasks, it's a good idea to consider whether or not this is the right way to go.

This isn't a silver bullet: parallelism is still hell if your tasks need to share data or need to do a lot of synchronisation.

Task.WaitAll allows you to wait until all parallel tasks have completed; Task.WaitAny allows you to continue after just one has finished. Tasks can be cancelled if they're no longer needed. These last two can be combined if you're doing a parallel search for a single item in a large set.
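A rough Python analogue of that search-then-cancel pattern (made-up data; note that Python's futures can only be cancelled before they start running, so this is weaker than TPL cancellation):

```python
from concurrent.futures import ThreadPoolExecutor, as_completed

def search(chunk, target):
    # Pretend this is expensive; returns the index or None.
    try:
        return chunk.index(target)
    except ValueError:
        return None

data = [list(range(0, 100)), list(range(100, 200)), list(range(200, 300))]
with ThreadPoolExecutor(max_workers=3) as pool:
    futures = {pool.submit(search, chunk, 142): n
               for n, chunk in enumerate(data)}
    hit = None
    for f in as_completed(futures):          # ~ Task.WaitAny
        if f.result() is not None:
            hit = (futures[f], f.result())
            for other in futures:
                other.cancel()               # best-effort cancellation
            break

print(hit)  # (1, 42)
```

The first successful result wins and the rest are abandoned, which is the whole point of the pattern.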

The Pipeline pattern can be used where many tasks have to be performed on data items that are independent of one another - i.e. once Task A has finished with a data item, it can immediately be passed to Task B. Buffers between the tasks can have size limits on them to ensure that processing capacity is used most where you need it. This can prevent thrashing and memory overflows.

In some cases it's appropriate to combine parallel strategies: if your pipeline has a bottleneck, that stage can itself be parallelised (much like adding more workers to the slow step of a production line).
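The pipeline-with-bounded-buffers idea can be sketched in a few lines of Python (queue.Queue standing in for the inter-stage buffers; the doubling and incrementing stages are made up):

```python
import queue
import threading

# Stage A feeds a bounded buffer that Stage B drains, so a slow stage
# back-pressures the fast one instead of letting memory grow unbounded.
buffer_ab = queue.Queue(maxsize=2)   # the size limit mentioned above
out = []
DONE = object()

def stage_a(items):
    for item in items:
        buffer_ab.put(item * 2)      # blocks when the buffer is full
    buffer_ab.put(DONE)

def stage_b():
    while True:
        item = buffer_ab.get()
        if item is DONE:
            break
        out.append(item + 1)

b = threading.Thread(target=stage_b)
b.start()
stage_a([1, 2, 3, 4])
b.join()

print(out)  # [3, 5, 7, 9]
```

If stage B were the bottleneck, you'd run several stage_b threads against the same buffer - the "more workers on the slow step" fix described above.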

Session 2 - Run!

Billed as a 5km run, it was mercifully a little shorter than that. Man, running along the Swedish coast in November is cold!


What was I thinking?

Session 3 - Personal Kanban - Jim Benson

Building a personal kanban board for your own work (or even for your own dreams) can build a lot of clarity in your own mind. It removes the brain clutter that creates stress and dissatisfaction, giving clarity to your current position and how well you're doing at whatever it is you do.

Even in a work-related personal kanban, it's worth including non-work items. The fact that you're worried about a sick relative belongs in your WIP column: it's a distraction today, and it's impacting on your performance.

We tend to want to take on way more work than we can deal with, because we want to be productive - or at least be seen to be. We often don't recognise that we have our own WIP limit, that when exceeded, dramatically impacts on our productiveness.

Kanban can also be used for meetings: it makes for a more flexible, dynamic agenda containing things the attendees actually want to talk about, and it helps to keep the conversation focussed. I'm not totally convinced on this, though - it's hard to say for sure when a discussion on a topic is definitely "done".

Session 4 - MongoDB - Mathias Stearn

MongoDB is a document-orientated noSQL database. A document is essentially a JSON object, which gives a few advantages over a traditional SQL-based database.

As the data isn't stored in defined tables, all objects can be expanded dynamically. Also, as relationships aren't used except where needed, parent and child objects can be held as a single object.

For example, if you're storing a blog in a database, you'd hold each post as a document. That would include all tags and comments as array properties on the blog post object. Physically, these are all held in a single location on disk (effectively as a binary representation of the JSON string) making object retrieval very fast. Data writes are also fast. This makes MongoDB appropriate for high-traffic web apps, realtime analytics or high-speed data logging.

Querying the data is function-based rather than SQL-based, but this really only amounts to a syntax difference: db.places.find({zip:10011, tags:"business"}).limit(10); is an example query equivalent. Pretty self-explanatory, and a little shorter than SQL. Critically, though, there's no join between a Business table and a Tag table as there would be with SQL.

More complex queries are also possible, such as {latlong:{$near:[40,70]}}.

Data can be indexed by property to improve performance.

Updates to records are achieved by combining criteria that find the relevant document with an update command such as $push, which appends values to array properties on the document.
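As an illustration of the document and update shapes involved, here they are as plain Python dicts in the form a driver would send (the field names and values are made up, and no server is involved - the "what the server would do" step is modelled locally):

```python
post = {
    "title": "Oredev day 4",
    "tags": ["conference", "nosql"],
    "comments": [
        {"author": "Tim", "text": "Nice writeup"},
    ],
}

# Criteria select the document; $push appends to an array property.
criteria = {"title": "Oredev day 4"}
update = {"$push": {"comments": {"author": "Tristan", "text": "+1"}}}

# What the server would do, modelled locally:
if all(post.get(k) == v for k, v in criteria.items()):
    for field, value in update["$push"].items():
        post[field].append(value)

print(len(post["comments"]))  # 2
```

The comments live inside the post document itself, which is why the whole thing can be read or updated in one disk-local operation.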

Where appropriate, objects needn't be combined into single documents: joins can be achieved by adding ObjectId references as properties on documents. There's no such thing as referential integrity in this case, though.

Actions on a single document can be chained together and will be treated atomically, giving you a rough equivalent to SQL transactions. There's no such thing as atomic operations across multiple collections.

MongoDB is impressively mature in terms of deployment features such as replication and database sharding.

Session 5 - Challenging Requirements - Gojko Adzic

Customers often ask you to implement their solution to a problem. This often leads to nightmare projects that are way bigger than they need to be. It's generally better to understand what the real problem is and solve that: implementing the true solution can well turn out better than implementing the solution the customer initially identified.

Similarly refuse to use the technology the customer specifies unless you first confirm that the technology actually matches their need. Often they'll think they know the best way to implement a solution, but another option may be far simpler and more appropriate.

Don't rush into solving the first problem they give you; keep asking "why" until you get to the money: that'll be their real requirement.

Know your stakeholders: who is going to use this and why?

Don't start with stories. Start with a very high level example of how people will use the system and push back to the business goals. The story you're presented with may well not be a realistic one.

Great products come not from following the spec; they come from understanding the real problem and whose problem it is.

Effect maps can be used to trace the purpose of all features. They ask why the feature is needed, who the people are that want the feature, then what the target group want to do and how the product should be designed to fulfil that.

Session 6 - Kanban and Scrum - making the most of both

OK, I think it's fair to say I'm officially totally in love with Kanban now. However, I'm also fairly fond of Scrum. Short of a Harry Hill solution to this dilemma, I attended this session to see how we could take the best of both worlds.

The key features of kanban are limiting the WIP at any stage, and measuring the flow (typically by measuring the lead time - the time it takes for a task to cross the board).

Having a lot of parallel tasks or projects running simultaneously leads to more task switching, which leads to more downtime and delays, which leads to all the projects being completed later.

Doing tasks in series, perhaps with a background task to work on while the main project is blocked, keeps everyone focused and more productive, completing projects sooner: leading to happier customers and happier developers.
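A toy calculation with made-up numbers shows why: three 10-day projects done serially finish at days 10, 20 and 30 (average 20), while round-robinning all three finishes everything at day 30, even before counting any switching costs:

```python
def serial_finish_times(durations):
    # One project at a time: each finishes as soon as its work is done.
    finished, elapsed = [], 0
    for d in durations:
        elapsed += d
        finished.append(elapsed)
    return finished

def parallel_finish_times(durations):
    # Round-robin across all projects: everything finishes together at
    # the total elapsed time (ignoring switching overhead entirely).
    total = sum(durations)
    return [total] * len(durations)

projects = [10, 10, 10]
print(serial_finish_times(projects))    # [10, 20, 30] - average day 20
print(parallel_finish_times(projects))  # [30, 30, 30] - average day 30
```

In practice context switching makes the parallel case even worse than this model suggests.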

There's an example of the evolution of a kanban board at

Scrum prescribes rules more than kanban does, such as planning and committing to sprints, regular releases and retrospectives. Some of these items can be useful to add to the basic kanban model, depending on what's appropriate for the company.

Kanban doesn't prescribe sprints (though they are allowed). I think we may well go without sprints, just because at Compsoft we need to be able to react much more quickly - it's often too hard to commit to a period of time during which our workload can't be altered.

Kanban focuses on having multi-ability teams, where team members frequently help out on tasks outside of their normal primary area of expertise. It's not that everyone has to do everything, though (just as well - my Photoshop skills are pretty lacking).

Estimation is flexible in kanban. Some don't estimate at all - just count; some estimate in t-shirt sizes (S, M, L), some in story points, some in man-hours.