Tim Jeanes: TechEd 2008 - Day 4
DAT318 - Microsoft SQL Server - Data Driven Applications from Device to Cloud
This was an extremely interesting session on how SQL Server will evolve in the future, particularly in connection with cloud computing.
Yesterday I was thinking that using SQL Data Services in the cloud would be pretty much identical to using SQL Server on a traditional server. It turns out that this is far from the case! Though they say they will support traditional database tables sometime soon, that's not the direction Microsoft are taking things in, and it's not really the paradigm to have in mind when thinking about cloud data storage.
SQL Data Services will store data in table storage: it chucks everything into one big table, with all your different types of entities in together. You don't query the table using SQL statements, but rather use LINQ. A typical query might look like this:
from customer in entities.OfKind("Customer")
where customer["Surname"] == "Smith"
select customer
What makes it crazier is that there's no schema whatsoever to go with this table. If our application changes and we now need to add a MiddleName property to our customer class, we can just do that and start chucking the new form of customer object into the same table. We'll have to bear that in mind when we query the customers, of course - even more so if we ever change the datatype of a property.
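To picture what that means in code, here's a sketch of the idea using a plain dictionary as a stand-in for an SDS entity (this isn't the real API, just the concept):

using System.Collections.Generic;

// Modelling a schemaless entity as a simple property bag - a sketch, not the real SDS types.
var oldCustomer = new Dictionary<string, object>
{
    { "Kind", "Customer" },
    { "Surname", "Smith" },
    { "Forename", "John" }
};

// A later version of the app can just start adding extra properties...
var newCustomer = new Dictionary<string, object>
{
    { "Kind", "Customer" },
    { "Surname", "Jones" },
    { "Forename", "Alice" },
    { "MiddleName", "Beth" }   // older records simply won't have this key
};

// ...so any code reading customers has to check for the key before using it,
// e.g. newCustomer.ContainsKey("MiddleName").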
Another consideration is partitioning of data. Whenever you insert a record of any kind, you have to specify the partition that that record belongs to. The partition name is just a string and you can create as many as you want.
When you query data, your query can't cross the partition boundary: every query runs within a single partition. That's both a massive strength and a massive weakness. On the one hand, if you know which partition your data is in, your query will run very quickly. Say you partition on customer id (which was one option the speaker recommended): when you're looking at data belonging to a single customer, you'll get results in no time. On the other hand, when you want to search for all the customers called Smith, you're stuck.
The only thing you can do to query across partitions is to launch queries in parallel, running one in each partition. In our customer search example, that means you're going to run one query for every single customer, which is clearly madness.
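To make that concrete, here's a rough sketch of the fan-out, again using dictionaries as stand-in entities; partitions here is an assumed dictionary mapping partition name to the entities stored in that partition, not anything in the real API:

// One query per partition, results merged by hand.
var smiths = new List<Dictionary<string, object>>();
foreach (var partition in partitions.Values)   // potentially one of these per customer!
{
    smiths.AddRange(
        from entity in partition
        where (string)entity["Kind"] == "Customer"
           && (string)entity["Surname"] == "Smith"
        select entity);
}

Even if you fire those queries off in parallel, you've still done work proportional to the number of partitions just to find one surname.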
So what makes for a good partitioning decision? I don't know yet, and from talking to the speaker after the session, it doesn't seem anyone else really does either. For massive databases (such as those behind Windows Live Messenger or Hotmail), a poor partitioning decision early in the project's lifecycle can have massive repercussions later on. Fortunately, Microsoft plan to have SQL Data Services make it a whole lot easier (for a start, possible) to change your partitions on a live database.
For databases of the size we typically build at Compsoft, which (excluding binary data) typically don't go far over 5GB, it's tempting to chuck everything into a single partition. That's not a scalable solution, though, and it would be a nightmare to go from one partition to multiple partitions later on. Also, the CTP release of SDS won't allow partitions bigger than 2GB. So should we arbitrarily assign each customer to one of a handful of partitions, then add more partitions as the application usage grows? Should we split on the first letter of the surname for now, moving to the first two letters when our application becomes too popular? Should we split on country? On the year the customer signed up? Which partition do we put our lookup data in (maybe it belongs in its own partition and we cache it all in the app)? Let's just try it and see!
WUX310 - Beauty and the Geeks: Developer-Designer Workflow
The Compsoft/BlueSulphur partnership is still a pretty new one, and we haven't really figured out quite how to balance the relationship between designers and developers. This session focussed on how the two can work side-by-side on the same Silverlight project. As this is something we're moving into, I hoped to pick up some pointers on good working practices. I wasn't disappointed!
We saw the development of a Silverlight control: what work the designer and the developer each did, and what allowances each had to make for how the other worked.
The project is available on CodePlex: look for DeliciousSilverlight.
This is the first I've seen of any real code-behind in Silverlight development, and I was impressed with how easy it all appears and how well-structured it is.
We saw some good practices for how to make your Silverlight controls unit testable: to do this we create a view model that exposes properties to the user interface. The designers can work against these in Blend, while the developers hook them up either to the actual view or to unit tests. We can detect whether the control is being rendered in a browser or in Blend, and by using dependency injection we can provide mock data that will appear in Blend for the designers to work against. This is genius!
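Here's my own minimal sketch of the pattern - ICustomerSource, MockCustomerSource, LiveCustomerSource and myControl are all made up for illustration, and GetIsInDesignMode is the WPF call (I'd need to double-check the exact Silverlight equivalent):

using System.Collections.Generic;
using System.ComponentModel;
using System.Windows;

// The view model only knows about an interface; which implementation it gets is decided at startup.
public interface ICustomerSource
{
    IList<string> GetCustomerNames();
}

// Canned data for the designers to see in Blend.
public class MockCustomerSource : ICustomerSource
{
    public IList<string> GetCustomerNames()
    {
        return new List<string> { "Ada Lovelace", "Charles Babbage", "Grace Hopper" };
    }
}

public class CustomerViewModel
{
    private readonly ICustomerSource source;

    public CustomerViewModel(ICustomerSource source)
    {
        this.source = source;
    }

    public IList<string> CustomerNames
    {
        get { return source.GetCustomerNames(); }
    }
}

// When the control loads, pick the implementation to inject:
bool designMode = DesignerProperties.GetIsInDesignMode(myControl);
myControl.DataContext = new CustomerViewModel(
    designMode ? (ICustomerSource)new MockCustomerSource()
               : new LiveCustomerSource());   // LiveCustomerSource being whatever talks to the real service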
When binding properties of the controls we develop to the view model, use the syntax Text="{Binding MyTextProperty, Mode=TwoWay}" - this ensures the Text property of the control both reads from and writes to MyTextProperty in the view model, whether it's running in Blend or in the browser.
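For that to work in both directions, the view model needs to raise change notifications. A minimal version of my own (the control's DataContext is assumed to be set to an instance of this class):

using System.ComponentModel;

public class MyViewModel : INotifyPropertyChanged
{
    private string myTextProperty;

    public string MyTextProperty
    {
        get { return myTextProperty; }
        set
        {
            myTextProperty = value;
            // Tell the binding the value has changed so the UI updates too.
            if (PropertyChanged != null)
                PropertyChanged(this, new PropertyChangedEventArgs("MyTextProperty"));
        }
    }

    public event PropertyChangedEventHandler PropertyChanged;
}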
We also saw how to make controls that are skinnable in Blend. Using the TemplateVisualState attribute on the control class exposes a number of states that the control can be in (visually). Then in code, you handle the various events that trigger different display states (mouse over, click, etc.) and use VisualStateManager.GoToState to change the control's appearance.
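The shape of it is something like this - my own minimal example rather than the session's code, with FancyButton and its states invented for illustration:

using System.Windows;
using System.Windows.Controls;

[TemplateVisualState(Name = "Normal",    GroupName = "CommonStates")]
[TemplateVisualState(Name = "MouseOver", GroupName = "CommonStates")]
public class FancyButton : Control
{
    public FancyButton()
    {
        DefaultStyleKey = typeof(FancyButton);   // pick up the default template/skin

        // Map input events onto visual states; what each state actually looks like
        // is up to the designer in Blend.
        MouseEnter += delegate { VisualStateManager.GoToState(this, "MouseOver", true); };
        MouseLeave += delegate { VisualStateManager.GoToState(this, "Normal", true); };
    }
}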
Using the DependencyProperty type, we can expose further properties of our custom controls to Blend. You use this to tell it which property in the class you want the designer to affect, to set default values, and (optionally) to set a change handler method that does whatever you need in order to reflect changes made in Blend on your control in real time.
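Continuing the invented FancyButton example, a Caption property exposed this way might look like:

// The designer sees "Caption" in Blend; the default value and the change handler
// are both supplied through the property metadata.
public static readonly DependencyProperty CaptionProperty =
    DependencyProperty.Register(
        "Caption",
        typeof(string),
        typeof(FancyButton),
        new PropertyMetadata("Click me!", OnCaptionChanged));

public string Caption
{
    get { return (string)GetValue(CaptionProperty); }
    set { SetValue(CaptionProperty, value); }
}

private static void OnCaptionChanged(DependencyObject d, DependencyPropertyChangedEventArgs e)
{
    // Update the control's visuals here so changes made in Blend show up immediately.
}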
That's pretty much all you need to do to let the designers stay in the Blend world they're used to, playing with your controls and styling them visually to their heart's content.
PDC307 - Microsoft Visual Studio 10: Web Development Futures
Here we covered some of the improvements that will be coming out in Visual Studio 10.
First up was the designer view for web pages. At last, it renders in a standards-compliant way! Yay!
If, like me, you prefer to type your own markup in source view, it's just great to see that code snippets now work there as well as in the code window. They're giving us about 200 out of the box, as well as the ability to add your own. I think this will really make a massive difference in churning out pages quickly.
Also, when it comes to deleting controls in the source view, triple-clicking an element now selects the whole of it, which will make the whole process a lot less fiddly.
There are some improvements to JavaScript intellisense too. It used to work only across files, but is now also available within the same script file. Also, if you have separate JavaScript documentation files for intellisense purposes - such as the jQuery support files Microsoft recently released - there's a fix that means those files (though included in your project) won't be shipped to the browser. This change is already available as a hotfix for VS2008.
Another cool new feature is config transforms. You set up your base web.config file, then create transforms that describe (in XML) how that file should be modified for use in debug, release, staging, etc. You can create one of these for each build configuration you have defined. When using msbuild to perform your compilation, these can be invoked using the switches /t:TransformWebConfig and /p:Configuration.
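Something along these lines - I'm reconstructing the syntax from memory, so treat the exact schema and file names as approximate:

<!-- Web.Release.config: describes how to tweak the base web.config for release builds -->
<configuration xmlns:xdt="http://schemas.microsoft.com/XML-Document-Transform">
  <connectionStrings>
    <!-- Swap the development connection string for the live one -->
    <add name="Main"
         connectionString="Server=LIVE;Database=MyApp;Integrated Security=true"
         xdt:Transform="SetAttributes" xdt:Locator="Match(name)" />
  </connectionStrings>
  <system.web>
    <!-- Turn off debug compilation for release -->
    <compilation xdt:Transform="RemoveAttributes(debug)" />
  </system.web>
</configuration>

Then something like msbuild MySite.csproj /t:TransformWebConfig /p:Configuration=Release spits out the transformed web.config for that configuration.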
There are some improvements on the website publishing front too. They've reworked the publish dialog box, and the publish tab in the project properties has been expanded to include things like which files to include, which IIS settings to apply, and so on.
ARC309 - Using WPF for Good and not Evil
OK, it had been a long day of amazing stuff and my brain was pretty fried, so I was up for something a bit lighter, and (as ever) David Platt didn't disappoint.
We took a look at a few applications in WPF, where (with all the whizz-bang graphical power it gives you) it's extremely easy to make a truly bad user interface. It's also easy to make a really good one, but only if you know what you're doing.
The key fact is that most developers don't know what they're doing, and though a designer can make something look pretty awesome, that doesn't mean they're any good at designing a good user interface. The best user interface is one that the user doesn't notice. The effects available to you should be used sparingly, with subtle transitions appealing to the user's right brain to send subconscious signals. Any effect you use should have a reason for being used (and "because I can and it's cool" doesn't count).
I really must get round to reading The Design Of Everyday Things by Donald Norman.