Waldo Codes

Pragmatic insights on software craftsmanship and other topics.

Prism's documentation is thorough, but that thoroughness comes at the cost of the conciseness that makes documentation easily digestible. One key thing you will want to understand is how to get your views and view models connected. This post quickly details how to make it all work.

View Model Locator Configuration

If your view model has a default constructor, it will be instantiated and matched to the view automatically. If your view models do anything interesting, they will likely not have default constructors. You will need to configure the view model locator to resolve constructor dependencies from your IoC container.

ViewModelLocationProvider.SetDefaultViewModelFactory(
    t => container.Resolve(t)
);

View-to-view-model wiring is based on convention. They are matched by name, much like the familiar MVC conventions. Be careful though: if your views are suffixed with “View”, it will not work.

✅ Do this: ViewModels.MyScenicViewModel – Views.MyScenic
❌ Don't do this: ViewModels.MyScenicViewModel – Views.MyScenicView

The convention can be overridden with the following configuration point.

ViewModelLocationProvider.SetDefaultViewTypeToViewModelTypeResolver(
    (viewType) => {
        ...
        return viewModelType;
    });
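As a sketch, a custom resolver that tolerates a “View” suffix might look like the following. The naming rules here are my assumptions, not Prism's defaults – adjust them to your project's conventions:

```csharp
// Hypothetical resolver mapping Views.MyScenicView (or Views.MyScenic)
// to ViewModels.MyScenicViewModel.
ViewModelLocationProvider.SetDefaultViewTypeToViewModelTypeResolver(viewType => {
    // Swap the Views namespace segment for ViewModels.
    var name = viewType.FullName.Replace(".Views.", ".ViewModels.");

    // Drop a trailing "View" so both suffixed and unsuffixed views resolve.
    if (name.EndsWith("View"))
        name = name.Substring(0, name.Length - "View".Length);

    // Look the view model type up in the same assembly as the view.
    return viewType.Assembly.GetType(name + "ViewModel"); // null if not found
});
```

With a resolver like this in place, the “Don't do this” naming above would also resolve correctly.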

View Specific Wire-up

Your view must implement the IView marker interface found in the Microsoft.Practices.Prism.Mvvm namespace.

public partial class Wizard : Window, IView {
    public Wizard() {
        InitializeComponent();
    }
}

And finally, don't forget to set the auto-wire flag to True in your view.

<Window x:Class="MyProject.Views.Wizard"
    xmlns="http://schemas.microsoft.com/winfx/2006/xaml/presentation"
    xmlns:x="http://schemas.microsoft.com/winfx/2006/xaml"
    xmlns:mvvm="clr-namespace:Microsoft.Practices.Prism.Mvvm;assembly=Microsoft.Practices.Prism.Mvvm.Desktop"
    Title="Wizard" Height="600" Width="800"
    mvvm:ViewModelLocator.AutoWireViewModel="True">

For a more in-depth read, check the Prism 5.0 Developers Guide.

I was listening to .NET Rocks! show 971 with Brian Noyes, and I was glad to hear that Microsoft dropped a new release of Prism – v5.0 for WPF. I have been using the Prism framework on a large application. It allows me to easily break the application into modules. For applications that need more than the simple MVVM support offered by frameworks like MVVM Light, it is a solid choice. All the code is on CodePlex.

Here are the highlights of what is new in v5.0:
– Broken into smaller, more targeted assemblies
– Updated NotificationObject to BindableBase
– Includes a conventions-based View Model Locator
– Objects can be used to pass data around in Region Navigation
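To illustrate one of those highlights: the move from NotificationObject to BindableBase means property setters can lean on Prism's SetProperty helper. A quick sketch, assuming the Prism 5 Microsoft.Practices.Prism.Mvvm assembly is referenced (the DownloadViewModel name is mine, not from the release notes):

```csharp
using Microsoft.Practices.Prism.Mvvm;

public class DownloadViewModel : BindableBase {
    private int _progress;

    // SetProperty compares, assigns, and raises PropertyChanged in one call.
    public int Progress {
        get { return _progress; }
        set { SetProperty(ref _progress, value); }
    }
}
```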

The Haters

These are all great updates. Sadly, reading the comments on CodePlex would make you think otherwise. Most comments are negative, directed at the complexity and learning curve of the Prism framework. Others seem like people blowing off steam about issues in WPF. Normally, I wouldn't pay much attention to comments of this nature. However, it struck me as odd that people react with such backlash to technologies that are a clear improvement on previous generations.

For anyone discouraged by the complexity of Prism or WPF in general: do not lose sight of what the patterns and frameworks surrounding WPF allow you to do. Ultimately, they allow you to write testable, decoupled code. Be willing to embrace the pain that may be pushing you toward the pit of success.

Microsoft, thanks for the Prism Update! Happy Coding!

While working on a large .NET API, I ran into an interesting issue. The codebase was exposing functions in classes that were not necessary for the public API. Originally, this was done to facilitate testing. Several of the classes had methods involving complex calculations, and it was easiest to test these in isolation. The code had top-level functions that would call several other calculation functions. The result was a testable, but cluttered, API.

While this was not an issue when we used our own API, it was a point of confusion for third parties using our API. There are two lines of thinking one could take.

Test Centric

With this approach you justify your design by saying that it is more important for the code to be testable than it is to worry about hiding a few methods.

Design Centric

In this line of thinking you put the importance of the design above that of the tests. Testing is important, but it should not drive the design. For example, this approach shuns adding public properties solely to examine the internals of a class for testing.

I decided to take the design-centric approach. After changing many access modifiers to internal, I soon had the API cleaned up. IntelliSense in Visual Studio showed a concise list of functions and methods. Utilizing our API would be straightforward and simple.

At this point I realized I needed to address all my failing unit tests. Now that the access modifiers were set to internal, the test harness was unable to see the methods being tested.

The code could have been refactored to pull out a small calculation engine. This could then act as an internal member of the higher-level classes that comprised the API surface area. This would achieve the goal of making the code testable, while at least keeping the primary API classes clean. However, API users would still see the additional calculation class. The only true way to keep users out of the lower-level code is to mark it as internal.

As it turns out there is a pragmatic solution to the issue.

In the project's AssemblyInfo file, add an attribute to expose the internal classes to your testing assembly.

Here is an example with VB attributes, exposing the internals to the test framework and to Moq's proxy-generating assembly:

<Assembly: InternalsVisibleTo("Company.Library.Calculations.Test")> 
<Assembly: InternalsVisibleTo("DynamicProxyGenAssembly2")>

Notice that I also expose the internals to DynamicProxyGenAssembly2; this is so Moq's proxy engine can still create mocks for internal classes.
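For C# projects, the equivalent attributes in AssemblyInfo.cs would look like this (the assembly names simply mirror the VB example above):

```csharp
using System.Runtime.CompilerServices;

// Make internal types visible to the test assembly and to Moq's proxies.
[assembly: InternalsVisibleTo("Company.Library.Calculations.Test")]
[assembly: InternalsVisibleTo("DynamicProxyGenAssembly2")]
```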

This works well. It's minimal pain, and it allows us to hand our third-party implementers a very clean API.

When working in WPF, the typical property-changed notification looks something like this…

public bool IsBusy {
  get { return _isBusy; }
  set {
    if (value.Equals(_isBusy)) return;
    _isBusy = value;
    RaisePropertyChanged("IsBusy");
  }
}
private bool _isBusy;

The issue with this is that it uses a magic string and is not particularly refactor-friendly. It can easily be improved by using a lambda expression. This gets us refactor-friendliness, but it is still a little ugly.

public bool IsBusy {
  get { return _isBusy; }
  set {
    if (value.Equals(_isBusy)) return;
    _isBusy = value;
    RaisePropertyChanged(() => IsBusy);
  }
}
private bool _isBusy;

The best case would be to not need to raise property changed at all. That is not going to happen, so how about second best? What if we could raise property changed without needing to specify the property name at all?

public bool IsBusy {
  get { return _isBusy; }
  set {
    if (value.Equals(_isBusy)) return;
    _isBusy = value;
    NotifyPropertyChanged();
  }
}
private bool _isBusy;

With .NET 4.5 and above, this is now possible. Here is a basic implementation of ObservableObject using the new .NET 4.5 CallerMemberName attribute.

/// <summary> Base observable object class. </summary>
public abstract class ObservableObject : INotifyPropertyChanged {

  // Uses System.ComponentModel.PropertyChangedEventHandler; do not
  // redeclare the delegate, or the event will not satisfy the interface.
  public event PropertyChangedEventHandler PropertyChanged;

  /// <summary> Notify that a property has changed. </summary>
  /// <param name="propertyName">Property name; if not provided,
  /// it is picked up from the calling member.</param>
  protected void NotifyPropertyChanged([CallerMemberName] string propertyName = "") {
    if (PropertyChanged != null) {
      PropertyChanged(this, new PropertyChangedEventArgs(propertyName));
    }
  }
}
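To illustrate, here is a minimal, self-contained sketch of the base class in use. The StatusViewModel name is mine, not from the post; it requires the System.ComponentModel and System.Runtime.CompilerServices namespaces:

```csharp
using System.ComponentModel;
using System.Runtime.CompilerServices;

public abstract class ObservableObject : INotifyPropertyChanged {
    public event PropertyChangedEventHandler PropertyChanged;

    protected void NotifyPropertyChanged([CallerMemberName] string propertyName = "") {
        var handler = PropertyChanged;
        if (handler != null) {
            handler(this, new PropertyChangedEventArgs(propertyName));
        }
    }
}

// Hypothetical view model: the property name is inferred by the compiler.
public class StatusViewModel : ObservableObject {
    private bool _isBusy;

    public bool IsBusy {
        get { return _isBusy; }
        set {
            if (value.Equals(_isBusy)) return;
            _isBusy = value;
            NotifyPropertyChanged(); // raises PropertyChanged("IsBusy")
        }
    }
}
```

Because CallerMemberName is filled in at compile time, there is no reflection cost and renaming the property with a refactoring tool keeps the notification correct.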

Happy Coding!

Recently I discovered that NuGet package restore has been dramatically simplified.

The NuGet team has provided the following document Migrating to Automatic Package Restore.

I will provide the abbreviated form of that document here.

The Previous Method

Right-click on your solution in Visual Studio and choose Enable NuGet Package Restore. This created a .nuget folder containing nuget.exe etc. Your .csproj or .vbproj files were modified with an <Import> tag containing a NuGet path.

This triggered automatic package restore upon a build of your solution. Thankfully, this is no longer necessary!

The New Method

You do nothing, unless you are building from the command line. If so, simply run 'nuget restore' before the rest of your build.
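For a command-line build, that means something like the following (the solution name and build settings are placeholders):

```shell
# Restore packages first, then build as usual.
nuget restore MySolution.sln
msbuild MySolution.sln /p:Configuration=Release
```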
