
Organic Testing?

Friday, July 4th, 2014

Either I’m missing something or the current test frameworks [1] don’t support what I call an “organic test”.

[1] …which are all excellent and have helped me sooo much in the past, present and future!

The fact that I prefer MiniTest over Rspec doesn’t matter in this write-up, as I couldn’t find a way to do what I want in either framework.

What I Do.

I constantly find myself writing tests like this.

it do
  song = Song.new(:title => "Dragonfly")
 
  song.size.must_equal 1
  song.title.must_equal "Dragonfly"
  Something.else.must_equal 2
end

Note that the actual tested code needs to be run before the assertions, but not before every assertion. It’s important to me to run it once, have the same result for every assertion and save time. The result is treated as immutable.

What I Should Do.

The three assertions are what I’d love to have in it {} blocks, to get a better overview of how many assertions I broke. So, following “the rules”, I’d end up with the following ugly code.

describe do
  before { @res = Song.new(:title => "Dragonfly") }
  let(:song) { @res }
 
  it { song.size.must_equal 1 }
  it { song.title.must_equal "Dragonfly" }
  it { Something.else.must_equal 2 }
end

This just feels wrong. I hate before blocks in general, and usually, I want to set up more variables so I’d have several let lines. It’s not readable.

I can’t use subject or let or let!. As you have already seen, the third assertion doesn’t call subject and relies on the test state after before.

What I Want.

My idea is to extend MiniTest/Rspec to support this syntax.

spec do
  song  = Song.new(:title => "Dragonfly")
  title = Song.find(song.id).title
 
  it { song.size.must_equal 1 }
  it { title.must_equal "Dragonfly" }
  it { Something.else.must_equal 2 }
end

That is super straight-forward: Setup your variables, and then test them with isolated it blocks. How can I do that with existing frameworks?
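For what it’s worth, here’s a rough sketch of how such a spec block could be bolted onto Minitest’s spec DSL. Everything in it is hypothetical – the spec helper and the Collector class are made up – and it assumes the old expectation style where must_equal is available on every object.

require "minitest/autorun"

# Hypothetical extension: `spec` runs its block exactly once, collects the
# nested `it` blocks, and turns each of them into a separate Minitest test.
module OrganicSpec
  class Collector
    attr_reader :examples

    def initialize
      @examples = []
    end

    def it(&example)
      @examples << example
    end
  end

  def spec(&block)
    collector = Collector.new
    collector.instance_exec(&block)  # the setup code runs once, right here

    collector.examples.each_with_index do |example, i|
      it("organic assertion #{i}") { example.call }  # one test per collected block
    end
  end
end

Minitest::Spec.extend(OrganicSpec)

The obvious trade-off: the setup runs when the file is loaded, not inside a test run, so anything relying on per-test state (database cleaning, for instance) would need more thought.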

My Dilemma Of Semantic Versioning

Monday, June 30th, 2014

Hello. I am a gem author and I got a problem. It’s called versioning.

Yes, you’re right. Versioning. It’s the third big problem of computer science, after expiring caches and naming things.

My gems are all under active development. For some strange reason, I avoid major releases: people might be expecting a major new break-through feature when all I changed was a minor detail that affects 1% of all users.

I do minor version bumps instead and try to deprecate as much as possible. Apparently, this is not enough, as there are always angry users with broken builds who “didn’t expect that change with a minor release”.

What Is Semantic Versioning?

The idea of semantic versioning is very clever, and I’m happy there’s something close to a standard.

A semantic version is a string like 1.0.3; the three segments are called major, minor and patch version. You increment major when you make an “incompatible change”, meaning that the new gem version might break your old code. The minor is bumped for backward-compatible feature additions, so your code is guaranteed to keep working with the updated gem. The patch level is for backward-compatible bug fixes.

Semantic Versioning And Gems.

The problem arises when you’re actively working on your gems – which I happen to do! Actively in terms of adding features, fixing bugs and constantly refactoring older parts of code. When I refactor stuff I often find better, cleaner ways to achieve something and not uncommonly the restructuring comes with a “minor” behaviour change of the code.

Here are two examples to illustrate my dilemma.

A while ago I started moving common code I use in all my gems into the uber gem. For instance, in representable I allow “dynamic options”.

property :title, if: Rails.env == "production"
property :title, if: lambda { policy.ok? }

That’s a common pattern found in many gems. The :if (or any other option) allows providing a string, a lambda, a symbol, etc and evaluates that at run-time.

Evaluating those options is now handled by uber – which saves me a lot of work. In uber, a dynamic option lambda always receives arguments (what those arguments are is totally irrelevant at this point).

Breaking Things.

So, after updating from representable 1.7.8 to 1.8.0, the above code would break, as the old lambda does not accept arguments.

The fix is to change code as follows.

property :title, if: lambda { |*| policy.ok? }

It’s a really really simple fix. It basically says “I know there might be options passed into my block but I don’t care”.
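For anyone wondering why the old blocks blow up: this has nothing to do with representable itself, it’s plain Ruby lambda arity. A lambda that declares no parameters raises when it’s called with arguments, while the splat version simply swallows them.

strict  = lambda { true }      # old style: declares no parameters
relaxed = lambda { |*| true }  # fixed style: accepts (and ignores) any arguments

relaxed.call(:some_option)     #=> true
strict.call(:some_option)      # raises ArgumentError: wrong number of arguments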

I didn’t deprecate the former behaviour, but that’s another issue. I knew this was gonna break “some” people’s code. I announced that in the CHANGES, blogged about it, and supported wherever I could.

I didn’t deprecate it as the new solution is way cleaner and consistent – all options receive arguments, are evaluated using the same mechanism, and so on.

Not deprecating this change was stupid. However, deprecating things is a pain, and not every development team has a horde of willing programmers like Rails core does to work on deprecations.

Most users just updated their :if blocks to receive args and moved on. Some users complained – legitimately – that they were expecting a major bump.

Minor Details.

Another example, also in representable, is when I changed the way inline representers are created. This is a completely internal change.

class SongRepresenter < Representable::Decorator
  property :title
  property :label do
    property :manager
  end
end

Again, the details here are irrelevant. Look at that block.

property do .. end

That’s what we call an “inline representer”. What happens behind the scenes is that a new decorator class gets created for that nested block.

Until today, that new class inherited from SongRepresenter. This is wrong, as it might lead to unexpected behaviour in the inline representer. In the new version, that new decorator is derived from Representable::Decorator.

And, right – that change is so internal that 99% of users won’t even notice it, because nothing changes or breaks for them.

However, 1% of your users, I call them “power users”, do amazing stuff with your gem – stuff you didn’t even think of when designing the framework.

Their code might break with that update. They will complain and I have to lower my head in shame and admit that I didn’t properly version my shit.

What makes me frustrated here is that you simply cannot deprecate it – there’s no way to find out if a user relies on the old superclass for the inline representer. It’s a dilemma.

Semantic Versioning – Done Right.

One solution would be to add a fourth segment to the semantic version and shift the meanings down one level. That would mean the jump from 1.2.0.1 to 1.3.0.0 is not backward-compatible. I find this a stupid idea and I know you hate it, too.

That’s why the only way to keep my beloved users happy is to do semantic versioning right. I have to consistently release major versions whenever I change behaviour without deprecating it.

This would lead to fast-growing major numbers, e.g. 14.1.0 or something like that. I am not sure about the acceptance in the Ruby community for that.

To me, it feels strange to see those version numbers. Others, like my friend Ricardo, find it totally ok. Ryan refers to Chrome which is currently at v35.

It appears that semantic versioning was designed to have big major numbers – how would you follow an innovative path for development otherwise?

What’s your thought on that? Fast growing majors? Catch-all deprecations? No changes at all???

UPDATE: The ~> Operator

Thanks for all the twitter responses (and comments)! @bhuga pointed out that fast-growing major numbers make it (almost) impossible to define a dependency using the ~> operator in other gems. The “eating bacon” or “sperm” operator only works with minor and patch level and allows you to lock the dependency to particular minor and patch version ranges.
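To make that concrete, here’s what the operator buys you in a Gemfile (the version numbers are just examples).

# "~> 1.8" covers 1.8.x, 1.9.x and so on up to (but excluding) 2.0 –
# one line spans a whole range of compatible releases.
gem "representable", "~> 1.8"

# With fast-growing majors every breaking release bumps the first digit,
# so the widest range you can express is a single major:
gem "representable", "~> 14.1"   # >= 14.1, < 15.0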

I didn’t even think of that! As you can see – it’s a real dilemma!

Cells Integrates With The Asset Pipeline

Thursday, May 29th, 2014

Today’s a fantastic day.

Not only has it been nice and sunny so far, I also released Cells 3.11! It is the last minor release before Cells 4.0, which will finally and forever get rid of the stinky ActionController dependency that has sometimes made our life painful. In 4.0, the new view model will become the default “dialect” for cells.

Anyway, back to 3.11. It comes with two new features that I’ve personally started loving. You can now bundle assets into your cell’s directory (JS, CSS, images, fonts, whatever), making a cell a completely self-contained MVC component for Rails.

The second addition is purely structural: Cell::Concept introduces a new file layout for cells following the Trailblazer architecture. This new layout feels more natural, is easier to understand and allows cleaner encapsulation.

Packaging Assets.

It often makes sense to package JavaScript and CSS that belongs to a logical part of your page into the cell that implements it. We used to have those assets in global directories and it just felt wrong.

You may now push assets into the assets/ directory within the cell.

app
├── cells
│   ├── comment_cell.rb
│   ├── comment
│   │   ├── show.haml
│   │   ├── assets
│   │   │   ├── comment.js
│   │   │   ├── comment.css

How cool is that? A cell can now ship with its own assets, making it a hundred times easier to find related code, views, and assets. Your designers are gonna love you.

Hooking Into The Pipeline.

In order to use the assets in the global assets pipeline, two steps are necessary.

First, you need to configure the Rails app to find assets from the cell.

Gemgem::Application.configure do
  # ...
  config.cells.with_assets = %w(comment)
end

The with_assets directive allows you to register cells that contain assets.

Second, you need to include the cell assets in your application’s asset files. In app/assets/application.js, you have to add them manually.

//= require comment

The same goes for app/assets/application.css.sass.

@import 'comment';

I know it feels a bit clumsy, but it actually works great and if you have a better idea please let us know!

When compiling the global asset, the assets from your cell are now included.

Think In Concepts.

In the process of implementing the Trailblazer architectural style in Rails, the new Concept cell plays a major role. It allows a completely self-contained file layout. Here’s how that looks.

app
├── cells
│   ├── comment
│   │   ├── cell.rb
│   │   ├── views
│   │   │   ├── show.haml
│   │   │   ├── author.haml
│   │   ├── assets
│   │   │   ├── comment.js
│   │   │   ├── comment.css

See how all relevant files are under the comment/ directory? Views got their dedicated directory, and the actual cell code goes into cell.rb.

This slightly changes the way a cell looks.

# app/cells/comment/cell.rb
 
class Comment::Cell < Cell::Concept
  def show
    render
  end
end

A concept cell is always a view model. I’ll blog about the latter in a separate post. Apart from the slightly different name (see discussion below) everything else remains the same.

A New Helper.

Rendering (or just instantiating) a concept cell works with the new concept helper.

= concept("comment/cell", comment).call

This is exactly the same syntax as found with view models.

One cool new feature comes with that, too! You can also render collections of cells easily.

= concept("comment/cell", collection: Comments.all)

This helper is available in controllers, views and in the cell itself (for nested setups).

An in-depth discussion of how to structure cells will be in the Trailblazer book.

Why The New Trailblazer Layout?

Trailblazer is all about structuring – an essential element of software development that Rails has failed to establish.

In Trailblazer, the Rails app is structured by concepts. A concept is usually a domain concern like comments, galleries, carts, and so on. A concept not only contains the cell, but also forms, domain objects (“twin”), operations and more. It has proven to be more intuitive to structure code by concepts than by controller, view and model.

The cell is still a fully self-contained component of the concept and can be moved, removed or changed without breaking the app.

More To Come.

I’m lucky to have a great team of developers and we have started to deploy several concept cells to production. It just feels so much better and more natural. Give it a go!

Rails Misapprehensions: Single Responsibility Principle

Thursday, May 22nd, 2014

During the last months I had a few controversial chats about the “Single Responsibility Principle” (SRP), which is a concept in object-oriented programming for better encapsulation. Interestingly, the same conversation flared up again and again when discussing Reform’s validate method.

Since that “validate” method does a bunch of things I was accused of exposing “a method that breaks SRP”.

What Is Not SRP?

A Reform form object comes with only a handful of public methods: ::new, #validate, #errors, and #save. There are a couple more, but they’re not relevant now. As the names are pretty self-explanatory, let me briefly talk about #validate.

Here’s how you use this obscure method.

result = form.validate(params)

So, what validate does is it first populates the form’s internal attributes with the incoming params. It then runs the defined validations in that form instance and returns the result.
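To illustrate – with a deliberately dumbed-down toy class, not Reform’s actual code – validate is one public entry point that internally does two things: fill, then check.

require "ostruct"

# Toy form, only to show the two internal steps of #validate.
class ToyForm < OpenStruct
  def validate(params)
    params.each { |name, value| self[name] = value }  # 1. populate internal attributes
    errors.empty?                                      # 2. run the checks, return the result
  end

  def errors
    title.to_s.empty? ? ["title can't be blank"] : []
  end
end

form = ToyForm.new
form.validate(title: "")        #=> false
form.validate(title: "Roxanne") #=> true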

Several people complained that this is not a good API as it breaks SRP – the validate method was “doing too much”.

I don’t really know if SRP only applies to classes, but I can say one thing for sure: SRP can in no way be used with methods. If you say “this method breaks single responsibility” you are talking about private concerns within a class.

You’re right, because it’s a good thing to break up logic into small methods. But you’re wrong, because you’re talking about the private method stack and not the public API of a class.

In my understanding, when talking about SRP you talk about classes.

What Is SRP?

I had this eye-opening moment in a brilliant keynote by Uncle Bob at Lonestar RubyConf a few years back: an SRP’ed class is reflected by having exactly one public method.

Having this pretty simple rule, I admit that Reform is not SRP. To have a clean architecture, I should split Reform into one class per public method: Reform::Setup, Reform::Validate, and so on.

form = Reform::Setup.new(model).call
result = Reform::Validate.new(form).call(params)

Each class would only expose the #call method; in an SRP setup there’s no need to name the only public method, as the class name tells you what’s gonna happen.

Of course, this is super clumsy and no one wants to work with a single responsible “API”. :D

As a side note, Reform does exactly that behind its manly back – it provides you all the necessary methods via one instance, then orchestrates separate objects internally. As a user, you don’t need to know how it works.

About API Design.

I have no clue where it comes from; nevertheless, exposing as many methods as possible to your class’s users seems to be “OK” or even “cool”, probably coming from Rails, where an unconfigured ActiveRecord model offers you 284 public methods right away.

During 10+ years of designing open-source frameworks I have realised that the more public methods I allow my users to call, the more work it is to change my framework’s API later. Deprecating public methods is a pain in the ass.

Coming back to Reform, people suggested splitting #validate into two public methods: one to populate (or “fill out”) the form instance, one to actually validate it afterwards.

The word “after” indicates only one of the problems you introduce by extending the API:

  1. Users will fuck it up. They will call #validate without calling #fill_out, then ask why validate doesn’t validate and then someone else will reply that they forgot to call #fill_out before.
  2. They will call #validate, then #fill_out – in the wrong order.
  3. Reform is a form object – there simply is no case where you wanna fill out a form but then leave it unvalidated.

I decided to leave the validate method as it is and I do not regret it. Acceptance for this rebellious method increased after improving its documentation.

Sum Up.

Don’t use SRP when talking about methods. It’s a concept to be used with classes that expose a single public method.

The more methods you expose, the more things can go wrong due to wrong order, not calling a method or general confusion. Don’t make methods public because they “could be helpful”. A good API has a limited set of methods, only. If people ask for more, think about moving it to a separate class.

Applying SRP to workflows and generally to objects in a (Rails) app, and orchestrating those, is one of the numerous interesting topics discussed in my upcoming book. Sign up for the mailing list!

Reform 1.0 – Form Objects For Ruby And Rails

Monday, May 5th, 2014

Dear friends – Reform 1.0 is out. It took a while, and a lot of work went into thinking about changes and if they make sense. Not much of the public API has changed, which is a good sign. Internally, Reform has become simpler as I learned what Reform actually is: a validation concept with additional logic for UI and workflows.

The public API is now limited to a handful of methods with well-defined semantics. Tons of “discrepancies” were fixed by simplifying internals.

We also introduce Reform::Contract, which is an exciting concept for decoupling validations entirely from your models. Even if you’re not interested in the form part of Reform, make sure to check out contracts.

A form class still looks the same.

class AlbumForm < Reform::Form
  property :title
  validates :title, length: {minimum: 9}
 
  collection :songs do
    property :title
    validates :title, presence: true
  end
 
  validates :songs, length: {minimum: 2}
end

You gotta love that intuitive DSL – it has been copied in several other form gems already, so it must be good!

Unlimited Nesting.

I’m not sure if I like the fact, but Reform can now handle as many nesting levels as your crazy models need. In earlier versions, models nested three levels deep or more didn’t get validated. Not anymore. Go nuts.

Validations Against Nested Models.

In older versions it was a bit of a pain to validate, say, the minimum number of nested Songs. This is all simplified now in Reform and, as always, the simpler the better. Validations like the following just work now.

collection :songs do
  property :title
  validates :title, presence: true
end
 
validates :songs, length: {minimum: 2}

The validation will fail if there are fewer than two Song objects in the collection.

Automatic Population.

A big show-stopper for lots of new users was validating a new form with nested models. When rendering the form, they would set up the form correctly.

AlbumForm.new(Album.new(songs: [Song.new, Song.new]))

This renders two song forms into the album form. Submitting usually ended in a fiasco of exceptions, because the action processing the submission didn’t set up the object graph again.

AlbumForm.new(Album.new).validate(params[:album])

Reform then tried to validate the incoming song data against Song models that weren’t there (Album.new doesn’t provide Songs). This was a misunderstanding: Reform is not supposed to be stateful across requests and remember how many songs it displayed in the last request.

Whatever – you can tell Reform to “auto-populate” in #validate now.

collection :songs, populate_if_empty: Song do
  property :title
  validates :title, presence: true
end

This will create Song instances where they’re missing in validate. You can use a lambda and more options in case you wanna customize this process.

Lambdas are executed in the form’s context and need to return an instance (not the class).

collection :songs,
  populate_if_empty: lambda { |input, args|
    model.songs.build
  }

This is all for #validate. I’m planning something similar for the rendering part to configure the number of forms to render, etc.

Syncing.

Synchronizing data with the underlying model has caused some confusion, too. That’s why we split it into two parts with very limited behaviour scopes. BTW – many changes in Reform 1.0 were triggered by vivid, colourful discussions on the issues forum – I hope you guys keep contributing great ideas and criticism.

To write data back to your models, you use the #sync method now. This will go through all models and use the specified writers to sync data from the form to the models.

form = AlbumForm.new(Album.new)
 
form.validate(params[:album])
 
form.sync 
#=> album.title = "Best Of"
#=> album.songs[0].title = "Roxanne"
# and so on

Note that this does change the state of your (persistent) models – it does not save changes, yet!

Saving.

When hitting the #save method Reform will call save on all models – unless you tell it not to do so:

collection :songs, save: false do
  property :title

In earlier versions of Reform, saving would only call save on the top model. The idea behind that was that the underlying models get saved using ActiveRecord’s autosave: true feature. This design is still valid; however, Reform can now do this for you if you want.
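For context, this is the kind of setup the old design assumed – plain ActiveRecord, nothing Reform-specific.

class Album < ActiveRecord::Base
  has_many :songs, autosave: true  # saving an album also persists its changed songs
end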

Contracts.

This is by far my favourite refactoring: parts of Form have been extracted into Contract which allows validating models without the UI aspect. Allowing you to define nested validations in a separate layer paves the way for dumb data models that just contain associations and persistence-related logic as targeted in Trailblazer.

A contract looks like a form. Actually, contracts can be derived from forms (and cells, and representers) automatically, but this would go too far now. Just keep in mind that there won’t be redundancy.

class AlbumContract < Reform::Contract
  property :title
  validates :title, length: {minimum: 9}
 
  collection :songs do
    property :title
    validates :title, presence: true
  end

This looks familiar. Now, a contract exposes three public methods.

album    = Album.find(1)
album.update_attributes(..)
contract = AlbumContract.new(album)

The contract’s constructor accepts a model, just like a form.

if contract.validate
  album.save
else
  raise contract.errors.messages
end

You then use validate to run validations on the underlying model. Note that it doesn’t accept params – remember, it’s a contract validating the state of a model.

Eventually, you wanna display errors by calling errors on the contract.

The state of the model does not change during the contract’s workflows.

See how contracts help you to decouple validations from your persistence layer? In the long term, they will help you get to a layered architecture.

An in-depth discussion of this architecture can be found in my upcoming book (scroll up, left!).

Renaming

Finally, renaming works for all properties, whether it’s Composition or a model form or nested or whatever.

collection :songs, as: :tracks do
  property :title

This will expose songs as “tracks”, i.e. the setters/getters on the form and in the HTML will say “tracks”.

Internals.

Some things have changed in Reform 1.0. The internal workflows have been generalized. They all use representable for mapping data; it might look cryptic, but once you get the hang of representable you will easily understand all the transformations that happen (I also added comments, as some people complained about the lack of internal documentation).

The Form class is nothing more than an entry point delegating to the requested behaviour. This is reflected in four new modules.

  • Setup contains transformation logic to populate the form when instantiating it.
  • Validate – surprisingly – implements the #validate method along with the new populator option.
  • Sync writes form data to models.
  • Save delegates #save calls to all nested models.

This new file and class layout makes it very easy to navigate through Reform’s codebase – personally, I started structuring all my other gems like that.

Every workflow is implemented by exposing exactly one public method (e.g. #save) which goes through the form’s attributes on that level only. It then calls itself recursively on nested forms, making it a very clean implementation.
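A hypothetical, stripped-down version of that pattern (these are not Reform’s real classes) could look like this.

# Each form persists its own model only, then delegates #save to its nested forms.
class SketchForm
  def initialize(model, nested_forms = [])
    @model        = model
    @nested_forms = nested_forms
  end

  def save
    @model.save                 # this level's model only
    @nested_forms.each(&:save)  # recurse into nested forms
  end
end

Model = Struct.new(:name) do
  def save
    puts "saving #{name}"
  end
end

album = SketchForm.new(Model.new("album"), [SketchForm.new(Model.new("song"))])
album.save
#=> saving album
#=> saving song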

Caching In Cells: API Change In 3.10.

Monday, March 10th, 2014

The caching layer in Cells just got an update. By using the new uber gem we could generalize the processing of options, resulting in a more streamlined experience for you.

What Changed?

For those of you already using Cells’ caching, please update your code when updating to 3.10!

Blocks do not receive the cell instance as the first argument anymore – instead, they’re executed in the cell instance context.

And, the best: There’s no deprecation for this!

What used to be this…

class CartCell < Cell::Rails
  cache :show do |cell, options|
    cell.md5
  end

…you have to change to the following.

class CartCell < Cell::Rails
  cache :show do |options|
    md5
  end

Note how we simply got rid of the first block parameter.

Caching In Cells.

Cells allow you to cache per state. It’s simple: the rendered result of a state method is cached and expired as you configure it.

To cache forever, don’t configure anything.

class CartCell < Cell::Rails
  cache :show
 
  def show
    render
  end

This will run #show only once; after that, the rendered view comes from the cache.

Static Cache Options.

Note that you can pass arbitrary options through to your cache store. Symbols are evaluated as instance methods; callable objects (e.g. lambdas) are evaluated in the cell instance context, allowing you to call instance methods and access instance variables.

cache :show, :expires_in => 10.minutes

This is passed right to the underlying store.

Dynamic Options.

If you need arbitrary dynamic options evaluated at render-time, use a lambda.

cache :show, :tags => lambda { |*args| tagged_as }

In case you don’t like blocks, use instance methods instead.

class CartCell < Cell::Rails
  cache :show, :tags => :cache_tags
 
  def cache_tags(*args)
    ["updated", "revisited"]
  end

Those evaluated options along with their key are simply passed to the cache store.

Building Your Own Cache Key.

You can expand the state’s cache key by appending a versioner block to the ::cache call. This way you can expire state caches yourself.

cache :show do |options|
  options[:items].md5
end

The block’s return value is appended to the state key, resulting in the following key.

 "cells/cart/show/0ecb1360644ce665a4ef"

Using Arguments.

Sometimes the context is needed when computing a cache key. Remember that all state-args are passed to the block/method.

Suppose we pass the current cart object into the render_cell call.

render_cell(:cart, :show, Cart.current)

This cart instance will be available for your cache maths.

class CartCell < Cell::Rails
 
  cache :show, tags: lambda { |cart| cart.tags }

Cells simply passes all state-args to your code making it very flexible.

A Note On Fragment Caching

Fragment caching is not implemented in Cells by design – Cells tries to move caching to the class layer, enforcing an object-oriented design rather than cluttering your views with caching blocks.

If you need to cache a part of your view, implement that as another cell state.
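In other words, the cacheable part gets its own state with its own cache settings instead of a fragment block in the view. A rough sketch (the state names are made up):

class CartCell < Cell::Rails
  cache :totals, :expires_in => 10.minutes  # only this state's output is cached

  def show
    render   # not cached
  end

  def totals
    render   # cached; rendered e.g. via render_cell(:cart, :totals)
  end
end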

Better Nesting in API Documents With Representable 1.7.6

Sunday, March 2nd, 2014

The recent 1.7.6 release of Representable brings a really helpful feature to all the Roar and Representable users: Better nesting for flat hierarchies.

Simpler Nesting In Documents

Sometimes an API requires you to nest a group of attributes into a separate section.

Imagine the following document.

{"title": "Roxanne",
 "details":
   {"track": 3,
    "length": "4:10"}
}

Both track and length are nested under a details key. Now, this is the required document structure. However, it doesn’t really fit into your model scope, as both nested keys are properties of the outer Song object.

song.title  #=> "Roxanne"
song.track  #=> 3
song.length #=> "4:10"

In earlier versions of Roar/representable you had to provide a 1-to-1 mapping of your object to your document. This usually ended up in something clumsy like this.

class Song < ActiveRecord::Base
  # .. original code
 
  def details
    OpenStruct.new(
      track:  track,
      length: length
    )
  end

It got even worse when that nested document had to be parsed! I’ll spare you the details here.

More Than DSL Sugar: nested

Let’s experience the enjoyment of the new ::nested feature instead.

class SongRepresenter
  include Representable::JSON
 
  property :title
 
  nested :details do
    property :track
    property :length
  end
end

Life can be so easy. This simple change will advise representable to expect those two fellas track and length to be on the outer object, but it’ll still render them into a details: section.

And, even better, this will also parse the document and set the nested attributes on the song instance.

song.extend(SongRepresenter).from_json %{
  {"title": "Roxanne",
   "details":
     {"track": 3,
      "length": "4:10"}
  }
}
 
song.track #=> 3

It’s incredible how this new feature simplified our process to connect to the new Australian Post API – and, frankly, I feel a bit embarrassed I didn’t provide you guys with this feature earlier.

Deep Nesting

The new nested method turned out to be extremely useful for deeply nested “throw-away documents” that don’t need to be persistent.

For instance, here is a typical response from the Auspost API.

{"CreateArticleResponse":{
  "ArticleErrors":{
    "BusinessExceptions":{
      "NoOfErrors":1,
      "BusinessException":{
        "Code":102,"Description":"Internal Error, failed to process request to source"
      }
    }
  }
}}

To get to the actual error message, I need a 4-level deep hash access. The representer code before ::nested would look terrible – you’d spend half an hour on creating an object graph that maps to this document. Sucks.

Here’s how it looks now.

class ErrorsRepresenter
  include Representable::JSON
 
  self.representation_wrap = :CreateArticleResponse
 
  nested :ArticleErrors do
    nested :BusinessExceptions do
      nested :BusinessException do
        property :Description
      end
    end
  end
end

This is all defensive declarative code. If one of the keys is not found in the incoming document, representable will simply stop parsing that property.

To actually retrieve the error description I simply use a Struct.

err = Struct.new(:Description).new
 
err.extend(ErrorsRepresenter).
  from_json('{"CreateArticleResponse":{ ..')
 
err.Description #=> "Internal Error, ..."

What I love about this is: this code won’t break if the parsed document does not contain any of the nested attributes. err.Description will simply return nil.

Imagining the nightmare I’d have conditionally parsing a 4-level deep hash, I really feel like this is a good feature.

Internals.

A fair side note. Internally, a nested block is implemented using a Decorator, even if you’re using a module representer for the original document. This doesn’t really affect you; however, if you add methods to the nested representer, make sure to use the right reference when you wanna access the model.

class SongRepresenter
  include Representable::JSON
 
  property :title
 
  nested :details do
    property :track
    property :length
 
    define_method :track do
      represented.track.to_f
    end
  end
end

Note that I use represented instead of self in the helper method.

Let me know what you think.

Roar Got Better Http Support!

Saturday, March 1st, 2014

Morning everyone! The latest Roar 0.12.3 release comes along with some long-awaited features and I wonder why it took me so long.

I added some functionality to the client layer of Roar. As you might recall, Roar allows representers to be used both for backends and on the client side.

Roar’s Client Layer

Let’s run quickly through how to build a REST client with Roar.

As always, we need a representer to specify the exchanged document.

module SongRepresenter
  include Roar::Representer::JSON
 
  property :title
end

Next, I write a simple client class that consumes from the existing PunkrockAPI™. Please excuse my use of OpenStruct, but I’m lazy. And… aren’t lazy programmers the better programmers?

class Song < OpenStruct
  include Roar::Representer::JSON
  include SongRepresenter
 
  include Roar::Representer::Feature::HttpVerbs
end

We’ll discuss what happens here in a second.

Simpler HTTP API.

Here’s how you use that client, first.

song = Song.new
song.get(uri: "http://songs/roxanne", 
          as: "application/json")
 
song.title #=> "Roxanne"

The HttpVerbs module adds verbs to the client model. In this example, I use #get to retrieve the document from the specified URL, parse it and assign properties to the object. Since we also mixed in SongRepresenter, the client knows about the document’s structure and the attributes.

Note the new API for #get, #post, etc. You now use keys to specify arguments, no positional arguments anymore. No need to panic, we added a soft transition with deprecations.

HTTPS Support!

Let’s assume the PunkrockAPI™ goes SSL, requiring your client to use an HTTPS connection. This used to be a pain – check out how it works now.

song.get(uri: "https://songs/roxanne", 
          as: "application/json")

Exactly – you don’t have to do anything besides specifying https:// as the protocol; Roar does the “REST” for you.

Basic Authentication

To make it even harder, the API wants you to authenticate beforehand. Basic auth was a feature missing for a long time in Roar. Here it comes.

song.get(       uri: "https://songs/roxanne", 
                 as: "application/json",
         basic_auth: ["nick", "secret password"])

Pass in necessary credentials with the basic_auth: option. Done.

Configuring The Request.

The verbs now allow you to mess around with the Request object, too. It is yielded to the block before the request is sent.

song.get(...) do |req|
  req.add_field("Cookie", "Yummy")
end

Couldn’t be simpler to create a cookie, change the Accept: header or whatever. The yielded object is a request instance from the NetHTTP implementation (unless you’re using Faraday).

Too High-Level!

Sometimes the verbs might be too high-level, too smart, doing too much. You’re free to use the underlying Transport methods instead. They just do a raw HTTP request.

res = song.http.get_uri(uri: "http://songs/roxanne")
res.body #=> '{"title": "Roxanne"}'

More Soon!

These minor additions have helped a lot in my current project to communicate with the Auspost API. Stay tuned for a major update of Roar. We’re planning better defaults, full Faraday support, simpler nesting, and more.

New Experimental Feature in Cells: View Models

Wednesday, October 23rd, 2013

The cells gem has been around for almost 7 years now. With more than 300,000 downloads within 3 years it has gained some traction in the Rails community. Many projects are using it heavily to write reusable widgets, testable partials or just to have well-encapsulated view components.

We felt it was time to breathe some fresh air into it and take this mature project a step further.

View Models?

Cells still works the same way as it used to work. Relax, you can still use render_cell as before.

However, we now have a second “dialect” in version 3.9.0. The new view models in Cells address two issues that have been around for a while.

A streamlined DSL makes it easier to work with the cell instance itself. This is extremely helpful now that view models keep helper methods on the instance level.

Let’s see how that all works in an example.

class SongCell < Cell::Rails
  include Cell::Rails::ViewModel
 
  property :title
 
  def show
    render
  end
end

Mix in the ViewModel feature and get new semantics for your cell. First, check how a view model is created.

class DashboardController < ApplicationController
  def index
    song  = Song.find(1) # <Song: title: Roxanne>
 
    @cell = cell(:song, song)
  end

Cell instances are created in the controller. This could also happen in the view – if you really want that. Note that the second argument is the decorated model that this cell represents.

Decorating Models – Step 1.

Attributes declared with ::property will automatically be delegated to the cell’s model. So, the following call works without any additional code.

@cell.title #=> "Roxanne"

But more on that later.

View rendering still happens by using #render – exactly as in the “old dialect”. That’ll invoke the existing rendering with all the nice things like view inheritance, caching, etc.

The DSL looks a bit different, though.

@cell.show #=> invokes show state

This line will call the #show method (“state”) which in turn renders app/cells/song/show.haml.

Helpers Are Instance Methods.

We should look at the rendered view to understand what changed in terms of helpers and their scope.

/ app/cells/song/show.haml
 
%h1 #{title}
 
This song is awesome!
 
= link_to "Permalink", song_url(model)

Four helpers are used in this view. It is important to understand that all helpers in a view model view are invoked in the cell instance context.

Now, what does that mean?

Well, the call to title is not evaluated in some strange module; it is simply invoked on the cell, as we did earlier with @cell.title.

The same happens with link_to and song_url: The view model automatically provides the URL helpers on the instance.

The model method is another view model “helper”: an instance method returning the decorated object.

Decorating Models – Step 2.

This might look confusing at first glance, but imagine how simple it becomes to write your own “helper” now.

Why not extract the entire #link_to line to a separate, testable method?

class SongCell < Cell::Rails
  # ..
 
  def permalink
    link_to "Permalink", song_url(model)
  end
end

You can just move the entire line to the cell class and it’ll work.

Testing Helpers.

Not only do you suddenly get benefits like encapsulation and inheritance – your testing of the new “helper” is greatly improved, too.

it "renders #self_link" do
  cell(:song, song).permalink.
    should eq "<a href.."
end

This doesn’t fake an environment as Rails helper tests do. It executes the same code in the same environment as in production.

Using Existing Helpers.

To use one of Rails’ numerous helpers, you include the modules into your cell class.

class SongCell < Cell::Rails
  include TagHelper
  # ..
end

You can then use the methods in your view – or in your instance methods.

Again, the magical copying of methods into your view doesn’t happen anymore. The view model instance will be the view’s context itself.

Pollution.

I can hear people complaining about stuffing all those helper methods into the poor cell class. But let me ask you: do you really feel comfortable pushing your helpers into a scopeless, non-object-oriented module that gets mixed into the view somewhere in the stack and hopefully doesn’t cause namespace collisions?

Also, a cell typically embraces a small part of your UI. As this has a well-defined functionality you’d not mix in all helpers but only those you need. That reduces the number of “polluting” methods.

Another point against pollution is: When including a helper, it should ideally import the public helper methods, only. The internals and private methods should be in separate classes.

Actually, the only helper that does this is the FormHelper that delegates #form_for to the FormBuilder class.

Please, blame Rails’ helper implementation for the pollution, not me ;)

What About Real Decorators?

Don’t use a view model where you just need a simple helper. Use a decorator gem like draper to decorate your model (and push that into the cell, if you like).

Use a cell view model when there’s rendering of markup involved. Cells help to clean up Rails’ hard-wired partial mess and allow clean testing of the encapsulated widgets.

Use a cell view model when the decorations are needed for a special widget, only, and not across your application.

And use a view model if you found #render_cell too clumsy and you wanted to invoke different methods on the cell instance.

From Here…

The experimental view model feature is an attempt to move view logic – or, helpers – into an object-oriented space while reducing its complexity.

You still get all of Cells’ core behaviour like rendering views, nesting, inheritance across code and view level, and OOP caching. Anyhow, you get an easy way to wire helper methods into your views without falling back into a procedural programming style from the 60s.

There will be problems with the way Rails helpers are programmed, and hopefully we can fix those, finally making helpers predictable.

Give it a go, we can’t wait to hear your opinions about this new approach!

Running Multiple MySQL Servers With Different Versions On The Same Machine.

Monday, October 14th, 2013

Today I was urged to install MySQL 5.1 to run a “new” Rails project. Since I refused to uninstall my existing 5.5 I found a way of running two separate instances on my machine using MySQL-Sandbox.

Frankly, it was a pain in the ass and MySQL-Sandbox saved my day. Here is what I did on my Ubuntu machine – conceptually, this should work for other Linuxes, OSX, etc, as well.

MySQL-Sandbox?

This little tool helps you by installing and pre-configuring a separate MySQL instance. It also provides scripts for administering your servers. It is great.

Download The Binary.

Download the binary tarball from the MySQL download site. I downloaded mysql-5.1.72-linux-i686-glibc23.tar.gz.

Install Sandbox.

I had to install some Ubuntu packages as listed on this helpful post. However, this might not be necessary on OSX.

$ sudo apt-get install build-essential libaio1 libaio-dev

Then, install the sandbox tool.

sudo cpan MySQL::Sandbox

Create Your Sandbox.

The make_sandbox command will now install and configure a brand-new MySQL setup in a separate directory. I ran the following command.

make_sandbox mysql-5.1.72-linux-i686-glibc23.tar.gz

This installs mysql 5.1.72 into /home/nick/sandboxes/msb_5_1_72. Changing into that directory you can simply configure and spin up the server.

Configuring MySQL.

Your configuration file now lives in msb_5_1_72/my.sandbox.cnf and is ready to be edited – which wasn’t necessary as I was happy with the settings.

The only interesting directive to me was the port.

port               = 5172

Starting The Server.

The msb_5_1_72 directory comes with handy administration scripts, so within that dir I just ran the start command.

msb_5_1_72$ ./start

Using The Server.

This runs a completely isolated MySQL 5.1 instance on port 5172 while leaving my 5.5 alive on the standard port! Awesome!!!

Now, to connect to that server you just have to provide the port number in your client.

Note: On Linux, you also need to provide the --host with 127.0.0.1 as described here. Don’t say I didn’t warn you.

mysqladmin -u root --host=127.0.0.1 --port=5172 \
  -pmsandbox password

The original root password is msandbox, so go change this. Everything else works just like your “global” installation.

And, In Rails?

My database.yml looks like this.

development:
  adapter: mysql
  database: blog
  username: "root"
  password: ""
  host: 127.0.0.1
  port: 5172

Thanks to Giuseppe Maxia for this helpful tool.