
Cells 4.1: Block Support, Better Collections, External Layouts!

Wednesday, May 4th, 2016

After many weeks of work, I am happy to announce the 4.1 release of Cells, the popular view model gem for Ruby. This release paves the way for the upcoming, super-fast Cells 5.0 by tidying up the core and adding some amazing new features. All this happened without any public API change!

Rails Support

Since Cells 4.0, the gem is fully decoupled from Rails and provides its own rendering stack. We decided to take this one step further and move all Rails-specific code into cells-rails. If you are using Rails, make sure to include that library in your Gemfile.

gem "cells-rails"

A nice side effect: we now provide an out-of-the-box fix for bugs with Rails’ asset helpers! Using different asset configurations in production worked fine in controllers, but caused problems in cells. By delegating all those helpers to the controller, this is now fixed in a very clean and simple way.

Yielding Blocks

Many Cells users (and the usage count is growing every day) have asked for the ability to yield blocks or content in views. The render method now accepts a block.

def show
  render { "Yay!" }
end

In the corresponding view, you can then yield this block.

%h3 Yield!
= yield #=> "Yay!"

This can be used to wrap one view into another without having to pass around locals.

def show
  render { render :nested }
end

As a logical conclusion, you can also pass a block in from the outside – it will be passed as the show method’s block. Check this out.

cell(:comment, comment) { "Yay!" }

To process it, you need to pass the block through.

def show(&block)
  render &block # pass through!
end

This allows injecting content into a cell without any state.

Default Show

Since 99.9% of all cells have an identical show method, it is now provided by default via ViewModel#show. The implementation looks like the code snippet above.

No need to write this yourself anymore.
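Presumably, the inherited default boils down to something like this sketch (inferred from this post, not the verbatim Cells source; `render` is stubbed here so the example runs standalone):

```ruby
# Sketch of the inherited default #show (an assumption based on the post,
# not the verbatim Cells source). `render` is stubbed so this runs standalone.
class SketchViewModel
  def render(&block)
    block ? block.call : "rendered view"
  end

  # The default #show simply passes any given block through to #render.
  def show(&block)
    render(&block)
  end
end

SketchViewModel.new.show            #=> "rendered view"
SketchViewModel.new.show { "Yay!" } #=> "Yay!"
```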

External Layouts

Cells are not only great for fragments of your page; using a layout cell, they can also completely replace the entire ActionView stack.

A layout cell?

Yes, it’s super simple. A layout cell is really just an ordinary view model with a layout view.

class LayoutCell < Cell::ViewModel
  # show is inherited!
end

The view is identical to the layout views you were using before, with ActionView.

!!!
%html
  %head
    %title= "Gemgem"
    = stylesheet_link_tag 'application', media: 'all'
    = javascript_include_tag 'application'
  %body
    = yield

As you can see, nothing special. To use this layout cell, you can now leverage the new “external layout” feature.

First, you have to prepare your cell classes for that.

class CommentCell < Cell::ViewModel
  include Layout::External
  # ..
end

You can then inject a layout cell from the outside when calling the actual cell.

cell(:comment, comment, layout: LayoutCell)

Internally, this will first render the content and then yield it to the instantiated LayoutCell, which wraps it in a layout. This is faster than ActionView and avoids leaking global state.
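Conceptually, the mechanism can be pictured with two plain Ruby classes (a toy illustration with made-up names, not the actual Cells implementation):

```ruby
# Toy illustration of the external-layout mechanism (invented classes,
# not the Cells source): the content cell renders first, then its output
# is yielded to the layout cell, which wraps it.
class ToyComment
  def call
    "<p>Nice gem!</p>"
  end
end

class ToyLayout
  def call
    "<html>#{yield}</html>"
  end
end

content = ToyComment.new.call
ToyLayout.new.call { content } #=> "<html><p>Nice gem!</p></html>"
```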

Context Object

When dealing with layouts (outer) and nested cells (within your actual cell), it used to be hard to share common data. Say you want the current user available in all those cells – you can now use the context object.

cell(:comment, comment, context: {user: current_user})

The context object is usually a hash. It’s automatically passed on to all cells involved in the rendering invocation, such as the optional external layout or nested cell calls.

Within those cells, you can simply access the context method.

def show
  puts context[:user]
end

The context object has been around since version 3.3 but was limited to Rails internals – it’s now a generic state object and will help with advanced cell nesting in 4.2.

Better Collections

We’ve had cell collection rendering for quite a while. However, the API was a bit clumsy, especially when you had to fine-tune how the collection is rendered.

Using the :collection option will now return a Collection instance. By default, in views, you don’t have to change anything.

In tests or controllers, however, this instance needs to be called just like any other cell.

cell(:comment, collection: Comment.all).() # renders collection

To invoke a different state, you now use the normal cell API.

cell(:comment, collection: Comment.all).(:display) # calls #display.

With join, you can customize the rendering of the collection.

cell(:comment, collection: Comment.all).join do |cell, i|
  i.odd? ? cell.(:odd, i) : cell.(:even, i)
end

Each block iteration will be automatically concatenated for you. Very convenient, isn’t it? Oh, and it’s also faster than the old implementation.
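The concatenation idea can be sketched in plain Ruby (a toy helper with an invented name, not the Cells source):

```ruby
# Toy sketch of the #join idea (invented helper, not the Cells source):
# every block result is collected and concatenated into one string.
def join_collection(items)
  items.each_with_index.map { |item, i| yield(item, i) }.join
end

join_collection(%w[a b c]) { |item, i| i.odd? ? item.upcase : item } #=> "aBc"
```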

The full documentation can be found here.

Enjoy your rock-solid views!

Nokia 230 Review: Nice looking, but an Engineering Breakdown

Wednesday, May 4th, 2016

Being a long-time Nokia feature phone user and fan, I was more than excited to get the new Nokia 230. At EUR 80, it is reasonably priced. I hate the constant availability that smartphones impose, and I love how feature phones take exactly that away. Nokia (or Microsoft) advertises the Nokia 230 as “sleek and stylish” and “more of what matters”.

Look And Feel

It’s true, it looks fantastic. For me, coming from the Nokia 110, the 2.8″ display is amazing and sufficiently large. I love the buttons: they let you type text messages without looking, and the physical feedback helps. It’s very slim and light, and it feels great in my hand and pocket.

Battery: Yay!

Another reason I will wait for a better smartphone generation is my Nokia’s battery life. With texting, some calls and occasional internet access, it only needs charging about every 8 days. That’s great.

Since I don’t really use a feature phone for photos, I’ll spare you the details about the “well, ok” photo quality.

The “App” Store: Are You Fuckin’ Serious?

You couldn’t be more wrong if you think the Nokia 230 is of any help with “internet stuff”, though. Reminder: it is 2016. The 230 simply runs an Opera Mini browser, which is basically unusable.

I can google, that’s great.

Sites like Skyscanner.com, which load lots of AJAX elements after the initial page is displayed, simply don’t work. It’s impossible to make use of “modern” sites. When filling out AJAX (dynamic) forms, the browser reloads the entire page for every form field. In other words: it’s complete rubbish, and you end up having to ask an iPhone user for their phone.

The Facebook “app” is a fucking joke. I can’t believe this is what Nokia (or Microsoft) sells as an “app”. The Facebook page is simply loaded in mobile format. Scrolling works by moving the cursor with the phone buttons. It takes about 40 scrolls to get past the navigation bar and the notifications (which you can’t turn off) to see the feed or someone’s profile – again, completely unoptimized for the small screen. I haven’t even attempted to use Facebook chat, since there ain’t no messenger app for me.

Whatsapp? Nope!

The “app” store is simply the Opera Mobile Store, or whatever this useless archive full of bullshit games is called. If you’re looking for apps like Whatsapp, you won’t find it.

Whatsapp simply does not work on the Nokia 230.

SMS: A Complete Engineering Breakdown.

Since Facebook is a very awkward navigation experience and Whatsapp is nonexistent, you have to use the Nokia’s SMS/texting abilities to communicate. This is where I lost my patience and almost threw this piece of shit into the bin. Nokia, if you read this: fire whoever designed the SMS components.

At first, I felt that the SMS interface has improved. It shows conversations in the modern callout bubble-style, and the “Reply” input box can be easily accessed by pressing the down button in the last message.

However, if you want to go back to the first message, there is no shortcut: you have to scroll up through all the messages in the conversation.

This wouldn’t be a big deal, if it weren’t for the following: this ridiculous phone can only store 500 text messages.

That’s right, 500 text messages, then you have to start deleting conversations again.

Another reminder: it is 2016, and 64 GB memory cards cost about $20. I’m not a heavy texter, but if you text, say, 50 messages a day (it is 2016), you have to start deleting after at most a week and a half. It’s the most annoying “feature” I’ve ever seen on a phone. Even my Nokia 110 from 19hundredsomething could store more messages.

SMS: Smoke Signals Work Better!

The Nokia 230 lets you reply to new SMSes without unlocking your phone: you jump straight into the SMS interface and start responding.

What the programmers didn’t take into account: if one of the unreplied messages is lucky number 500, you’re screwed. You type this loooong response, say, 400 characters (again, we are in the 21st century), you hit “Send”, and it tells you: “Sending not possible, no space left!” – and your entire message simply disappears. You have to go back, delete conversations, and re-type the response, remembering all the things you’d said in a hopefully funny way.

This is the opposite of innovative or user-friendly and makes me angry just by thinking about it.

A very interesting bug with text messages: sometimes, the sorting doesn’t work correctly. Messages from one and the same person aren’t sorted chronologically. No, there are no timezone conflicts or anything, it’s simply horribly programmed. Even with some tolerance for the problems above, a sorting bug is just ridiculous. Oh, and it can produce a weird reading of conversations. But, doesn’t matter, you have to delete them soon anyway, haha.

Multimedia Messages: They DO Still Exist!

My favorite kind of SMS is the MMS: friends attempting to send me photos embedded in a text message. First of all, the photo has to have a certain size, otherwise an error message appears after the phone has tried to download it for 2 or 3 minutes.

If you’re lucky, the photo works. My Nokia 110 (the super old one) would show the actual photo in the text message – super small, but you could see it. The Nokia 230 doesn’t show any preview: you have to “download” the photo, store it on your phone, exit the SMS interface, go to Photos -> Albums -> Received, find it there, and only then can you view it.

Whoever designed the SMS component – usually one of the three important phone features – has failed, big time.

GSM: The Poor Man’s Network

I also kept wondering why other people around me can make calls and surf the net in metro tunnels. That’s because the Nokia 230 only supports GSM networks. Be prepared to have no reception where others (3G, …) do.

Summary: Don’t Buy This Phone!

From the Nokia website:

The Nokia 230 has a built-in torchlight and FM radio.

Yay! Back to morse code or amateur radio hacking.

I can already hear my Apple fanboy friends: “We told you, get an iPhone!”.

The Nokia 110 was simple, cheap and useful. The Nokia 230 is the worst piece of technology I’ve ever purchased. Admittedly, I am no gadget fan, but it looks as if I have to get one of those “real” phones now.

Nokia, sorry for all this ranting, but your Nokia 230 is a fucking piece of shit. Why? See above.

Don’t even think about buying it, you will regret it.

Cells-Hamlit: The Fastest View Engine Around.

Saturday, January 23rd, 2016

The Hamlit gem is a reimplementation of the popular Haml markup language, whose original gem is unfortunately based on a quite old, convoluted codebase. Hamlit borrows the syntax but rewrites the entire engine, leveraging the excellent Temple gem – a parser, compiler and optimizer for template languages that is also used in Slim.

Wow, that’s four template gems in one paragraph, but I’m pretty sure you’ve heard of all of them, except for maybe Hamlit.

Why Hamlit?

What makes Hamlit attractive for us is a very clean code base and speed.

Hamlit refrains from monkey-patching Rails helpers. Where the original Haml gem has quite a few controversial Rails hacks, Hamlit has zero coupling to Rails. Capture support can be added in a simpler way using the Hamlit-block gem – instead of relying on different output buffers, capture always returns the captured content directly. This is brilliant for Cells.

Speaking of: If you want to use Hamlit in Cells, we provide you the Cells-hamlit gem. This was mainly possible because of Takashi Kokubun‘s excellent work and collaboration.

Need For Speed.

View rendering with Cells and Hamlit views is very fast. It’s actually the fastest combination at the time of writing this post.


For simple benchmarking I used Benchmark-ips in the Gemgem-Sinatra application, a nice sample project that shows how to use Trailblazer and Cells in Sinatra. The benchmark file simply renders an entire page using Cells, as many times as possible in 20 seconds.

The cells composing the page use Hamlit, Slim and Haml templates in different branches. The result is always this page.


This is all nothing special. However, as my professional 3D chart made visible: if Hamlit is 100%, Slim’s performance is about 90%, whereas Haml reaches only 80% of what Hamlit can do. In other words: Hamlit is the fastest and will make your views perform better.

Note that this benchmark is in combination with Cells, and Cells is generally faster than ActionView. With the upcoming Cells 4.1 we expect performance boosts of around 5-10x compared to the ActionView framework.

Haml 5

To come to Haml’s defense: this project has notably paved the way for all modern template formats. Its popularity “skyrocketed”, to say it in DHH’s words, very early, making it hard for the core team to aggressively refactor the code base. This might happen in Haml 5 and could bring Haml-The Original™ back into pole position.

You should go and try out Hamlit today. Use it with Cells for a major boost in speed and a better view architecture.


If you want to stay up-to-date with all Trailblazer gems like Cells, Roar and Reform, sign up for our newsletter. It will give you a monthly overview of new features, cool tricks and upcoming awesomeness.

Reform 2.1 With Dry-Validation and Grouping

Wednesday, December 23rd, 2015

Just in time for Christmas, Reform 2.1 is ready for you. It has two great new additions: we now support Piotr Solnica’s awesome dry-validation gem, and I introduced validation groups.

Reform is a form object gem that decouples validation of data from models. Its full documentation can be found on the Trailblazer website.

Validation Groups

Traditional validation gems like ActiveModel::Validations only allow a linear flow of validations. All defined validations are run, even when that doesn’t make sense in specific cases. Reform now gets around that limitation with validation groups.

Here’s a very simplified example.

class SessionForm < Reform::Form
  property :username
  property :email
 
  validation :default do
    validates :username, :email, presence: true
  end
 
  validation :email_format, if: :default do
    validates :email, email: true
  end
end

You can now group sets of validations and name them using validation.

Those can then be chained using :after and, in this example, :if. The second group :email_format is only executed if the :default group was valid, saving you any conditionals in the following validations.

Validation still happens by calling form.validate(params).

This opens the way to a completely new understanding of validations as predicates and results.
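The group-chaining idea can be illustrated in plain Ruby (a toy model with invented names, not Reform’s implementation): a group guarded by :if only runs when the named group passed, otherwise it is skipped.

```ruby
# Toy illustration of validation group chaining (not Reform's code):
# a group with an :if guard only runs when the guard group was valid.
GROUPS = [
  { name: :default,      check: ->(p) { !!(p[:username] && p[:email]) } },
  { name: :email_format, if: :default, check: ->(p) { p[:email].to_s.include?("@") } }
]

def run_groups(groups, params)
  results = {}
  groups.each do |group|
    if group[:if] && !results[group[:if]]
      results[group[:name]] = :skipped   # guard group failed, skip this one
    else
      results[group[:name]] = group[:check].call(params)
    end
  end
  results
end

run_groups(GROUPS, username: "tim", email: "tim@x.com")
#=> {default: true, email_format: true}
run_groups(GROUPS, username: nil, email: "tim@x.com")
#=> {default: false, email_format: :skipped}
```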

Dry-validation

Speaking of predicates and all those logic terms: we now support dry-validation as another validation backend. Since it is a relatively new, fast and very strict implementation, we will make it the default in future Reform versions.

class LoginForm < Reform::Form
  property :password
  property :confirm_password
 
  validation :default do
    key(:password, &:filled?) # :password required.
    key(:confirm_password) do |str|
      str.filled? & str.correct?
    end
 
    def correct?(str)
      str == form.password
    end
  end
end

Without going into dry-validation’s API details too much: In a validation group you can use the exact same API as in a Dry::Validation::Schema, with chaining, predicates, custom validations, and so on.

Error messages in dry-validation are generated via a YAML file that can easily be extended, ending the age of ActiveModel’s translation-logic madness.

Populator API

In Reform 2.1, all populators now receive one options hash, which allows using Ruby’s keyword arguments.

class SessionForm < Reform::Form
  property :user,
    populator: ->(fragment:, **) do
      self.user = fragment["id"] ? User.find(1) : User.new
    end, # ..

The old API still works, but is deprecated.

Skip!

If you ever had the need to make Reform suppress deserialization of a fragment, this is simpler now with the new skip! method.

  property :user,
    populator: ->(fragment:, **) do
      return skip! if fragment["id"]
      # more code
    end, # ..

What used to require a combination of :populator and :skip_if can now be done in one place. Once skip! is called, Reform will ignore the currently processed fragment, as if it hadn’t been in the incoming hash at all.

Documentation for skip and populators is here.

Good Bye, ActiveModel!

What sounds bewildering to many of you is a logical step in tidying up the Ruby world: we will drop support for ActiveModel::Validations in Reform 2.2. Don’t you worry, everything will still work the way it did before; we just don’t want to waste time on AM:V and its prehistoric implementation anymore.

Most of the trouble we had was with the way AM:V computes error messages. It gets worse when those have to be translated. AM:V has an extremely complex implementation, jumps between instance and class context, and makes wild assumptions about object interfaces. Since Rails core seems uninterested in changing anything, because it might break Basecamp, it’s easiest for us to just let it be and move on with an alternative.

Also, when using validators like confirmation or acceptance, values in the form were suddenly changed, because those implementations write to the validated object – a very wrong thing to do. You should also have a look at how AM:V finds validators: a cryptic, magic class traversal happens there, and it is a nightmare to make AM:V use custom validators in a non-Rails environment.

We ended up with too many patches and hacks – very frustrating for the maintainers. Since there are better, less constraining alternatives, we will all benefit from a better validation workflow.

Representable 2.4: How Functional Programming Speeds Up Rendering And Parsing.

Thursday, November 19th, 2015

The great thing about being unemployed is you finally get to work on Open-Source features you always wanted to do but never had the time to.

Representable 2.4 is completely restructured internally. It has lost tons of code in favor of a simpler, faster, functional approach, we got rid of internal state, and it now allows you to hook into parsing and rendering with your own logic without being restricted to predefined execution paths.

And, do I really have to mention that this results in +200% speedup for both rendering and parsing?

To cut it short: This version of Representable, which backs many other gems like Roar or Reform, feels great and I’m happy to throw it at you.

Here are the outstanding changes, followed by a discussion of how we achieved this using functional techniques.

Speed

Representable 2.4 is about 3.2x faster than older versions, for both rendering and parsing.

I have no idea what else to say about this right now.

Defaults

Yes, you may now define defaults for your representer.

class SongRepresenter < Representable::Decorator
  defaults render_nil: true
 
  property :title # does have render_nil: true
end

The defaults feature, mostly written by Sadjow Leão, also allows crunching default options using a block.

class SongRepresenter < Representable::Decorator
  defaults do |name|
    { as: name.to_s.camelize }
  end
 
  property :email_address # does have as: "EmailAddress"
end

A pretty handy feature that’s been due a long time. It is fully documented on the new, beautiful website.

Unified Lambda Options

I always found the positional arguments for option lambdas incredibly annoying.

Every time I used :instance or :setter I had to look up its API (my own API!), since every option had its own signature.

For example, :instance exposes the following API.

instance: ->(fragment, [i], args) { }

Whereas :setter comes with another signature.

setter: ->(value, args) { }

In 2.4, every dynamic option receives a hash containing all the stakeholders you might need.

setter: ->(options) { options[:fragment] }
setter: ->(options) { options[:binding] }

This works extremely well with keyword arguments in Ruby 2.1 and above.

instance: ->(fragment:, index:, **) { puts "#{fragment} @ #{index}" }
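To see why a single options hash plays so well with keyword arguments, here is a plain-Ruby demonstration (names made up for illustration):

```ruby
# A lambda can cherry-pick only the keys it cares about and discard the
# rest via ** – no need to memorize positional signatures per option.
setter = ->(fragment:, **) { "got #{fragment}" }

options = { fragment: "Havasu", binding: :title, index: 0 }
setter.call(**options) #=> "got Havasu"
```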

Since I’m a good person, I deprecated all options but :render_filter and :parse_filter. Running your code with 2.4 will work but print tons of deprecation warnings.

Once your code is updated, you can switch off deprecation mode and speed up execution.

Representable.deprecations = false

Note that this will be the default behavior in 2.5.

Inject Behavior

If you’ve ever had to juggle Representable’s options to achieve what you want, I have good news: you can now inject custom behavior and replace parts of the pipeline, or the entire pipeline.

For instance, I could make Representable use my own parsing logic for a specific property. This is a bit similar to :reader but gives you full control.

class SongRepresenter < Representable::Decorator
  Upcase = ->(input, options) do
    options[:represented].title = input.upcase
  end
 
  property :title, parse_pipeline: ->(*) { Upcase }
end

:parse_pipeline expects a callable object. Usually, that is an instance of Representable::Pipeline with many functions lined up, but it can also be a simple proc.

Here’s what happens.

song = OpenStruct.new
 
SongRepresenter.new(song).from_hash("title"=>"Seventh Sign")
song.title #=> "SEVENTH SIGN"

Without any additional logic, you implemented a simple parser for the title property.

Skip Execution

You can also set up your own pipeline using Representable’s functions, plus the ability to stop the pipeline by emitting a special stop object.

property :title, parse_pipeline: ->(*) do
  Representable::Pipeline[
    Representable::ReadFragment,
    SkipOnNil,
    Upper,
    Representable::SetValue
  ]
end

The implementation of the two custom functions is here.

SkipOnNil = ->(input, **) { input.nil? ? Pipeline::Stop : input }
Upper     = ->(input, **) { input.upcase }

By emitting Stop, the execution of the pipeline halts and nothing more happens. If the input fragment is not nil, it will be uppercased and set on the represented object.

Pipeline Mechanics

Every low-level function in a pipeline receives two arguments.

SkipOnNil = ->(input, options) { "new input" }

In pipelines, the second options argument is immutable, whereas the return value of each function becomes the input of the next.
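These mechanics can be sketched in a few lines (an illustration, not Representable’s actual Pipeline class): each function’s return value becomes the next function’s input, the options hash stays read-only, and a special Stop token aborts the chain early.

```ruby
# Minimal pipeline sketch (not Representable's real implementation).
Stop = Object.new

run_pipeline = ->(functions, input, options) do
  functions.each do |fn|
    input = fn.call(input, options)
    return Stop if input.equal?(Stop)  # abort the chain on Stop
  end
  input
end

SkipOnNil = ->(input, _options) { input.nil? ? Stop : input }
Upper     = ->(input, _options) { input.upcase }

run_pipeline.call([SkipOnNil, Upper], "havasu", {}) #=> "HAVASU"
run_pipeline.call([SkipOnNil, Upper], nil, {})      #=> Stop
```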

This really functional approach was highly inspired by my friend Piotr Solnica and his “FP-infected mind”.

The same works with :render_pipeline as well, but rendering is boring.

How We Got It Faster.

Where we had tons of procedural code, ifs and elses, many hash lookups and different implementations for collections and scalar properties, we now have simple pipelines.

Remember, in Representable you always define document fragments using property.

class SongRepresenter < Representable::Decorator
  property :title
end

Now, let’s say we were to parse a document using this representer.

SongRepresenter.new(Song.new).from_hash("title" => "Havasu")

In older versions, Representable would grab the "title" value and then traverse the following pseudo-code.

if ! fragment
  if binding[:default]
    return binding[:default]
  end
else
  if binding[:skip_parse]
    return
  else
    if binding[:typed]
      if binding[:class]
        return ..
      elsif binding[:instance]
        return ..
      end
    else
      return fragment
    end
  end
end

Without knowing any details here, you can see that the flow is a deeply nested, procedural mess. Basically, every step represents one of the options you might be using every day, such as :default or :class.

Not only was it incredibly hard to follow Representable’s logic, as this procedural flow is spread across many files, it was also slow!

For every property being rendered or parsed, there were around 20 hash lookups on the binding, often followed by evaluations of the option. For example, :class could be unset, a class constant, or a dynamic lambda.

Projected onto realistic representers with 50-100 properties, this quickly becomes thousands of hash lookups for a single object, just to find out something that was defined at compile time.

Static Flow

Another problem was that the flow was static, making it really hard to add custom behavior.

if ! fragment
  if fragment == nil # injected, new behavior!
    fragment = []    # change nils to empty arrays.
  end
 
  if binding[:default]
    return binding[:default]

There was no clean way to inject additional behavior without abusing dynamic options or overriding Binding classes, which was the opposite of intuitive.

It was also physically impossible to stop the workflow at a particular point, since you couldn’t simply inject returns into the existing code. For example, even if your :class lambda already handled the entire deserialization, you still had to fight the options that run after :class.

What I found myself doing a lot was adding more and more code to “versatile” options like :instance since the flow couldn’t be modified at runtime.

Pipelines

Sometimes you need to take a step back and ask yourself: “What am I actually trying to do?”. You must actively cut out all those nasty little edge-cases and special requirements your code also handles to see the big picture.

Strictly speaking, when parsing a document, Representable goes through its defined schema properties and invokes parsing for every binding. Each binding, and that’s the new insight, has a pipelined workflow.

  • Grab fragment.
  • Not there? Abort.
  • Nil? Use Default if present. Abort.
  • Skip parsing? Abort.
  • If typed and :class, instantiate.
  • If typed and :instance, instantiate.
  • If typed, deserialize.
  • Return.

Instead of programming that procedurally, each binding now uses its very own pipeline. For decorators, the pipeline is computed at compile time. This means that, depending on the options used for a property, a custom pipeline is built.

  property :artist,
    skip_parse: ->(fragment:, **) { fragment == "n/a" },
    class: Artist

The above property will be roughly translated to the following pipeline (simplified).

Pipeline[
  ReadFragment,
  SkipParse,    # because of :skip_parse
  StopOnNotFound,
  CreateObject, # because of :class.
  Decorate,     # because of :class.
  Deserialize,  # because of :class.
  SetValue
]

This pipeline is intuitively understandable. Each element is a function, a simple Ruby proc defined for serializing and deserializing.

Again, the pipeline is created once, at compile time. All checks like if binding[:default] are done once when building the pipeline, reducing hash lookups on the binding to a negligible handful.

The fewer options a property uses, the fewer functions end up in the pipeline, shortening execution time at run-time.
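The compile-time assembly could look roughly like this (a hypothetical sketch with invented names, not Representable’s actual code): every option is inspected once, when the binding is built, instead of on every rendered or parsed document.

```ruby
# Hypothetical sketch of compile-time pipeline assembly (names invented
# for illustration, not Representable's source). Each option is checked
# exactly once, when the pipeline is built.
ReadFragment = :read_fragment
SkipParse    = :skip_parse
CreateObject = :create_object
Decorate     = :decorate
Deserialize  = :deserialize
SetValue     = :set_value

def build_parse_pipeline(options)
  steps = [ReadFragment]
  steps << SkipParse if options[:skip_parse]
  steps += [CreateObject, Decorate, Deserialize] if options[:class]
  steps << SetValue
  steps
end

build_parse_pipeline({})
#=> [:read_fragment, :set_value]
build_parse_pipeline(class: Object)
#=> [:read_fragment, :create_object, :decorate, :deserialize, :set_value]
```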

The result is a tremendous speed-up of at least 200%.

Benchmarks

In what we call a realistic benchmark, we wrote a representer with 50 properties, where each property is a nested representer with another 50 properties.

We then rendered 100 objects using that representer. Here are the benchmarks.

4.660000   0.000000   4.660000 (  4.667668) # 2.3
1.400000   0.010000   1.410000 (  1.410015) # 2.4

As you can see, Representable is now 3.32x faster.

Looking at the top of the profiler stack, it becomes very obvious why.

%self      calls  name
 13.92  6630522   Representable::Definition#[]
  5.28   255001   Representable::Binding#initialize
  4.81  1790109   Representable::Binding#[]
  2.90   515102   Uber::Options::Value#call
  2.36   510002   Representable::Definition#typed?

This is for 2.3, where an insane amount of time is wasted on hash lookups at run-time. Imagine: for every property, the “pipeline” is computed at runtime (of course, the concept of a pipeline didn’t exist yet).

For 2.4, this is slightly different.

 %self     calls  name
  4.03   255001   Representable::Hash::Binding#write
  3.00   260101   Representable::Binding#exec_context
  2.77   255000   Representable::Binding#skipable_empty_value?
  2.44   255001   Representable::Binding#render_pipeline
  0.16     5100   Representable::Function::Decorate#call
  0.16    10201   Representable::Binding#[]

The highest call count is “only” 255K, from a method we do have to call for each property. Other than that, expensive hash lookups and option evaluations are drastically minimized, requiring less than 1% of the computing time.

Declarative

I also finally got around to extracting all declarative logic into a gem named – surprise! – Declarative. If you’re now thinking “Oh no, not another gem!”, you should have a look at it.

In former versions, we’d use Representable in other gems just to get the DSL for property, collection, etc., without using Representable’s render/parse logic – the part that actually makes it Representable.

This is now completely decoupled and reusable without any JSON, Hash or XML dependencies.

It also implements the inheritance between modules, representers and decorators in a simpler, more understandable way.

Debugging

To learn more about how pipelines work, you should make use of the Representable::Debug feature.

SongRepresenter.new(song).extend(Representable::Debug).from_hash(..)

The output is highly interesting!

The Only Alternative to a Rails Monolith are Micro Services? Bullshit!

Saturday, September 5th, 2015

The Rails Way is wrong and has led thousands of projects to an unmaintainable state of highly coupled software assets.

In order to keep the growing complexity maintainable, and to maximize reusability, people now start to introduce “micro services”, which are physically separated, completely stand-alone applications that provide a subset of the application’s functionality via a document API.

DHH is absolutely right when criticizing this approach.

Not only does a “micro service” increase the complexity for deploys, because now, you have to roll out 17 applications and not just one, it also makes it almost impossible to test the application under real-life conditions.

The test environment will have countless mocks for “micro service” endpoints, resulting in a half-assed pseudo-image of production. The tests you write are better than no tests, but I doubt their value outweighs the pain of setting them up.

Now you have a fragmented system with loose coupling and a tremendous maintenance effort.

Micro Services! And Now?

People make it look as if “micro services” are the only alternative to a monolith.

They make it look as if you have the choice between a huge pile of rubbish with many internal dependencies – your Rails Way monolith – or having your devops engineers (whom you had to hire to take care of your system) deploy up to 17 applications every time APIs change.

This is absolute bullshit.

A monolith the way DHH leverages it is the excuse for a horrible software architecture without any encapsulation (called The Rails Way). The micro service architecture is the attempt to decouple things by enforcing physical boundaries.

Both are a nightmare.

What About Good Object Design?

Micro services are great if parts of your system are to be written in another language, or if you really need physical extraction for scaling or global reusability.

I have no intention to maintain 17 separated micro services along with my base application, not to speak of the testing apocalypse that is gonna come with that. I haven’t seen a single working, testable micro service system so far. If you have one, please invite me and change my mind.

On the other side, monolithic Rails apps are terrible, quickly become unmaintainable and testing will be a third-class citizen for the sake of “development speed”.

I don’t see what’s so hard about having a proper object design in one monolithic Rails app.

You can have cleanly composed, separated layers with interfaces that allow reusability and simple testing and debugging.

You Can Have a Nice Service Architecture Within a Monolith.

You can have stand-alone components in your monolith, just not the Rails Way.

We have dispatching, deserialization, validations and forms, transactions and business rules, decoration and rendering, authorization and persistence, just to name a few. How on earth are we supposed to implement all that using three primitive abstraction layers?

The Rails Way is wrong. However, don’t let that mislead you to the conclusion that the only ways out of this are either micro services or, even better, switching to a new fancier language, just to do all the same architectural mistakes, again.

Decouple your logic from the actual framework, ship independent components in gems and introduce interfaces between your layers. This is only possible if you actually have abstractions, which could be service objects, endpoints, view models and higher-level abstractions.
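To make “abstractions like service objects” concrete, here is a minimal, framework-independent sketch – all names are invented: one public method, an injected dependency, an explicit result object. Nothing in it knows about Rails, so it can be unit-tested in isolation:

```ruby
# Hypothetical service object: encapsulates one business action behind a
# single public method, with persistence injected from the outside.
class CreateComment
  Result = Struct.new(:ok, :comment, :errors)

  def initialize(repository)
    @repository = repository # injected, so tests need no database
  end

  def call(body:, author:)
    return Result.new(false, nil, ["body is blank"]) if body.to_s.strip.empty?

    comment = @repository.create(body: body, author: author)
    Result.new(true, comment, [])
  end
end

# In-memory stand-in for the persistence layer.
class CommentRepository
  def create(attrs)
    attrs
  end
end

result = CreateComment.new(CommentRepository.new).call(body: "Nice!", author: "jim")
result.ok #=> true
```

The interesting property is the interface: the caller only knows `call` and the result object, so the persistence layer behind it can be swapped without touching any consumer.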

Don’t let the monolith be an excuse for a shitty software architecture.

Wraps in Representable 2.3

Wednesday, September 2nd, 2015

Recently we rolled out Representable 2.3. The main addition here is the ability to suppress wraps.

When talking about wraps, I am not referring to deliciously rolled flat bread filled with mouth-watering vegetables, grilled chicken and chilli sauce, no, I am thinking of container tags for documents.

Wraps, y’all!

Usually, you’d define the document wrap on the representer class (or module, but my examples are using Decorator).

class SongDecorator < Representable::Decorator
  include Representable::Hash
  self.representation_wrap = :song # wrap set!
 
  property :name
end

When rendering a Song object, the document will be wrapped with "song".

song = Song.new(name: "I Want Out")
 
SongDecorator.new(song).to_hash
#=> {"song"=>{"name"=>"I Want Out"}}

Vice-versa, when parsing, the representer will only “understand” documents with the wrap present.

song = Song.new
 
SongDecorator.new(song).from_hash({"song"=>{"name"=>"I Want Out"}})

I know, this is terribly fascinating.
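The wrap semantics boil down to one extra hash level. A plain-Ruby sketch (not the gem's implementation) of what rendering and parsing do with a wrap:

```ruby
# Sketch only: rendering nests the attributes under the wrap key,
# parsing expects that key and digs the attributes back out.
def render_wrapped(wrap, attributes)
  { wrap => attributes }
end

def parse_wrapped(wrap, document)
  document.fetch(wrap) # raises if the wrap is missing, i.e. "not understood"
end

render_wrapped("song", "name" => "I Want Out")
#=> {"song"=>{"name"=>"I Want Out"}}

parse_wrapped("song", "song" => { "name" => "I Want Out" })
#=> {"name"=>"I Want Out"}
```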

Nested Representers

A popular concept in Representable and Roar is to nest representers. While this can be done with inline blocks, many people prefer explicitly nesting two or more classes.

class AlbumDecorator < Representable::Decorator
  include Representable::Hash
  self.representation_wrap = :albums # wrap set!
 
  collection :songs, decorator: SongDecorator
end

I reference the SongDecorator explicitly. This allows me to use it in two places.

  • To render and parse single song entities, I can use SongDecorator directly.
  • In a nested document with a list of songs, the same decorator can be used, given you desire an identical representation in the album view.

When rendering an album, however, every song is now wrapped.

album = Album.new(songs: [song, song])
AlbumDecorator.new(album).to_hash
#=> {"albums"=>
#     {"songs"=>[
#       {"song"=>{"name"=>"I Want Out"}},
#       {"song"=>{"name"=>"I Want Out"}}
#   ]}}

Most probably not what you want.

I’ve seen several workarounds for this. Mostly, people maintain two decorators per entity, one with wrap, one without, where common declarations are shared using a module.

This is very clumsy, and I do not understand why people put up with it instead of asking for a nice solution to that common problem. Maybe I’m not accessible enough.

Suppressing Wraps.

When working with Jonny on roarify, a client gem for the Shopify API implemented with Roar, I dropped my inaccessible facade in exchange for beers and we implemented a solution: the wrap: false option.

class AlbumDecorator < Representable::Decorator
  # ..
  collection :songs, decorator: SongDecorator, wrap: false # no wrap!
end

This will parse and serialize songs without wrapping them, again.

AlbumDecorator.new(album).to_hash
#=> {"albums"=>
#     {"songs"=>[
#       {"name"=>"I Want Out"},
#       {"name"=>"I Want Out"}
#   ]}}

A simple enhancement with great impact – we were able to reduce the number of representers by 38.1%.

Thanks for the beers, Jonnyboy! I miss you too!

Reform 2.0 – Form Objects for Ruby.

Monday, July 6th, 2015

A few days ago I pushed the next version of Reform: Version 2. While this is still a release candidate, it can be considered stable.

The reason I blog as if it was a major release is: I want you to test, try, and complain. Speak now or forever hold your peace! Now is the time to make me add or change features before we push the final stable 2.0.

Here’s why Reform 2 was necessary, and, of course, why it’s awesome.

UPDATE: This is a release note directed to Reform users. If you want to learn more about Reform, read an introduction post.

Too Big!

There’s not a single amazing new feature in Reform 2. At least, that’s how it looks if you quickly skim over the changes.

Of course, a lot of things have changed, but more on the inside of Reform.

Reform was getting too big. The form object was doing presentation, deserialization of incoming data, data mapping, coercion, validation, writing to persistence and saving.

For a gem author, monster objects are (or should be!) a nightmare. It is incredibly hard to follow what happens where in big objects, so I extracted a huge chunk of logic into a separate gem.

The form object now really only does validation, everything else is handled via Disposable and Representable.

The Architecture Now.

Both deserialization and the mapping of form data to persistence objects like ActiveRecord models are now completely decoupled.

[Diagram: the Reform 2 architecture]

To cut it short: Deserializing of the params hash into a validatable object graph is done by a representer. Validation happens in the form itself. Coercion, syncing and saving all happens in the form’s twin.
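Reduced to plain Ruby, the division of labour looks roughly like this – all classes below are invented stand-ins, not Reform's real API:

```ruby
# Invented stand-ins: a representer-ish step deserializes params onto the
# twin, the form validates the twin's state, the twin syncs to the model.
Song = Struct.new(:title)

class SongTwin
  attr_accessor :title

  def initialize(model)
    @model = model
    @title = model.title
  end

  def sync
    @model.title = title # only now does the model get written
    @model
  end
end

class SongForm
  attr_reader :errors

  def initialize(twin)
    @twin = twin
  end

  def validate(params)
    @twin.title = params["title"] if params.key?("title") # "deserialization"
    @errors = []                                          # "validation"
    @errors << "title is blank" if @twin.title.to_s.empty?
    @errors.empty?
  end
end

song = Song.new("")
twin = SongTwin.new(song)
form = SongForm.new(twin)

form.validate("title" => "Madhouse") #=> true
song.title #=> "" (model untouched so far)
twin.sync
song.title #=> "Madhouse"
```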

Less Representable.

I removed a lot of representable-specific mapping logic, mainly because it was incredibly hard to understand. For example, you can now actually grasp what methods like #prepopulate! do by looking at the source.

This has also sped up Reform by 50%. That’s right – it is much faster now thanks to explicit, simple transformation logic.

No Rails, More Lotus!

Reform 1 used ActiveModel::Validations for validations. This still works, but you can also chuck Rails into the bin and use Lotus::Validations instead, removing any Rails dependency from your forms.

class SongForm < Reform::Form
  include Reform::Lotus
 
  property :title
  validates :title, presence: true
end

While Reform was dragging the activemodel dependency around, this is now up to you. Reform still supports Rails, but the coupling is now very loose.

Deserialization.

In #validate, to parse the incoming params into the object graph, an external representer is used. This could be any kind of representer and thus allows you to parse JSON, XML and other formats easily into an object graph.

Nevertheless, the representer will simply operate on the twin API to populate the form. This means, you can basically use your own deserialization logic.

form = SongForm.new(song)
 
form.title = "Madhouse"
form.band = Band.new
form.band.name = "Bombshell Rocks"
 
form.validate({})

The above example is a naive implementation of a deserializer without overriding parts of validate. You can set properties and add or remove nested objects. The twin will take care of mapping that to its object graph.

Forms and JSON

Trailblazer takes advantage of that already and allows JSON “contracts” that can deserialize and validate JSON documents.

You can do that manually, too.

class SongRepresenter < Roar::Decorator
  include JSON
  property :title
end
 
form.validate('{"title": "Melanie Banks"}') do |json|
  SongRepresenter.new(form).from_json(json)
end

This will use SongRepresenter for the deserialization. The representer will assign form.title=. After that, the form will proceed with its normal validation logic as if the form was a hash-based one.

In case I failed to make my point: this allows using forms for document APIs!

Coercion

In earlier versions, Reform implemented coercion in the deserialization representer which sometimes was kinda awkward. Coercion now happens in the twin.

form.created_at = "1/1/1998"
form.created_at #=> <DateTime 01-01-1998>

You can also override the form’s setter methods to build your own typecasting logic. Many people did that already in Reform 1, but in combination with the representer this could mess things up.
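A setter-based typecast can be sketched in a few lines of plain Ruby – the form class here is hypothetical, not Reform's code:

```ruby
require "date"

# Hypothetical form: all the typecasting logic lives in the writer.
class SongForm
  attr_reader :created_at

  def created_at=(value)
    @created_at = value.is_a?(DateTime) ? value : DateTime.parse(value.to_s)
  end
end

form = SongForm.new
form.created_at = "1/1/1998"
form.created_at.class #=> DateTime
```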

Populators

When deserializing, Reform per default tries to find the matching nested form for you. Often there is no nested form yet; that’s why we provide options like :populate_if_empty that will add a nested form corresponding to the particular input fragment.

Using the :populator option was a bit tedious and you needed quite some knowledge about how forms work. This has changed in Reform 2 and is super simple now.

In a populator, you can use the twin API to modify the object graph.

populator: lambda do |fragment, collection, index, options|
  collection << Song.new
end

This primitive populator will always add a new song object to the existing collection. Note how you do not have to care about adding a nested form anymore, as you used to have in Reform 1. The twin will do this for you.

Pre-populators

I’ve seen many users writing quirks to “fill out” a form before it is rendered, for example to provide default values for fields or to pre-select a radio button.

Reform 2 introduces the concept of prepopulators that can be configured per property.

property :title, prepopulator: lambda { self.title = "The title" }

Again, prepopulators can use the twin API to set up an arbitrary object graph state. They have to be run explicitly, usually before rendering, using #prepopulate!.

Hash Fields

A feature I personally love in Reform 2 is Struct. It allows to map hashes to properties.

Say you had a serialized hash column in your songs table.

class Song < ActiveRecord::Base
  serialize :settings # JSON column.
end
 
Song.find(1).settings 
#=> {admin: {read: true, write: true}, user: {destroy: false}}

“Working with hashes is fun!” said no one ever. Instead, let Reform map that to objects.

class SongForm < Reform::Form
  property :settings, struct: true do
    property :admin, struct: true do
      property :read
      property :write
      validates :read, inclusion: [true, false]
    end
  end
end

You can have an unlimited number of nestings in the hash. Every nesting results in a nested form twin to work with.
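The underlying idea can be sketched in plain Ruby: wrap each nested hash in an object that exposes readers and writers. This is a simplified, write-through version – unlike the real twin, it does not buffer changes until sync:

```ruby
# Simplified sketch of the Struct idea (not Disposable's code). Unlike a
# real twin, this version writes straight through to the hash.
class HashTwin
  def initialize(hash)
    @hash = hash
  end

  def method_missing(name, *args)
    key = name.to_s.chomp("=").to_sym
    return super unless @hash.key?(key)

    if name.to_s.end_with?("=")
      @hash[key] = args.first
    else
      value = @hash[key]
      value.is_a?(Hash) ? HashTwin.new(value) : value
    end
  end

  def respond_to_missing?(name, include_private = false)
    @hash.key?(name.to_s.chomp("=").to_sym) || super
  end
end

settings = { admin: { read: true, write: true } }
twin = HashTwin.new(settings)

twin.admin.read #=> true
twin.admin.write = false
settings[:admin][:write] #=> false
```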

The Struct feature is described in this blog post in greater detail.

Syncing and Saving

The sync and save method both completely got extracted and are now implemented in Disposable.

Option Methods

A nice addition that I use a lot is option methods: you can specify dynamic options not only with a lambda, but also as a symbol referencing an instance method.

property :composer, populate_if_empty: :populate_composer! do
  # ..
end
 
def populate_composer!(fragment, options)
  Artist.new
end

This greatly cleans up forms when they become more complex. A cool side-effect: you can use inheritance better, too, and reuse option methods.

State Tracking

Since nested forms are now implemented as twins, you can use Disposable’s state tracking to follow what was going on on your form in validate.

State tracking is incredibly helpful for Imperative Callbacks and other post-processing logic.

More Documentation!

As you might have noticed, I have started to document all my gems on the new Trailblazer page.

I’d like to point you to the upcoming Trailblazer book, too. In 11 chapters, it discusses every aspect of Reform you can think of, as Reform is an essential part of this new architecture.

As a side-note: I mainly wrote this book to save myself from answering particular questions a hundred times. The Trailblazer book really talks about all my gems in great detail, and it is a nice way to support a decade of Open-Source work, too.

Conclusion

With Reform 2.0, my dream architecture has come true: my vision of what a form object should do, and what should be abstracted into a separate layer, is implemented, and I am very happy with it.

The code should be significantly easier to read and change, too. And it is faster.

It all adds up – Reform 2 is already deployed on hundreds of production sites, so update today and let me know what you think!

MiniTest::Spec, Capybara, Rails Integration Tests, and Cells: It Works!

Saturday, July 4th, 2015

I had a hard time getting MiniTest::Spec working with Capybara matchers, in combination with Rails integration tests and cells tests. I almost switched to Rspec but then finally figured out how simple it is.

Why People Use Rspec.

The reason people use Rspec is: it works. Everything popular is supported out-of-the-box, thanks to the hard work of the Rspec team. You don’t have to think about how integration tests may work or where that matcher comes from.

In Minitest, which is my personal favourite test gem, you have the following gems to pick from.

  • minitest-spec-rails
  • minitest-rails-capybara
  • minitest-rails
  • minitest-capybara
  • capybara_minitest_spec

There are probably more. I tried combining them, but either integration tests didn’t work, or matchers here and there didn’t work, or the page object wasn’t available, and so on. It was a nightmare.

How it works!

Fortunately, the solution was pretty simple.

gem "minitest-rails-capybara"

The awesome minitest-rails-capybara will also install minitest-rails and minitest-capybara.

In your test_helper.rb, you add the following line.

require "minitest/rails/capybara"

This loads all necessary files and adds Capybara spec matchers to the test base classes.

Integration Tests

I write integration tests as follows.

class CommentIntegrationTest < Capybara::Rails::TestCase
  it do
    visit "/comments"
    page.must_have_content "h1"
  end
end

It’s important to derive your test from Capybara::Rails::TestCase which is totally fine for me as I don’t like describe blocks that magically create a test class for you. Separate test classes just make me feel safer.

No Controller Tests.

I don’t write controller tests in Rails anymore because they are bullshit. They create the illusion of a well-tested system. In production, it will break. This is a result of this code.

Right, that’s 700 lines to set up a fake environment for your tested controller. 700 lines of code that are 100% likely to diverge from real application state: your tests will pass, your code in production breaks.

In the Trailblazer architecture, controller tests are taboo, you only write real integration tests, operation tests, and cell tests, which brings me to the next point.

Cell Tests

The only problem I had with this approach was that my cell tests broke.

class CommentCellTest < Cell::TestCase
  controller ThingsController
 
  it do
    html = concept("comment/cell/grid", thing).(:show)
    html.must_have_css(".comment")
  end
end

I got exceptions like the following.

NoMethodError: undefined method `assert_content' for 
  #<CommentCellTest:0xadcb284>

The solution was to include the new set of assertions into the cell tests, too. I did that in my test_helper.rb file.

Cell::TestCase.class_eval do
  include Capybara::DSL
  include Capybara::Assertions
end

It only took me a few months to figure that out. Thanks to the authors of all those great gems!

Example

I hope this will help you using the amazing MiniTest in your application. My example can be found here.

Disposable – The Missing API of ActiveRecord

Saturday, June 27th, 2015

Disposable gives you Twins. Twins are non-persistent domain objects. They know nothing about persisting things, hence the gem name.

They

  • Allow me to model object graphs that reflect my domain without restricting me to the database schema.
  • Let me work on that object graph without writing to the database. Only when syncing does the graph write to its persistent model(s).
  • Provide a declarative DSL to define schemas, schemas that can be used for other data transformations, e.g. in representers or form objects.

Some of its logic and concepts might be overlapping with the excellent ROM project. I am totally open to using ROM in future and continuously having late-night/early-morning debates with Piotr Solnica about our work.

However, I needed the functionality of twins in Reform, Roar, Representable, and Trailblazer now, and most of the concepts have evolved from the Reform gem and got extracted into Disposable.

Agnostic Front.

The title of this post is misleading on purpose: First, I know that many people will read this post because it has an offending title.

Second, it mentions ActiveRecord in a negative context even though I actually love ActiveRecord as a persistence layer (and only that).

Third, Disposable doesn’t really care about ActiveRecord. The underlying models could be from any ORM or just plain Ruby objects.

Twins

Twins are classes that declare a data schema.

class AlbumTwin < Disposable::Twin
  property :title
end

Their API is ridiculously simple. They allow reading, writing, syncing, and optional saving, and that’s it.

When initializing, properties are read from the model.

album = Album.find(1)
twin  = AlbumTwin.new(album)

Reading and writing now works on the twin. The persistence layer is not touched anymore.

# twin read
twin.title #=> "TODO: add title"
# twin write
twin.title = "Run For Cover"
 
# model read
album.title #=> "TODO: add title"
twin.title  #=> "Run For Cover"

Once you’re done with your work, use sync to write state back to the model.

twin.sync
 
album.title #=> "Run For Cover"

Optionally, you can call twin.save which invokes save on all nested models. This, of course, implies your models expose a #save method.
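The read-buffer-sync cycle is simple enough to sketch in plain Ruby – a toy, not Disposable's implementation:

```ruby
# Toy twin: copy values from the model on initialize, buffer writes in
# the twin's own accessors, push everything back only on #sync.
class Twin
  def self.property(name)
    (@properties ||= []) << name
    attr_accessor name
  end

  def self.properties
    @properties || []
  end

  def initialize(model)
    @model = model
    self.class.properties.each { |p| send("#{p}=", model.send(p)) }
  end

  def sync
    self.class.properties.each { |p| @model.send("#{p}=", send(p)) }
    @model
  end
end

class AlbumTwin < Twin
  property :title
end

Album = Struct.new(:title)
album = Album.new("TODO: add title")

twin = AlbumTwin.new(album)
twin.title = "Run For Cover"
album.title #=> "TODO: add title" (model untouched)

twin.sync
album.title #=> "Run For Cover"
```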

Objects, The Way You Want It.

Everything Disposable does could be done with ActiveRecord, in a more awkward way, though.

For example, Disposable lets you do compositions really easily – a concept well proven in Reform.

class AlbumTwin < Disposable::Twin
  include Composition
 
  property :id,      on: :album
  property :title,   on: :album
  collection :songs, on: :cd do
    property :name
  end
  property :cd_id,   on: :cd, from: :id
end

You configure which properties you want to expose and where they come from. And: you can also rename properties using :from.

The twin now exposes the new API.

twin = AlbumTwin.new(
  album: Album.find(1),
  cd:    CD.find(2)
)
twin.cd_id #=> 2

Of course, this also lets you write.

twin.songs << Song.create(name: "Thunder Rising")

As the composition user, I do not care or know about where songs come from or go to.

All operations will be on the twin, only. Nothing is written to the models until you say sync. This is something I am totally missing in ActiveRecord. I will talk about that in a minute.
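A composition is essentially a mapping table from twin properties to (model, attribute) pairs, plus the same write buffer. A toy sketch with invented names:

```ruby
# Toy composition (invented names): one flat API over two models, with
# writes buffered until #sync.
class AlbumComposition
  # twin property => [which model, attribute on that model]
  MAP = { title: [:album, :title], cd_id: [:cd, :id] }

  def initialize(album:, cd:)
    @models = { album: album, cd: cd }
    @buffer = {}
  end

  def read(property)
    model, attribute = MAP.fetch(property)
    @buffer.fetch(property) { @models[model].send(attribute) }
  end

  def write(property, value)
    @buffer[property] = value # nothing hits the models yet
  end

  def sync
    @buffer.each do |property, value|
      model, attribute = MAP.fetch(property)
      @models[model].send("#{attribute}=", value)
    end
  end
end

Album = Struct.new(:title)
CD    = Struct.new(:id)

album = Album.new("Best Of")
twin  = AlbumComposition.new(album: album, cd: CD.new(2))

twin.read(:cd_id) #=> 2
twin.write(:title, "Run For Cover")
album.title #=> "Best Of" (still untouched)

twin.sync
album.title #=> "Run For Cover"
```

Note how the renaming from the :from option falls out of the mapping table for free: `cd_id` simply points at the cd model's `id` attribute.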

Hash Fields.

Another pretty amazing mapping tool in Disposable is Struct. This allows you to map hashes to objects.

Let’s assume your Album has a JSON column settings.

class Album < ActiveRecord::Base
  serialize :settings # JSON column.
end
 
Album.find(1).settings 
#=> {admin: {read: true, write: true}, user: {destroy: false}}

This is a deeply nested hash, a terrible thing to work with. Let the twin take care of it and get back to real object-oriented programming instead of fiddling with hashes.

class AlbumTwin < Disposable::Twin
  property :settings do
    include Struct
    property :admin do
      include Struct
      property :read
      property :write
    end
 
    property :user
  end
end

This gives you objects.

twin = AlbumTwin.new(Album.find(1))
twin.settings.admin.read #=> true
twin.settings.user #=> {destroy: false}

You can either map keys to properties (or collections!) or retrieve the real hash.

Writing works likewise.

twin.settings.admin.read = :MAYBE

As always, this is not written to the persistent model until you say sync.

album.settings[:admin][:read] #=> true
twin.settings.admin.read = :MAYBE
twin.sync
album.settings[:admin][:read] #=> :MAYBE

Working with hash structures couldn’t be easier. Note that this also works with Reform, giving you all the form power for hash fields.

class AlbumForm < Reform::Form
  property :settings do
    include Struct
    property :admin do
      include Struct
      property :read
      validates :read, inclusion: [true, false, :MAYBE]
    end
  end
end

This opens up amazing possibilities to easily work with document databases, too. Remember: Disposable doesn’t care if it’s a hash from ActiveRecord, MongoDB or plain Ruby.

Collection Semantics

One reason I wrote twins is because the way ActiveRecord handles collections is tedious. For instance, the following operation will write to the database, even though I didn’t say so.

song = Song.new
cd.songs = [song]
song.persisted? #=> true

This is a real problem. Say you want to set up an object graph, validate it and then write it to the database. Impossible with ActiveRecord unless you use weird work-arounds like cd.songs.build, which is completely counter-intuitive.

song = cd.songs.build
song.persisted? #=> false

I want normal Ruby array methods to behave like normal Ruby array methods. What if I don’t have the cd.songs reference yet when I instantiate the Song? Twins simply give you the collection semantics you expect.

song = Song.new
twin.songs = [song]
 
song.persisted? #=> false
album.songs #=> []

The changes will not be written to the database until you call sync.

Deleting works analogously to writing, moving, and replacing.

song_twin = twin.songs[0]
twin.songs.delete(song_twin)
 
twin.sync
album.songs #=> []

You can play with any property as much as you want, the persistence layer won’t be hit until syncing happens.

Change Tracking.

Another feature extremely helpful for post-processing logic as found in callbacks is the state tracking behavior in twins. Field changes will be tracked.

twin.changed?(:title) #=> false
twin.title = "Best Of"
twin.changed?(:title) #=> true

You can also check if a twin has changed, which is the case as soon as one or more properties were modified.

twin.changed? #=> true
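One way to sketch dirty tracking (not Disposable's actual code) is to keep the values read from the model and compare on demand:

```ruby
# Toy dirty tracking: remember the original values, diff on #changed?.
class TrackedTwin
  def initialize(model)
    @original = { title: model.title }
    @current  = @original.dup
  end

  def title
    @current[:title]
  end

  def title=(value)
    @current[:title] = value
  end

  def changed?(name = nil)
    names = name ? [name] : @original.keys
    names.any? { |n| @current[n] != @original[n] }
  end
end

Song = Struct.new(:title)

twin = TrackedTwin.new(Song.new("Best Of"))
twin.changed? #=> false

twin.title = "Live"
twin.changed?(:title) #=> true
twin.changed? #=> true
```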

This works with nested twins and collections, too.

twin.songs << Song.new
twin.songs.changed? #=> true
twin.songs[0].changed? #=> false
twin.songs[1].changed? #=> false

On collections, #added, #deleted and friends help you to monitor what has changed in particular.

twin.songs << Song.new
twin.songs.added #=> [<SongTwin ..>]

Several other goodies like persistence tracking help to write a full-blown event dispatcher, which I’m gonna discuss in a separate blog post. If you’re curious, chapter 8 of the Trailblazer book is about callbacks, change tracking and post-processing.

Twins and Representers.

Representers are Ruby declarations that render and parse documents. Have a look at the Roar gem to learn how they are used. Anyway, twins are the perfect match for representers: while the twin handles data modelling, the representer does the document work.

class Album::Representer < Roar::Decorator
  include Roar::JSON
 
  property :id
  property :title
 
  collection :songs, class: Song do
    property :name
  end
 
  link(:self) { album_path(id) }
end

The composition twin could now be used in combination with the representer.

twin = AlbumTwin.new(album: Album.find(1), cd: CD.new)

Note that the CD is a brand-new, fresh and shiny instance without any songs added to it, yet.

We then use the representer to parse the incoming JSON document into Ruby objects.

json = '{"title": "Run For Cover", "songs": [{"name": "Military Man"}]}'
Album::Representer.new(twin).from_json json

This will populate the twin.

twin.songs #=> [<SongTwin name: "Military Man">]

After syncing, the CD will contain songs.

twin.sync
 
cd.songs #=> [<Song id:1 name:"Military Man">]

Roar, Representable and Reform come with mechanisms to optionally find existing records instead of creating new ones, and so on. The topic of populators is covered in chapters 5 and 7 of the Trailblazer book.

Both twins and representers internally use declarative for managing their schemas. This means you can infer representers from twins, and vice-versa.

class Album::Representer < Roar::Decorator
  include Roar::JSON
  include Schema
 
  from AlbumTwin
 
  # add properties.
  link(:self) { album_path(id) }
end

Deserialization is a task that’s poorly covered by Rails. With twins and representers, parsing documents into object graphs becomes object-oriented and predictable. Where there were complex nested hash operations, probably involving gems like Hashie, there are now clean, encapsulated and manageable objects that parse and populate.

Onwards!

Twins are supported in all my gems and the fundamental approach for data transformations. They are an integral part of Reform 2, where every form is a twin. The form is responsible for validation and deserialization, the twin for data mapping.

Use them, make them faster, better, enjoy the simplicity of intuitive object graphs that reflect your domain, not your database schema, and never forget: Nothing is written to the persistence layer until you call sync!