
Cells-Hamlit: The Fastest View Engine Around.

Saturday, January 23rd, 2016

The Hamlit gem is a reimplementation of the popular Haml markup language, which unfortunately is based on a quite old, convoluted codebase. Hamlit borrows the syntax, but rewrites the entire engine code leveraging the excellent Temple gem, which is a parser, compiler and optimizer for template languages and is also used in Slim.

Wow, that’s four template gems in one paragraph, but I’m pretty sure you’ve heard of all of them, except for maybe Hamlit.

Why Hamlit?

What makes Hamlit attractive for us are its very clean code base and its speed.

Hamlit refrains from monkey-patching Rails helpers. Where the original Haml gem has quite a few controversial Rails hacks, Hamlit has zero coupling to Rails. The capture support can be simplified using the Hamlit-block gem – instead of relying on different output buffers, capture always returns the captured content directly. This is brilliant for Cells.

Speaking of: If you want to use Hamlit in Cells, we provide you with the Cells-hamlit gem. This was mainly possible because of Takashi Kokubun‘s excellent work and collaboration.

Need For Speed.

View rendering with Cells and Hamlit views is very fast. It’s actually the fastest combination at the time of writing this post.


For a simple benchmark I used Benchmark-ips in the Gemgem-Sinatra application, a nice sample project that shows how to use Trailblazer and Cells in Sinatra. The benchmark file simply renders an entire page using Cells, as many times as possible in 20 seconds.

The cells composing the page use Hamlit, Slim and Haml templates in different branches. The result is always this page.
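For illustration, here's roughly what such a benchmark script could look like. This is only a sketch – the Post::Cell::Index constant and the posts collection are hypothetical, not the actual Gemgem-Sinatra code.

require "benchmark/ips"

posts = Post.all # hypothetical collection of posts to render

Benchmark.ips do |x|
  x.time   = 20 # run each report for 20 seconds
  x.warmup = 2

  x.report("cells + hamlit") do
    Post::Cell::Index.new(posts).() # instantiate the cell and invoke its show state
  end
end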


This is all nothing special. However, as visible in my professional 3D chart: if Hamlit is 100%, Slim reaches about 90%, whereas Haml manages only about 80% of what Hamlit can do. In other words: Hamlit is the fastest and will make your views perform better.

Note that this benchmark is in combination with Cells. Cells are generally faster than ActionView. With the upcoming Cells 4.1 we will have performance boosts of around 5-10x compared to the ActionView framework.

Haml 5

To come to Haml’s defense: this project has notably paved the way for all modern template formats. Its popularity “skyrocketed”, to say it in DHH’s words, very early, making it hard for the core team to aggressively refactor the code base. This might happen in Haml 5 and could bring Haml-The Original™ back into pole position.

You should go and try out Hamlit today. Use it with Cells for a major boost in speed and a better view architecture.


If you want to stay up-to-date with all Trailblazer gems like Cells, Roar and Reform, sign up for our newsletter. It will give you a monthly overview of new features, cool tricks and upcoming awesomeness.

Reform 2.1 With Dry-Validation and Grouping

Wednesday, December 23rd, 2015

Just in time for Christmas, Reform 2.1 is ready for you. It has two great new additions: we now support Piotr Solnica’s awesome dry-validation gem, and I introduced validation groups.

Reform is a form object gem that decouples validation of data from models. Its full documentation can be found on the Trailblazer website.

Validation Groups

Traditional validation gems like ActiveModel::Validations only allow a linear flow of validations. All defined validations will be run, even when that doesn’t make sense in specific cases. We get around that limitation now in Reform with validation groups.

Here’s a very simplified example.

class SessionForm < Reform::Form
  property :username
  property :email
 
  validation :default do
    validates :username, :email, presence: true
  end
 
  validation :email_format, if: :default do
    validates :email, email: true
  end
end

You can now group sets of validations and name them using validation.

Those can then be chained using :after and, in this example, :if. The second group :email_format is only executed if the :default group was valid, saving you any conditionals in the following validations.

Validation still happens by calling form.validate(params).
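Here's a hedged sketch of how that plays out when validating the SessionForm from above (Session is a hypothetical model).

form = SessionForm.new(Session.new)

# :default fails, so the :email_format group never runs.
form.validate("username" => "", "email" => "") #=> false

# :default passes, so :email_format now gets to check the email format.
form.validate("username" => "nick", "email" => "nick@example.com")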

This opens the way to a completely new understanding of validations as predicates and results.

Dry-validation

Speaking of predicates and all those logic terms: We now support Dry-validation as another validation backend. Since this is a relatively new, fast and very strict implementation, we will use it as the default in future Reform versions.

class LoginForm < Reform::Form
  property :password
  property :confirm_password
 
  validation :default do
    key(:password, &:filled?) # :password required.
    key(:confirm_password) do |str|
      str.filled? & str.correct?
    end
 
    def correct?(str)
      str == form.password
    end
  end
end

Without going into dry-validation’s API details too much: In a validation group you can use the exact same API as in a Dry::Validation::Schema, with chaining, predicates, custom validations, and so on.

Error messages in dry-validation are generated via a YAML file that can easily be extended, ending the age of ActiveModel’s translation logic madness.

Populator API

In Reform 2.1, all populators now receive one options hash, which allows using Ruby’s keyword arguments.

class SessionForm < Reform::Form
  property :user,
    populator: ->(fragment:, **) do
      self.user = fragment["id"] ? User.find(1) : User.new
    end, # ..

The old API still works, but is deprecated.

Skip!

If you ever had the need to make Reform suppress deserialization of a fragment, this is simpler now with the new skip! method.

  property :user,
    populator: ->(fragment:, **) do
      return skip! if fragment["id"]
      # more code
    end, # ..

What used to require a combination of :populator and :skip_if can now be done in one place. Once skip! has returned, Reform will ignore the currently processed fragment, as if it hadn’t been in the incoming hash at all.

Documentation for skip and populators is here.

Good Bye, ActiveModel!

What may sound bewildering to many of you is a logical step in tidying up the Ruby world: we will drop support for ActiveModel::Validations in Reform 2.2. Don’t you worry, everything will still work the way it did before, we just don’t want to waste time with AM:V and its prehistoric implementation anymore.

Most of the trouble we had was with the way AM:V computes error messages. It gets worse when those have to be translated. AM:V has an extremely complex implementation, jumps between instance and class context, and makes wild assumptions about object interfaces. Since Rails core seems uninterested in changing anything, because it might break Basecamp, for us it’s easiest to just let it be and move on with an alternative.

Also, when using validators like confirm or acceptance, values in the form suddenly were changed, because those implementations write to the validated object – a very wrong thing to do. You might also have a look into how AM:V finds validators: a cryptic, magic class traversal happens here and it is a nightmare to make AM:V use custom validators in a non-Rails environment.

We ended up with too many patches and hacks – very frustrating for the maintainers. Since there are better, less constraining alternatives, we will all benefit from a better validation workflow.

Representable 2.4: How Functional Programming Speeds Up Rendering And Parsing.

Thursday, November 19th, 2015

The great thing about being unemployed is you finally get to work on Open-Source features you always wanted to do but never had the time to.

Representable 2.4 is completely restructured internally. It has lost tons of code in favor of a simpler, faster, functional approach, we got rid of internal state, and it now allows you to hook into parsing and rendering with your own logic without being restricted to predefined execution paths.

And, do I really have to mention that this results in +200% speedup for both rendering and parsing?

To cut it short: This version of Representable, which backs many other gems like Roar or Reform, feels great and I’m happy to throw it at you.

Here are the outstanding changes followed by a discussion how we could achieve this using functional techniques.

Speed

Representable 2.4 is about 3.2x faster than older versions. This goes for both rendering and parsing.

I have no idea what else to say about this right now.

Defaults

Yes, you may now define defaults for your representer.

class SongRepresenter < Representable::Decorator
  defaults render_nil: true
 
  property :title # does have render_nil: true

The defaults feature, mostly written by Sadjow Leão, also allows computing default options dynamically using a block.

class SongRepresenter < Representable::Decorator
  defaults do |name|
    { as: name.to_s.camelize }
  end
 
  property :email_address # does have as: EmailAddress

A pretty handy feature that’s been overdue for a long time. It is fully documented on the new, beautiful website.

Unified Lambda Options

I always found the positional arguments for option lambdas incredibly annoying.

Every time I used :instance or :setter I had to look up their API (my own API!) since every option had its own.

For example, :instance exposes the following API.

instance: ->(fragment, [i], args) { }

Whereas :setter comes with another signature.

setter: ->(value, args) { }

In 2.4, every dynamic option receives a hash containing all the stakeholders you might need.

setter: ->(options) { options[:fragment] }
setter: ->(options) { options[:binding] }

This works extremely well with keyword arguments in Ruby 2.1 and above.

instance: ->(fragment:, index:, **) { puts "#{fragment} @ #{index}" }

Since I’m a good person, I deprecated all options but :render_filter and :parse_filter. Running your code with 2.4 will work but print tons of deprecation warnings.

Once your code is updated, you may switch off deprecation mode and speed up the execution.

Representable.deprecations = false

Note that this will be default behavior in 2.5.

Inject Behavior

In case you had to juggle a lot with Representable’s options to achieve what you want, I have good news. You can now inject custom behavior and replace parts or the entire pipeline.

For instance, I could make Representable use my own parsing logic for a specific property. This is a bit similar to :reader but gives you full control.

class SongRepresenter < Representable::Decorator
  Upcase = ->(input, options) do
    options[:represented].title = input.upcase
  end
 
  property :title, parse_pipeline: ->(*) { Upcase }
end

:parse_pipeline expects a callable object. Usually, that is an instance of Representable::Pipeline with many functions lined up, but it can also be a simple proc.

Here’s what happens.

song = OpenStruct.new
 
SongRepresenter.new(song).from_hash("title"=>"Seventh Sign")
song.title #=> "SEVENTH SIGN"

Without any additional logic, you implemented a simple parser for the title property.

Skip Execution

You can also set up your own pipeline using Representable’s functions, plus the ability to stop the pipeline when emitting a special symbol.

property :title, parse_pipeline: ->(*) do
  Representable::Pipeline[
    Representable::ReadFragment,
    SkipOnNil,
    Upper,
    Representable::SetValue
  ]
end

The implementation of the two custom functions is here.

SkipOnNil = ->(input, **) { input.nil? ? Pipeline::Stop : input }
Upper     = ->(input, **) { input.upcase }

By emitting Stop, the execution of the pipeline stops and nothing more happens. If the input fragment is not nil, it will be uppercased and set on the represented object.

Pipeline Mechanics

Every low-level function in a pipeline receives two arguments.

SkipOnNil = ->(input, options) { "new input" }

In pipelines, the second options argument is immutable, whereas the return value of the last function becomes the input of the next function.
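To make that mechanic tangible, here is a minimal, hypothetical illustration of the idea – not Representable's actual implementation.

# Each function receives the previous return value as input, plus the immutable options hash.
Upcase  = ->(input, _options) { input.upcase }
Exclaim = ->(input, _options) { "#{input}!" }

functions = [Upcase, Exclaim]
options   = { represented: nil } # stays the same for the whole run

functions.inject("havasu") { |input, function| function.call(input, options) }
#=> "HAVASU!"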

This really functional approach was highly inspired by my friend Piotr Solnica and his “FP-infected mind”.

The same works with :render_pipeline as well, but rendering is boring.

How We Got It Faster.

Where we had tons of procedural code, ifs and elses, many hash lookups and different implementations for collections and scalar properties, we now have simple pipelines.

Remember, in Representable you always define document fragments using property.

class SongRepresenter < Representable::Decorator
  property :title
end

Now, let’s say we were to parse a document using this representer.

SongRepresenter.new(Song.new).from_hash("title" => "Havasu")

In older versions, Representable would grab the "title" value and then traverse the following pseudo-code.

if ! fragment
  if binding[:default]
    return binding[:default]
  end
else
  if binding[:skip_parse]
    return
  else
    if binding[:typed]
      if binding[:class]
        return ..
      elsif binding[:instance]
        return ..
      end
    else
      return fragment
    end
  end

Without knowing any details here, you can see that the flow is a deeply nested, procedural mess. Basically, every step represents one of the options you might be using every day, such as :default or :class.

Not only was it incredibly hard to follow Representable’s logic, as this procedural flow is spread across many files, it was also slow!

For every property being rendered or parsed, there had to be around 20 hash lookups on the binding, often followed by evaluations of the option. For example, :class could be unset, a class constant, or a dynamic lambda.

Projected onto realistic representers with about 50-100 properties, this quickly becomes thousands of hash lookups for a single object, just to find out something that was defined at compile time.

Static Flow

Another problem was that the flow was static, making it really hard to add custom behavior.

if ! fragment
  if fragment == nil # injected, new behavior!
    fragment = []    # change nils to empty arrays.
  end
 
  if binding[:default]
    return binding[:default]

There was no clean way to inject additional behavior without abusing dynamic options or overriding Binding classes, which was the opposite of intuitive.

It was also a physical impossibility to stop the workflow at a particular point, since you couldn’t simply inject returns into the existing code. For example, say your :class lambda already handled the entire deserialization, you still had to fight with the options that are called after :class.

What I found myself doing a lot was adding more and more code to “versatile” options like :instance since the flow couldn’t be modified at runtime.

Pipelines

Sometimes you need to take a step back and ask yourself: “What am I actually trying to do?”. You must actively cut out all those nasty little edge-cases and special requirements your code also handles to see the big picture.

Strictly speaking, when parsing a document, Representable goes through its defined schema properties and invokes parsing for every binding. Each binding, and that’s the new insight, has a pipelined workflow.

  • Grab fragment.
  • Not there? Abort.
  • Nil? Use Default if present. Abort.
  • Skip parsing? Abort.
  • If typed and :class, instantiate.
  • If typed and :instance, instantiate.
  • If typed, deserialize.
  • Return.

Instead of oddly programming that in a procedural way, each binding now uses its very own pipeline. For decorators, the pipeline is computed at compile-time. This means depending on the options used for this property, a custom pipeline is built.

  property :artist,
    skip_parse: ->(fragment:, **) { fragment == "n/a" },
    class: Artist

The above property will be roughly translated to the following pipeline (simplified).

Pipeline[
  ReadFragment,
  SkipParse,    # because of :skip_parse
  StopOnNotFound,
  CreateObject, # because of :class.
  Decorate,     # because of :class.
  Deserialize,  # because of :class.
  SetValue
]

This pipeline is intuitively understandable. Each element is a function, a simple Ruby proc defined for serializing and deserializing.

Again, the pipeline is created once at compile-time. This means all checks like if binding[:default] are done once when building the pipeline, reducing hash lookups on the binding to a negligible handful.

The fewer options a property uses, the fewer functions will be in the pipeline, shortening the execution time at run-time.

A tremendous speed-up of at least 200% is the result.

Benchmarks

In what we call a realistic benchmark, we wrote a representer with 50 properties, where each property is a nested representer with another 50 properties.

We then rendered 100 objects using that representer. Here are the benchmarks.

4.660000   0.000000   4.660000 (  4.667668) # 2.3
1.400000   0.010000   1.410000 (  1.410015) # 2.4

As you can see, Representable is now 3.32x faster.

Looking at the top of the profiler stack, it becomes very obvious why.

%self      calls  name
 13.92  6630522   Representable::Definition#[]
  5.28   255001   Representable::Binding#initialize
  4.81  1790109   Representable::Binding#[]
  2.90   515102   Uber::Options::Value#call
  2.36   510002   Representable::Definition#typed?

This is for 2.3, where an insane amount of time is wasted on hash lookups at run-time. Imagine, for every property the “pipeline” is computed at runtime (of course, the concept of pipeline didn’t exist, yet).

For 2.4, this is slightly different.

 %self     calls  name
  4.03   255001   Representable::Hash::Binding#write
  3.00   260101   Representable::Binding#exec_context
  2.77   255000   Representable::Binding#skipable_empty_value?
  2.44   255001   Representable::Binding#render_pipeline
  0.16     5100   Representable::Function::Decorate#call
  0.16    10201   Representable::Binding#[]

The highest call count is “only” 255K, which is a method we do have to call for each property. Other than that, expensive hash lookups and option evaluations are minimized drastically, requiring less than 1% computing time.

Declarative

I also got around to finally extract all declarative logic into a gem named – surprise! – Declarative. If you now think “Oh no, not another gem!” you should have a look at it.

In former versions, we’d use Representable in other gems just to get the DSL for property, collection, etc., without using Representable’s render/parse logic – which is what Representable actually is about.

This is now completely decoupled and reusable without any JSON, Hash or XML dependencies.

It also implements the inheritance between modules, representers and decorators in a simpler, more understandable way.

Debugging

To learn more about how pipelines work, you should make use of the Representable::Debug feature.

SongRepresenter.new(song).extend(Representable::Debug).from_hash(..)

The output is highly interesting!

The Only Alternative to a Rails Monolith are Micro Services? Bullshit!

Saturday, September 5th, 2015

The Rails Way is wrong and has led thousands of projects to an unmaintainable state of highly coupled software assets.

In order to keep the growing complexity maintainable, and to maximize reusability, people now start to introduce “micro services”, which are physically separated, completely stand-alone applications that provide a subset of the application’s functionality via a document API.

DHH is absolutely right when criticizing this approach.

Not only does a “micro service” increase the complexity for deploys, because now, you have to roll out 17 applications and not just one, it also makes it almost impossible to test the application under real-life conditions.

The test environment will have countless mocks for “micro service” endpoints and results in half-assed pseudo images of production. The tests you write are better than no tests, but I doubt their integrity outweighs the pain of setting them up.

Now you have a fragmented system with loose coupling and a tremendous maintenance effort.

Micro Services! And Now?

People make it look as if “micro services” are the only alternative to a monolith.

They make it look as if you either have the choice between a huge pile of rubbish with many internal dependencies – your Rails Way monolith. Or your devops engineers – that you had to hire to take care of your system – have to deploy up to 17 applications every time APIs change.

This is absolute bullshit.

A monolith the way DHH leverages it is the excuse for a horrible software architecture without any encapsulation (called The Rails Way). The micro service architecture is the attempt to decouple things by enforcing physical boundaries.

Both are a nightmare.

What About Good Object Design?

Micro services are great if parts of your system are to be written in another language, or if you really need physical extraction for scaling or global reusability.

I have no intention to maintain 17 separated micro services along with my base application, not to speak of the testing apocalypse that is gonna come with that. I haven’t seen a single working, testable micro service system so far. If you have one, please invite me and change my mind.

On the other hand, monolithic Rails apps are terrible, quickly become unmaintainable, and testing will be a third-class citizen for the sake of “development speed”.

I don’t see what’s so hard about having a proper object design in one monolithic Rails app.

You can have cleanly composed, separated layers with interfaces that allow reusability and simple testing and debugging.

You Can Have a Nice Service Architecture Within a Monolith.

You can have stand-alone components in your monolith, just not the Rails Way.

We have dispatching, deserialization, validations and forms, transactions and business rules, decoration and rendering, authorization and persistence, just to name a few. How on earth are we supposed to implement all that using three primitive abstraction layers?

The Rails Way is wrong. However, don’t let that mislead you to the conclusion that the only ways out of this are either micro services or, even better, switching to a new fancier language, just to do all the same architectural mistakes, again.

Decouple your logic from the actual framework, ship independent components in gems and introduce interfaces between your layers. This is only possible if you actually have abstractions, which could be service objects, endpoints, view models and higher-level abstractions.

Don’t let the monolith be an excuse for a shitty software architecture.

Wraps in Representable 2.3

Wednesday, September 2nd, 2015

Recently we rolled out Representable 2.3. The main addition here is the ability to suppress wraps.

When talking about wraps, I am not referring to deliciously rolled flat bread filled with mouth-watering vegetables, grilled chicken and chilli sauce, no, I am thinking of container tags for documents.

Wraps, y’all!

Usually, you’d define the document wrap on the representer class (or module, but my examples are using Decorator).

class SongDecorator < Representable::Decorator
  include Representable::Hash
  self.representation_wrap = :song # wrap set!
 
  property :name
end

When rendering a Song object, the document will be wrapped with "song".

song = Song.new(name: "I Want Out")
 
SongDecorator.new(song).to_hash
#=> {"song"=>{"name"=>"I Want Out"}}

Vice-versa, when parsing, the representer will only “understand” documents with the wrap present.

song = Song.new
 
SongDecorator.new(song).from_hash({"song"=>{"name"=>"I Want Out"}})

I know, this is terribly fascinating.

Nested Representers

A popular concept in Representable and Roar is to nest representers. While this can be done with inline blocks, many people prefer explicitly nesting two or more classes.

class AlbumDecorator < Representable::Decorator
  include Representable::Hash
  self.representation_wrap = :albums # wrap set!
 
  collection :songs, decorator: SongDecorator
end

I reference the SongDecorator explicitly. This allows me to use it in two places.

  • To render and parse single song entities, I can use SongDecorator directly.
  • In a nested document with a list of songs, the same decorator can be used, given you desire an identical representation in the album view.

When rendering an album, however, every song is now wrapped.

album = Album.new(songs: [song, song])
AlbumDecorator.new(album).to_hash
#=> {"albums"=>
#     {"songs"=>[
#       {"song"=>{"name"=>"I Want Out"}},
#       {"song"=>{"name"=>"I Want Out"}}
#   ]}}

Most probably not what you want.

I’ve seen several workarounds for this. Mostly, people maintain two decorators per entity, one with wrap, one without, where common declarations are shared using a module.

This is very clumsy and I do not understand why people put up with it instead of asking for a nice solution to that common problem. Maybe I’m not accessible enough.

Suppressing Wraps.

When working with Jonny on roarify, a client gem for the Shopify API and implemented using Roar, I dropped my inaccessible facade in exchange for beers and we implemented a solution: The wrap: false option.

class AlbumDecorator < Representable::Decorator
  # ..
  collection :songs, decorator: SongDecorator, wrap: false # no wrap!
end

This will parse and serialize songs without wrapping them again.

AlbumDecorator.new(album).to_hash
#=> {"albums"=>
#     {"songs"=>[
#       {"name"=>"I Want Out"},
#       {"name"=>"I Want Out"}
#   ]}}

A simple enhancement with great impact – we were able to reduce representers by 38.1%.

Thanks for the beers, Jonnyboy! I miss you too!

Reform 2.0 – Form Objects for Ruby.

Monday, July 6th, 2015

A few days ago I pushed the next version of Reform: Version 2. While this is still a release candidate, it can be considered stable.

The reason I blog as if it was a major release is: I want you to test, try, and complain. Speak now or forever hold your peace! Now is the time to make me add or change features before we push the final stable 2.0.

Here’s why Reform 2 was necessary, and, of course, why it’s awesome.

UPDATE: This is a release note directed to Reform users. If you want to learn more about Reform, read an introduction post.

Too Big!

There’s not a single amazing new feature in Reform 2. That is, if you only quickly skim over the changes.

Of course, a lot of things have changed, but more on the inside of Reform.

Reform was getting too big. The form object was doing presentation, deserialization of incoming data, data mapping, coercion, validation, writing to persistence and saving.

For a gem author, monster objects are (or should be!) a nightmare. It is incredibly hard to follow what happens where in big objects, so I extracted a huge chunk of logic into a separate gem.

The form object now really only does validation, everything else is handled via Disposable and Representable.

The Architecture Now.

Both deserialization and mapping form data to persistence objects like ActiveRecord models is now completely decoupled.


To cut it short: Deserializing of the params hash into a validatable object graph is done by a representer. Validation happens in the form itself. Coercion, syncing and saving all happens in the form’s twin.

Less Representable.

I removed a lot of representable-specific mapping logic, mainly because it was incredibly hard to understand. For example, you can now actually grasp what methods like #prepopulate! do by looking at the source.

This has also sped up Reform by 50%. That’s right – it is much faster now thanks to explicit, simple transformation logic.

No Rails, More Lotus!

Reform 1 used ActiveModel::Validations for validations. This still works, but you can also chuck Rails into the bin and use Lotus::Validations instead, removing any Rails dependency from your forms.

class SongForm < Reform::Form
  include Reform::Lotus
 
  property :title
  validates :title, presence: true
end

While Reform was dragging the activemodel dependency around, this is now up to you. Reform still supports Rails but with a very low gravity.

Deserialization.

In #validate, to parse the incoming params into the object graph, an external representer is used. This could be any kind of representer and thus allows you to parse JSON, XML and other formats easily into an object graph.

Nevertheless, the representer will simply operate on the twin API to populate the form. This means, you can basically use your own deserialization logic.

form = SongForm.new(song)
 
form.title = "Madhouse"
form.band = Band.new
form.band.name = "Bombshell Rocks"
 
form.validate({})

The above example is a naive implementation of a deserializer without overriding parts of validate. You can set properties and add or remove nested objects. The twin will take care of mapping that to its object graph.

Forms and JSON

Trailblazer takes advantage of that already and allows JSON “contracts” that can deserialize and validate JSON documents.

You can do that manually, too.

class SongRepresenter < Roar::Decorator
  include JSON
  property :title
end
 
form.validate('{"title": "Melanie Banks"}') do |json|
  SongRepresenter.new(form).from_json(json)
end

This will use SongRepresenter for the deserialization. The representer will assign form.title=. After that, the form will proceed with its normal validation logic as if the form was a hash-based one.

In case I failed to make my point: This allows using forms for document APIs!

Coercion

In earlier versions, Reform implemented coercion in the deserialization representer which sometimes was kinda awkward. Coercion now happens in the twin.

form.created_at = "1/1/1998"
form.created_at #=> <DateTime 01-01-1998>

You can also override the form’s setter methods to build your own typecasting logic. Many people did that already in Reform 1, but in combination with the representer this could mess things up.
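A hedged sketch of what such a setter override could look like, assuming the generated accessor sits in an included module so super is available.

require "date"

class SongForm < Reform::Form
  property :created_at

  # Manual typecasting: everything assigned to created_at goes through DateTime.parse.
  def created_at=(value)
    super(DateTime.parse(value.to_s))
  end
end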

Populators

When deserializing, Reform by default tries to find the matching nested form for you. Often, there is no nested form yet, which is why we provide options like :populate_if_empty that will add a nested form corresponding to the particular input fragment.

Using the :populator option was a bit tedious and you needed quite some knowledge about how forms work. This has changed in Reform 2 and is super simple now.

In a populator, you can use the twin API to modify the object graph.

populator: lambda do |fragment, collection, index, options|
  collection << Song.new
end

This primitive populator will always add a new song object to the existing collection. Note how you do not have to care about adding a nested form anymore, as you used to have in Reform 1. The twin will do this for you.

Pre-populators

I’ve seen many users writing quirks to “fill out” a form before it is rendered, for example, to provide default values for visual fields or pre-selecting a radio button.

Reform 2 introduces the concept of prepopulators that can be configured per property.

property :title, prepopulator: lambda { self.title = "The title" }

Again, prepopulators can use the twin API to set up an arbitrary object graph state. They have to be run explicitly, usually before rendering, using #prepopulate!.
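A quick usage sketch, assuming the title prepopulator from above on a SongForm.

form = SongForm.new(Song.new)
form.prepopulate! # runs all prepopulators
form.title        #=> "The title"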

Hash Fields

A feature I personally love in Reform 2 is Struct. It allows you to map hashes to properties.

Say you had a serialized hash column in your songs table.

class Song < ActiveRecord::Base
  serialize :settings # JSON column.
end
 
Song.find(1).settings 
#=> {admin: {read: true, write: true}, user: {destroy: false}}

“Working with hashes is fun!” said no one ever. Instead, let Reform map that to objects.

class SongForm < Reform::Form
  property :settings, struct: true do
    property :admin, struct: true do
      property :read
      property :write
      validates :read, inclusion: [true, false]
    end
  end
end

You can have an unlimited number of nestings in the hash. Every nesting results in a nested form twin to work with.
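As a rough sketch, working with that form could then look like this, assuming the serialized settings column from above.

form = SongForm.new(Song.find(1))

form.settings.admin.read          #=> true
form.settings.admin.write = false # not written to the model until syncing/saving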

The Struct feature is described in this blog post in greater detail.

Syncing and Saving

The sync and save methods both got completely extracted and are now implemented in Disposable.

Option Methods

A nice addition that I use a lot is option methods: you can specify dynamic options not only with a lambda, but also as a symbol referencing an instance method.

property :composer, populate_if_empty: :populate_composer! do
  # ..
end
 
def populate_composer!(fragment, options)
  Artist.new
end

This greatly cleans up forms when they become more complex. A cool side-effect: you can use inheritance better, too, and reuse option methods.

State Tracking

Since nested forms are now implemented as twins, you can use Disposable’s state tracking to follow what was going on on your form in validate.

State tracking is incredibly helpful for Imperative Callbacks and other post-processing logic.

More Documentation!

As you might have noticed, I have started to document all my gems on the new Trailblazer page.

I’d like to point you to the upcoming Trailblazer book, too. In 11 chapters, it discusses every aspect of Reform you can think of, as Reform is an essential part of this new architecture.

As a side-note: I mainly wrote this book to save myself from answering particular questions a hundred times. The Trailblazer book really talks about all my gems in great detail, and it is a nice way to support a decade of Open-Source work, too.

Conclusion

With Reform 2.0, my dream architecture has become true, my vision of what a form object should do and what should be abstracted in a separate layer is implemented, and I am very happy with it.

The code should be significantly easier to read and change, too. And it is faster.

It all adds up – Reform 2 is already deployed on hundreds of production sites, so update today and let me know what you think!

MiniTest::Spec, Capybara, Rails Integration Tests, and Cells: It Works!

Saturday, July 4th, 2015

I had a hard time getting MiniTest::Spec working with Capybara matchers, in combination with Rails integration tests and cells tests. I almost switched to Rspec but then finally figured out how simple it is.

Why People Use Rspec.

The reason people use Rspec is: It works. Everything popular is supported out of the box, provided by the hard work of the Rspec team. You don’t have to think about how integration tests may work or where that matcher comes from.

In Minitest, which is my personal favourite test gem, you have the following gems to pick from.

  • minitest-spec-rails
  • minitest-rails-capybara
  • minitest-rails
  • minitest-capybara
  • capybara_minitest_spec

There are probably more. I tried combining them, but either integration tests didn’t work, matchers here didn’t work, matchers there didn’t work, the page object wasn’t available, and so on. It was a nightmare.

How it works!

Fortunately, the solution was pretty simple.

gem "minitest-rails-capybara"

The awesome minitest-rails-capybara will also install minitest-rails and minitest-capybara.

In your test_helper.rb, you add the following line.

require "minitest/rails/capybara"

This loads all necessary files and adds Capybara spec matchers to the test base classes.

Integration Tests

I then write integration tests as follows.

class CommentIntegrationTest < Capybara::Rails::TestCase
  it do
    visit "/comments"
    page.must_have_content "h1"
  end
end

It’s important to derive your test from Capybara::Rails::TestCase which is totally fine for me as I don’t like describe blocks that magically create a test class for you. Separate test classes just make me feel safer.

No Controller Tests.

I don’t write controller tests in Rails anymore because they are bullshit. They create the illusion of a well-tested system. In production, it will break. This is a result of this code.

Right, that’s 700 lines to set up a fake environment for your tested controller. 700 lines of code are 100% likely to diverge from real application state: Your tests will pass, your code in production breaks.

In the Trailblazer architecture, controller tests are taboo, you only write real integration tests, operation tests, and cell tests, which brings me to the next point.

Cell Tests

The only problem I had with this approach was that my cell tests broke.

class CommentCellTest < Cell::TestCase
  controller ThingsController
 
  it do
    html = concept("comment/cell/grid", thing).(:show)
    html.must_have_css(".comment")
  end
end

I got exceptions like the following.

NoMethodError: undefined method `assert_content' for 
  #<CommentCellTest:0xadcb284>

The solution was to include the new set of assertions into the cell tests, too. I did that in my test_helper.rb file.

Cell::TestCase.class_eval do
  include Capybara::DSL
  include Capybara::Assertions
end

It only took me a few months to figure that out. Thanks to the authors of all those great gems!

Example

I hope this will help you using the amazing MiniTest in your application. My example can be found here.

Disposable – The Missing API of ActiveRecord

Saturday, June 27th, 2015

Disposable gives you Twins. Twins are non-persistent domain objects. They know nothing about persisting things, hence the gem name.

They

  • Allow me to model object graphs that reflect my domain without restricting me to the database schema.
  • Let me work on that object graph without writing to the database. Only when syncing does the graph write to its persistent model(s).
  • Provide a declarative DSL to define schemas, schemas that can be used for other data transformations, e.g. in representers or form objects.

Some of its logic and concepts might be overlapping with the excellent ROM project. I am totally open to using ROM in future and continuously having late-night/early-morning debates with Piotr Solnica about our work.

However, I needed the functionality of twins in Reform, Roar, Representable, and Trailblazer now, and most of the concepts have evolved from the Reform gem and got extracted into Disposable.

Agnostic Front.

The title of this post is misleading on purpose: First, I know that many people will read this post because it has a provocative title.

Second, it mentions ActiveRecord in a negative context even though I actually love ActiveRecord as a persistence layer (and only that).

Third, Disposable doesn’t really care about ActiveRecord. The underlying models could be from any ORM or just plain Ruby objects.

Twins

Twins are classes that declare a data schema.

class AlbumTwin < Disposable::Twin
  property :title
end

Their API is ridiculously simple. They allow reading, writing, syncing, and optional saving, and that’s it.

When initializing, properties are read from the model.

album = Album.find(1)
twin  = AlbumTwin.new(album)

Reading and writing now works on the twin. The persistence layer is not touched anymore.

# twin read
twin.title #=> "TODO: add title"
# twin write
twin.title = "Run For Cover"
 
# model read
album.title #=> "TODO: add title"
twin.title  #=> "Run For Cover"

Once you’re done with your work, use sync to write state back to the model.

twin.sync
 
album.title #=> "Run For Cover"

Optionally, you can call twin.save which invokes save on all nested models. This, of course, implies your models expose a #save method.
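As a hedged sketch, assuming save also pushes the twin's state to the model and Album responds to #save (e.g. an ActiveRecord model):

twin.title = "Keeper Of The Seven Keys"
twin.save   # writes the twin's state to the model(s) and calls album.save
album.title #=> "Keeper Of The Seven Keys"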

Objects, The Way You Want It.

Everything Disposable does could be done with ActiveRecord, in a more awkward way, though.

For example, Disposable lets you do compositions really easily – a concept well-proven in Reform.

class AlbumTwin < Disposable::Twin
  include Composition
 
  property :id,      on: :album
  property :title,   on: :album
  collection :songs, on: :cd do
    property :name
  end
  property :cd_id,   on: :cd, from: :id
end

You configure which properties you want to expose and where they come from. And: you can also rename properties using :from.

The twin now exposes the new API.

twin = AlbumTwin.new(
  album: Album.find(1),
  cd:    CD.find(2)
)
twin.cd_id #=> 2

Of course, this also lets you write.

twin.songs << Song.create(name: "Thunder Rising")

As the composition user, I do not care or know where songs come from or go to.

All operations will be on the twin, only. Nothing is written to the models until you say sync. This is something I am totally missing in ActiveRecord. I will talk about that in a minute.

Hash Fields.

Another pretty amazing mapping tool in Disposable is Struct. This allows you to map hashes to objects.

Let’s assume your Album has a JSON column settings.

class Album < ActiveRecord::Base
  serialize :settings # JSON column.
end
 
Album.find(1).settings 
#=> {admin: {read: true, write: true}, user: {destroy: false}}

This is a deeply nested hash, a terrible thing to work with. Let the twin take care of it and get back to real object-oriented programming instead of fiddling with hashes.

class AlbumTwin < Disposable::Twin
  property :settings do
    include Struct
    property :admin do
      include Struct
      property :read
      property :write
    end
 
    property :user
  end
end

This gives you objects.

twin = AlbumTwin.new(Album.find(1))
twin.settings.admin.read #=> true
twin.settings.user #=> {destroy: false}

You can either map keys to properties (or collections!) or retrieve the real hash.

Writing works likewise.

twin.settings.admin.read = :MAYBE

As always, this is not written to the persistent model until you say sync.

album.settings[:admin][:read] #=> true
twin.settings.admin.read = :MAYBE
twin.sync
album.settings[:admin][:read] #=> :MAYBE

Working with hash structures couldn’t be easier. Note that this also works with Reform, giving you all the form power for hash fields.

class AlbumForm < Reform::Form
  property :settings do
    include Struct
    property :admin do
      include Struct
      property :read
      validates :read, inclusion: [true, false, :MAYBE]
    end

This opens up amazing possibilities to easily work with document databases, too. Remember: Disposable doesn’t care if it’s a hash from ActiveRecord, MongoDB or plain Ruby.

Collection Semantics

One reason I wrote twins is because the way ActiveRecord handles collections is tedious. For instance, the following operation will write to the database, even though I didn’t say so.

song = Song.new
cd.songs = [song]
song.persisted? #=> true

This is a real problem. Say you want to set up an object graph, validate it and then write it to the database. Impossible with ActiveRecord unless you use weird work-arounds like cd.songs.build, which is completely counter-intuitive.

song = cd.songs.build
song.persisted? #=> false

I want normal Ruby array methods to behave like normal Ruby array methods. What if I don’t have the cd.songs reference yet when I instantiate the Song? Twins simply give you the collection semantics you expect.

song = Song.new
twin = AlbumTwin.new(album)
twin.songs = [song]
 
song.persisted? #=> false
album.songs #=> []

The changes will not be written to the database until you call sync.

Deleting works analogously to writing, moving, and replacing.

song_twin = twin.songs[0]
twin.songs.delete(song_twin)
 
twin.sync
album.songs #=> []

You can play with any property as much as you want, the persistence layer won’t be hit until syncing happens.

Change Tracking.

Another feature extremely helpful for post-processing logic as found in callbacks is the state tracking behavior in twins. Field changes will be tracked.

twin.changed?(:title) #=> false
twin.title = "Best Of"
twin.changed?(:title) #=> true

You can also check if a twin has changed, which is the case as soon as one or more properties were modified.

twin.changed? #=> true

This works with nested twins and collections, too.

twin.songs << Song.new
twin.songs.changed? #=> true
twin.songs[0].changed? #=> false
twin.songs[1].changed? #=> false

On collections, #added, #deleted and friends help you to monitor what has changed in particular.

twin.songs << Song.new
twin.songs.added #=> [<SongTwin ..>]

Several other goodies like persistence tracking help to write a full-blown event dispatcher, which I’m gonna discuss in a separate blog post. If you’re curious, chapter 8 of the Trailblazer book is about callbacks, change tracking and post-processing.

Twins and Representers.

Representers are Ruby declarations that render and parse documents. Have a look at the Roar gem to learn how they are used. Anyway, twins are the perfect match for representers: while the twin handles data modelling, the representer does the document work.

class Album::Representer < Roar::Decorator
  include Roar::JSON
 
  property :id
  property :title
 
  collection :songs, class: Song do
    property :name
  end
 
  link(:self) { album_path(id) }
end

The composition twin could now be used in combination with the representer.

twin = AlbumTwin.new(album: Album.find(1), cd: CD.new)

Note that the CD is a brand-new, fresh and shiny instance without any songs added to it, yet.

We then use the representer to parse the incoming JSON document into Ruby objects.

json = '{"title": "Run For Cover", "songs": [{"name": "Military Man"}]}'
Album::Representer.new(twin).from_json json

This will populate the twin.

twin.songs #=> [<SongTwin name: "Military Man">]

After syncing, the CD will contain songs.

twin.sync
 
cd.songs #=> [<Song id:1 name:"Military Man">]

Roar, Representable and Reform come with mechanisms to optionally find existing records instead of creating new ones, and so on. The topic of populators is covered in chapters 5 and 7 of the Trailblazer book.

Both twins and representers internally use Declarative for managing their schemas. This means you can infer representers from twins, and vice-versa.

class Album::Representer < Roar::Decorator
  include Roar::JSON
  include Schema
 
  from AlbumTwin
 
  # add properties.
  link(:self) { album_path(id) }
end

Deserialization is a task that’s poorly covered by Rails. With twins and representers, parsing documents into object graphs becomes object-oriented and predictable. Where there used to be complex nested hash operations, probably involving gems like Hashie, there are now clean, encapsulated and manageable objects that parse and populate.

Onwards!

Twins are supported in all my gems and are the fundamental approach for data transformations. They are an integral part of Reform 2, where every form is a twin. The form is responsible for validation and deserialization, the twin for data mapping.

Use them, make them faster, better, enjoy the simplicity of intuitive object graphs that reflect your domain, not your database schema, and never forget: Nothing is written to the persistence layer until you call sync!

Cells 4.0 – Goodbye Rails! Hello Ruby!

Monday, June 8th, 2015

The Cells gem has helped many developers to re-structure and re-think their view layer in Rails. It provides view models that embrace parts of your UI into self-contained widgets.

What used to be partials, filters, helpers and controller code now moves into a separate class. View models are plain Ruby and use OOP features like inheritance while benefiting from encapsulation. The times of a global view namespace and a lack of interfaces in views are over.

class CommentCell < Cell::ViewModel
  def show
    render
  end
 
private
  def author_link
    link_to model.author
  end
end

Cells can render their own views which sit in a private directory.

Logicless Views.

In views, we try to gently enforce simplicity: When calling a method in the view, it is called on the cell instance. The view is always executed in the cell’s context. There is no concept of “helpers” and data being copied between controller and view anymore.

<%= model.body %>
Written by <%= author_link %>

Every method called in the view has to be defined on the cell. Every helper you intend to use has to be included into the class – remember: everything is an instance method.

No Rails!

What makes me really happy about Cells 4 are the following two lines of code that have substantially changed how Cells does its work. Those lines represent the end of a painful era for Cells: they completely decouple the gem from Rails.

module Cell
  class ViewModel # < AbstractController::Base

That’s right, Cells no longer inherits anything from AbstractController. With our own implementation for rendering templates, we don’t need this dependency anymore. In earlier versions this was mainly done to import Rails’ #render and the rabbit hole of dependencies coming with it.

spec.add_dependency "uber", "~> 0.0.9"
spec.add_dependency 'tilt', ">= 1.4", "< 3"
# s.add_dependency "actionpack",  ">= 3.0"

We also removed the dependency on actionpack and, in turn, on actionview. ActionView is no longer used in Cells, except for helpers, which brings me to the next point.

Long live Rails!

Hey, hey, don’t you cry. Cells still supports Rails and works exactly as it did before in Rails apps. It still provides Rails’ (actually, non-existing) view “API” and allows you to use helpers, form builders, simple_form_for and all the good guys.

The difference here is you have to include those helpers into your cell class. This might end up in quite a number of includes, as the following snippet illustrates.

class CommentCell < Cell::ViewModel
  include ActionView::RecordIdentifier
  include ActionView::Helpers::FormHelper
  include SimpleForm::ActionViewExtensions::FormHelper

This is not Cells’ fault, though.

Helpers Are Shit.

What gets revealed now is how horribly helpers are implemented in Rails. Not only do they all exist as global methods in one namespace, they also all depend on each other without including the respective modules.

Helpers in Rails simply assume that all the other 250 helper functions are available.

It is now your task to properly include required helper modules yourself. Maybe this will spark an impulse in Rails core to properly decouple helpers, and use more object-orientation and composition instead of the current global PHP functions.

Anyway, most helpers are reported (and tested) to be working in Cells.

Performance. You asked for it.

In the 4.0 release we got rid of many, many lines of code. We also got rid of ActionView. Replacing this jurassic gem with our own 30-line rendering code has sped up rendering by about 25%.

Performance gains could also be achieved by only escaping defined properties of a cell. Where Rails literally escapes every string several times per request, which leads to a significant performance decrease, Cells does this once, and only where you want it.

I need to remark that no dedicated performance work has been done in Cells 4 yet. Path execution improvements will make this even faster in future versions.

View Models

Rendering cells works from virtually anywhere. In controllers and views, Cells brings in a helper to make it straight-forward.

Although this sounds like a contradiction – “didn’t you just say helpers are shit?” – in fact this acts as a single entry point to invoke cells.

<%= cell(:comment, comment).(:show) %>

The new call style allows you to work with the cell instance before rendering. And: you can have as many rendering methods (“states”, as we call them) as you want per cell class.

Testing

The same API can be used in tests. Cells comes with Test::Unit/MiniTest support out of the box, and Rspec support can be pulled in via the rspec-cells gem.

it "renders nicely"
  html = cell(:comment, comment).()
  expect(html).to have_content "Hello!"
end

Isolated view rendering tests are inevitable when writing rock-solid components that are reusable across your application.

Upgrading from Cells 3

View models have been around in Cells 3, too, but not as fast. Anyway, if you’re upgrading, you might want to peek inside the upgrading guide. Let us know if you find anything missing in there.

One thing I need to mention: You don’t need to rewrite all your cells – you can still use instance variables and the old-style calling – it’s just not encouraged anymore.

Another point you shouldn’t miss is to include the respective template engine support into your projects. Please read the installation instructions to learn about cells-haml and friends.

Cells Everywhere

With the removal of the Rails dependencies, Cells works in any Ruby environment. You can implement your view models in Lotus or Sinatra, or in plain Ruby scripts.

Many users do that already. I hear that Cells and Roda, a framework I really want to check out, do a great job together.

Outside of Rails, the only thing that needs configuration is where to find the views.

class SongCell < Cell::ViewModel
  self.view_paths = "lib/views"

After defining the view_paths, cells can be rendered anywhere in your application.

SongCell.(song).(:show)

This will instantiate the cell and render the show state. Examples for how to use advanced features like caching and view inheritance can be found in my cells-examples repository.

Mailers, Rake Tasks, Here Comes Cells!

Cells have been used in Rails for many things: In mailers, in rake tasks to compile views, directly hooked to routes to bypass ActionController, and so on.

This is even simpler now as there is no dependency to drag around anymore. You simply instantiate and render your view model. However, some helpers still insist on a controller instance to operate properly. For example, they might need the config object.

Pass the controller into a cell for that. Being a special dependency in a Rails environment, this will delegate all known controller methods to the real controller.

SongCell.(song, controller: controller).(:show)

We’ve been using this “technique” in Cells for years without major problems cough.

Engines and the Asset Pipeline

Cells can be bundled into gems and Rails engines and allow you to distribute them as proper widgets to other applications.

Nothing really changes, you simply chuck them into your gems and they become renderable in the importing application. If you’re having problems, it’s all documented on the new (and still under construction) Trailblazer website.

Another great thing is: you can bundle assets right into your cell’s views directory and include them in your asset pipeline.

├── cells
│   ├── comment_cell.rb
│   ├── comment
│   │   ├── show.haml
│   │   ├── comment.css
│   │   ├── comment.coffee

Using the asset pipeline is documented here.

View Inheritance

Cells can inherit views from their parents. If a view is not found in the local directory, it is looked up in the parent’s directories.

class PostCell < CommentCell
end
 
PostCell.prefixes #=> ["app/cells/post", "app/cells/comment"]

The new explicit version of prefixes makes it really simple to understand where views come from.

And we have another awesome feature planned for upcoming Cells versions: Block inheritance. That’s right: Block inheritance. This means you can define overridable parts directly in your view, without the need to implement that in a separate file.

Make sure to read the documentation about view inheritance and check out the Trailblazer book which will explain this nifty topic in great detail.

From Here.

Cells 4 is clean and fast. Go through the code base, and you will see how incredibly simple it is. There will be problems with certain helpers or gems, but I am confident we can fix them together.

From the very beginning I put a lot of effort into communication with the different template engines teams, for example the fine peeps from Haml. My dream is to have a unified interface for capturing and helpers, so markup languages don’t need to get patched by Rails, or patch Rails, or both.

The next big step is evaluating how much of ActionView we can strip and replace with the learnings from 10 years of Cells without changing Rails’ API, dear DHH. I am currently experimenting with Rafael França in a top secret mission, so don’t tell anyone about this.

On Rails 5, Presenters And Form Objects.

Thursday, May 21st, 2015

My original plan to not blog about conceptual problems in Rails for the next months has failed.

I had to overhear too many discussions about presenters, decorators, object-oriented helpers and “oh-so-awesome-and-new” form objects. With Aaron’s great keynote and many comments on the aforementioned concepts, I feel the urge to clarify what’s a presenter, a view model, and a form object.

Presenters

I completely agree with tenderlove when he says there doesn’t have to be a presenter library. Presenters (or decorators?) are usually composition objects that add presentation logic to attributes.

However, what many people ask for is the ability to map widgets, or fragments, or parts of their UI, to something in Rails. And this something has turned out to be a mix of controller code, before_filter, partials and helpers. And many people are not happy with this, as their widget is not encapsulated and not reusable across their app.

To summarize: what people want in order to implement widgets is

  • A place in the file system for code and templates.
  • An asset where to put that Ruby code.
  • The ability to render partials in order to present their widget.

Especially the latter one is important and is what makes the difference between presenters and widgets: I want to render templates in order to present an arbitrary object in my UI. And I don’t want to hack ActionView in any way to achieve that.

View Models

This was my motivation to write Cells a good while ago. Instead of cluttering widget logic across the entire framework, there’s a new abstraction layer to solve this. It gives you a view model class where you put presentation logic, but it also lets you render templates.

However, these are not global templates but views that sit in an encapsulated directory, just like the view model’s code is isolated in the cell and doesn’t have global access. Likewise, JavaScript and CSS code can be bundled with the cell. This makes a cell reusable across many controllers, or even apps.

app
├── cells
│   ├── comment
│   │   ├── cell.rb
│   │   ├── show.haml
│   │   ├── grid.haml
│   │   ├── comment.coffee

I am not gonna argue any more whether or not you need Cells. Some people like it, some prefer using POROs and hack Rails’ rendering into that object to achieve what Cells does.

My point is that Cells view models give developers a defined structure and standard how to implement view fragments (not to speak about how Cells handles view inheritance, polymorphic views, caching, and more, hahahaha).

So next time you talk about presenters, ask yourself: Am I talking about a strictly attributes-decorating thing? Then that’s a decorator. As soon as this involves rendering of views, you might want to check out view models.

Form Objects

The other thing I need to clarify is form objects.

We all know that the way validations are handled in Rails models is a mess. It breaks down as soon as you need to use a model in two different contexts, for two different forms. Everyone reading this blog post has felt the pain with accepts_nested_attributes, which is supposed to handle deserialization of nested forms.

And this brings me to the point of this section. One job of a form object is validating an object graph (e.g. an album composed of songs with artists) and collect validation errors in the top object.

The other job is the deserialization of the incoming hash. And this is completely underestimated by Rails core. Deserialization is the actual problem of forms. Validating and bubbling up errors is easy.

How do I parse a hash into an object? Where do I attach this object? Do I create a new object for that hash fragment, or do I need to find an associated object in the database? How do I handle additional semantics like deleting objects and save? And, how do I prevent the persistence layer from getting involved until validation is done?

Reform

This is the real issue a form object (the way we expect it) has to solve. Again, I’d like to point you to another gem called Reform. In Reform, a separate class takes care of that. You define validations and properties in a new class.

class AlbumForm < Reform::Form
  property :name
  validates :name, presence: true
 
  collection :songs do
    property :title
    validates :title, presence: true
  end
end

Deserialization and validation are done with separated entities. While a representer internally takes care of deserializing the incoming hash, validation is handled by the form itself. Usually, you don’t have to worry about this as it happens automatically.

The upcoming Reform 2.0 is doing this in a very neat way, where you can use your Roar representer for parsing, and the Reform object for validation, making it reusable for both document APIs and UI forms. It’s possible to completely replace deserialization with your own code without losing the decoupling from the persistence that Reform gives you.

This is the result of years of work, running into problems, taking a step back, reconsidering, collecting feedback from hundreds of use cases, and so on.

Please don’t brand a form object as a validation, only. There’s more to it to solve the actual problem we have.

And, yes, ActiveForm started as a pure copy of Reform, and then got “re-implemented”. Let’s not fight.

Rails 5: Stillstand

One last thing: Rails 5 comes with a new “render anywhere” feature where ApplicationController.render lets you render partials from virtually everywhere. While this might look startling at first, this is the exact wrong way to go.

A globally accessible renderer is the lowest-level tool you can give a developer. Instead of providing new, object-oriented abstraction layers to solve problems, Rails core resists the idea of introducing new concepts for the sake of Basecamp-compati.. sorry, backward-compatibility.

The result will be render calls from models, hundreds of different “presenter” implementations across Rails projects, and confused people who don’t know where to put their code.

Conclusion

I hope I managed to point out what’s a presenter, a view model and a form object.

My message is: there are gems to help you solve a lot of problems that have been around since Rails’ inception. These solutions are mature and used in thousands of production apps. Many people have put a lot of work into them.

The fact that Rails core now, after almost 10 years, slowly starts to pick up ideas like form objects, is a good sign. However, I am skeptical if view models and real form objects will ever make it into Rails core. Luckily, we got gems to fill the gap.