Recently we rolled out Representable 2.3. The main addition here is the ability to suppress wraps.

When talking about wraps, I am not referring to deliciously rolled flat bread filled with mouth-watering vegetables, grilled chicken and chilli sauce, no, I am thinking of container tags for documents.

Wraps, y’all!

Usually, you’d define the document wrap on the representer class (or module, but my examples are using Decorator).

class SongDecorator < Representable::Decorator
  include Representable::Hash
  self.representation_wrap = :song # wrap set!
 
  property :name
end

When rendering a Song object, the document will be wrapped with "song".

song = Song.new(name: "I Want Out")
 
SongDecorator.new(song).to_hash
#=> {"song"=>{"name"=>"I Want Out"}}

Vice-versa, when parsing, the representer will only “understand” documents with the wrap present.

song = Song.new
 
SongDecorator.new(song).from_hash({"song"=>{"name"=>"I Want Out"}})

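After parsing, the wrap is stripped and the values are written to the object via its public setters (assuming Song exposes a name= writer, as above).

song.name #=> "I Want Out"
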
I know, this is terribly fascinating.

Nested Representers

A popular concept in Representable and Roar is to nest representers. While this can be done with inline blocks, many people prefer explicitly nesting two or more classes.

class AlbumDecorator < Representable::Decorator
  include Representable::Hash
  self.representation_wrap = :albums # wrap set!
 
  collection :songs, decorator: SongDecorator
end

I reference the SongDecorator explicitly. This allows me to use it in two places.

  • To render and parse single song entities, I can use SongDecorator directly.
  • In a nested document with a list of songs, the same decorator can be used, given you desire an identical representation in the album view.

When rendering an album, however, every song is now wrapped.

album = Album.new(songs: [song, song])
AlbumDecorator.new(album).to_hash
#=> {"albums"=>
#     {"songs"=>[
#       {"song"=>{"name"=>"I Want Out"}},
#       {"song"=>{"name"=>"I Want Out"}}
#   ]}}

Most probably not what you want.

I’ve seen several workarounds for this. Mostly, people maintain two decorators per entity, one with wrap, one without, where common declarations are shared using a module.

This is very clumsy and I do not understand why people put up with it instead of asking for a nice solution to that common problem. Maybe I’m not accessible enough.

Suppressing Wraps.

When working with Jonny on roarify, a client gem for the Shopify API and implemented using Roar, I dropped my inaccessible facade in exchange for beers and we implemented a solution: The wrap: false option.

class AlbumDecorator < Representable::Decorator
  # ..
  collection :songs, decorator: SongDecorator, wrap: false # no wrap!
end

This will parse and serialize songs without wrapping them, again.

AlbumDecorator.new(album).to_hash
#=> {"albums"=>
#     {"songs"=>[
#       {"name"=>"I Want Out"},
#       {"name"=>"I Want Out"}
#   ]}}

A simple enhancement with great impact – we were able to reduce representers by 38.1%.

Thanks for the beers, Jonnyboy! I miss you too!

A few days ago I pushed the next version of Reform: Version 2. While this is still a release candidate, it can be considered stable.

The reason I blog as if it was a major release is: I want you to test, try, and complain. Speak now or forever hold your peace! Now is the time to make me add or change features before we push the final, stable 2.0.

Here’s why Reform 2 was necessary, and, of course, why it’s awesome.

UPDATE: This is a release note directed to Reform users. If you want to learn more about Reform, read an introduction post.

Too Big!

There’s not a single amazing new feature in Reform 2. That is, if you quickly skim over the changes.

Of course, a lot of things have changed, but more on the inside of Reform.

Reform was getting too big. The form object was doing presentation, deserialization of incoming data, data mapping, coercion, validation, writing to persistence and saving.

For a gem author, monster objects are (or should be!) a nightmare. It is incredibly hard to follow what happens where in big objects, so I extracted a huge chunk of logic into a separate gem.

The form object now really only does validation, everything else is handled via Disposable and Representable.

The Architecture Now.

Both deserialization and the mapping of form data to persistence objects like ActiveRecord models are now completely decoupled.

[Image: architecture-reform-2]

To cut it short: Deserializing of the params hash into a validatable object graph is done by a representer. Validation happens in the form itself. Coercion, syncing and saving all happens in the form’s twin.

Less Representable.

I removed a lot of representable-specific mapping logic, mainly because it was incredibly hard to understand. For example, you can now actually grasp what methods like #prepopulate! do by looking at the source.

This has also sped up Reform by 50%. That’s right – it is much faster now thanks to explicit, simple transformation logic.

No Rails, More Lotus!

Reform 1 used ActiveModel::Validations for validations. This still works, but you can also chuck Rails into the bin and use Lotus::Validations instead, removing any Rails dependency from your forms.

class SongForm < Reform::Form
  include Reform::Lotus
 
  property :title
  validates :title, presence: true
end

While Reform was dragging the activemodel dependency around, this is now up to you. Reform still supports Rails but with a very low gravity.

Deserialization.

In #validate, to parse the incoming params into the object graph, an external representer is used. This could be any kind of representer and thus allows you to parse JSON, XML and other formats easily into an object graph.

Nevertheless, the representer will simply operate on the twin API to populate the form. This means, you can basically use your own deserialization logic.

form = SongForm.new(song)
 
form.title = "Madhouse"
form.band = Band.new
form.band.name = "Bombshell Rocks"
 
form.validate({})

The above example is a naive implementation of a deserializer without overriding parts of validate. You can set properties and add or remove nested objects. The twin will take care of mapping that to its object graph.

Forms and JSON

Trailblazer takes advantage of that already and allows JSON “contracts” that can deserialize and validate JSON documents.

You can do that manually, too.

class SongRepresenter < Roar::Decorator
  include Roar::JSON
  property :title
end
 
form.validate('{"title": "Melanie Banks"}') do |json|
  SongRepresenter.new(form).from_json(json)
end

This will use SongRepresenter for the deserialization. The representer will call form.title= for you. After that, the form proceeds with its normal validation logic as if it were a hash-based form.

In case I failed to make my point: this allows using forms for document APIs!

Coercion

In earlier versions, Reform implemented coercion in the deserialization representer which sometimes was kinda awkward. Coercion now happens in the twin.

form.created_at = "1/1/1998"
form.created_at #=> <DateTime 01-01-1998>

You can also override the form’s setter methods to build your own typecasting logic. Many people did that already in Reform 1, but in combination with the representer this could mess things up.
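
Here’s a minimal sketch of such a setter override. The parsing logic is made up for illustration, and super is assumed to delegate to the generated accessor.

require "date"

class SongForm < Reform::Form
  property :created_at

  # hypothetical typecast: the deserializer calls this setter with the raw value.
  def created_at=(value)
    super(DateTime.parse(value))
  end
end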

Populators

When deserializing, Reform by default tries to find the matching nested form for you. Often, there is no nested form yet; that’s why we provide options like :populate_if_empty that will add a nested form corresponding to the particular input fragment.

Using the :populator option was a bit tedious and you needed quite some knowledge about how forms work. This has changed in Reform 2 and is super simple now.

In a populator, you can use the twin API to modify the object graph.

populator: lambda do |fragment, collection, index, options|
  collection << Song.new
end

This primitive populator will always add a new song object to the existing collection. Note how you no longer have to care about adding a nested form, as you had to in Reform 1. The twin will do this for you.

Pre-populators

I’ve seen many users writing quirks to “fill out” a form before it is rendered, for example, to provide default values for visual fields or pre-selecting a radio button.

Reform 2 introduces the concept of prepopulators that can be configured per property.

property :title, prepopulator: lambda { self.title = "The title" }

Again, prepopulators can use the twin API to set up an arbitrary object graph state. They have to be run explicitly, usually before rendering, using #prepopulate!.
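
A quick usage sketch with the property from above (rendering code omitted):

form = SongForm.new(Song.new)
form.prepopulate! # run all prepopulators before rendering.
form.title #=> "The title"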

Hash Fields

A feature I personally love in Reform 2 is Struct. It allows you to map hashes to properties.

Say you had a serialized hash column in your songs table.

class Song < ActiveRecord::Base
  serialize :settings # JSON column.
end
 
Song.find(1).settings 
#=> {admin: {read: true, write: true}, user: {destroy: false}}

“Working with hashes is fun!” said no one ever. Instead, let Reform map that to objects.

class SongForm < Reform::Form
  property :settings, struct: true do
    property :admin, struct: true do
      property :read
      property :write
      validates :read, inclusion: [true, false]
    end
  end
end

You can have an unlimited number of nestings in the hash. Every nesting results in a nested form twin to work with.
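
Reading then goes through those nested form twins; a sketch, assuming the settings column from above:

form = SongForm.new(Song.find(1))
form.settings.admin.read #=> true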

The Struct feature is described in this blog post in greater detail.

Syncing and Saving

The sync and save methods have been completely extracted and are now implemented in Disposable.

Option Methods

A nice addition that I use a lot is option methods: you can specify dynamic options not only with a lambda, but also as a symbol referencing an instance method.

property :composer, populate_if_empty: :populate_composer! do
  # ..
end
 
def populate_composer!(fragment, options)
  Artist.new
end

This greatly cleans up forms when they become more complex. A cool side-effect: you can use inheritance better, too, and reuse option methods.

State Tracking

Since nested forms are now implemented as twins, you can use Disposable’s state tracking to follow what happened on your form during validate.

State tracking is incredibly helpful for Imperative Callbacks and other post-processing logic.
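
As a rough sketch, assuming a form with a title property:

form.validate(title: "Best Of")
form.changed?(:title) #=> true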

More Documentation!

As you might have noticed, I have started to document all my gems on the new Trailblazer page.

I’d like to point you to the upcoming Trailblazer book, too. In 11 chapters, it discusses every aspect of Reform you can think of, as Reform is an essential part of this new architecture.

As a side-note: I mainly wrote this book to save myself from answering particular questions a hundred times. The Trailblazer book really talks about all my gems in great detail, and it is a nice way for you to support a decade of open-source work, too.

Conclusion

With Reform 2.0, my dream architecture has come true: my vision of what a form object should do and what should be abstracted into a separate layer is implemented, and I am very happy with it.

The code should be significantly easier to read and change, too. And it is faster.

It all adds up – Reform 2 is already deployed on hundreds of production sites, so update today and let me know what you think!

I had a hard time getting MiniTest::Spec working with Capybara matchers, in combination with Rails integration tests and cells tests. I almost switched to Rspec but then finally figured out how simple it is.

Why People Use Rspec.

The reason people use Rspec is: It works. Everything popular is supported out-of-the-box provided by the hard work of the Rspec team. You don’t have to think about how integration tests may work or where that matcher comes from.

In Minitest, which is my personal favourite test gem, you have the following gems to pick from.

  • minitest-spec-rails
  • minitest-rails-capybara
  • minitest-rails
  • minitest-capybara
  • capybara_minitest_spec

There are probably more. I tried combining them, but either integration tests didn’t work, matchers here or there didn’t work, the page object wasn’t available, and so on. It was a nightmare.

How it works!

Fortunately, the solution was pretty simple.

gem "minitest-rails-capybara"

The awesome minitest-rails-capybara will also install minitest-rails and minitest-capybara.

In your test_helper.rb, you add the following line.

require "minitest/rails/capybara"

This loads all necessary files and adds Capybara spec matchers to the test base classes.

Integration Tests

I then write integration tests as follows.

class CommentIntegrationTest < Capybara::Rails::TestCase
  it do
    visit "/comments"
    page.must_have_content "h1"
  end
end

It’s important to derive your test from Capybara::Rails::TestCase which is totally fine for me as I don’t like describe blocks that magically create a test class for you. Separate test classes just make me feel safer.

No Controller Tests.

I don’t write controller tests in Rails anymore because they are bullshit. They create the illusion of a well-tested system. In production, it will break. This is a result of this code.

Right, that’s 700 lines to set up a fake environment for your tested controller. 700 lines of code are 100% likely to diverge from real application state: your tests will pass, your code in production breaks.

In the Trailblazer architecture, controller tests are taboo, you only write real integration tests, operation tests, and cell tests, which brings me to the next point.

Cell Tests

The only problem I had with this approach was that my cell tests broke.

class CommentCellTest < Cell::TestCase
  controller ThingsController
 
  it do
    html = concept("comment/cell/grid", thing).(:show)
    html.must_have_css(".comment")
  end
end

I got exceptions like the following.

NoMethodError: undefined method `assert_content' for 
  #<CommentCellTest:0xadcb284>

The solution was to include the new set of assertions into the cell tests, too. I did that in my test_helper.rb file.

Cell::TestCase.class_eval do
  include Capybara::DSL
  include Capybara::Assertions
end

It only took me a few months to figure that out. Thanks to the authors of all those great gems!

Example

I hope this will help you use the amazing MiniTest in your application. My example can be found here.

Disposable gives you Twins. Twins are non-persistent domain objects. They know nothing about persisting things, hence the gem name.

They

  • Allow me to model object graphs that reflect my domain without restricting me to the database schema.
  • Let me work on that object graph without writing to the database. Only when syncing does the graph write to its persistent model(s).
  • Provide a declarative DSL to define schemas, schemas that can be used for other data transformations, e.g. in representers or form objects.

Some of its logic and concepts might overlap with the excellent ROM project. I am totally open to using ROM in the future and to continuously having late-night/early-morning debates with Piotr Solnica about our work.

However, I needed the functionality of twins in Reform, Roar, Representable, and Trailblazer now, and most of the concepts have evolved from the Reform gem and got extracted into Disposable.

Agnostic Front.

The title of this post is misleading on purpose: First, I know that many people will read this post because it has an offending title.

Second, it mentions ActiveRecord in a negative context even though I actually love ActiveRecord as a persistence layer (and only that).

Third, Disposable doesn’t really care about ActiveRecord. The underlying models could be from any ORM or just plain Ruby objects.

Twins

Twins are classes that declare a data schema.

class AlbumTwin < Disposable::Twin
  property :title
end

Their API is ridiculously simple. They allow reading, writing, syncing, and optional saving, and that’s it.

When initializing, properties are read from the model.

album = Album.find(1)
twin  = AlbumTwin.new(album)

Reading and writing now works on the twin. The persistence layer is not touched anymore.

# twin read
twin.title #=> "TODO: add title"
# twin write
twin.title = "Run For Cover"
 
# model read
album.title #=> "TODO: add title"
twin.title  #=> "Run For Cover"

Once you’re done with your work, use sync to write state back to the model.

twin.sync
 
album.title #=> "Run For Cover"

Optionally, you can call twin.save which invokes save on all nested models. This, of course, implies your models expose a #save method.
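
For example, with the album from above (assuming it is an ActiveRecord model):

twin.sync
twin.save # calls #save on the album (and on any nested models).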

Objects, The Way You Want It.

Everything Disposable does could be done with ActiveRecord, in a more awkward way, though.

For example, Disposable lets you do compositions really easily – a concept well proven in Reform.

class AlbumTwin < Disposable::Twin
  include Composition
 
  property :id,      on: :album
  property :title,   on: :album
  collection :songs, on: :cd do
    property :name
  end
  property :cd_id,   on: :cd, from: :id
end

You configure which properties you want to expose and where they come from. And: you can also rename properties using :from.

The twin now exposes the new API.

twin = AlbumTwin.new(
  album: Album.find(1),
  cd:    CD.find(2)
)
twin.cd_id #=> 2

Of course, this also lets you write.

twin.songs << Song.create(name: "Thunder Rising")

As the composition user, I do not care or know about where songs come from or go to.

All operations happen on the twin only. Nothing is written to the models until you say sync. This is something I am totally missing in ActiveRecord. I will talk about that in a minute.

Hash Fields.

Another pretty amazing mapping tool in Disposable is Struct. This allows you to map hashes to objects.

Let’s assume your Album has a JSON column settings.

class Album < ActiveRecord::Base
  serialize :settings # JSON column.
end
 
Album.find(1).settings 
#=> {admin: {read: true, write: true}, user: {destroy: false}}

This is a deeply nested hash, a terrible thing to work with. Let the twin take care of it and get back to real object-oriented programming instead of fiddling with hashes.

class AlbumTwin < Disposable::Twin
  property :settings do
    include Struct
    property :admin do
      include Struct
      property :read
      property :write
    end
 
    property :user
  end
end

This gives you objects.

twin = AlbumTwin.new(Album.find(1))
twin.settings.admin.read #=> true
twin.settings.user #=> {destroy: false}

You can either map keys to properties (or collections!) or retrieve the real hash.

Writing works likewise.

twin.settings.admin.read = :MAYBE

As always, this is not written to the persistent model until you say sync.

album.settings[:admin][:read] #=> true
twin.settings.admin.read = :MAYBE
twin.sync
album.settings[:admin][:read] #=> :MAYBE

Working with hash structures couldn’t be easier. Note that this also works with Reform, giving you all the form power for hash fields.

class AlbumForm < Reform::Form
  property :settings do
    include Struct
    property :admin do
      include Struct
      property :read
      validates :read, inclusion: [true, false, :MAYBE]
    end
  end
end

This opens up amazing possibilities to easily work with document databases, too. Remember: Disposable doesn’t care if it’s a hash from ActiveRecord, MongoDB or plain Ruby.

Collection Semantics

One reason I wrote twins is because the way ActiveRecord handles collections is tedious. For instance, the following operation will write to the database, even though I didn’t say so.

song = Song.new
cd   = CD.find(1)
cd.songs = [song]
song.persisted? #=> true

This is a real problem. Say you want to set up an object graph, validate it and then write it to the database. Impossible with ActiveRecord unless you use weird workarounds like cd.songs.build, which is completely counter-intuitive.

song = cd.songs.build
song.persisted? #=> false

I want normal Ruby array methods to behave like normal Ruby array methods. What if I don’t have the cd.songs reference yet when I instantiate the Song? Twins simply give you the collection semantics you expect.

twin = AlbumTwin.new(album)
song = Song.new
twin.songs = [song]
 
song.persisted? #=> false
album.songs #=> []

The changes will not be written to the database until you call sync.

Deleting works analogously to writing, moving, and replacing.

song_twin = twin.songs[0]
twin.songs.delete(song_twin)
 
twin.sync
album.songs #=> []

You can play with any property as much as you want, the persistence layer won’t be hit until syncing happens.

Change Tracking.

Another feature extremely helpful for post-processing logic as found in callbacks is the state tracking behavior in twins. Field changes will be tracked.

twin.changed?(:title) #=> false
twin.title = "Best Of"
twin.changed?(:title) #=> true

You can also check if a twin has changed, which is the case as soon as one or more properties were modified.

twin.changed? #=> true

This works with nested twins and collections, too.

twin.songs << Song.new
twin.songs.changed? #=> true
twin.songs[0].changed? #=> false
twin.songs[1].changed? #=> false

On collections, #added, #deleted and friends help you to monitor what has changed in particular.

twin.songs << Song.new
twin.songs.added #=> [<SongTwin ..>]
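
Deletions are tracked the same way, assuming #deleted mirrors #added:

song_twin = twin.songs[0]
twin.songs.delete(song_twin)
twin.songs.deleted #=> [<SongTwin ..>]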

Several other goodies like persistence tracking help to write a full-blown event dispatcher, which I’m gonna discuss in a separate blog post. If you’re curious, chapter 8 of the Trailblazer book is about callbacks, change tracking and post-processing.

Twins and Representers.

Representers are Ruby declarations that render and parse documents. Have a look at the Roar gem to learn how they are used. Anyway, twins are the perfect match for representers: while the twin handles data modelling, the representer does the document work.

class Album::Representer < Roar::Decorator
  include Roar::JSON
 
  property :id
  property :title
 
  collection :songs, class: Song do
    property :name
  end
 
  link(:self) { album_path(id) }
end

The composition twin could now be used in combination with the representer.

twin = AlbumTwin.new(album: Album.find(1), cd: CD.new)

Note that the CD is a brand-new, fresh and shiny instance without any songs added to it, yet.

We then use the representer to parse the incoming JSON document into Ruby objects.

json = '{"title": "Run For Cover", songs: [{"name": "Military Man"}]}'
Album::Representer.new(twin).from_json json

This will populate the twin.

twin.songs #=> [<SongTwin name: "Military Man">]

After syncing, the CD will contain songs.

twin.sync
 
cd.songs #=> [<Song id:1 name:"Military Man">]

Roar, Representable and Reform come with mechanisms to optionally find existing records instead of creating new ones, and so on. The topic of populators is covered in chapters 5 and 7 of the Trailblazer book.

Both twins and representers internally use declarative for managing their schemas. This means you can infer representers from twins, and vice-versa.

class Album::Representer < Roar::Decorator
  include Roar::JSON
  include Schema
 
  from AlbumTwin
 
  # add properties.
  link(:self) { album_path(id) }
end

Deserialization is a task that’s poorly covered by Rails. With twins and representers, parsing documents into object graphs becomes object-oriented and predictable. Where there were complex nested hash operations, probably involving gems like Hashie, there are now clean, encapsulated and manageable objects that parse and populate.

Onwards!

Twins are supported in all my gems and are the fundamental approach for data transformations. They are an integral part of Reform 2, where every form is a twin. The form is responsible for validation and deserialization, the twin for data mapping.

Use them, make them faster, better, enjoy the simplicity of intuitive object graphs that reflect your domain, not your database schema, and never forget: Nothing is written to the persistence layer until you call sync!

The Cells gem has helped many developers to re-structure and re-think their view layer in Rails. It provides view models that embrace parts of your UI into self-contained widgets.

What used to be partials, filters, helpers and controller code is now moved into a separate class. View models are plain Ruby and use OOP features like inheritance while benefiting from encapsulation. The times of a global view namespace and of views without interfaces are over.

class CommentCell < Cell::ViewModel
  def show
    render
  end
 
private
  def author_link
    link_to model.author
  end
end

Cells can render their own views which sit in a private directory.

Logicless Views.

In views, we try to gently enforce simplicity: when calling a method in the view, it is called on the cell instance. The view is always executed in the cell’s context. There is no concept of “helpers” and no data being copied between controller and view anymore.

<%= model.body %>
Written by <%= author_link %>

Every method called in the view has to be defined on the cell. Every helper you intend to use has to be included into the class – remember: everything is an instance method.

No Rails!

What makes me really happy about Cells 4 are the following two lines of code that have substantially changed how Cells does its work. Those lines represent the end of a painful era for Cells: they completely decouple the gem from Rails.

module Cell
  class ViewModel # < AbstractController::Base

That’s right, Cells no longer inherits anything from AbstractController. With our own implementation for rendering templates, we don’t need this dependency anymore. In earlier versions this was mainly done to import Rails’ #render and the rabbit hole of dependencies coming with it.

spec.add_dependency "uber", "~> 0.0.9"
spec.add_dependency 'tilt', ">= 1.4", "< 3"
# s.add_dependency "actionpack",  ">= 3.0"

We also removed the dependency on actionpack and, in turn, on actionview. ActionView is no longer used in Cells, except for helpers, which brings me to the next point.

Long live Rails!

Hey, hey, don’t you cry. Cells still supports Rails and works exactly as it did before in Rails apps. It still provides Rails’ (actually, non-existing) view “API” and allows you to use helpers, form builders, simple_form_for and all the good guys.

The difference here is you have to include those helpers into your cell class. This might end up in quite a number of includes, as the following snippet illustrates.

class CommentCell < Cell::ViewModel
  include ActionView::RecordIdentifier
  include ActionView::Helpers::FormHelper
  include SimpleForm::ActionViewExtensions::FormHelper

This is not Cells’ fault, though.

Helpers Are Shit.

What gets revealed now is how horribly helpers are implemented in Rails. Not only do they all exist as global methods in one namespace, they also all depend on each other without including the respective modules.

Helpers in Rails simply assume that all the other 250 helper functions are available.

It is now your task to properly include required helper modules yourself. Maybe this will spark an impulse in Rails core to properly decouple helpers, and use more object-orientation and composition instead of the current global PHP functions.

Anyway, most helpers are reported (and tested) to be working in Cells.

Performance. You asked for it.

In the 4.0 release we got rid of many, many lines of code. We also got rid of ActionView. Replacing this jurassic gem with our own 30-line rendering code has sped up rendering by about 25%.

Performance gains could also be achieved by only escaping defined properties of a cell. Where Rails literally escapes every string several times per request, which leads to a significant performance decrease, Cells does this once, and only where you want it.

I need to remark that hardly any dedicated performance work has been done in Cells 4 yet. Path execution improvements will make this even faster in future versions.

View Models

Rendering cells works from virtually anywhere. In controllers and views, Cells brings in a helper to make it straight-forward.

Although this sounds like a contradiction – “didn’t you just say helpers are shit?” – in fact this acts as a single entry point to invoke cells.

<%= cell(:comment, comment).(:show) %>

The new call style allows you to work with the cell instance before rendering. And: you can have as many rendering methods (“states”, as we call them) as you want per cell class.
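
A sketch of what multiple states can look like (the grid state and its view are made up):

class CommentCell < Cell::ViewModel
  def show
    render             # renders the show view.
  end

  def grid
    render view: :grid # a second state with its own view.
  end
end

cell(:comment, comment).(:grid)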

Testing

The same API can be used in tests. Cells comes with Test::Unit/MiniTest support out of the box, and Rspec support can be pulled in via the rspec-cells gem.

it "renders nicely"
  html = cell(:comment, comment).()
  expect(html).to have_content "Hello!"
end

Isolated view rendering tests are inevitable when writing rock-solid components that are reusable across your application.

Upgrading from Cells 3

View models have been around in Cells 3, too, but not as fast. Anyway, if you’re upgrading, you might want to peek inside the upgrading guide. Let us know if you find anything missing in there.

One thing I need to mention: You don’t need to rewrite all your cells – you can still use instance variables and the old-style calling – it’s just not encouraged anymore.

Another point you shouldn’t miss is to include the respective template engine support into your projects. Please read the installation instructions to learn about cells-haml and friends.

Cells Everywhere

With the removal of the Rails dependencies, Cells works in any Ruby environment. You can implement your view models in Lotus or Sinatra, or in plain Ruby scripts.

Many users do that already. I hear that Cells and Roda, a framework I really want to check out, do a great job together.

Outside of Rails, the only thing that needs configuration is where to find the views.

class SongCell < Cell::ViewModel
  self.view_paths = "lib/views"

After defining the view_paths, cells can be rendered anywhere in your application.

SongCell.(song).(:show)

This will instantiate the cell and render the show state. Examples for how to use advanced features like caching and view inheritance can be found in my cells-examples repository.

Mailers, Rake Tasks, Here Comes Cells!

Cells have been used in Rails for many things: In mailers, in rake tasks to compile views, directly hooked to routes to bypass ActionController, and so on.

This is even simpler now as there is no dependency to drag around anymore. You simply instantiate and render your view model. However, some helpers still insist on a controller instance to operate properly. For example, they might need the config object.

Pass the controller into a cell for that. Being a special dependency in a Rails environment, this will delegate all known controller methods to the real controller.

SongCell.(song, controller: controller).(:show)

We’ve been using this “technique” in Cells for years without major problems cough.

Engines and the Asset Pipeline

Cells can be bundled into gems and Rails engines and allow you to distribute them as proper widgets to other applications.

Nothing really changes, you simply chuck them into your gems and they become renderable in the importing application. If you’re having problems, it’s all documented on the new (and still under construction) Trailblazer website.

Another great thing is: you can bundle assets right into your cell’s views directory and include them in your asset pipeline.

├── cells
│   ├── comment_cell.rb
│   ├── comment
│   │   ├── show.haml
│   │   ├── comment.css
│   │   ├── comment.coffee

Using the asset pipeline is documented here.

View Inheritance

Cells can inherit views from their parents. If a view is not found in the local directory, it is looked up in the parent’s directories.

class PostCell < CommentCell
end
 
PostCell.prefixes #=> ["app/cells/post", "app/cells/comment"]

The new explicit version of prefixes makes it really simple to understand where views come from.

And we have another awesome feature planned for upcoming Cells versions: Block inheritance. That’s right: Block inheritance. This means you can define overridable parts directly in your view, without the need to implement that in a separate file.

Make sure to read the documentation about view inheritance and check out the Trailblazer book which will explain this nifty topic in great detail.

From Here.

Cells 4 is clean and fast. Go through the code base, and you will see how incredibly simple it is. There will be problems with certain helpers or gems, but I am confident we can fix them together.

From the very beginning I put a lot of effort into communicating with the different template engine teams, for example the fine peeps from Haml. My dream is to have a unified interface for capturing and helpers, so markup languages don’t need to get patched by Rails, or patch Rails, or both.

The next big step is evaluating how much of ActionView we can strip and replace with the learnings from 10 years of Cells without changing Rails’ API, dear DHH. I am currently experimenting with Rafael França in a top secret mission, so don’t tell anyone about this.

My original plan to not blog about conceptual problems in Rails for the next months has failed.

I had to overhear too many discussions about presenters, decorators, object-oriented helpers and “oh-so-awesome-and-new” form objects. With Aaron’s great keynote and many comments on the aforementioned concepts, I feel the urge to clarify what’s a presenter, a view model, and a form object.

Presenters

I completely agree with tenderlove when he says there doesn’t have to be a presenter library. Presenters (or decorators?) are usually composition objects that add presentation logic to attributes.

However, what many people ask for is the ability to map widgets, or fragments, or parts of their UI, to something in Rails. And this something has turned out to be a mix of controller code, before_filter, partials and helpers. And many people are not happy with this, as their widget is not encapsulated and not reusable across their app.

To summarize: what people want in order to implement widgets is

  • A place in the file system for code and templates.
  • An abstraction to put that Ruby code in.
  • The ability to render partials in order to present their widget.

Especially the latter one is important and is what makes the difference between presenters and widgets: I want to render templates in order to present an arbitrary object in my UI. And I don’t want to hack ActionView in any way to achieve that.

View Models

This was my motivation to write Cells a good while ago. Instead of cluttering widget logic across the entire framework, there’s a new abstraction layer to solve this. It gives you a view model class where you put presentation logic, but it also lets you render templates.

However, these are not global templates but views that sit in an encapsulated directory, just like the view model’s code is isolated in the cell and doesn’t have global access. Likewise, JavaScript and CSS code can be bundled with the cell. This makes a cell reusable across many controllers, or even apps.

app
├── cells
│   ├── comment
│   │   ├── cell.rb
│   │   ├── show.haml
│   │   ├── grid.haml
│   │   ├── comment.coffee

I am not gonna argue any more whether or not you need Cells. Some people like it, some prefer using POROs and hack Rails’ rendering into that object to achieve what Cells does.

My point is that Cells view models give developers a defined structure and standard how to implement view fragments (not to speak about how Cells handles view inheritance, polymorphic views, caching, and more, hahahaha).

So next time you talk about presenters, ask yourself: am I talking about a strictly attribute-decorating thing? Then that’s a decorator. As soon as this involves rendering of views, you might want to check out view models.

Form Objects

The other thing I need to clarify is form objects.

We all know that the way validations are handled in Rails models is a mess. It breaks down as soon as you need to use a model in two different contexts, for two different forms. Everyone reading this blog post has felt the pain with accepts_nested_attributes, which is supposed to handle deserialization of nested forms.

And this brings me to the point of this section. One job of a form object is validating an object graph (e.g. an album composed of songs with artists) and collect validation errors in the top object.

The other job is the deserialization of the incoming hash. And this is completely underestimated by Rails core. Deserialization is the actual problem of forms. Validating and bubbling up errors is easy.

How do I parse a hash into an object? Where do I attach this object? Do I create a new object for that hash fragment, or do I need to find an associated object in the database? How do I handle additional semantics like deleting objects and saving? And, how do I prevent the persistence layer from getting involved until validation is done?

Reform

This is the real issue a form object (the way we expect it) has to solve. Again, I’d like to point you to another gem called Reform. In Reform, a separate class takes care of that. You define validations and properties in a new class.

class AlbumForm < Reform::Form
  property :name
  validates :name, presence: true
 
  collection :songs do
    property :title
    validates :title, presence: true
  end
end

Deserialization and validation are done by separate entities. While a representer internally takes care of deserializing the incoming hash, validation is handled by the form itself. Usually, you don’t have to worry about this as it happens automatically.

The upcoming Reform 2.0 is doing this in a very neat way, where you can use your Roar representer for parsing, and the Reform object for validation, making it reusable for both document APIs and UI forms. It’s possible to completely replace deserialization with your own code without losing the decoupling from the persistence that Reform gives you.

This is the result of years of work, running into problems, taking a step back, reconsidering, collecting feedback from hundreds of use cases, and so on.

Please don’t brand a form object as a validation, only. There’s more to it to solve the actual problem we have.

And, yes, ActiveForm started as a pure copy of Reform, and then got “re-implemented”. Let’s not fight.

Rails 5: Stillstand

One last thing: Rails 5 comes with a new “render anywhere” feature where ApplicationController.render lets you render partials from virtually everywhere. While this might look startling at first, this is the exact wrong way to go.

A globally accessible renderer is the lowest-level tool you can give a developer. Instead of providing new, object-oriented abstraction layers to solve problems, Rails core resists the idea of introducing new concepts for the sake of Basecamp-compati.. sorry, backward-compatibility.

The result will be render calls from models, hundreds of different “presenter” implementations across Rails projects, and confused people who don’t know where to put their code.

Conclusion

I hope I managed to point out what’s a presenter, a view model and a form object.

My message is: there are gems to help you solve a lot of problems that have been around since Rails’ inception. These solutions are mature and used in thousands of production apps. Many people have put a lot of work into them.

The fact that Rails core now, after almost 10 years, slowly starts to pick up ideas like form objects, is a good sign. However, I am skeptical if view models and real form objects will ever make it into Rails core. Luckily, we got gems to fill the gap.

After almost two weeks of intense hacking, I’m happy to announce Representable 2.2. This is a pure performance update. I removed some very private concepts and methods, hence the minor version bump. Anyway, with the 2.2 version you can expect a speed-up of 50% and more for both rendering and parsing.

This goes for Representable itself, but also for Roar, Disposable and Reform, which all use Representable internally for data transformations.

The public API hasn’t changed at all, so you’re safe to update.

To get a quick overview of the code changes, have a look here. Right, that’s only a handful of lines of code that have changed.

Profiling: How We Did It.

It all started with a benchmark my friend Max was running to render a nested object graph. Here’s the structure of the document.

class FoosRepresentation < Representable::Decorator
  include Representable::JSON
 
  collection :foos, class: Foo do
    property :value
 
    property :bar, class: Bar do
      property :value
    end
  end
end

As you can see, this is a collection of foos. Every foo contains a nested bar object with a value property.

So he set up a profiling test with 10,000 foos containing one bar each. You can find his profiling repository here. For Representable, that basically means “iterate 10,000 foo objects and 10,000 bar objects and serialize them”, making it 20,001 objects to represent in total.

Benchmarks, Slow.

With Representable 2.1.8, to serialize this tree took about 3.1 seconds on my work machine.

Total: 3.138545
 
 %self   total   calls    name
 30.17   0.947   20001    Module#name
  7.01   0.220   790039   Representable::Definition#[]
  5.71   0.308   460025   Representable::Binding#[]
  1.68   0.135   100009   Class#new

As you can see, Module#name is called many times, and more than 100,000 objects are instantiated.

Here’s the same benchmark for deserializing a document with 20,001 objects.

Total: 3.045645
 
 %self  calls    name
 31.78   20001   Module#name
  6.07  710029   Representable::Definition#[]
  5.48  480021   Representable::Binding#[]
  2.46  160008   Class#new

It’s interesting to observe that parsing a document into an object graph takes about the same time as rendering it. Anyway, many objects are created and a lot of time is wasted.

Benchmarks, Faster!

I applied some simple implementation fixes and a structural change, resulting in the following benchmarks. We’ll discuss how I achieved that in a second.

First, we ran the rendering benchmark.

Total: 1.141275
 
 %self   calls   name
  6.83   310100  Representable::Definition#[]
  2.96   50007   Uber::Options::Value#evaluate
  2.78   40001   Repr..::Deserializer::Prepare#prepare!

Wow, that’s not 3.1 but 1.1 seconds to render a deeply nested object graph. No unnecessary classes are instantiated anymore and the time-consuming Module#name call has vanished.

I could achieve exactly the same for parsing time.

Total: 1.173969
 
 %self    calls    name
  7.37    330092   Representable::Definition#[]
  3.33    70007    Uber::Options::Value#evaluate
  3.27    100029   Representable::Binding#[]
  2.09    20001    Representable#representation_wrap

What I didn’t measure yet is the memory footprint, which should be dramatically (yes, dramatically) smaller, as the number of objects needed to parse or render object graphs has been minimized. I bet you want to know now how this 50-75% speed-up was possible, and here we go.

Don’t Ask When You’re Not Interested.

The first thing I tackled was to get rid of the Module#name call. This resulted from computing the wrap for every represented object that was being serialized or parsed. Every represented object. Even though most objects don’t need a wrap, and the default case is to not have wrapping.

I moved the name query into an optional block and things got faster.

Now, Module#name is only being called when we actually want to know the wrap.

Too Many Bindings.

However, this was just one step to increasing the performance.

Another issue clearly visible in the ruby-prof outputs was that we created a Binding for every object. Every represented object. Bindings are a concept in Representable that handle rendering and parsing for one property.

In case this was a nested property, this binding would create a representer instance, again, which in turn would create bindings again, for every represented property.

[Image: bindings-before-2.2]

My beautiful diagram, which makes me really proud, illustrates that: Every Foo instance in the collection will create a representer and a binding instance, even though they are identical.

When writing Representable a few years ago I had a feeling that this might become a bottle-neck one day, but being focused on designing I “forgot” about it, and no one ever complained.

Reuse Bindings.

The solution is dead simple. I introduced Representable::Cached that will save the Bindings for later reuse.

By making the Binding stateless we won’t have any trouble with stale data. The binding used to save a lot of run-time information like the represented object, and more. This now has to be passed from the outside for every run, making it reusable.
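
A toy illustration of that idea (this is not Representable’s actual code, just the principle of stateless, cached bindings):

require "ostruct"

class NameBinding
  # run-time state (the represented object) is passed in per call.
  def render(represented, hash)
    hash["name"] = represented.name
  end
end

class TinyDecorator
  def bindings
    @bindings ||= [NameBinding.new] # built once, reused for every object.
  end

  def to_hash(represented)
    bindings.each_with_object({}) { |binding, hash| binding.render(represented, hash) }
  end
end

TinyDecorator.new.to_hash(OpenStruct.new(name: "Foo")) #=> {"name"=>"Foo"}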

I know, you love my diagrams. Check out how the object graph has changed.

[Image: bindings-in-2.2]

Say we are representing one Foo object: its decorator will now cache the binding for the Bar property and reuse it. This results in only a handful of objects needing to be created.

To give you some figures: in the aforementioned benchmark setup, instead of having to instantiate 200,000 bindings, all we need to do is create four! One for the collection of foos, one for value in every foo, one for a Bar object and the last to represent the value in a bar.

Cached: An Optional Feature!

While I fully trust my changes (it would be bad if I didn’t), I decided to add this as an optional feature in 2.2. You need to activate it manually on the top-level representer.

class FoosRepresentation < Representable::Decorator
  include Representable::JSON
  feature Representable::Cached
 
  collection :foos, class: Foo do
    # and so on

It’s a bit late to mention this, but Cached is designed for Decorator representers. It also works with module representers, but that will unnecessarily pollute your models. Please don’t do that.

Reusing Decorators Across Requests

An interesting new option is caching representers between requests. This will speed up rendering and parsing documents in your API code many times – not to speak of the reduced memory footprint.

Once a representer’s done its job, it can be reused using the #update! method.

decorator = SongRepresenter.new(song)
decorator.to_json # first request.
 
decorator.update!(better_song) # second request.
decorator.to_json

This is enough to reuse a representer, even a deeply nested graph.

Once I find some time, I will implement this in roar-rails. Caching and reusing representers across requests will give a significant performance boost to many Roar/Representable apps out there.
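
Until then, here’s a rough sketch of manual reuse in a controller. The memoization is process-wide and not thread-safe, and the controller and model names are made up; treat it as an illustration of the #update! idea only.

class SongsController < ApplicationController
  # hypothetical: build the decorator once per process, then update! it per request.
  def self.song_representer(song)
    (@song_representer ||= SongRepresenter.new(song)).update!(song)
    @song_representer
  end

  def show
    representer = self.class.song_representer(Song.find(params[:id]))
    render json: representer.to_json
  end
end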

Using Ruby-Prof For Tests

One last thing I want to mention is how I use the ruby-prof output for tests.

RubyProf.start
representer.to_hash
result = RubyProf.stop

# print the flat profile into a string so we can assert against it.
out = StringIO.new
RubyProf::FlatPrinter.new(result).print(out)
data = out.string

# a total of 4 properties in the object graph.
data.must_match "4   Representable::Binding#initialize"

What might look like weird, crazy bullshit is a fantastic way of asserting that your speed-up actually works. Since we cannot test for the speed of test runs (every machine and run is different), I simply test for object creations.

In the Cached tests, I set up a simple but complex enough object graph and render and parse it. I let ruby-prof track object creation and method invocations. Afterwards, I make sure that really only four bindings were created (along with other instantiation counts, of course).

And: this works!

No, I’m not gonna use Rspec’s mocking and expectations for that. First of all, I’ve managed not to use mocking in any of my gems’ test suites and want to keep it that way. Second, the more test code I add, the more I will miss and the more will go wrong.

Letting a super low-level tool like ruby-prof track method calls is a bullet-proof way to test your speed-up. I love it and was surprised by its accuracy.

There Was More.

A few more improvements went into 2.2, and a lot of lookups can still be improved. I feel like I’ve talked enough for today.

Instead of throwing more words at you, I’d like to thank Max Güntner for setting up most of the benchmark code and encouraging me to work on performance. And, special thanks go out to Charlie Somerville who turned out to be a Ruby-internal-monster and was a great help in explaining the depths of the Ruby VM to me and why attr_reader is faster than your own method, and so on.

Incredible what you can still learn after having hacked Ruby for more than 10 years.

Many people use the Representable gem to render documents from Ruby objects, or to parse incoming documents into an object graph.

This is great for implementing document APIs with JSON or XML. Since Representable does both ways, rendering and deserializing, it gives you a great tool to cover huge parts of your API code.

What many people do not know is that Representable is also useful when transforming objects to other objects. This is particularly marvelous when decorating object graphs or customizing objects.

For example, we do that a lot in Reform, since a form object is mainly data transformations and pushing objects in different representations back and forth.

Transforming Objects.

In versions before 1.2.6, we used to first transform the source object to a hash, and then apply that hash to the target object.

Consider the following object graph.

source = OpenStruct.new(
  name: "30 Years Live", 
  songs: [
    OpenStruct.new(id: 1, title: "Dear Beloved"), 
    OpenStruct.new(id: 2, title: "Fuck Armageddon")
])

For simplicity, I’m using OpenStructs to implement an album composed of songs. Let’s assume we need to translate this object graph into objects that look like this.

Album = Struct.new(:name, :songs)
Song = Struct.new(:title)

It is needless to say that the target classes could be ActiveRecord derivatives or whatever you fancy. Here, Struct will help us focus on what we need to do: transform a graph of OpenStructs into Structs.

The Clumsy Way. Oh, and Slow.

In older versions, transformation to a differing object graph worked via an intermediate hash, using a representer.

class AlbumRepresenter < Representable::Decorator
  include Representable::Hash
 
  property :name
  collection :songs, class: Song do
    property :title
  end
end

Here’s the transformation.

target = Album.new
hash   = AlbumRepresenter.new(source).to_hash
AlbumRepresenter.new(target).from_hash(hash)

This will transform the OpenStruct graph into a tree of an Album instance holding two Song instances. Of course, this transformation doesn’t really make sense, but I hope it proves my point: this is incredibly clumsy and slow, as it needs two representers.

Representable::Object Helps!

I am surprised that I didn’t come up with that solution earlier, but here’s how it works now.

target = Album.new
AlbumRepresenter.new(target).from_object(source)

Just one call to from_object is required. Speaking of requirements: here’s how the representer changes.

require "representable/object"
 
class AlbumRepresenter < Representable::Decorator
  include Representable::Object
 
  # .. the same as above.
end

Note how the Hash engine of Representable got replaced with Object. And now, check out the simple transformation.

When running the representer, the exact same thing as above will happen, resulting in a target object graph as follows.

#<struct Album
 name="30 Years Live",
 songs=
  [#<struct Song title="Dear Beloved">,
   #<struct Song title="Fuck Armageddon">]>

The representer will simply traverse the source object (the OpenStructs), deserialize necessary composite objects and map (“copy”) properties to the target instance.

This Is Not The End.

My example was simple, probably too simple, but please keep in mind that the transformation can use all kinds of options such as :instance, renaming properties using :as, and an unlimited number of nestings. Also, runtime options like :exclude and friends will work as well.

The new Object representer engine is a great tool and I have started using it heavily in Reform and Disposable, as it simplifies the code and speeds things up by about 20%. If you want to play with it, here’s the above code.

Happy new year, dear friends! I hope your dreams come true this year, finally! Releasing Roar 1.0 has been a dream of mine and that’s how we kick off 2015.

Shorter Namespaces.

Releasing Roar 1.0 was a good occasion to introduce brief namespaces and constant names in Roar. Unsure about what Roar was gonna be, I initially started with constant names such as Roar::Representer::Feature::Hypermedia a few years back.

Major version bumps allow you to change things without deprecating them – yay! Since the concept of a representer is found throughout the Roar gem, we ditched the Representer namespace. The same happened to Feature. It is nonsense to prefix a feature module – modules are always features.

JSON-API Support.

Roar 1.0 comes with full JSON-API support. That is both rendering and parsing JSON-API documents. Roar is presently the only gem that does both ways – all other gems are either pure client gems or can only render JSON-API documents, like ActiveModel::Serializer (AMS).

I am mentioning that because Roar constantly gets compared to AMS. And this is simply wrong. AMS is nothing more but an object-oriented rendering engine. Roar is a document framework that uses the same definition to render and to deserialise documents for further processing. This is a bit like comparing Haml with the JSON gem.

A Minimal JSON-API Representer.

Let’s start with the simplest representer for a JSON-API document. In this example, I use a Roar decorator, nevertheless, you are free to use a module representer in case you fancy the extend approach.

class SongDecorator < Roar::Decorator
  include Roar::JSON::JSONAPI
  type :songs
 
  property :id
end

By mixing JSONAPI into the representer you import semantics and DSL for this hypermedia format.

Rendering JSON-API.

Given you had a Song instance at hand, here’s how you render a JSON-API document.

SongDecorator.prepare(song).to_json
#=> "{"songs":{"id":"1"}}"

This is a singular document, representing an individual entity. JSON-API differentiates between singular and collection documents.

Personally, I dislike this decision as it makes it harder for both server and clients to handle documents. They always have to check whether it’s a singular or a collection document.

Anyway, here’s how you would render a collection of songs.

songs = [song, song2]
SongDecorator.for_collection.prepare(songs).to_json
#=> "{"songs":[{"id":"1"},{"id":"2"}]}"

The for_collection class method will return the collection representer. That one only accepts a collection of songs and renders a JSON array.

Parsing JSON-API.

As already noted, the reason I created Roar is to provide a framework to handle both ways of dealing with representational documents. Here’s how to parse a JSON-API document to a Ruby object.

song = Song.new
json = '{"songs":{"id":"1"}}'
 
SongDecorator.prepare(song).from_json(json)
 
song.id #=> 1

Roar deserialises the properties back to a Ruby object. This happens by using public setter methods on the represented model, only.

The same works with a collection. Here, you need to provide a collection of new (or existing) songs to update, exactly as we did a minute ago.
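
A sketch of that, assuming for_collection behaves for parsing the way it does for rendering:

songs = [Song.new, Song.new]
json  = '{"songs":[{"id":"1"},{"id":"2"}]}'

SongDecorator.for_collection.prepare(songs).from_json(json)
songs[0].id #=> 1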

Simple Attributes.

You can add as many resource attributes as you want using property.

class SongDecorator < Roar::Decorator
  #..
 
  property :id
  property :title
  property :track_number

By defining properties, Roar knows what to render and what to parse from incoming documents.

Relationship Links.

JSONAPI allows you to globally link to related resources in a document. In Roar, you use link blocks to specify those relationships.

class SongDecorator < Roar::Decorator
  #..
  link "songs.album" do
    {
      type: "album",
      href: "http://example.com/albums/{songs.album}"
    }
  end

This will render global links into the document.

"songs" => {
  "id" => "1",
},
"links" => {
  "songs.album"=> {
    "href"=>"http://example.com/albums/{songs.album}", 
    "type"=>"album"
  }
},

Note that the DSL is not final, yet, as we’re still collecting user input.

To-One Relationships.

Representing an association to a single object is called a to-one relationship in JSON-API. You can define that per document in Roar.

class SongDecorator < Roar::Decorator
  #..
  has_one :composer
  has_many :listeners

As you can see, Roar’s JSON-API implementation lets you define associations using has_one and has_many.

This will add a links section to each represented object in the document.

{
  "songs" => {
    "id" => "1",
    "links" => {
      "composer" => "10",
      "listeners" => ["8"]
    }
  }
}

Depending on the type of association it renders an ID or an array of IDs. There is no magic to that: Roar simply calls song.composer and collects the IDs from each object.

Compound Documents.

The JSONAPI media format also allows embedding parts of other, associated resources into the document. This is called a compound document.

In Roar, the compound block acts like a sub-representer to specify the nested documents.

class SongDecorator < Roar::Decorator
  #..
 
  compound do
    property :album do
      property :title
    end
 
    collection :musicians do
      property :name
    end
  end

Again, this is pure Roar DSL and works exactly the way you nest representers in Roar/Representable using inline representers.

This renders associated documents into the linked section.

"songs" => {
  "id" => "1",
 
  "linked" => {
    "album"=> [{"title"=>"Eruption"}],
    "musicians"=> [
      {"name"=>"Eddie Van Halen"},
      {"name"=>"Greg Howe"}
     ]
  }
}

The implementation of JSONAPI in Roar is relatively simple and reuses a lot of existing mechanisms.

More Features? Of course!

The JSONAPI module also allows adding meta data and more. Check out the README for the complete DSL.

The way media formats are supported in Roar makes it straight-forward to try out different specifications without too much change in the representer. We support HAL-JSON and JSON-API out-of-the-box.

Please give the new JSON-API implementation a go and let me know what else you need!

Reform 1.2

Today I released Reform 1.2 – the bug-free edition™. I am extremely excited about it. This release doesn’t break any existing code (hence the minor bump) but brings a bunch of new features that I already use all across my apps.

As always, a complete list of the CHANGES can be found in the CHANGES file.

Non-ActiveModel Models

Reform has supported non-ActiveRecord objects (“POROs”) from the very beginning: this was one of the reasons we wrote it. However, in Rails apps we automatically included methods to help the Rails form builder infer field types. This didn’t go well if your model wasn’t an ActiveRecord one.

To allow form helpers like simple_form to access your form’s model for type interrogation you need to activate it manually now.

class SongForm < Reform::Form
  include ModelReflections

Now a TEXT column will be displayed as a textarea, and so on.

Skipping Deserialisation

Often a form needs to skip or ignore data from an incoming submission. For example, when all fields of a nested property are empty, you don’t want to process this item. In Rails, this is known as reject_if in the nested-attributes code.

You can do so now in Reform using :skip_if.

class SongForm < Reform::Form
  property :title, 
    skip_if: lambda { |v, *| v == "Bad song" }

Now, consider the following validation.

form.validate(title: "Bad song")

Given that very parameter hash, this ignores the incoming title property as if it weren’t present in the hash. The title is not updated on the form or model later.

This works for both properties and nested forms.

To ignore blank nested forms you can use a macro we provide.

class SongForm < Reform::Form
  property :band, skip_if: :all_blank do
    property :name
    property :label
  end

This does exactly what you think it does! And of course, this works with collections as well.
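
For instance, a sketch with a nested collection (the property names are made up):

class AlbumForm < Reform::Form
  collection :songs, skip_if: :all_blank do
    property :title
  end
end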

Dirty Tracker!

In older versions, when syncing the form to the model, Reform used to call every setter for every property – regardless of whether they’d actually changed or not. Now, Reform tracks which fields have changed.

form.validate(title: "Violins")
 
form.changed?(:title) #=> true

In order to only update those fields that have changed you need to include Sync::SkipUnchanged into your form.

class SongForm < Reform::Form
  include Sync::SkipUnchanged
 
  property :title
  property :genre

In #sync, the #title= writer on the model is only called when the title has actually changed. This is extremely helpful for doing advanced form processing with file uploads, etc.

Performance Speed-Up

I could achieve a speed-up of about 85% with an extremely simple trick. Reform internally uses Representable for all kinds of data transformations. It used to create and configure arbitrary representer classes at run-time, which was costly. Those representer classes are now computed once on the form class level, resulting in an incredible speed-up and probably a smaller memory footprint.

Dynamic Syncing

The way #sync pushes all attributes back to the model is very generic. Generic code is a good thing. Generic code gets even better when it’s easily extendible. And this is what the new #sync API offers you.

form.sync(
  title: lambda { |v, opts| form.model.title = sanitize(v) }
)

You can now add a lambda per property which is then called when syncing only if that property was changed. As you can see, the block is called in the caller’s context and allows you to access the form itself, the environment, the processed value and more.

This is a great way to dynamically process a property at run-time.

Dynamic Saving

While the dynamic syncing might smash many problems you sometimes need to run code after the model was saved, for instance, to include the model’s ID in your workflow.

No problem with Reform 1.2!

form.save(
  image: lambda { |v, opts| upload!(v, form.model.id) }
)

Opening Reform’s API for those two steps makes the form object a perfect extension for a Trailblazer operation and allows separating form logic from persistence – one of the key concepts of Trailblazer.

The ActiveForm Drama

Before letting you run to try out all those new things, I quickly want to comment on the “ActiveForm drama” that got averted before it even took off.

Rails recently included active_form into their main repository. I got a bit offended by that since ActiveForm clearly started as a clone of Reform and then got “rebranded” or “re-implemented”. I explicitly had to remind Rails core to add an attribution to this project which copies my README almost identically, and also recycles my DSL, API and all concepts like nesting or defining form fields explicitly instead of guessing.

While I’m cool with that in general, I’m not entirely cool when Rails does that. Those who’ve been with me for the last decade might know why.

Anyway, the Rails core team acted exemplary, apologised for the lack of attributions, removed the debatable repository from the core account, and more.

I’m not sure what they’re gonna do but my blood temperature is back to semi-hot and I don’t mind ActiveForm anymore. At least, the concept of “forms” has finally arrived in Rails core!

Also, I am pretty impressed by the Rails community and how this “accident” was handled on both sides. <3