Android MVI First Impressions

Editor’s note: This post on MVI was written in August 2017; however, we felt it appropriate to share it now, as we will be posting a sequel about Flux, Redux and MVVM. It’s a work in progress, and we wanted to give you the whole picture of our journey with these code-organization concepts on the Android side of our mobile development team.


The beginning

Over the last two months we experimented with Kotlin on Android and tried out the Conductor library on a small project. In the meantime we upgraded from the first version of RxJava to v2. Even though everything was a bit unfamiliar, we managed to adapt, and we figured that if everything was new anyway, we might as well try a new architecture too.

After searching through various libraries online, we came across the MVI architecture by Hannes Dorfmann. We went through his blog and realized that it addressed most of the shortcomings of our previously used architecture (problems with state restoration, complex interfaces sometimes resulting in unreadable code, etc.), and since we had used his Mosby library in the past, the code itself was not foreign to us.

We also liked that the library is built on reactive extensions, which we already use in our apps. It has an extension for the Conductor library too, so everything was ready for merging.

Android MVI library

The benefits

  • Reactiveness

The basic concept of the MVI architecture is that user interactions are converted into Observables. These are exposed to the presenter through the View’s interface. The presenter calls the bindIntents method, where all the Observables are flat-mapped to the targeted business-logic calls. The resulting observables are then mapped to the View’s ViewStates through a reduce method, where we handle the state changes. Finally, the new ViewState is passed to the render() method of the exposed View interface. As we can see, the whole unidirectional data flow is built upon reactive extensions, which provide tons of built-in operators to transform the data as desired. Furthermore, we can easily handle UI-related problems like debouncing with Rx.

The library provides a base class for presenters (MviBasePresenter) which handles the lifecycle of the Disposables, so you don’t have to worry about disposing everything manually. It also has an internal relay implementation built upon Rx Subjects: view states are handled by a BehaviorSubject, so when you reattach the presenter to the View, it will receive the last ViewState from the subject. One more important thing to mention is that the presenter should never run into an onError action; all the bound observables have to handle onError and transform it into a ViewState which represents the corresponding failure.
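To make the reduce step and the “error as state” rule concrete, here is a minimal, framework-free Kotlin sketch (all names are hypothetical; in a real app this fold would run inside an Rx scan()/reduce() operator): every partial result, including a failure, is folded into an immutable ViewState, so nothing ever escapes as an onError signal.

```kotlin
// Hypothetical types; in a real app the fold below runs inside an Rx scan()/reduce().
data class ViewState(
    val loading: Boolean = false,
    val items: List<String> = emptyList(),
    val error: Throwable? = null
)

sealed class PartialChange {
    object Loading : PartialChange()
    data class Loaded(val items: List<String>) : PartialChange()
    data class Failed(val cause: Throwable) : PartialChange() // error as state, not onError
}

// Folds a partial change into the previous state, producing the next ViewState.
fun reduce(previous: ViewState, change: PartialChange): ViewState = when (change) {
    is PartialChange.Loading -> previous.copy(loading = true, error = null)
    is PartialChange.Loaded  -> previous.copy(loading = false, items = change.items)
    is PartialChange.Failed  -> previous.copy(loading = false, error = change.cause)
}
```

Because the reducer is a pure function, state transitions stay testable without any Android or Rx machinery.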


  • Clean code

The approach produces a clean exposed View interface: one render() method, plus some Observable<…> functions which represent the user interactions and all the events that change your Model and, in turn, your View’s state.
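Such a View interface might look like the sketch below. All names are hypothetical, and the `Observable` class is a stub standing in for `io.reactivex.Observable` only so the snippet compiles on its own:

```kotlin
// Stub standing in for io.reactivex.Observable, only so this sketch is self-contained.
class Observable<T>

data class ContentViewState(
    val loading: Boolean = false,
    val items: List<String> = emptyList()
)

interface ContentView {
    fun loadIntent(): Observable<Unit>     // emitted when the user triggers a (re)load
    fun searchIntent(): Observable<String> // emitted as the user types a query
    fun render(state: ContentViewState)    // the single entry point for UI updates
}
```

Every user interaction becomes one intent method, and render() remains the only way state reaches the UI.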

The presenters will only contain the transformation logic, written with the help of reactive extensions (Intent → flatMap to business logic → map to ViewStateChange → reduce to new ViewState → render() on View). This way we can get rid of all the exposed void calls that were present in the MVP architecture, and with the help of MviBasePresenter (or a custom BasePresenter implementation) we don’t need to worry about the subscribed intent disposables; the base class will take care of them.

On the View you have to transform all the user interactions and events into observables (which become your intents on the presenter side). Here we can utilize the RxBinding library by Jake Wharton, and you can apply limitations such as debouncing and throttling to reduce resource usage.

As the example shows, almost all of the logic is eliminated from the View’s implementation. The only logic that needs to be implemented is the processing of the ViewState – this can be further simplified if you use DataBinding, but that’s a different story.
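The ViewState processing itself can stay a simple branch over the state’s fields. The sketch below (hypothetical names) returns a label instead of touching widgets, so it runs standalone; in a real View these branches would toggle a ProgressBar, a RecyclerView and an error Snackbar:

```kotlin
data class ViewState(
    val loading: Boolean = false,
    val items: List<String> = emptyList(),
    val error: Throwable? = null
)

// Hypothetical render logic: one branch per mutually exclusive UI situation.
// In a real View each branch would update the corresponding widgets.
fun render(state: ViewState): String = when {
    state.loading        -> "show progress"
    state.error != null  -> "show error: ${state.error?.message}"
    else                 -> "show ${state.items.size} items"
}
```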

Even more complex logic can be merged into a single intent; let’s check out an example of this.

  • Less boilerplate

In the past, using MVP-like libraries, you always had to keep an eye on disposing all the subscriptions before the presenter went down, or sooner or later your app crashed. This architecture strongly depends on the RxJava2 library, which is used to create and manage the Model-View-Intent data flow, and using the provided MviBasePresenter implementation you don’t have to worry about disposing anything; the parent class will handle it for you. Of course, if you prefer a custom base Presenter class that doesn’t extend MviBasePresenter, you have to take care of the disposal logic yourself.

Another aspect which comes into play is the rendering phase, where the MVI architecture produces, in my opinion, less boilerplate code than the Mosby MVP library or other MVP-based implementations. Furthermore, developers can eliminate even more boilerplate if they use Android DataBinding. The View itself has only one render() function, with one parameter describing the whole ViewState, so you only need to bind the UI to the actual ViewState instance.

Simplifying complex logic

In this example, we assume that we have an API behind our ContentRepository which supports pagination. The API response contains a list of items (the exact type doesn’t matter, it only affects the RecyclerAdapter implementation), and the request should contain the starting item position and the number of items that the response should contain. The logic has two parts: the first part loads the top X items of the remote list on screen startup, and the second part loads more items when the user scrolls close to the bottom of the currently displayed list. If we can manage to merge the two parts into one intent, the View’s interface will be as simple as this:

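A sketch of what that merged interface could look like (hypothetical names; the `Observable` class is a stub standing in for `io.reactivex.Observable` so the snippet compiles alone):

```kotlin
// Stub standing in for io.reactivex.Observable, only so this sketch is self-contained.
class Observable<T>

data class ContentViewState(
    val items: List<String> = emptyList(),
    val loading: Boolean = false
)

interface ContentView {
    // Emits the position of the next item to load: 0 on startup (preload),
    // then the first position of each following page as the user nears the list's end.
    fun loadMoreIntent(): Observable<Int>
    fun render(state: ContentViewState)
}
```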
The returned observable emits the position of the next requested item.

Let’s assume that the data is well formed for rendering, to keep the example code as clean as possible. (In other cases the presenter should also handle the data mapping, etc.)

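The presenter’s pipeline can be sketched with plain Kotlin collections standing in for the Rx operators (all names hypothetical; `runningFold` plays the role of Rx’s scan()):

```kotlin
data class ViewState(val items: List<String> = emptyList())

// Hypothetical stand-in for the paginated repository call:
// returns `count` items starting at `position`.
fun loadPage(position: Int, count: Int): List<String> =
    (position until position + count).map { "item $it" }

// The bindIntents pipeline, simulated on a list of intent emissions.
// In Rx this would be: loadMoreIntent.flatMap { loadPage(it, pageSize) }
//                                    .scan(ViewState(), ::reduce)
fun statesFor(intentPositions: List<Int>, pageSize: Int = 3): List<ViewState> =
    intentPositions
        .map { position -> loadPage(position, pageSize) }                 // flatMap to business logic
        .runningFold(ViewState()) { state, page ->                        // reduce to new ViewStates
            state.copy(items = state.items + page)
        }
```

Each intent emission loads one page and the reducer appends it to the accumulated list, so preload and load-more share a single flow.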
As we can see, we implemented only one data flow for both the preload and the load-more logic.

And finally, let’s check what happens in the View’s implementation.

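The core of that View-side logic is the threshold check that turns a scroll event into the next page’s start position. Sketched here as a pure function with hypothetical names; in the real View it would run inside RxBinding’s `scrollEvents().filter { … }.map { … }.startWith(0)` chain:

```kotlin
// Emits the next page's start position only when the last visible item is
// within `threshold` items of the end of the list; otherwise the event is
// filtered out (represented here by returning null).
fun nextPageStartOrNull(lastVisible: Int, totalItems: Int, threshold: Int = 2): Int? =
    if (totalItems - 1 - lastVisible <= threshold) totalItems else null
```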
The loadMore flow is initiated as we scroll close to the bottom of the list. It is detected using RxBinding’s scrollEvents() observable: we filter the emitted scroll events, keeping only those indicating that we are near the end of the list, and then map each such event to an Integer representing the position after the last current item (which is the first one in the next paging block). The preload stage can be represented as a startWith(0) call, which emits position 0 on subscription, thereby initiating the preload from the 0th position.


Everything went well until we faced the first onLowMemory kill of the app. The Conductor library handled the backstack and view restoration well, but the presenter didn’t know anything about its previous state, which made the model and the view go out of sync. So we checked how we should propagate the last known state to the presenter in a state-restoration use case, which is covered in part 6 of the blog series (linked below). We made our ViewStates Parcelable, but then we faced two other problems. First, the ViewState can grow really large (think of a paginated list, etc.), which can hit Parcelable’s memory limit. Second, what should we do if the View was in a transient state (loading started, but the network response never arrived)?

Both can be worked around using some kind of disk caching (like the nytimes/store library), as also suggested in the 6th part of the architecture blog series, but in some cases this can lead to very complex state-change logic. On the other hand, if the UI is separated well according to the actual use cases, this complexity can be managed, but the separation will produce more presenter classes.

A cacheless implementation could look something like the code below, but it still has the drawback of the Parcelable memory limitation.
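A sketch of the idea, with the actual Parcelable plumbing omitted and all names hypothetical: before saving, the transient parts of the state are dropped so we never restore a View stuck in a loading state with no request in flight, and on restore the saved state becomes the seed of the reducer instead of an empty ViewState.

```kotlin
data class ViewState(
    val items: List<String> = emptyList(),
    val loading: Boolean = false,
    val error: Throwable? = null
)

// Before persisting (in a real app: writing the Parcelable into the saved
// instance state), collapse transient fields so a restored View is never
// stuck "loading" with no request actually running.
fun toPersistable(state: ViewState): ViewState =
    state.copy(loading = false, error = null)

// On restore, the saved state seeds the reducer
// (in Rx terms: scan(restoredState, ::reduce) instead of scan(ViewState(), ::reduce)).
fun restore(saved: ViewState?): ViewState = saved ?: ViewState()
```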

Final thoughts

This is just a small glimpse into the capabilities of the architecture and the library. I was really surprised by how much this toolchain simplified our code. I hope you found this little review useful. If you’re interested in the internal mechanisms and a detailed implementation guide, please read the full series about the MVI architecture on Hannes Dorfmann’s blog.


As always, if you have questions, don’t hesitate to reach out to us in the comment section and for more similar articles, follow us on our Facebook or Twitter pages.

Tamás Agócs

Mobile Application Developer at Wanari
Agócs is our very own shepherd for the Android team. He’s committed to pushing the team’s technological knowledge to the limits and beyond....