Flutter accessibility: Introduction (I)

Photo by Daniel Ali on Unsplash

When we talk about accessibility and accessible apps, we mean building applications that are “responsive” regardless of any user-specific needs.

In this context, “responsive” means that our app is suitable for all users, whether they have a visual impairment, a hearing impairment or a physical disability. In the end, it’s all about developing inclusive mobile apps.

Why should we care about accessibility…?

There are several good reasons to build accessible apps, ranging from personal values to financial motivations.

Still not convinced…? Check the following infographic:

Accessibility statistics

How do we implement accessible apps…?
P.O.U.R. principles to the rescue!

Luckily for us, accessibility has been discussed for years, especially as applied to the digital world.

As a result, we do not have to reinvent the wheel: most of the practices already established on the web are also valid for mobile applications.

All these accessibility directives & guidelines are bundled into the P.O.U.R. principles, standing for:

  • (P)erceivable. The data displayed in our app should be presented in a way that our users can “get it” easily.
    Example: adding a text description to an image, so the users with visual impairment know the contents depicted by the image.
  • (O)perable. Users must be able to perform specific actions while using our app and also navigate from one screen to another.
    Example: adding support for different kinds of input or even voice commands.
  • (U)nderstandable. We should assist our users with tips and clear directions so they can perform a task with no hassle.
    Example: form with hints and proper error messages.
  • (R)obust. Building compatible solutions based on industry standards, so we guarantee those will continue working in the future.

Accessibility with P.O.U.R.

Accessibility API in Flutter

If you’re reading this, you probably heard the “Everything is a widget” Flutter motto. So… how do we make our Flutter apps accessible? As you’ve probably guessed, by adding more widgets!

The Flutter API includes specific widgets for accessibility, such as “Semantics”, “MergeSemantics” and “ExcludeSemantics”:

For instance, we can wrap a widget with “Semantics” and manage its alternative text description:

@override
Widget build(BuildContext context) {
  return Semantics(
    label: "Welcome to the accessible counter app in Flutter",
    child: MaterialApp(
      title: 'Accessibility Demo',
      ...
      home: const MyHomePage(),
    ),
  );
}

Moreover, most common widgets include default accessibility properties. The “Text” widget, for instance, contains the “semanticsLabel” property:

Text(
   "Some title",
   semanticsLabel: "Some accessibility description",
)

In terms of accessibility, the previous snippet would be equivalent to:

Semantics(
   label: "Some accessibility description",
   child: Text("Some title")
)

By using these semantics widgets, we can easily add text alternatives or provide assistance to users, complying with some of the accessibility guidelines. Cross-platform accessibility out-of-the-box! Well, we’re not quite there yet, but it’s certainly a good start 🙂

The semantics tree

You probably know all about the widget tree, so you may be wondering… does it handle semantics too? Not exactly…

Turns out that there is another tree in the Flutter forest… framework: meet the semantics tree!

The semantics tree basically stores the accessibility data of our app. Every time we wrap a component in a “Semantics” widget or use one of the semantics properties, we add a new node to the semantics tree containing the description of the original widget. Eventually this information is retrieved by screen readers and other accessibility tools, so the contents of our app reach all audiences.
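Besides “Semantics”, Flutter also offers widgets that reshape this tree. As a quick sketch, “MergeSemantics” collapses the semantics of its whole subtree into a single node (the “accepted” variable below is a hypothetical state field, not part of the original example):

```dart
// MergeSemantics merges every semantics node in its subtree into one,
// so a screen reader announces the label and the checkbox state as a
// single stop instead of two separate ones.
MergeSemantics(
  child: Row(
    children: [
      const Text('Accept terms'),
      Checkbox(
        value: accepted, // hypothetical state field
        onChanged: (v) => setState(() => accepted = v ?? false),
      ),
    ],
  ),
)
```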

Take into account that Flutter favours composition, and we build our UI by nesting widgets. Sometimes we may have code like:

Padding(
   ...   
   child: Center(
      child: Container(
         margin: EdgeInsets.all(8.0),
         child: Text(
            "Some text", 
            semanticsLabel: "Some text description"
         )     
      )
   )
)   

In this example, we’re displaying some fancy text with styling, using a group of widgets to set its alignment, padding and margins. But the only widget that matters in terms of accessibility is the “Text” widget.

That’s why these 4 widgets in the widget tree produce only 1 node in the semantics tree. So the binding between the widget tree and the semantics tree is not a 1-to-1 relationship: several nodes in the widget tree may map to a single semantics node.
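If we want to see this mapping with our own eyes, Flutter ships a debug helper that dumps the semantics tree to the console. A minimal sketch (note that a semantics client must be active, otherwise the tree is empty, and the exact call signature may vary slightly across Flutter versions):

```dart
import 'package:flutter/rendering.dart';

void inspectSemantics() {
  // Prints the current semantics tree to the console.
  // Semantics must be enabled first (e.g. a screen reader is running,
  // or we called SemanticsBinding.instance.ensureSemantics()).
  debugDumpSemanticsTree();
}
```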

Relationship between widget and semantics tree

Coming up next…

In the next article, we will see different ways of implementing accessibility in a sample Flutter app. See you next time!

References

https://docs.flutter.dev/development/accessibility-and-localization/accessibility

Source code

https://github.com/begomez/Flutter-Accessibility

Property-based testing in Flutter

Photo by Todd Mittens on Unsplash

Although property-based testing is not one of the most “common” testing techniques, it’s been around for a while. Since it can be applied to almost any programming language, including Dart (and Flutter), it’s certainly a tool that may come in handy. Let’s see how it works, starting with a simple example.

Initial example

Suppose we’re checking the user legal age in our program and we have the following implementation:

class AgeManager {

  bool checkLegalAge(int year, int month, int day) =>
      DateTime.now().millisecondsSinceEpoch >=
      getLegalAgeTime(DateTime.parse(
              // DateTime.parse() expects a zero-padded "yyyyMMdd" string
              "$year${month.toString().padLeft(2, '0')}${day.toString().padLeft(2, '0')}"))
          .millisecondsSinceEpoch;

  DateTime getLegalAgeTime(DateTime birthday) {
    return birthday.add(Duration(days: 365 * 18));
  }
}

Forget about the code structure and the “smells” it shows, like not wrapping the DateTime attributes, dealing with milliseconds everywhere or having magic numbers all over the place. Let’s focus on the time-related calculations…

Basically, we take the birthday, add some days over it and compare the resulting date with the current one to decide if someone can use our app.

Everything looks fine, but we throw in some unit tests to make sure, since we want to check what happens:

  • when dealing with boundary values, like 1970 or Y2K
  • if the user has a legal age
  • if the user does not have a legal age

test('When user was born on boundary then check() returns true', () async {
    final mgr = AgeManager();

    final actual = mgr.checkLegalAge(1970, 1, 1);
    const expected = true;

    expect(actual, expected);
  }
);

test('When user is old enough then check() returns true', () async {
    final mgr = AgeManager();

    final actual = mgr.checkLegalAge(2004, 1, 1);
    const expected = true;

    expect(actual, expected);
  }
);

test('When user is NOT old enough then check() returns false', () async {
    final mgr = AgeManager();

    final actual = mgr.checkLegalAge(2010, 1, 1);
    const expected = false;

    expect(actual, expected);
  }
);

All tests pass and our coverage is 100%. So we can call it a day and go home … right?

Code coverage on AgeManager class

Unfortunately, when it comes down to testing, we can tell that we have a bug, but never that we have none. So the only thing we know for sure is that we’ve found no bugs… so far.

Nevertheless, using property-based testing, we could’ve stressed the previous code by running it with several random birthday inputs. Then, sooner or later, we would’ve realised that we did not take into account… leap years! So our implementation is a little bit buggy.
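To see the bug directly, we can feed “getLegalAgeTime()” a birthday that spans several leap years — a sketch using the class above:

```dart
void main() {
  final mgr = AgeManager();

  // Someone born on 2005-03-01 turns 18 on 2023-03-01.
  // But the interval 2005-03-01 .. 2023-03-01 contains 4 leap days
  // (2008, 2012, 2016, 2020), i.e. 18 * 365 + 4 days in total.
  final birthday = DateTime(2005, 3, 1);
  final anniversary = mgr.getLegalAgeTime(birthday);

  // 365 * 18 falls 4 days short: the result lands on 2023-02-25,
  // so the user is considered "legal" four days early.
  print(anniversary);
}
```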

What is property-based testing?

When checking the behaviour of a program, it is virtually impossible to explore all testing scenarios and/or input combinations.

Let’s say we have a function that receives a number and performs some math transformation over it: if we want to be thorough, we should test the method with every integer available.

Since exhaustive input validation is simply not feasible, we end up picking a closed set of example-based input values for our tests and move forward.

But, as we saw in the initial example, this approach may be misleading, since even when our tests pass we may have some “undercover“ bugs.

What if we didn’t have to pick the inputs for our tests, but chose a feature of our program instead? Then we sit down and let the testing framework do all the heavy-lifting regarding the inputs. How does this sound…?

That’s precisely the principle behind property-based testing, which allows us to exercise the program under test more intensely by automating the input generation and the execution of tests.

Example-based testing: focus on the inputs. Property-based testing: focus on the properties

In general, any application:

  • executes a given contract: when provided with valid inputs, the program will return the corresponding outputs
  • satisfies certain invariants, that is, conditions that are always true in the system.

Both contracts and invariants are often referred to as “properties”. These generic characteristics are the target of property-based testing, which leaves input generation aside and focuses on the behaviour of our program and the assumptions we can state about it.

In general, properties can either be implicit or explicit:

  • explicit properties usually have a direct match in our code, so they’re mapped to a method or attribute on some class.

class User {
  int age; //XXX: explicit property here

  …

  bool hasLegalAge() => …;
}
  • implicit properties may be harder to find, since they have no direct match in the underlying code. Sometimes they correspond to a group of attributes and methods that perform some operation together. In other cases, they may be derived data obtained by transforming the main data of our domain.

class WareHouse {

   …
  
  //XXX: set of methods working over the same prop 
  OrderStatus order(String itemName, int quantity) {
    if (inStock(itemName)) {
      takeFromStock(itemName, quantity);  
      return OrderStatus("ok", itemName, quantity);
    } else {
      ...
    }
  }
}

Either way, the goal of this type of testing is “breaking” the program with respect to a given property: that is, finding a set of input values that makes the property evaluate to false.

Once a breaking input is found, the system modifies it automatically, looking for its minimal expression: we want the counterexample in its simplest form, so we can analyse it easily. This simplification process is usually called “shrinking“.

Using input generators

Although we don’t have to think about specific inputs for our tests, we must define their domain (meaning their generic traits). For instance, if our program works with numbers, we should ask:

  • Should the number be positive?
  • … negative?
  • Is zero allowed?
  • Should it handle numbers with decimals?
  • Which math notation do we use to represent it?

Any time we have to create inputs in a certain range or even custom input models (such as instances of a custom “User“ class), we must define methods that provide those objects. These functions are often called generators and they’re invoked automatically when running our property-based tests.

For instance, in the previous birthday example, we’ll need to create random days of the month, so an integer generator that provides values in the range [1-31] will suffice.

// Generator providing random days of the month in the range [1-31]
Shrinkable getRandomDay(Random r, int i) {
  return Shrinkable(r.nextInt(31) + 1);
}
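For the birthday example we also need years and months. A sketch of the companion generators, following the same shape as the snippet above (the exact “Shrinkable” constructor and generator signature depend on the library used):

```dart
// Years in a plausible birthday range [1900-2030]
Shrinkable getRandomYear(Random r, int i) {
  return Shrinkable(1900 + r.nextInt(131));
}

// Months of the year in the range [1-12]
Shrinkable getRandomMonth(Random r, int i) {
  return Shrinkable(r.nextInt(12) + 1);
}
```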

Advantages and disadvantages of property-based testing

By automating input generation and putting the focus on the properties of our system, property-based testing fills an important gap in our testing toolbox, providing:

  • large input coverage
  • high feature compliance

Since property-based tests use abstractions as inputs, they can be easier to read and maintain (as opposed to example-based tests, which rely on hand-picked inputs).

On the other hand, property-based tests may be harder to write at first, especially when we are used to writing example-based tests. Analysing a system in order to identify its properties and formulate expectations about it is an exercise that requires effort, especially in legacy systems or programs with no clear separation of concerns. When property-based tests cannot be written “because I do not see any properties in the system…“, we may have a bigger problem regarding the application architecture.

How does property-based testing work?

In order to carry out property-based tests, we basically need:

  • a test harness environment that allows us to specify the input values we want to use
  • a process to slightly modify (when needed) the inputs provided to tests, so we can perform shrinking
  • some automatic mechanism to iterate over the tests applying different combinations of random inputs

Since the implementation of these features from scratch would be expensive, property-based testing frameworks may come in handy. There’s a list of available libraries at the end of this article.

Property-based testing frameworks features

Regardless of the programming language they’re implemented in, all 3rd party libraries for property-based testing:

  • generate large sets of random inputs automatically
  • execute our tests multiple times
  • programmatically shrink any counterexamples found
  • report the inputs that make the program fail, so we can check and fix the bug

Workflow

  1. Create a test for each property in our system we want to test
  2. If needed, create a generator function that will provide random inputs for the previous test
  3. Specify assertions and/or expectations about the property under test
  4. Run the test to check the behaviour of the program
  5. Check provided test report
  6. If required, capture any input that made the program fail and analyse it further

Property-testing the initial example

The following snippet contains a property-based test for the birthday example using the glados library (hence some class names…):

g.Glados3(getRandomYear, getRandomMonth, getRandomDay).test(
    'When checking birthday then both values in the same month',
    (int year, int month, int day) {
  final mgr = AgeManager();
  final birthday = DateTime.parse(
      "$year${month.toString().padLeft(2, '0')}${day.toString().padLeft(2, '0')}");

  final futureBirthday = mgr.getLegalAgeTime(birthday);

  expect(futureBirthday.month, birthday.month);
  expect(futureBirthday.day, birthday.day);
});

The test uses several generators (for years, months and days of the month) and passes the random set of values obtained as parameters to the test body.

In this case, the property under test is the “legal age” check. What do we know about it? Which assumptions can we state? Well, to start with, we know for sure that:

  • day of the month must be the same on both birthday timestamp and 18th anniversary
  • same goes for the month of the year

So we can convert these assumptions into test assertions and use them to check the behaviour of the program.

After trying a few iterations we bump into a counterexample that breaks the program behaviour:

Test report on our failing test

In fact, there is no need for the 2nd assertion in the test: the 1st one alone is enough to break the program.

As expected, the framework reports back the failing inputs so we can use them to do some digging. In this case, there is no input shrinking, since the date components are already simplified.

Some final notes

  • Although property-based testing comes from the functional-programming paradigm, it can also be applied to object-oriented programming.
  • Property-based testing frameworks are “clever enough” to generate boundary values (null, 0, "", [] and so on) and use them as inputs in the automated tests.
  • This type of testing is not a substitute for traditional unit tests. In fact, both approaches are usually used together to increase the level of confidence in our code.
  • Since the property definition carries some abstraction, literature about this topic sometimes simplifies it by saying that properties are just “parametrised tests“.
  • Any time we find a set of inputs that breaks the program, we should turn it into a specific example-based unit test. This way we make sure the error will not appear again during regression testing.
  • The property-based testing motto was coined by John Hughes: “don’t write tests… generate them!”
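For instance, the counterexample found earlier can be pinned down as a plain example-based test (the dates here are illustrative):

```dart
test('Regression: 18th anniversary keeps the day of the month', () {
  final mgr = AgeManager();

  // Input of the kind reported by the property-based framework
  final birthday = DateTime(2005, 3, 1);

  final anniversary = mgr.getLegalAgeTime(birthday);

  // Fails until the leap-year bug is fixed
  expect(anniversary.day, birthday.day);
});
```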

A few available frameworks

Sample repository

The following repository contains different examples of property-based testing:

https://github.com/begomez/warehouse_prop_testing

Flutter: testing method channels

Photo by Charles Forerunner on Unsplash

Introduction

As we saw in the previous entry, method channels allow us to invoke native code from a Flutter app.

From the testing point of view, method channels may look like a challenge at first: how do we break the strong dependency on the native platform in order to get our code into a test harness?

Luckily for us, Flutter includes some built-in features that make testing method channels a no-brainer!

Sample feature using a method channel

Let’s say we have a helper class in our project that opens the corresponding app stores so the user can update the app.

The redirection to the store is implemented as a web link, managed by the url_launcher Flutter plugin. Under the hood, this plugin uses method channels to handle the links at the native platform level.

A very simple implementation for this class would be something like:

import 'package:url_launcher/url_launcher.dart';

class AppStoreLauncherException implements Exception {
  final String msg;
  const AppStoreLauncherException(this.msg) : super();
}

class AppStoreLauncher {
  const AppStoreLauncher() : super();

  Future<void> launchWebStore({bool isAndroid = true}) async {
    String url = 'https://apps.apple.com/...';
    if (isAndroid) {
      url = 'https://play.google.com/store/apps/details?id=...';
    }

    if (await canLaunch(url)) {
      await launch(url);
    } else {
      throw AppStoreLauncherException('Could not launch $url');
    }
  }
}

Note that the “canLaunch()” and “launch()” methods are the ones provided by the plugin. If we want to test this class, we’ll have to mock the values they return. Let’s see the way to do it…

Workflow

In order to “mock” a method channel:

  1. Create a “fake” method channel using the same unique name
  2. Register a handler so the calls to native code are intercepted
  3. Stub the values returned by the calls on our “fake” method channel
  4. Add some optional “sensing variables”

1. Creating a fake channel

Instantiate a new object, passing as parameter the name of the channel we want to mock. Since names must be unique, they usually look like a reversed domain name. In the current example, we must use the url_launcher plugin name:

MethodChannel mockChannel = const MethodChannel('plugins.flutter.io/url_launcher');

2. Registering the handler

All the information exchanged between Flutter and the native platform is sent as messages. If we want to control the values exchanged, we must use the “TestDefaultBinaryMessengerBinding” mixin.

This class delegates the messaging features to a property called “defaultBinaryMessenger“: that’s the object we have to use in order to gain control over the messages exchanged.

The “TestDefaultBinaryMessenger” API allows us to mock the native code invocations by using “setMockMethodCallHandler()“:

TestDefaultBinaryMessengerBinding.instance!.defaultBinaryMessenger.setMockMethodCallHandler(mockChannel, handler);

This method receives as arguments:

  • the “fake” method channel
  • a function handler that performs the actual stubbing of the returned values: it simply checks the method invoked (passed as parameter) and returns the value we choose for that call

So putting it all together:

Future<bool>? handler(MethodCall methodCall) async {
  if (methodCall.method == "canLaunch") {
    return true;
  }
  return false;
}

TestDefaultBinaryMessengerBinding.instance!.defaultBinaryMessenger.setMockMethodCallHandler(mockChannel, handler);

3. Stubbing values

Since the previous approach is not very flexible, we can wrap the code in a custom method that receives as optional parameters the return values we want to use and registers the handler with them. This way, we can control the “fake” channel at runtime:

void _mockChannelValues({bool canLaunch = true, bool launch = true}) {
   TestDefaultBinaryMessengerBinding.instance!.defaultBinaryMessenger
     .setMockMethodCallHandler(
        mockChannel,
        (MethodCall methodCall) async {
          if (methodCall.method == "canLaunch") {
            return canLaunch;
          } else if (methodCall.method == "launch") {
            return launch;
          }
          return false;
       }
     );
  }

4. Adding optional “sensing variables”

“Sensing variables” are basically redundant properties that give us better insight into a snippet of code. Although they’re usually added to production code, we can use them in test code as well, in order to find out what’s happening under the hood.

In this case, we can log/register every call invoked in our “fake” method channel with a sensing variable. Later we can check these logs to make sure everything went as expected and perform some assertions.

To do so, we only have to declare a new variable:

late List<MethodCall> fakeLog;

and modify it every time a method is invoked:

void _mockChannelValues({bool canLaunch = true, bool launch = true}) {
   TestDefaultBinaryMessengerBinding.instance!.defaultBinaryMessenger
     .setMockMethodCallHandler(
        mockChannel,
        (MethodCall methodCall) async {
           fakeLog.add(methodCall);
           
           //XXX: more code here...
       }
     );
  }

Later we can check its contents, looking for a specific method or a given number of invocations:

expect(fakeLog.length, 2);
expect(
  fakeLog.map((e) => e.method), equals(<String>[
     'canLaunch',
     'launch',
  ]));

Bonus: example unit test

Testing if the Android PlayStore is actually launched when using our helper class:

  test('When android store both canLaunch and launch are invoked', () async {
    _mockChannelValues(canLaunch: true, launch: true);

    await launcher.launchWebStore(isAndroid: true);

    expect(fakeLog.length, 2);
    expect(
        fakeLog.map((e) => e.method),
        equals(<String>[
          'canLaunch',
          'launch',
        ]));
  });

Troubleshooting

Since we’re simulating the usage of 3rd party libraries, native code, etc., we must make sure that the Flutter testing environment is properly configured before running the tests; otherwise we will run into some nasty errors.

So make sure you invoke:

TestWidgetsFlutterBinding.ensureInitialized();

before running your tests.
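Putting the whole setup together, a possible test-file skeleton could look like this (it reuses the “mockChannel”, “fakeLog” and “_mockChannelValues()” helpers defined earlier):

```dart
import 'package:flutter/services.dart';
import 'package:flutter_test/flutter_test.dart';

void main() {
  // Required before any method channel mocking
  TestWidgetsFlutterBinding.ensureInitialized();

  const launcher = AppStoreLauncher();

  setUp(() {
    // Fresh log and stubbed channel for every test
    fakeLog = <MethodCall>[];
    _mockChannelValues();
  });

  tearDown(() {
    // Unregister the handler so tests stay isolated
    TestDefaultBinaryMessengerBinding.instance!.defaultBinaryMessenger
        .setMockMethodCallHandler(mockChannel, null);
  });

  // test(...) cases go here
}
```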

Sample code

As usual, source code is available here.

See you next time!

Flutter: deep dive into the native world using method channels

Photo by Joseph Barrientos on Unsplash

Introduction

Flutter includes a built-in way to communicate with the underlying platform: the so called method channels.

But… wait a second… isn’t Flutter a cross-platform framework? Why should we need to go down into the “native” world then?

Well, sometimes we want to take advantage of a given hardware feature, like the camera or geolocation, and we can’t find a plugin or 3rd party library that suits our needs. Then we have to do all the heavy-lifting ourselves and access the feature natively by using method channels.

General overview

Method channels are just another type of object provided by the Flutter API. Each channel is identified by its name, so every “bridge” of communication between Flutter and the native platform must have a unique identifier.

That’s why we usually set up names by combining the package identifier and some suffix, for example:

static const String PACKAGE = "com.bgomez.flutter_weather";

static const String SUFFIX = "/openweather";

Data exchange

Method channels can be seen as a direct stream of data with the native operating system, allowing us to invoke methods and send or retrieve information.

All data exchanged through method channels is sent as messages: a message is just a bundle with some key-value pairs serialized and deserialized automatically by the platform.

All messages exchanged are sent asynchronously.

System architecture

Flutter method channel architecture

When using method channels, the Flutter framework and its underlying platform follow a client/server architecture.

The most frequent scenario is when:

  • on the Flutter side, we request resources (so Flutter acts as client)
  • on the native side, we perform the operations required and serve the result (so native side acts as server)

However, we can also configure the channel to exchange the roles played by each side.

Project set-up

Apart from the Dart files in our application, when using method channels we will have to add Android and/or iOS native code as well, each one in its corresponding folder.

Workflow

  1. On the native side, implement the required code
  2. On the native side, configure the native “entry point” class
  3. On the Flutter side, create a method channel
  4. On the Flutter side, use the previous object to invoke the native operation

The steps required to get a method channel up & running are the same for both Android and iOS, but the implementation details change on the native side.

The following sections take the Android native implementation as an example, implementing a method channel to retrieve data about the weather forecast.

1. Native side: implementation

To begin with, we will define a class named “OpenWeatherService“, responsible for retrieving the weather forecast from a remote server. The class implementation uses Kotlin coroutines:

object OpenWeatherService {
    val URL = "api.openweathermap.org"

    suspend fun getForecast(
        appId: String,
        lat: Double,
        lon: Double
    ): String {
        // XXX: hit some API and get weather data...
    }
}

2. Native side: configure the entry point

After that, we will “hook” the previous service class, so its methods can be accessed from the native Android app.

In order to do that, we must:

  • register the name of the operations we want to access
  • link each one of these names to the corresponding operation implementation inside the class

In Flutter, the entry point for the underlying Android project is the “MainActivity.configureFlutterEngine()” method. So both registration and linking must be performed inside that method:

private val CHANNEL = "com.bgomez.flutter_weather"
private val SUFFIX = "/openweather"
private val GET_CURRENT = "getCurrentWeather"
private val GET_FORECAST = "getForecast"

override fun configureFlutterEngine(
    @NonNull flutterEngine: FlutterEngine) {
    super.configureFlutterEngine(flutterEngine)

    MethodChannel(
        flutterEngine.dartExecutor.binaryMessenger,
        "$CHANNEL$SUFFIX")
        .setMethodCallHandler { call, result ->
            // Arguments sent from Flutter are read with e.g.
            // call.argument<Double>("lat")

            // CURRENT WEATHER
            if (call.method == GET_CURRENT) {
                val res = OpenWeatherService
                    .getCurrentWeather(appId, lat, lon)
                result.success(res)

            // 24H FORECAST
            } else if (call.method == GET_FORECAST) {
                val res = OpenWeatherService
                    .getForecast(appId, lat, lon)
                result.success(res)
            }
        }
}

Method channel invocations must be performed on the Android UI thread, so we must wrap the previous snippet with some thread-handling code:

override fun configureFlutterEngine(
    @NonNull flutterEngine: FlutterEngine) {
    super.configureFlutterEngine(flutterEngine)

    // Force invocation on UI thread
    android.os.Handler(
        android.os.Looper.getMainLooper()).post {
            //XXX: prev method channel code goes here
        }
}

3. Flutter side: create a method channel

As mentioned before, the Flutter framework includes the “MethodChannel” data type. Instances of this class represent a bridge of communication between Flutter and the native side.

A channel is created directly by invoking the class constructor and passing its name as parameter. We can wrap the operation in a factory method:

static const String PACKAGE = "...";
static const String SUFFIX = "/weather";

MethodChannel create() {
  return MethodChannel("$PACKAGE$SUFFIX");
}

After that, we use the previous function to create our instance:

 final channel = create();

4. Flutter side: invoke the native method using the MethodChannel

Last but not least, we invoke operations on the underlying platform using “MethodChannel.invokeMethod()“, which takes as parameters:

  • the name of the native method we want to execute
  • the message we want to pass to the native side: just a map of key-value pairs with the values required to perform the operation
final json = await channel.invokeMethod(
    "getForecast",
    {
      "lat": settings.city.geo.lat,
      "lon": settings.city.geo.lon,
      "appId": settings.appId
    });

And that would be all! Our method channel is now ready to communicate with the underlying platform.
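Keep in mind that “invokeMethod()” can also fail at runtime, so it’s good practice to guard the call. A sketch (the “args” map stands for the message shown above; the wrapper function is hypothetical):

```dart
import 'package:flutter/services.dart';

Future<String?> fetchForecast(
    MethodChannel channel, Map<String, dynamic> args) async {
  try {
    return await channel.invokeMethod("getForecast", args);
  } on PlatformException catch (e) {
    // Raised when the native handler reports an error via result.error(...)
    print('Native call failed: ${e.message}');
    return null;
  } on MissingPluginException {
    // Raised when no handler is registered for this method name
    print('No implementation found for getForecast');
    return null;
  }
}
```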

Sample code

As usual, check this repository to access the source code. See you next time!

https://github.com/begomez/FlutterForecast

References:

https://flutter.dev/docs/development/platform-integration/platform-channels

Flutter: test coverage

Photo by Fabe collage on Unsplash

Introduction

Test coverage is a code metric that gives us useful insight into a given project. Depending on the coverage level, our confidence in a source set may increase or decrease.

That said, our code will never be bug-free, even with a high coverage level. Still, coverage is an important point to consider when designing a testing strategy.

Project configuration

Dependencies

The Flutter framework ships with some useful tools when it comes down to testing.

In order to enable testing in our project, we just need the following dev dependency:

dev_dependencies:
  flutter_test:
    sdk: flutter

Running tests

Once we have configured the project, tests are launched by executing:

flutter test

The previous command executes all the test files contained in the “/test” directory.

Once it’s done, we get a summary of successful/erroneous tests:

Flutter test summary

The test command has several options, but in this case we will focus on the coverage property:

flutter test --coverage

This option basically keeps track of the lines of code executed (aka “covered”) when running our tests. By the way, a complete reference for the command can be found here.

When executing our tests with the coverage option, the resulting output is an “lcov.info” file. The problem here is that this format stores raw data, so it’s not very user friendly…

lcov format

Introduction

LCOV is a tool that collects the raw coverage data (generated in the previous step) and transforms it into a set of structured HTML pages containing coverage information. It also provides a handy command line interface.

Installation

To install LCOV (on Mac), open a new command line and run:

brew install lcov

That would be all! More details available at the lcov homepage.

Execution

After installing lcov, we can execute a new set of commands related to test coverage, like the generate html command:

genhtml <source_file.info> -o <output_directory>

Where do we get the “*.info” source file…? Well, we got it out-of-the-box when invoking the flutter test command with the coverage option: the generated “lcov.info” file is already in LCOV’s format.

Skipping components when covering

Lcov also allows us to “ignore” certain files or complete directories when tracking the coverage of our project. Any file or directory ignored will not be taken into account when generating the report.

This is useful, for instance, if we store the autogenerated classes of our project in a certain folder, or if we have a directory containing only enumerations and constants (no logic to test there).

In order to customise the directories we want to cover, invoke:

lcov --remove <input_file.info> <files_or_directories_to_ignore> -o <output_file.info>

When ignoring some elements, we can use the * wildcard to build glob patterns. For instance, we can ignore the autogenerated files with the following pattern:

*.g.dart

Scripting for automation

We can use a build automation tool such as make to group all the previous commands into a single task:

flutter test --coverage

lcov --remove coverage/lcov.info "***/constants*/**" "**/*.g.dart" -o coverage/lcov_cleaned.info

genhtml coverage/lcov_cleaned.info -o coverage/html

open coverage/html/index.html
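For instance, a minimal Makefile grouping the previous commands into a single target could look like the following sketch (the target name and the ignore patterns are just examples, reused from the commands above):

```makefile
coverage:
	flutter test --coverage
	lcov --remove coverage/lcov.info "**/*.g.dart" -o coverage/lcov_cleaned.info
	genhtml coverage/lcov_cleaned.info -o coverage/html
	open coverage/html/index.html
```

Running make coverage would then execute the whole chain in one go.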

Plugins

Additionally, there are several plugins that allow us to integrate the test coverage option directly into our IDE. For instance:

https://marketplace.visualstudio.com/items?itemName=Flutterando.flutter-coverage

The plugin adds a new panel to our workspace that can be used to review the coverage data, either grouped (by package) or individually (by file).

Some other plugins go even further and actually embed the coverage data inside the text editor, so every line of code is highlighted in a different color, depending on whether it is covered or not.

Code sample

As usual, check the following repo for the complete source code:

https://github.com/begomez/Flutter-Arch-Template

References

https://pub.dev/packages/test_coverage

Flutter: Test Automation (III)

Photo by Daniele Levis Pelusi on Unsplash

Introduction

Having reviewed the remote configuration and the native setup, we’ll finish this series by checking the CI/CD pipeline configuration required for test automation. So let’s go!

Quick recap

Automation workflow

  1. Create Firebase Project
  2. Configure project by adding an account service
  3. Enable Cloud Tool Results
  4. Update underlying native Android project
  5. Configure underlying native Android Test Runner
  6. Set up Codemagic pipeline

6. Set up Codemagic pipeline

Introduction

Codemagic, from Nevercode, is a CI/CD tool initially designed for Flutter. Nowadays, though, it offers support for almost any native or cross-platform mobile framework as well.

Codemagic automates the process of building, testing and delivering apps. Furthermore, all the build operations take place on their servers, so we don’t need any infrastructure on our side.

Once we register on their site, we can easily link the source code repositories we want to integrate into our pipeline. Moreover, this tool offers both free and paid plans, so it’s suitable for a wide range of projects.

Building workflows

On Codemagic, build workflows can be configured in 2 ways:

  • locally, using a “codemagic.yaml” file
  • remotely, interacting directly with the editor available on the web

Remote workflow editor on codemagic site

Overview of “codemagic.yaml”

For this project, we’ll use the “codemagic.yaml” file to customize our build mechanism.

Once this file is added to our project repository, it gets detected by default, so every time we launch a build the configuration is fetched from it automatically.

The following screenshot shows the complete structure of our configuration file:

“codemagic.yaml” build workflow

Structure of “codemagic.yaml”

The root component of the file is the “workflows” section. A single file can include several workflows (for instance, one for Android and another for iOS), and each of them can have its own settings.

Specific workflow settings are defined using different headings or subsections, such as:

  • general props: sets both name and tools versioning.
  • environment: declares secrets or keys required for the build.
  • scripts: specifies the step-by-step build process.
  • artifacts: lists all the components generated after building (.ipa, .apk)
  • publishing: states the channels used to distribute our app.
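Putting the sections above together, the skeleton of such a file might look like the following sketch (workflow name, group name, variables and values are placeholders, not necessarily the exact ones used in the project):

```yaml
workflows:
  android-workflow:
    name: Android integration tests
    max_build_duration: 60
    environment:
      groups:
        - google_credentials     # holds GCLOUD_KEY_FILE
      vars:
        FIREBASE_PROJECT: "my-firebase-project-id"
    scripts:
      - name: Get packages
        script: flutter packages pub get
    artifacts:
      - build/**/outputs/apk/**/*.apk
    publishing:
      email:
        recipients:
          - dev@example.com
```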

General props

This block is mainly used for:

  • give the workflow a descriptive name
  • specify the type of machine we want to use when building
  • set the maximum build duration

Environment

Any credentials, API keys or secrets can be defined here, either as encrypted data or plain text (not recommended). Once declared, these values can be used in any section of the file.

Additionally, variables can be gathered into groups, so we can manage them as a whole. As a result, when importing a group, we’ll get immediate access to all of its values.

For the sample project, we have defined:

  • a group named “google_credentials”, holding variables for the GCloud configuration file
  • a single variable called “FIREBASE_PROJECT”, with the unique identifier of our Firebase site

Scripts

This section is the core of our build pipeline, since it contains the commands we want to execute in order to build our app.

For instance, if we need to update the project dependencies, we can add the following instruction:

name: Get packages
script: |
   cd . && flutter packages pub get

When defining the build, order is important, since the script blocks are executed sequentially, one after another.

In order to run our integration tests, we must add a script to build a test apk (apart from the standard apk):

name: Create both debug and test APK...  
script: |
   set -ex
   cd android
   ./gradlew app:assembleAndroidTest
   ./gradlew app:assembleDebug -Ptarget="$FCI_BUILD_DIR/integration_test/app_test.dart"

After that, we have to include another script so the generated apk is uploaded to Firebase Remote Test Lab:

name: Upload to Firebase...  
script: |
   set -ex
   echo $GCLOUD_KEY_FILE | base64 --decode >   ./gcloud_key_file.json
   gcloud auth activate-service-account --key-file=gcloud_key_file.json
   
   gcloud --quiet config set project $FIREBASE_PROJECT
   gcloud firebase test android run \
      --type instrumentation \
      --app build/app/outputs/apk/debug/app-debug.apk \
      --test build/app/outputs/apk/androidTest/debug/app-debug-androidTest.apk \
      --timeout 5m

As shown in the snippet, here we’re using the variables/groups and the APKs generated in the previous steps:

  • we retrieve the variable $GCLOUD_KEY_FILE, decode its contents and save the result to a local file. When building, this file will be used to activate the service account
  • we retrieve the $FIREBASE_PROJECT to set the project identifier
  • we specify the apk and the test apk used when running our tests

Artifacts

Nothing fancy here, since it only contains the directory for the output files created after building.

Publishing

Codemagic supports different options when it comes down to distributing the build results.

Apart from email notifications, it also offers webhook compatibility, so it can be connected to messaging apps such as Slack or Discord.

Bonus: robo and instrumentation tests

By slightly changing the build script, we can execute the different types of tests on Firebase Remote Test Lab.

Running instrumentation tests

gcloud firebase test android run \
   --type instrumentation \
   --app build/app/outputs/apk/debug/app-debug.apk \
   --test build/app/outputs/apk/androidTest/debug/app-debug-androidTest.apk \
   --timeout 5m

Running robo tests

gcloud firebase test android run \
   --type robo \
   --app build/app/outputs/apk/debug/app-debug.apk \
   --timeout 5m

Further configuration

The gcloud firebase test command defines optional properties that allow us to fine-tune the test execution.

For instance, we can declare the devices we want to use when running our tests by adding:

gcloud firebase test android run \
   --type instrumentation \
   --device=model=NexusLowRes \
   --...

A complete reference for this command and all its properties can be found here.

Summary

In this series, we have defined an automation workflow using free tools such as Firebase and Codemagic. As you can see, it is not as complicated as it may have seemed at the beginning.

Moreover, automation has a lot of advantages over manual testing, so I think it is “the way” to go. Give it a try and feel free to share your thoughts and results!

Sample project

As usual, you can check out the following repository for the source code:

https://github.com/begomez/Flutter-Test-Automation

Write you next time!

References

https://blog.codemagic.io/getting-started-with-codemagic/

https://github.com/codemagic-ci-cd/codemagic-sample-projects/tree/main/flutter/flutter-integration-tests-demo-project

Flutter: Test Automation (II)

Photo by Minku Kang on Unsplash

Introduction

In the previous article, we defined a workflow to automate our test suite using Firebase and Codemagic.

In this article, we are going to review the native configuration required on the underlying Android project.

Next time, we will check the CI/CD pipeline configuration and the results after running the integration tests. So let’s get started!

Quick recap

Automation workflow

  1. Create Firebase Project
  2. Configure project by adding an account service
  3. Enable Cloud Tool Results
  4. Update underlying native Android project
  5. Configure underlying native Android Test Runner
  6. Set up Codemagic pipeline

As said before, we already reviewed the first 3 items, so let’s continue from item number 4.

4. Update native Android project

Configuration required in the “android” folder under our Flutter project is quite straightforward. Only 2 changes are required:

  • Update the Gradle version
  • Add the Google Services dependency

Updating Gradle

Gradle is the underlying build mechanism used in Android. When it comes down to building and running some Android app, Gradle does all the heavy-lifting for us.

Gradle is distributed in 2 formats:

  • the main build tool itself (aka “gradle”)
  • the helper script that downloads and runs Gradle when it is not available (aka the “wrapper”)

Some errors may be thrown when using older versions of Gradle, so we have to update it by changing the following items:

  • file “android/build.gradle” (external, at project level)
Main gradle configuration file
  • file “android/gradle/wrapper/gradle-wrapper.properties
Gradle helper script set-up file

In both files, the only thing we have to do is set at least the versions depicted in the screencaps.

Adding Google Services dependency

Since Gradle is the native build mechanism for Android, it also takes care of the 3rd party libraries and dependencies used in our project.

To get our tests running automatically, we must include the following dependency in our native Android project:

classpath "com.google.gms:google-services:4.2.0"

After that, open a terminal, change from your current directory to the Android folder and then run:

./gradlew build

to force a rebuild.

5. Configure native Android Test Runner

Under the hood, our Flutter integration tests are executed on the underlying native testing platforms. On Android, our Flutter integration tests become Android Instrumentation tests; on iOS, they would be XCUITest tests, and they would require some configuration too.

2 changes are required here as well to get our tests up and running on Android:

  • configuring the gradle file
  • adding a main activity file for test

Gradle configuration

Android Instrumentation tests are run on a separate environment and use a special object, called Instrumentation, that grants access to the application context and allows the execution of actions over the app under test.

The default Android project shipped with a Flutter app does not contain the configuration for Android Instrumentation, so we have to modify:

file “android/app/build.gradle” (internal, at module level)

by manually setting the testInstrumentationRunner property in the defaultConfig block:

Default config on build.gradle

and also adding the following libraries in the dependencies block:

Test dependencies on build.gradle
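For reference, the relevant fragments of “android/app/build.gradle” look roughly like this sketch (the runner is the standard androidx one; the library versions are assumptions and may need updating):

```groovy
android {
    defaultConfig {
        // ...
        // Runner used to execute the instrumentation tests
        testInstrumentationRunner "androidx.test.runner.AndroidJUnitRunner"
    }
}

dependencies {
    // Libraries required to run Android Instrumentation tests
    androidTestImplementation 'androidx.test:runner:1.2.0'
    androidTestImplementation 'androidx.test.espresso:espresso-core:3.2.0'
}
```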

Adding a test Main Activity

The instrumentation tests require a specific main entry point, like any other automated task.

In order to provide it, first we create the following directory hierarchy:

/android/app/src/androidTest

After that, we have to replicate the package structure created in the “main” source set. In this project, that would be:

java.com.example.fooderlich

so, when adding them both, we end up with something like:

/android/app/src/androidTest/java/com/example/fooderlich

Finally, once we have created the folder structure, we have to add a main test activity file. The code contained on this file is common to any project, so it can be used as a template: the only thing we have to change is the project package, as shown in the following screencap:

Sample for main activity test file
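As a sketch, such a template (based on the standard integration_test setup; adjust the package name to match your own project) could look like:

```java
package com.example.fooderlich;

import androidx.test.rule.ActivityTestRule;
import dev.flutter.plugins.integration_test.FlutterTestRunner;
import org.junit.Rule;
import org.junit.runner.RunWith;

// Delegates the execution of the Flutter integration tests
// to the Android Instrumentation machinery
@RunWith(FlutterTestRunner.class)
public class MainActivityTest {
  @Rule
  public ActivityTestRule<MainActivity> rule =
      new ActivityTestRule<>(MainActivity.class, true, false);
}
```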

Recap

As you can see, the underlying configuration for our test automation is quite simple! On the last article of this series, we’ll check the Codemagic setup so we complete the whole workflow.

Sample project

Check out the following repository containing the complete project:

https://github.com/begomez/Flutter-Test-Automation

Write you next time!

Flutter: Test Automation (I)

Photo by EttiAmos on iStock

Introduction

In this article, we will review the configuration required to automate our test suite using a remote test lab and a CI/CD pipeline.

Starting project

The app…

The application used for automated testing is forked from the fooderlich project, a social recipe app available on the raywenderlich site.

Lately, Ana Polo has turned it into a mixture of a recipes-and-cats app. Although the recipes are neither for cats nor do they include them as the main course…

Either way, the app uses a bottom navigation bar as its primary navigation mechanism and contains 3 modules:

  • random
  • main
  • profile

…and its test suite

The Fooderlich app also contains some integration tests, stored in the “integration_test/app_test.dart” file.

By the way, if you want a test refresher, you can check the previous article about golden testing.

More info about the project architecture, project structure and tests can be found here.

How will we automate our tests?

Main components

Our test automation strategy will rely on the following tools:

  • A remote test lab, the environment the tests will be run on
  • A pipeline that will take care of building, running and testing the app

Remote test lab

Firebase Test Lab is a remote testing environment that allows us to run tests on a wide range of devices.

It supports different types of tests, mainly:

  • robo: tests that are exploratory and help us uncover bugs by performing certain random actions on the app. The actions performed and/or the number of screens “explored” can be easily configured using the console.
  • instrumentation: tests run over real devices or emulators that check certain flow or features in the app. Although they are expensive in terms of resources, they provide more accurate data than other types of tests, since we’re operating directly with the underlying mobile platform (we are not “mocking” the native SDK like we do with unit tests)

Pipeline

On the other hand, Codemagic is a CI/CD tool that makes it easy to build, run, test and distribute mobile apps.

It was initially built as a Flutter-only tool, so it is custom-tailored for Flutter apps.

After building and creating the executable file of the app, Codemagic will connect with Firebase in order to upload it and run our tests automatically.

Let’s see how we can put all these moving pieces together!

Workflow

These are the major steps required to automate our testing pipeline:

  1. Create a new Firebase Project
  2. Configure the Firebase project by adding an account service
  3. Enable Cloud Tool Results on GCloud
  4. Update the underlying native Android project
  5. Configure the underlying native Android Test Runner
  6. Set up a build workflow on Codemagic

As you can see, the first steps require some remote configuration, whereas the last ones involve local configuration (inside our project).

1. Create a new Firebase Project

Add a new project in the Firebase Console as described here.

Since we’ll use the new project only as a container over which the remote tests are performed, there is no need to add either an Android or an iOS app. In fact, no further configuration is required.

Firebase console overview

However, Android or iOS projects would be necessary if our app used any other Firebase feature, such as 3rd party authentication.

2. Adding a Service Account

Service accounts are used to identify a particular feature inside our Firebase project and grant or deny access to it. Simply explained, think of them as authorization elements.

In fact, service accounts are derived from “Identity and Access Management (IAM)” and they allow us to define the type of access (a.k.a. role) that somebody or something (a.k.a. identity) has over a given resource.

As previously discussed, our integration tests will be executed on Firebase Remote Lab, but they will be launched from the Codemagic CI/CD pipeline. So we will need a service account to grant access permissions.

To create a new service account in Firebase, go to

Home -> Project config -> Service accounts

and then click on the “<n> service accounts” item, as shown below:

Service account on Firebase Console

Then you’ll be redirected to the Google Cloud Console. On the left menu, select:

Service accounts -> Create service account

Creating a service account on Google Cloud

When creating the account:

  • on step 1, enter both name and description
  • on step 2, select the function

    Basic -> Editor
  • on step 3, you can click “Skip” for now

More info about account creation can be found here.

Once created, the service account key can be downloaded as a JSON file by selecting:

Service account -> Context menu -> Manage keys -> Add new key

The contents of this file will be needed when configuring the CI/CD pipeline, so for now just keep in mind the folder you stored it in.

One more thing! Before we can use the service account key, we must convert it to base64 by running the following command:

cat /<path_to_download_directory>/gcloud_key_file.json | base64 | pbcopy

The last command (pbcopy) simply copies the generated content to our clipboard so we can easily paste it elsewhere.

3. Enabling Cloud Tool Results

The Cloud Tool Results API is also required, in order to allow the communication between the different processes on the pipeline and the remote test lab.

It can be activated through GCloud, using the following link. Just click the button and then we’re good to go.

Firebase and GCloud compatibility

Coming up…

In the next article, we will review the remaining steps of the previous workflow and we’ll also check the reports generated once our test suite is run.

Sample project

Until then, check out the following repository containing the complete project:

https://github.com/begomez/Flutter-Test-Automation

Write you next time!

Flutter: Golden testing

Photo by Lucas Benjamin on Unsplash

Introduction

Flutter offers a wide range of automatic tests, such as:

  • unit testing, suitable for individual methods and/or classes
  • widget testing, handy when checking the visual components of our app
  • integration testing, useful in order to review end-to-end flows or features
  • golden testing, for pixel-perfect tests

So golden tests are available by default in the Flutter framework, but their usage is not as frequent as the other types of testing. What are they exactly? When should we use them? Are they worth it? Let’s find out!

What are golden tests?

Golden tests are basically automatic UI regression tests. If we split this definition into different parts:

  • they are automatic, so they can be scheduled and included on any testing pipeline
  • they are focused on the UI, checking the Look&Feel rather than the logic of our app
  • they belong to the regression test family, preventing us from making unintended visual changes when modifying a widget

Motivation

How many times have you messed up the alignment of a Column, shrinking all its contents by accident…? Do you remember playing around with some Opacity or Visibility widget that ended up hiding the contents of the app…?

Golden tests can detect these changes easily and provide useful feedback to solve them when required.

How do they work?

Golden tests are based on a series of screenshots of our app called “golden files”. These files are basically bitmaps that depict the desired visual appearance of the application, so they are considered the “one and only” source of truth when it comes down to any UI related matter.

When testing our app, Flutter compares the current appearance of the application with these templates and reports back any differences found as errors/failures.

Since Flutter uses composition and every application is basically a large tree composed of several nodes, each separate widget can have its own golden test based on a particular golden file.

Workflow

Adding golden tests to any Flutter app is quite simple! We only have to:

  1. Implement the golden test
  2. Create the required golden file
  3. Run the golden test
  4. If errors are reported back, perform a “Red-Green-Refactor” until all errors are solved

The following sections describe each one of these steps in detail.

1. Implementing a golden test

Golden tests are run in the environment provided by the default flutter_test library, so their entry point is the “testWidgets()” method. This method takes as parameters:

  • a string describing the test
  • a code snippet containing the test itself
void main() {
  testWidgets('The 1st golden test...', (WidgetTester tester) async {
    // 1)
    const GOLDEN_FILE_NAME = "my_app.png";

    // 2)
    await tester.pumpWidget(const MyApp());

    // 4) comparison is asynchronous
    await expectLater(
        find.byType(MyApp),
        // 3)
        matchesGoldenFile(GOLDEN_FILE_NAME));
  });
}

As we can see in the previous example, the test:

  • 1) specifies the name of the golden file the test is going to be compared against
  • 2) pumps the widget under test
  • 3) uses a specific matcher (“matchesGoldenFile()“) that takes the golden file name as parameter
  • 4) it finally compares the former golden file with the current widget and returns a boolean value. This operation is performed asynchronously, so we use the “expectLater()” assertion

Piece of cake!

2. Creating the golden files

The golden file templates are created by running:

flutter test --update-goldens

This command generates or updates (when already created) all the golden files specified in our test suite.

Generated files are included by default in the “test” directory of our app.

The generated .png file looks like a sketchy wireframe of the app, but it contains all the data required in order to accept or reject the test.

Main golden file for the counter app

3. Running tests

As usual, these tests are launched with the command:

flutter test

4. Error reporting

While testing the app, errors are reported back on the command line.

Additionally, each error is assigned a deviation percentage, representing the “amount of difference” between the current layout of the app and its corresponding golden file.

Each golden report contains a diff percentage

Furthermore, a “test/failures” directory is created. This folder includes the following screenshots:

  • <widget_name>_masterImage.png: the ideal appearance for the widget under test, according to its golden file
  • <widget_name>_testImage.png: the current appearance for the given widget
Test failures generate screenshots depicting the differences found

Golden tests under the hood

Golden tests basically perform an image comparison in order to determine whether the test passes or not. The Flutter framework “captures” the current UI and then compares it with the associated golden file at byte level. For each byte, both the pixel represented and its encoding metadata are checked.

This comparison is performed, by default, using an instance of the “GoldenFileComparator” class.

Additionally, we can provide a custom comparator to be used when running our tests. In order to do so, we have to:

  1. create a comparator subclass extending the comparator super class
class CustomGoldenComparator extends GoldenFileComparator {
  @override
  Future<bool> compare(Uint8List imageBytes, Uri golden) async {
    await Future.delayed(const Duration(seconds: 5));

    //TODO: add proper implementation, performing byte comparison
    return true;
  }

  @override
  Future<void> update(Uri golden, Uint8List imageBytes) {
    //TODO: add proper implementation, performing byte comparison
    throw UnimplementedError();
  }
}
  2. specify our comparator in the code, setting the goldenFileComparator property in the test file:
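As a sketch, assuming the CustomGoldenComparator class defined above, this boils down to assigning the goldenFileComparator global provided by flutter_test before any test runs:

```dart
void main() {
  // Replace the default comparator with our custom implementation
  goldenFileComparator = CustomGoldenComparator();

  // ... golden tests declared as usual ...
}
```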

We can also specify a comparator using the command line:

flutter test goldenFileComparator=<path_to_custom_comparator>

Other features

As mentioned before, golden tests are another type of test automation, so they can be included on a CI/CD pipeline.
Many integration tools like Codemagic offer support for golden testing.

Flutter packages for golden tests

Although golden tests are included in the Flutter framework by default, the Flutter community has created some libraries that extend the default behaviour and offer additional functionality.

The golden_toolkit package, for instance, adds extra features that may come in handy when working with accessibility or widget variations.

Code repository

You can find a golden test sample in the following link:

https://github.com/begomez/FlutterGoldens

References

https://flutter.dev/docs/cookbook/testing/widget/introduction

https://www.youtube.com/watch?v=_G6GuxJF44Q

Flutter: simple type-writer effect on text

Photo by Wilhelm Gunkel on Unsplash

Intro

In this article, we will describe how we can achieve a fancy type-writer effect:

Typewriter effect
Author: DEV Community

Although there are some packages available on pub.dev, like the one described in this article, the usual question is: “do we really need another package in our beloved pubspec just for this…?” Let’s explore some features like:

  • Generators
  • StreamBuilders

and then we will see!

Generators

Generator functions in Dart are used to provide a series of values sequentially, one after the other.

The values provided by a generator object are not “cooked” beforehand: in fact, they are created lazily, on demand.

In Dart, you can create 2 types of generators:

  • Synchronous generator: it blocks the execution until the sequence is provided.
  • Asynchronous generator: provides the sequence concurrently, while performing other tasks at (almost) the same time.

Although they are implemented in a different way, both types have some features in common:

  • they are marked with * before the body of the function, in order to state they are “generator” methods
  • the keyword “yield” is used in order to provide the current value generated by the sequence

Synchronous generator

Synchronous generator functions are marked with sync* before their body. The return type in the function signature must be an Iterable<T>, where T is the generic type of the objects contained in the series. So we can have, for instance:

Iterable<String> generateNames() sync* {
   ...
}

Iterable<int> generateNums() sync* {
   ...
}

Nevertheless, in the function body, these methods do not explicitly return any value. Instead, they “provide” some element using the yield statement.

The following code shows a synchronous generator that provides a set of numbers:

Iterable<int> generateNumsSync() sync* {
  var i = 0;
  
  while (i < 10) {
    yield i++;//XXX: use yield to "push" the current value...
  }
}
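Since the sequence is lazy, values are only computed when we actually iterate over it. A minimal consumer (reusing the generateNumsSync() function above) could be:

```dart
void main() {
  // Each value is produced on demand, one per loop iteration
  for (final n in generateNumsSync()) {
    print(n); // prints 0 to 9
  }
}
```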

Asynchronous generator

On the other hand, asynchronous generator functions use the async* modifier and return a Stream<T> instead. So:

Stream<int> generateNumsAsync() async* {
  var i = 0;
  
  while (i < 10) {
    //XXX: here we could wait for some time...    
    yield i++;
  }
}
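The asynchronous version is consumed with an await-for loop instead (again reusing the generateNumsAsync() function from above):

```dart
Future<void> main() async {
  // The loop body runs every time the stream emits a new value
  await for (final n in generateNumsAsync()) {
    print(n); // prints 0 to 9, asynchronously
  }
}
```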

As shown on the previous snippet, asynchronous generators do not return a value explicitly either.

One more thing: if you ever need to delegate to another generator (for instance, a recursive one), you can use the yield* modifier.

Now that we’ve seen the structure of generators, let’s see how we can “hook” their results into the widget tree.

StreamBuilder

The StreamBuilder widget listens for events on a given stream and updates its child every time a new item is emitted down the stream. So you can think of it as some sort of implementation of the observer pattern.

This widget builds its child using the builder pattern. Among the parameters received, it gets a “picture” of the current state of the stream through an AsyncSnapshot object:

StreamBuilder<int>(
   stream: ...,
   builder: (BuildContext context, AsyncSnapshot<int> snapshot) {
   }
);            

This snapshot object contains the item emitted by the stream, but also other info, such as the state of the connection between the widget and the stream, or any error that may have happened… Luckily for us, this blog is a bug-free environment 🙂 but when working on production apps, we should check all this additional info before accessing the stream data.

The type-writer text

Now that we know how generators and streambuilders work, we can use both of them for our type-writer effect! So let’s put all the moving pieces together:

  • the text we want to animate can be encapsulated on a model class, so one of its methods provide us with a stream of values:
class StreamableTextModel {
  final String msg;

  const StreamableTextModel({this.msg = ""});

  ... 
  
  Stream<String> toStream() async* {
    for (var i = 0; i < msg.length; i++) {
      yield msg.substring(0, i + 1);
    }
  }
}
  • execution can be “paused” between emissions using Future.delayed()
  • a streambuilder widget can be used to listen to the previous stream and update the UI accordingly
 @override
 Widget build(BuildContext context) {
    return StreamBuilder<String>(
        initialData: "",
        stream: ...,
        builder: (BuildContext cntxt, AsyncSnapshot<String> snap) {
          if (snap.hasData) {
            return Text(snap.data);
          } else {
            return Text("");
          }
        });
  }
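For instance, the pause between emissions can live inside the generator itself. A minimal sketch (the function name and duration value are arbitrary assumptions):

```dart
Stream<String> typeWriterStream(String msg,
    {Duration pause = const Duration(milliseconds: 150)}) async* {
  for (var i = 0; i < msg.length; i++) {
    // Wait a bit before emitting one more character...
    await Future.delayed(pause);
    // ...then emit the message up to the current index
    yield msg.substring(0, i + 1);
  }
}
```

A stream like this can then be passed as the stream parameter of the StreamBuilder shown above.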

And that would be the result (you better skip the initial blank seconds…):

Recap

By combining generators and StreamBuilders, we can build dynamic widgets that are updated automatically when some data changes. This allows us to create “fake” animations like the one we described.

Write you next time! As usual, full source code is available at:

https://github.com/begomez/TypeWriter