
React Native Monthly #5


The React Native monthly meeting continues! Let's see what our teams are up to.

Callstack

  • We’ve been working on React Native CI. Most importantly, we have migrated from Travis to Circle, leaving React Native with a single, unified CI pipeline.
  • We’ve organised Hacktoberfest - React Native edition where, together with attendees, we tried to submit many pull requests to open source projects.
  • We keep working on Haul. Last month we submitted two new releases, including Webpack 3 support. We plan to add CRNA and Expo support, as well as work on better HMR. Our roadmap is public on the issue tracker. If you would like to suggest improvements or give feedback, let us know!

Expo

  • Released Expo SDK 22 (using React Native 0.49) and updated CRNA for it.
    • Includes an improved splash screen API, basic ARKit support, a “DeviceMotion” API, SFAuthenticationSession support on iOS 11, and more.
  • Your snacks can now have multiple JavaScript files and you can upload images and other assets by just dragging them into the editor.
  • Contribute to react-navigation to add support for iPhone X.
  • Focusing our attention on rough edges when building large applications with Expo. For example:
    • First-class support for deploying to multiple environments: staging, production, and arbitrary channels. Channels will support rolling back and setting the active release for a given channel. Let us know if you want to be an early tester, @expo_io.
    • We are also working on improving our standalone app building infrastructure and adding support for bundling images and other non-code assets in standalone app builds while keeping the ability to update assets over the air.

Facebook

  • Better RTL support:
    • We’re introducing a number of direction-aware styles.
      • Position:
        • (left|right) → (start|end)
      • Margin:
        • margin(Left|Right) → margin(Start|End)
      • Padding:
        • padding(Left|Right) → padding(Start|End)
      • Border:
        • borderTop(Left|Right)Radius → borderTop(Start|End)Radius
        • borderBottom(Left|Right)Radius → borderBottom(Start|End)Radius
        • border(Left|Right)Width → border(Start|End)Width
        • border(Left|Right)Color → border(Start|End)Color
    • The meaning of “left” and “right” was swapped in RTL for position, margin, padding, and border styles. Within a few months, we’re going to remove this behaviour and make “left” always mean “left,” and “right” always mean “right”. The breaking changes are hidden under a flag; use I18nManager.swapLeftAndRightInRTL(false) in your React Native components to opt into them (see the sketch after this list).
  • Working on Flow typing our internal native modules and using those to generate interfaces in Java and protocols in ObjC that the native implementations must implement. We hope this codegen becomes open source next year, at the earliest.
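
To make the migration concrete, here is a minimal sketch of the direction-aware styles and the opt-in flag used together (the component and the specific style values are illustrative, not from the meeting notes):

import React from 'react';
import {I18nManager, StyleSheet, Text, View} from 'react-native';

// Opt into the new behaviour: “left” always means left and “right”
// always means right, even when the layout direction is RTL.
I18nManager.swapLeftAndRightInRTL(false);

const styles = StyleSheet.create({
  row: {
    // Direction-aware styles: “start” resolves to left in LTR and to
    // right in RTL, so a single stylesheet serves both directions.
    paddingStart: 16,
    marginEnd: 8,
    borderStartWidth: 1,
  },
});

const Row = ({label}) => (
  <View style={styles.row}>
    <Text>{label}</Text>
  </View>
);

export default Row;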

Infinite Red

  • New OSS tool for helping React Native and other projects. More here.
  • Revamping Ignite for a new boilerplate release (Code name: Bowser)

Shoutem

  • Improving the development flow on Shoutem. We want to streamline the process from creating an app to writing the first custom screen, and make it really easy, thus lowering the barrier for new React Native developers. We prepared a few workshops to test out the new features, and we also improved the Shoutem CLI to support the new flows.
  • Shoutem UI received a few component improvements and bugfixes. We also checked compatibility with the latest React Native versions.
  • The Shoutem platform received a few notable updates, and new integrations are available as part of the open-source extensions project. We are really excited to see active development of Shoutem extensions from other developers, and we actively reach out to offer advice and guidance about their extensions.

Next session

The next session is scheduled for Wednesday, December 6, 2017. Feel free to ping me on Twitter if you have any suggestions on how we should improve the output of these meetings.

Tags:
  • engineering


React Native Monthly #6


The React Native monthly meeting is still going strong! Make sure to check the note at the bottom of this post about the next sessions.

Expo

  • Congratulations to Devin Abbott and Houssein Djirdeh on their pre-release of the "Full Stack React Native" book! It walks you through learning React Native by building several small apps. You can try those apps out on https://www.fullstackreact.com/react-native/ before buying the book.
  • Released a first (experimental) version of reason-react-native-scripts to help people to easily try out ReasonML.
  • Expo SDK 24 is released! It uses React Native 0.51 and includes a bunch of new features and improvements: bundling images in standalone apps (no need to cache on first load!), an image manipulation API (crop, resize, rotate, flip), a face detection API, new release channel features (set the active release for a given channel and roll back), a web dashboard to track standalone app builds, and a fix for a longstanding bug with the OpenGL Android implementation and the Android multi-tasker, just to name a few things.
  • We are allocating more resources to React Navigation starting this January. We strongly believe that it is both possible and desirable to build React Native navigation with just React components and primitives like Animated and react-native-gesture-handler and we’re really excited about some of the improvements we have planned. If you're looking to contribute to the community, check out react-native-maps and react-native-svg, which could both use some help!

Infinite Red

  • We have our keynote speakers for Chain React Conf: Kent C. Dodds and Tracy Lee. We will be opening the CFP very soon.
  • Community chat is now at 1,600 people.
  • React Native Newsletter is now at 8,500 subscribers.
  • Currently researching best practices for making RN crash resistant; reports to follow.
  • Adding the ability to report from Solidarity.
  • Published a how-to for releasing on React Native and Android.

Microsoft

  • A pull request has been started to migrate the core React Native Windows bridge to .NET Standard, making it effectively OS-agnostic. The hope is that many other .NET Core platforms can extend the bridge with their own threading models, JavaScript runtimes, and UIManagers (think JavaScriptCore, Xamarin.Mac, Linux Gtk#, and Samsung Tizen options).

Wix

  • Detox
    • In order to scale with E2E tests and minimize time spent on CI, we are working on parallelization support for Detox.
    • Submitted a pull request to enable support for custom-flavor builds, to better support mocking in E2E tests.
  • DetoxInstruments
    • Working on the killer feature of DetoxInstruments is proving to be a very challenging task: taking a JavaScript backtrace at any given time requires a custom JSCore implementation to support JS thread suspension. Testing the profiler internally on Wix’s app has revealed interesting insights about the JS thread.
    • The project is still not stable enough for general use, but it is actively being worked on, and we hope to announce it soon.
  • React Native Navigation
    • The pace of V2 development has increased substantially. Until now we had only one developer working on it 20% of his time; we now have three developers working on it full time!
  • Android Performance
    • Replacing the old JSCore bundled in RN with its newest version (tip of the webkitGTK project, with a custom JIT configuration) produced a 40% performance increase on the JS thread. Next up is compiling a 64-bit version. This effort is based on the JSC build scripts for Android. Follow its current status here.

Next sessions

There has been some discussion about re-purposing this meeting to discuss a single, specific topic (e.g. navigation, moving React Native modules into separate repos, documentation, ...). That way, we feel we can contribute the most to the React Native community. It might take place in the next meeting session. Feel free to tweet what you'd like to see covered as a topic.

Tags:
  • engineering

Implementing Twitter’s App Loading Animation in React Native


Twitter’s iOS app has a loading animation I quite enjoy.

Once the app is ready, the Twitter logo delightfully expands, revealing the app.

I wanted to figure out how to recreate this loading animation with React Native.


To understand how to build it, I first had to understand the different pieces of the loading animation. The easiest way to see the subtlety is to slow it down.

There are a few major pieces in this that we will need to figure out how to build.

  1. Scaling the bird.
  2. As the bird grows, showing the app underneath.
  3. Scaling the app down slightly at the end.

It took me quite a while to figure out how to make this animation.

I started with an incorrect assumption that the blue background and Twitter bird were a layer on top of the app and that as the bird grew, it became transparent which revealed the app underneath. This approach doesn’t work because the Twitter bird becoming transparent would show the blue layer, not the app underneath!

Luckily for you, dear reader, you don’t have to go through the same frustration I did. You get this nice tutorial skipping to the good stuff!


The right way

Before we get to code, it is important to understand how to break this down. To help visualize this effect, I recreated it in CodePen (embedded in a few paragraphs) so you can interactively see the different layers.

There are three main layers to this effect. The first is the blue background layer. Even though this seems to appear on top of the app, it is actually in the back.

We then have a plain white layer. And then lastly, in the very front, is our app.


The main trick to this animation is using the Twitter logo as a mask and masking both the app and the white layer. I won’t go too deep into the details of masking; there are plenty of resources online for that.

The basics of masking in this context are having images where opaque pixels of the mask show the content they are masking whereas transparent pixels of the mask hide the content they are masking.

We use the Twitter logo as a mask and have it mask two layers: the solid white layer and the app layer.

To reveal the app, we scale the mask up until it is larger than the entire screen.

While the mask is scaling up, we fade in the opacity of the app layer, showing the app and hiding the solid white layer behind it. To finish the effect, we start the app layer at a scale > 1, and scale it down to 1 as the animation is ending. We then hide the non-app layers as they will never be seen again.

They say a picture is worth 1,000 words. How many words is an interactive visualization worth? Click through the animation with the “Next Step” button. Showing the layers gives you a side view perspective. The grid is there to help visualize the transparent layers.

Now, for the React Native

Alrighty. Now that we know what we are building and how the animation works, we can get down to the code — the reason you are really here.

The main piece of this puzzle is MaskedViewIOS, a core React Native component.

import {MaskedViewIOS, Text, View} from 'react-native';

<MaskedViewIOS maskElement={<Text>Basic Mask</Text>}>
  <View style={{backgroundColor: 'blue'}} />
</MaskedViewIOS>;

MaskedViewIOS takes the props maskElement and children. The children are masked by the maskElement. Note that the mask doesn’t need to be an image; it can be any arbitrary view. The behavior of the above example would be to render the blue view, but for it to be visible only where the words “Basic Mask” are from the maskElement. We just made complicated blue text.

What we want to do is render our blue layer, and then on top render our masked app and white layers with the Twitter logo.

{fullScreenBlueLayer}
<MaskedViewIOS
  style={{flex: 1}}
  maskElement={
    <View style={styles.centeredFullScreen}>
      <Image source={twitterLogo} />
    </View>
  }>
  {fullScreenWhiteLayer}
  <View style={{flex: 1}}>
    <MyApp />
  </View>
</MaskedViewIOS>;

This will give us the layers we see below.

Now for the Animated part

We have all the pieces we need to make this work, the next step is animating them. To make this animation feel good, we will be utilizing React Native’s Animated API.

Animated lets us define our animations declaratively in JavaScript. By default, these animations run in JavaScript and tell the native layer what changes to make on every frame. Even though JavaScript will try to update the animation every frame, it will likely not be able to do that fast enough and will cause dropped frames (jank) to occur. Not what we want!

Animated has special behavior to allow you to get animations without this jank. Animated has a flag called useNativeDriver which sends your animation definition from JavaScript to native at the beginning of your animation, allowing the native side to process the updates to your animation without having to go back and forth to JavaScript every frame. The downside of useNativeDriver is you can only update a specific set of properties, mostly transform and opacity. You can’t animate things like background color with useNativeDriver, at least not yet — we will add more over time, and of course you can always submit a PR for properties you need for your project, benefitting the whole community 😀.

Since we want this animation to be smooth, we will work within these constraints. For a more in-depth look at how useNativeDriver works under the hood, check out our blog post announcing it.

Breaking down our animation

There are 4 components to our animation:

  1. Enlarge the bird, revealing the app and the solid white layer
  2. Fade in the app
  3. Scale down the app
  4. Hide the white layer and blue layer when it is done

With Animated, there are two main ways to define your animation. The first is to use Animated.timing, which lets you say exactly how long your animation will run for, along with an easing curve to smooth out the motion. The other approach is to use the physics-based APIs, such as Animated.spring. With Animated.spring, you specify parameters like the amount of friction and tension in the spring, and let physics run your animation.

We have multiple animations we want to run at the same time, all of which are closely related to each other. For example, we want the app to start fading in while the mask is mid-reveal. Because these animations are closely related, we will use Animated.timing with a single Animated.Value.

Animated.Value is a wrapper around a native value that Animated uses to know the state of an animation. You typically want to only have one of these for a complete animation. Most components that use Animated will store the value in state.

Since I’m thinking about this animation as steps occurring at different points in time along the complete animation, we will start our Animated.Value at 0, representing 0% complete, and end our value at 100, representing 100% complete.

Our initial component state will be the following.

state = {
  loadingProgress: new Animated.Value(0),
};

When we are ready to begin the animation, we tell Animated to animate this value to 100.

Animated.timing(this.state.loadingProgress, {
  toValue: 100,
  duration: 1000,
  useNativeDriver: true, // This is important!
}).start();

I then try to figure out a rough estimate of the different pieces of the animations and the values I want them to have at different stages of the overall animation. Below is a table of the different pieces of the animation, and what I think their values should be at different points as we progress through time.
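
Animation piece    0%     10%    15%    30%    100%
Bird mask scale    1      0.8    —      —      70
App opacity        0      0      0      1      1
App scale          1.1    —      —      —      1

(The values here mirror the interpolation ranges defined in the code below.)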

The Twitter bird mask should start at scale 1, and it gets smaller before it shoots up in size. So at 10% through the animation, it should have a scale value of .8 before shooting up to scale 70 at the end. Picking 70 was pretty arbitrary, to be honest; it needed to be large enough that the bird fully revealed the screen, and 60 wasn’t big enough 😀. Something interesting about this part is that the higher the number, the faster it will look like it is growing, because it has to get there in the same amount of time. This number took some trial and error to look good with this logo. Logos / devices of different sizes will require a different end-scale to ensure the entire screen is revealed.

The app should stay opaque for a while, at least through the Twitter logo getting smaller. Based on the official animation, I want to start showing it when the bird is midway through scaling up, and to fully reveal it pretty quickly. So at 15% we start showing it, and at 30% through the overall animation it is fully visible.

The app scale starts at 1.1 and scales down to its regular scale by the end of the animation.

And now, in code.

What we essentially did above is map the values from the animation progress percentage to the values for the individual pieces. We do that with Animated using .interpolate. We create 3 different style objects, one for each piece of the animation, using interpolated values based on this.state.loadingProgress.

const loadingProgress = this.state.loadingProgress;

const opacityClearToVisible = {
  opacity: loadingProgress.interpolate({
    inputRange: [0, 15, 30],
    outputRange: [0, 0, 1],
    extrapolate: 'clamp',
    // clamp means when the input is 30-100, output should stay at 1
  }),
};

const imageScale = {
  transform: [
    {
      scale: loadingProgress.interpolate({
        inputRange: [0, 10, 100],
        outputRange: [1, 0.8, 70],
      }),
    },
  ],
};

const appScale = {
  transform: [
    {
      scale: loadingProgress.interpolate({
        inputRange: [0, 100],
        outputRange: [1.1, 1],
      }),
    },
  ],
};

Now that we have these style objects, we can use them when rendering the snippet of the view from earlier in the post. Note that only Animated.View, Animated.Text, and Animated.Image are able to use style objects that use Animated.Value.

const fullScreenBlueLayer = (
  <View style={styles.fullScreenBlueLayer} />
);
const fullScreenWhiteLayer = (
  <View style={styles.fullScreenWhiteLayer} />
);

return (
  <View style={styles.fullScreen}>
    {fullScreenBlueLayer}
    <MaskedViewIOS
      style={{flex: 1}}
      maskElement={
        <View style={styles.centeredFullScreen}>
          <Animated.Image
            style={[styles.maskImageStyle, imageScale]}
            source={twitterLogo}
          />
        </View>
      }>
      {fullScreenWhiteLayer}
      <Animated.View
        style={[opacityClearToVisible, appScale, {flex: 1}]}>
        {this.props.children}
      </Animated.View>
    </MaskedViewIOS>
  </View>
);

Yay! We now have the animation pieces looking like we want. Now we just have to clean up our blue and white layers which will never be seen again.

To know when we can clean them up, we need to know when the animation is complete. Luckily, .start on Animated.timing takes an optional callback that runs when the animation is complete.

Animated.timing(this.state.loadingProgress, {
  toValue: 100,
  duration: 1000,
  useNativeDriver: true,
}).start(() => {
  this.setState({
    animationDone: true,
  });
});

Now that we have a value in state to know whether we are done with the animation, we can modify our blue and white layers to use that.

const fullScreenBlueLayer = this.state.animationDone ? null : (
  <View style={[styles.fullScreenBlueLayer]} />
);
const fullScreenWhiteLayer = this.state.animationDone ? null : (
  <View style={[styles.fullScreenWhiteLayer]} />
);

Voila! Our animation now works and we clean up our unused layers once the animation is done. We have built the Twitter app loading animation!

But wait, mine doesn’t work!

Don’t fret, dear reader. I too hate when guides only give you chunks of the code and don’t give you the completed source.

This component has been published to npm and is on GitHub as react-native-mask-loader. To try this out on your phone, it is available on Expo here:

More Reading / Extra Credit

  1. This gitbook is a great resource to learn more about Animated after you have read the React Native docs.
  2. The actual Twitter animation seems to speed up the mask reveal towards the end. Try modifying the loader to use a different easing function (or a spring!) to better match that behavior.
  3. The current end-scale of the mask is hard-coded and likely won’t reveal the entire app on a tablet. Calculating the end scale based on screen size and image size would be an awesome PR.
Tags:
  • engineering

Using AWS with React Native


AWS is well known in the technology industry as a provider of cloud services. These include compute, storage, and database technologies, as well as fully managed serverless offerings. The AWS Mobile team has been working closely with customers and members of the JavaScript ecosystem to make cloud-connected mobile and web applications more secure, scalable, and easier to develop and deploy. We began with a complete starter kit, but have a few more recent developments.

This blog post talks about some interesting things for React and React Native developers:

  • AWS Amplify, a declarative library for JavaScript applications using cloud services
  • AWS AppSync, a fully managed GraphQL service with offline and real-time features

AWS Amplify

React Native applications are very easy to bootstrap using tools like Create React Native App and Expo. However, connecting them to the cloud can be challenging to navigate when you try to match a use case to infrastructure services. For example, your React Native app might need to upload photos. Should these be protected per user? That probably means you need some sort of registration or sign-in process. Do you want your own user directory or are you using a social media provider? Maybe your app also needs to call an API with custom business logic after users log in.

To help JavaScript developers with these problems, we released a library named AWS Amplify. The design is broken into "categories" of tasks, instead of AWS-specific implementations. For example, if you wanted users to register, log in, and then upload private photos, you would simply pull in Auth and Storage categories to your application:

import { Auth } from 'aws-amplify';

Auth.signIn(username, password)
  .then(user => console.log(user))
  .catch(err => console.log(err));

Auth.confirmSignIn(user, code)
  .then(data => console.log(data))
  .catch(err => console.log(err));

In the code above, you can see an example of some of the common tasks that Amplify helps you with, such as using multi-factor authentication (MFA) codes with either email or SMS. The supported categories today are:

  • Auth: Provides credential automation. Out-of-the-box implementation uses AWS credentials for signing, and OIDC JWT tokens from Amazon Cognito. Common functionality, such as MFA features, is supported.
  • Analytics: With a single line of code, get tracking for authenticated or unauthenticated users in Amazon Pinpoint. Extend this for custom metrics or attributes, as you prefer.
  • API: Provides interaction with RESTful APIs in a secure manner, leveraging AWS Signature Version 4. The API module is great on serverless infrastructures with Amazon API Gateway.
  • Storage: Simplified commands to upload, download, and list content in Amazon S3. You can also easily group data into public or private content on a per-user basis.
  • Caching: An LRU cache interface across web apps and React Native that uses implementation-specific persistence.
  • i18n and Logging: Provides internationalization and localization capabilities, as well as debugging and logging capabilities.
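
As a rough sketch of how these categories compose (assuming an app already configured with Amplify; the event name and file key below are just illustrative):

import { Analytics, Storage } from 'aws-amplify';

// Record a custom analytics event in Amazon Pinpoint.
Analytics.record('appLaunched');

// Upload a photo to Amazon S3 as private, per-user content.
Storage.put('profile.jpg', photoData, { level: 'private' })
  .then(result => console.log(result))
  .catch(err => console.log(err));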

One of the nice things about Amplify is that it encodes "best practices" in the design for your specific programming environment. For example, one thing we found working with customers and React Native developers is that shortcuts taken during development to get things working quickly would make it through to production stacks. These can compromise either scalability or security, and force infrastructure rearchitecture and code refactoring.

One example of how we help developers avoid this is the Serverless Reference Architectures with AWS Lambda. These show you best practices around using Amazon API Gateway and AWS Lambda together when building your backend. This pattern is encoded into the API category of Amplify. You can use this pattern to interact with several different REST endpoints, and pass headers all the way through to your Lambda function for custom business logic. We’ve also released an AWS Mobile CLI for bootstrapping new or existing React Native projects with these features. To get started, just install via npm, and follow the configuration prompts:

npm install --global awsmobile-cli
awsmobile configure

Another example of encoded best practices that is specific to the mobile ecosystem is password security. The default Auth category implementation leverages Amazon Cognito user pools for user registration and sign-in. This service implements the Secure Remote Password protocol as a way of protecting users during authentication attempts. If you're inclined to read through the mathematics of the protocol, you'll notice that you must use a large prime number when calculating the password verifier over a primitive root to generate a Group. In React Native environments, JIT is disabled, which makes BigInteger calculations for security operations such as this less performant. To account for this, we've released native bridges for Android and iOS that you can link inside your project:

npm install --save aws-amplify-react-native
react-native link amazon-cognito-identity-js

We're also excited to see that the Expo team has included this in their latest SDK so that you can use Amplify without ejecting.

Finally, specific to React Native (and React) development, Amplify contains higher order components (HOCs) for easily wrapping functionality, such as for sign-up and sign-in to your app:

import Amplify, { withAuthenticator } from 'aws-amplify-react-native';
import aws_exports from './aws-exports';

Amplify.configure(aws_exports);

class App extends React.Component {
...
}

export default withAuthenticator(App);

The underlying component is also provided as <Authenticator />, which gives you full control to customize the UI. It also gives you some properties around managing the state of the user, such as whether they've signed in or are waiting for MFA confirmation, and callbacks that you can fire when state changes.
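
A minimal sketch of rendering the component directly might look like the following (the onStateChange usage is our reading of the description above, not code from the original post):

import React from 'react';
import { Authenticator } from 'aws-amplify-react-native';

const AuthScreen = () => (
  // Observe auth state transitions (e.g. signIn → signedIn) and
  // build whatever custom UI you need around the component.
  <Authenticator
    onStateChange={authState => console.log('Auth state:', authState)}
  />
);

export default AuthScreen;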

Similarly, you'll find general React components that you can use for different use cases. You can customize these to your needs, for example, to show all private images from Amazon S3 in the Storage module:

<S3Album
  level="private"
  path={path}
  filter={(item) => /jpg/i.test(item.path)}
/>

You can control many of the component features via props, as shown earlier, with public or private storage options. There are even capabilities to automatically gather analytics when users interact with certain UI components:

return <S3Album track />;

AWS Amplify favors a convention over configuration style of development, with a global initialization routine or initialization at the category level. The quickest way to get started is with an aws-exports file. However, developers can also use the library independently with existing resources.

For a deep dive into the philosophy and to see a full demo, check out the video from AWS re:Invent.

AWS AppSync

Shortly after the launch of AWS Amplify, we also released AWS AppSync. This is a fully managed GraphQL service that has both offline and real-time capabilities. Although you can use GraphQL in different client programming languages (including native Android and iOS), it's quite popular among React Native developers. This is because the data model fits nicely into a unidirectional data flow and component hierarchy.

AWS AppSync enables you to connect to resources in your own AWS account, meaning you own and control your data. This is done by using data sources, and the service supports Amazon DynamoDB, Amazon Elasticsearch, and AWS Lambda. This lets you combine functionality (such as NoSQL and full-text search) in a single GraphQL API as a schema, mixing and matching data sources. The AppSync service can also provision from a schema, so if you aren't familiar with AWS services, you can write GraphQL SDL, click a button, and you're automatically up and running.

The real-time functionality in AWS AppSync is controlled via GraphQL subscriptions with a well-known, event-based pattern. Because subscriptions in AWS AppSync are controlled on the schema with a GraphQL directive, and a schema can use any data source, this means you can trigger notifications from database operations with Amazon DynamoDB and Amazon Elasticsearch Service, or from other parts of your infrastructure with AWS Lambda.
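
As a sketch, consuming such a subscription from a client might look like the following (the subscription field and its arguments are hypothetical; the real names come from your schema, and client refers to the AWSAppSyncClient instance shown later in this post):

import gql from 'graphql-tag';

// Hypothetical subscription field; actual names depend on your schema.
const onNewComment = gql`
  subscription OnNewComment {
    subscribeToEventComments(eventId: "some-event-id") {
      eventId
      commentId
      content
      createdAt
    }
  }
`;

// Apollo exposes the real-time stream as an observable; AppSync
// handles the underlying handshake and transport.
client.subscribe({query: onNewComment}).subscribe({
  next: ({data}) => console.log('New comment:', data),
  error: err => console.warn(err),
});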

In a way similar to AWS Amplify, you can use enterprise security features on your GraphQL API with AWS AppSync. The service lets you get started quickly with API keys. However, as you move to production, it can transition to using AWS Identity and Access Management (IAM) or OIDC tokens from Amazon Cognito user pools. You can control access at the resolver level with policies on types. You can even use logical checks for fine-grained access control at run time, such as detecting whether a user is the owner of a specific database resource. There are also capabilities around checking group membership for executing resolvers or individual database record access.

To help React Native developers learn more about these technologies, there is a built-in GraphQL sample schema that you can launch on the AWS AppSync console homepage. This sample deploys a GraphQL schema, provisions database tables, and connects queries, mutations, and subscriptions automatically for you. There is also a functioning React Native example for AWS AppSync which leverages this built-in schema (as well as a React example), enabling you to get both your client and cloud components running in minutes.

Getting started is simple when you use the AWSAppSyncClient, which plugs into the Apollo Client. The AWSAppSyncClient handles security and signing for your GraphQL API, offline functionality, and the subscription handshake and negotiation process:

import AWSAppSyncClient from "aws-appsync";
import { Rehydrated } from 'aws-appsync-react';
import { AUTH_TYPE } from "aws-appsync/lib/link/auth-link";

const client = new AWSAppSyncClient({
  url: awsconfig.graphqlEndpoint,
  region: awsconfig.region,
  auth: {type: AUTH_TYPE.API_KEY, apiKey: awsconfig.apiKey}
});

The AppSync console provides a configuration file for download, which contains your GraphQL endpoint, AWS Region, and API key. You can then use the client with React Apollo:

const WithProvider = () => (
  <ApolloProvider client={client}>
    <Rehydrated>
      <App />
    </Rehydrated>
  </ApolloProvider>
);

At this point, you can use standard GraphQL queries:

query ListEvents {
  listEvents {
    items {
      __typename
      id
      name
      where
      when
      description
      comments {
        __typename
        items {
          __typename
          eventId
          commentId
          content
          createdAt
        }
        nextToken
      }
    }
  }
}

The example above shows a query with the sample app schema provisioned by AppSync. It not only showcases interaction with DynamoDB, but also includes pagination of data (including encrypted tokens) and type relations between Events and Comments. Because the app is configured with the AWSAppSyncClient, data is automatically persisted offline and will synchronize when devices reconnect.
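
As a hedged sketch, paging through listEvents with that token via React Apollo might look like this (it assumes the query above is bound to this.props.data and that listEvents accepts a nextToken argument):

// Request the next page of events and append it to what we already have.
this.props.data.fetchMore({
  variables: {nextToken: this.props.data.listEvents.nextToken},
  updateQuery: (previous, {fetchMoreResult}) => ({
    listEvents: {
      ...fetchMoreResult.listEvents,
      items: [
        ...previous.listEvents.items,
        ...fetchMoreResult.listEvents.items,
      ],
    },
  }),
});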

You can see a deep dive of the client technology behind this and a React Native demo in this video.

Feedback

The team behind the libraries is eager to hear how these libraries and services work for you. They also want to hear what else they can do to make React and React Native development with cloud services easier for you. Reach out to the AWS Mobile team on GitHub for AWS Amplify or AWS AppSync.

Tags:
  • engineering

Building <InputAccessoryView> For React Native


Motivation

Three years ago, a GitHub issue was opened to support input accessory view from React Native.

In the ensuing years, there have been countless '+1s', various workarounds, and zero concrete changes to RN on this issue - until today. Starting with iOS, we're exposing an API for accessing the native input accessory view and we are excited to share how we built it.

Background

What exactly is an input accessory view? Reading Apple's developer documentation, we learn that it's a custom view which can be anchored to the top of the system keyboard whenever a receiver becomes the first responder. Anything that inherits from UIResponder can redeclare the .inputAccessoryView property as read-write, and manage a custom view here. The responder infrastructure mounts the view, and keeps it in sync with the system keyboard. Gestures which dismiss the keyboard, like a drag or tap, are applied to the input accessory view at the framework level. This allows us to build content with interactive keyboard dismissal, an integral feature in top-tier messaging apps like iMessage and WhatsApp.

There are two common use cases for anchoring a view to the top of the keyboard. The first is creating a keyboard toolbar, like the Facebook composer background picker.

In this scenario, the keyboard is focused on a text input field, and the input accessory view is used to provide additional keyboard functionality. This functionality is contextual to the type of input field. In a mapping application it could be address suggestions, or in a text editor, it could be rich text formatting tools.


The Objective-C UIResponder that owns the <InputAccessoryView> in this scenario should be clear. The <TextInput> has become first responder, and under the hood this becomes an instance of UITextView or UITextField.

The second common scenario is sticky text inputs:

Here, the text input is actually part of the input accessory view itself. This is commonly used in messaging applications, where a message can be composed while scrolling through a thread of previous messages.


Who owns the <InputAccessoryView> in this example? Can it be the UITextView or UITextField again? The text input is inside the input accessory view; this sounds like a circular dependency. Solving this issue alone is another blog post in itself. Spoilers: the owner is a generic UIView subclass that we manually tell to becomeFirstResponder.

API Design

We now know what an <InputAccessoryView> is, and how we want to use it. The next step is designing an API that makes sense for both use cases, and works well with existing React Native components like <TextInput>.

For keyboard toolbars, there are a few things we want to consider:

  1. We want to be able to hoist any generic React Native view hierarchy into the <InputAccessoryView>.
  2. We want this generic and detached view hierarchy to accept touches and be able to manipulate application state.
  3. We want to link an <InputAccessoryView> to a particular <TextInput>.
  4. We want to be able to share an <InputAccessoryView> across multiple text inputs, without duplicating any code.

We can achieve #1 using a concept similar to React portals. In this design, we portal React Native views to a UIView hierarchy managed by the responder infrastructure. Since React Native views render as UIViews, this is actually quite straightforward - we can just override:

- (void)insertReactSubview:(UIView *)subview atIndex:(NSInteger)atIndex

and pipe all the subviews to a new UIView hierarchy. For #2, we set up a new RCTTouchHandler for the <InputAccessoryView>. State updates are achieved by using regular event callbacks. For #3 and #4, we use the nativeID field to locate the accessory view UIView hierarchy in native code during the creation of a <TextInput> component, and we set the .inputAccessoryView property of the underlying native text input. Doing this effectively links <InputAccessoryView> to <TextInput> in their ObjC implementations.

Supporting sticky text inputs (scenario 2) adds a few more constraints. For this design, the input accessory view has a text input as a child, so linking via nativeID is not an option. Instead, we set the .inputAccessoryView of a generic off-screen UIView to our native <InputAccessoryView> hierarchy. By manually telling this generic UIView to become first responder, the hierarchy is mounted by responder infrastructure. This concept is explained thoroughly in the aforementioned blog post.

Pitfalls

Of course not everything was smooth sailing while building this API. Here are a few pitfalls we encountered, along with how we fixed them.

An initial idea for building this API involved listening to NSNotificationCenter for UIKeyboardWill(Show/Hide/ChangeFrame) events. This pattern is used in some open-sourced libraries, and internally in some parts of the Facebook app. Unfortunately, UIKeyboardDidChangeFrame events were not being called in time to update the <InputAccessoryView> frame on swipes. Also, changes in keyboard height are not captured by these events. This creates a class of bugs that manifest like this:

On iPhone X, the text and emoji keyboards have different heights. Most applications using keyboard events to manipulate text input frames had to fix the above bug. Our solution was to commit to using the .inputAccessoryView property, which means that the responder infrastructure handles frame updates like this.


Another tricky bug we encountered was avoiding the home pill on iPhone X. You may be thinking, “Apple developed safeAreaLayoutGuide for this very reason, this is trivial!”. We were just as naive. The first issue is that the native <InputAccessoryView> implementation has no window to anchor to until the moment it is about to appear. That's alright, we can override -(BOOL)becomeFirstResponder and enforce layout constraints there. Adhering to these constraints bumps the accessory view up, but another bug arises:

The input accessory view successfully avoids the home pill, but now content behind the unsafe area is visible. The solution lies in this radar. I wrapped the native <InputAccessoryView> hierarchy in a container which doesn't conform to the safeAreaLayoutGuide constraints. The native container covers the content in the unsafe area, while the <InputAccessoryView> stays within the safe area boundaries.


Example Usage

Here's an example which builds a keyboard toolbar button to reset <TextInput> state.

class TextInputAccessoryViewExample extends React.Component<{}, *> {
  constructor(props) {
    super(props);
    this.state = {text: 'Placeholder Text'};
  }

  render() {
    const inputAccessoryViewID = 'inputAccessoryView1';
    return (
      <View>
        <TextInput
          style={styles.default}
          inputAccessoryViewID={inputAccessoryViewID}
          onChangeText={text => this.setState({text})}
          value={this.state.text}
        />
        <InputAccessoryView nativeID={inputAccessoryViewID}>
          <View style={{backgroundColor: 'white'}}>
            <Button
              onPress={() =>
                this.setState({text: 'Placeholder Text'})
              }
              title="Reset Text"
            />
          </View>
        </InputAccessoryView>
      </View>
    );
  }
}

Another example for Sticky Text Inputs can be found in the repository.

When will I be able to use this?

The full commit for this feature implementation is here. <InputAccessoryView> will be available in the upcoming v0.55.0 release.

Happy keyboarding :)

Tags:
  • engineering

Built with React Native - The Build.com app


Build.com, headquartered in Chico, California, is one of the largest online retailers for home improvement items. The team has had a strong web-centric business for 18 years and began thinking about a mobile app in 2015. Building separate native Android and iOS apps wasn’t practical due to our small team and limited native experience. Instead, we decided to take a risk on the very new React Native framework. Our initial commit was on August 12, 2015, using React Native v0.8.0! We went live in both app stores on October 15, 2016. Over the last two years, we’ve continued to upgrade and expand the app. We are currently on React Native version 0.53.0.

You can check out the app at https://www.build.com/app.

Features

Our app is full featured and includes everything that you’d expect from an e-commerce app: product listings, search and sorting, the ability to configure complex products, favorites, etc. We accept standard credit card payment methods as well as PayPal, and Apple Pay for our iOS users.

A few standout features you might not expect include:

  1. 3D models available for around 40 products with 90 finishes
  2. Augmented Reality (AR) to allow the user to see how lights and faucets will look in their home at 98% size accuracy. The Build.com React Native App is featured in the Apple App Store for AR Shopping! AR is now available for Android and iOS!
  3. Collaborative project management features that allow people to put together shopping lists for the different phases of their project and collaborate around selection

We’re working on many new and exciting features that will continue to improve our app experience including the next phase of Immersive Shopping with AR.

Our Development Workflow

Build.com allows each dev to choose the tools that best suit them.

  • IDEs include Atom, IntelliJ, VS Code, Sublime, Eclipse, etc.
  • For unit testing, developers are responsible for creating Jest unit tests for any new components, and we’re working to increase the coverage of older parts of the app using jest-coverage-ratchet.
  • We use Jenkins to build out our beta and release candidates. This process works well for us but still requires significant work to create the release notes and other artifacts.
  • Integration testing uses a shared pool of testers who work across desktop, mobile, and web. Our automation engineer is building out our suite of automated integration tests using Java and Appium.
  • Other parts of the workflow include a detailed eslint configuration, custom rules that enforce properties needed for testing, and pre-push hooks that block offending changes.

Libraries Used in the App

The Build.com app relies on a number of common open source libraries including: Redux, Moment, Numeral, Enzyme and a bunch of React Native bridge modules. We also use a number of forked open source libraries; forked either because they were abandoned or because we needed custom features. A quick count shows around 115 JavaScript and native dependencies. We would like to explore tools that remove unused libraries.

We're in the process of adding static typing via TypeScript and looking into optional chaining. These features could help us solve a couple of classes of bugs that we still see (see the sketch after this list):

  • Data that is the wrong type
  • Data that is undefined because an object didn’t contain what we expected
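
As a rough illustration with made-up product data (optional chaining and nullish coalescing were still TC39 proposals at the time):

// Made-up response shape; `pricing` may be missing entirely.
const product = {name: 'Faucet', pricing: undefined};

// Without optional chaining, `product.pricing.salePrice` throws.
// With it, the lookup short-circuits to undefined, and `??`
// supplies a fallback value.
const price = product?.pricing?.salePrice ?? 0;

console.log(price); // 0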

Open Source Contributions

Since we rely so heavily on open source, our team is committed to contributing back to the community. Build.com allows the team to open source libraries that we've built and encourages us to contribute back to the libraries that we use.

We’ve released and maintained a number of React Native libraries:

  • react-native-polyfill
  • react-native-simple-store
  • react-native-contact-picker

We have also contributed to a long list of libraries including: React and React Native, react-native-schemes-manager, react-native-swipeable, react-native-gallery, react-native-view-transformer, and react-native-navigation.

Our Journey

We’ve seen a lot of growth in React Native and the ecosystem in the past couple years. Early on, it seemed that every version of React Native would fix some bugs but introduce several more. For example, Remote JS Debugging was broken on Android for several months. Thankfully, things became much more stable in 2017.

One of our big recurring challenges has been with navigation libraries. For a long time, we were using Expo’s ex-nav library. It worked well for us but it was eventually deprecated. However, we were in heavy feature development at the time so taking time to change out a navigation library wasn’t feasible. That meant we had to fork the library and patch it to support React 16 and the iPhone X. Eventually, we were able to migrate to react-native-navigation and hopefully that will see continued support.

Bridge Modules

Another big challenge has been with bridge modules. When we first started, a lot of critical bridges were missing. One of my teammates wrote react-native-contact-picker because we needed access to the Android contact picker in our app. We’ve also seen a lot of bridges that were broken by changes within React Native. For example, there was a breaking change in React Native 0.40, and when we upgraded our app, I had to submit PRs to fix 3 or 4 libraries that had not yet been updated.

Looking Forward

As React Native continues to grow, our wishlist for the community includes:

  • Stabilize and improve the navigation libraries
  • Maintain support for libraries in the React Native ecosystem
  • Improve the experience for adding native libraries and bridge modules to a project

Companies and individuals in the React Native community have been great about volunteering their time and effort to improve the tools that we all use. If you haven’t gotten involved in open source, I hope you’ll take a look at improving the code or documentation for some of the libraries that you use. There are a lot of articles to help you get started and it may be a lot easier than you think!

Tags:
  • showcase