Brian chats with Kent about building React developer tooling, React performance, and intelligent application performance tuning.

Transcript

Kent C. Dodds: 0:00 Hi everyone. My name's Kent C. Dodds and I am super excited to be joined by my friend, Brian Vaughn. Say hi, Brian.

Brian Vaughn: 0:06 Hi everybody.

Kent: 0:07 Brian and I actually go pretty far back, back in the AngularJS days. I was working on Angular Formly. I don't think it was you. There was somebody who brought a bunch of us together who were doing forms libraries.

0:25 You were working on one. I was working on another, and we had this little effort to make the forms JS. I think that you put the most work into it, and then it just kind of fizzled off. Sorry about that. [laughs]

Brian: 0:41 Happens.

Kent: 0:42 I think it was a hard problem. That was years ago. Then, we both found our way. You went through Google and then over to Facebook. I found my way somehow into React and now here we are. Do you want to give a little intro to yourself like what do you work on, and what are you doing, and then we can chat about React DevTools and stuff?

Brian: 1:08 Yeah, sure thing. Like Kent said, I'm Brian. I'm on the React core team at Facebook. I've been there for almost four years. The core team is pretty small, so we all work on a variety of things, but we also need to have our specializations.

1:25 I would be the person on the team who specializes in developer tooling and our profiling APIs and the various things that go along with performance profiling.

Kent: 1:36 Nice, and you've been working on that for two years now or maybe longer?

Brian: 1:44 Yeah. The DevTools rewrite for our extension and our npm package, and the Profiler API, are both about two and a half years old, so it's been my focus for a couple of years now.

Kent: 1:56 Wow, man. I remember watching that rewrite and getting the...I was an early adopter. I built it locally so that I could use it. It was a great upgrade, so thank you. It's just been improving ever since, so it's been awesome.

2:17 One other thing about the DevTools codebase: there's a pattern that I teach in Epic React for sharing context effectively with helper methods and stuff, a pattern for making it easier to do code splitting and various other things.

2:41 I observed that pattern in the DevTools codebase. Dan convinced me of this idea of something that they do at Facebook, and we don't have to dive into it deeply or anything. The DevTools is where he showed me this pattern, so I've dived into the code there and it's pretty great, so thank you.

Brian: 3:07 Glad to hear it.

Kent: 3:11 One of the interesting things about the DevTools is that the DevTools are probably the best metric for how many users there are of a particular framework. Maybe not a perfect metric, but the best that we have. I looked just yesterday. There are over two million users of the DevTools.

Brian: 3:32 That sounds about right. We've got probably ballpark 150,000 npm users; these are folks that use the standalone DevTools, primarily for React Native.

3:45 We've got Edge and Firefox DevTools, which are both slow-growing communities, but then Chrome, of course, gives us the two million plus, so we're somewhere over two million weekly actives for Chrome.

Kent: 3:56 That's wild. That's so many.

Brian: 4:00 They do.

Kent: 4:02 That's amazing. I don't know how many people who build developer tooling can say that they ship developer tooling to millions of people like, maybe, VS code and those kinds of things. That's a lot of people.

Brian: 4:17 Yeah, it's pretty intimidating, or it was [inaudible] but I remember publishing the first few major releases and it's scary because you...Chrome does give you the ability to roll out gradually, so you can detect the bug and roll back.

4:32 That assumes that you're tracking metrics and whatnot. We're very intentional about not tracking any user metrics, any usage, other than Chrome's own daily active extension numbers, which we're totally decoupled from.

4:45 If something breaks, we find out through GitHub issues. Whenever I publish an update, it's 100 percent out within a couple of hours once the config is switched.

Kent: 4:54 How do you maintain your confidence? Is there a test suite for the DevTools and that sort of thing?

Brian: 4:59 There are pretty good end-to-end integration tests. It's creatively wired up, I would say. Basically, the way DevTools works, it's a React application itself, and it runs in a separate memory space in the browser as an extension.

5:17 You might be using React 15 on your website, and DevTools is built on whatever the latest version on master is: React 16, and soon to be 17. They talk to each other primarily through postMessage.

5:31 For our tests to simulate that, I load two versions of React into the test script. Depending on the test, we run against version 15 or version 16. We'll load the ReactDOM of that version. Then we'll do some Jest magic to reset modules and stuff.

5:49 Then we'll load react-test-renderer to run the DevTools UI. They use the same serialization mechanism to talk to each other. Our tests, if you ever want to poke around in them, I think they're pretty cool.

6:00 You render things into the ReactDOM version of React, and you can dispatch events and do state updates and everything like you normally would in a test. That ReactDOM is talking through the bridge to your DevTools, and then you can assert things about the DevTools' store, which is the thing that holds the state of the tree.

6:19 I wrote a custom Jest snapshot serializer, whatever they call it. Whenever DevTools breaks, you get a nice pretty-printed diff that shows the before and after. It makes test writing and test maintenance super nice, super easy.
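To make that wiring a bit more concrete, here's a minimal sketch of the core trick: Jest's module registry reset lets two copies of React coexist in one test file. This is only an illustration; the real DevTools tests in the React repo are wired up quite differently.

```ts
// Sketch only: demonstrates jest.resetModules() giving each require() call
// its own copy of a module, which is the trick that lets one test hold both
// the "page" version of React and the version the DevTools UI is built on.
describe('two copies of React in one test', () => {
  it('loads independent module instances', () => {
    jest.resetModules();
    const ReactForPage = require('react'); // e.g. the version the app under test uses

    jest.resetModules();
    const ReactForDevTools = require('react'); // e.g. the version the DevTools UI uses

    // The two copies share no internal state, so they can only communicate
    // through an explicit bridge, just like the real extension does.
    expect(ReactForPage).not.toBe(ReactForDevTools);
  });
});
```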

6:34 That was a fun challenge to figure out how to get everything wired up, but it's pretty valuable anytime you find a bug. Certain types of bugs are obviously harder to capture if they're user-interaction type bugs, but those have been few and far between. I should say, most of the integration tests give us pretty good coverage.

6:56 Then, we just do some basic smoke testing on a couple of sites like the React.js docs and Facebook, obviously, just to sanity-check stuff, but it's a surprisingly minimal amount of testing considering the...I might smoke test a release for five minutes before I publish it.

7:14 We do another thing that helps a little bit with confidence too though, which is I have an internal group at Facebook that folks can join. Then, we have an experiment running for everyone who's a member of this group that uses Chef to push out pre-built installations of DevTools, so if I'm trying a major new feature like the Suspense toggle, or new Profiler UI, or whatever.

7:33 I roll it out to a group of...It's probably a couple hundred Facebook engineers. I don't even know the size of the group at this point. [inaudible] for a week and if something breaks, they tell me and otherwise, I assume that it's pretty good so.

Kent: 7:45 That sounds pretty solid. It's not like you're going to break a Checkout button or something, but it sounds like a pretty solid way to feel pretty good about releasing stuff. Good. I've never noticed any enormous breaking issues.

Brian: 8:03 I've had very few. We've been fortunate. There's been maybe one in the past year, and I was notified pretty quickly and rolled out an update pretty quickly. Overall, it's knock on wood.

Kent: 8:14 It's cool. With the DevTools, there are the two tabs. You've got the one for the components, which is just really awesome, the stuff that you can do in there. If there's anything you want to call out specifically in there, then we can totally chat about that stuff.

8:33 In Epic React, I give an exploration of "Here's the DevTools. Here's what you want to know about them." I'd love to talk more on the profiler side of the DevTools, and not just the React DevTools but also the Performance profiler of the Chrome DevTools and what you know about that, what you've learned in building the profiler and stuff.

8:54 A good lead-in question to get us talking about this is: if the Chrome DevTools and Firefox DevTools already have their performance profilers, why is it necessary to have a profiler in the React DevTools?

Brian: 9:13 There's a bunch of answers to this. One of the answers that's maybe on the surface is that the React application tree is separate. It's often closely related to the DOM tree, but it's not always the same, especially when you think of things like portals.

9:34 You can have your React application tree with all your components, and then a component that's far down in the tree might render to a totally different root in the DOM, so they're decoupled, for one thing.

9:45 Also, the properties and attributes are different as well. You might pass in a property like className to your React component. That might get attached to the DOM as the class attribute.

10:01 Sometimes there's an obvious mapping, but sometimes they're totally different. Just like the browsers have their own Elements panel, which gives you a nice style inspector that does some fancy formatting and checking for CSS properties and things like that, the React DevTools components tree gives you nicely formatted, interactive breakdowns of your props, your state, your hooks, and context, obviously, if you're using that as well.

10:28 Just having them formatted there in a way that is smarter than they would otherwise be if you viewed them as properties in the DOM helps.

10:40 Another interesting answer, as we move forward with a bunch of new APIs the React team has been exploring for pretty much most of the time I've been on the team, which is concurrent mode and the whole bucket of things that go with it like Suspense and other APIs we're working on, is that the built-in browser profiling tools...

11:01 There's a gap between what React can do and the interactions the browsers are typically meant to profile. An example of the browsers adapting in a way that's cool is when they added the async call stack to the debugger.

11:22 It used to be that if you set a timeout, or you have an event handler, or something comes back, and you hit a breakpoint, you could see, "Oh, this is a setTimeout(). I don't know what it's in reference to."

11:34 The browsers added support to backtrace to the stack from many frames ago, when the native API was called. You can step backwards and see, "Oh, this was the timeout or the event listener I added here."

11:47 There's a similar gap right now that hasn't been filled by the browsers. I think they'll do more in the future, but for libraries like React, there are a couple of things that make it challenging to profile using the built-in profiling tools.

12:02 One of them is the concurrent mode APIs that I've mentioned. People probably heard explanations of concurrent mode in a bunch of places. Essentially, JavaScript in the browser is single-threaded, which simplifies a lot of things when you're programming, but it can also cause a lot of problems.

12:22 If I have a script that parses a huge file and does something with it, the web page can feel completely locked up while my script is running. Other scripts can't run. Event handlers can't run. With the new concurrent mode APIs, what will React do to help with this?

12:41 One thing it'll do is yield to the browser every couple of milliseconds, and then it'll be like, "Hey, do you have anything to do? If you do, go for it. If you don't, hand back to me, and I'll work some more." This keeps the web page feeling interactive and responsive.
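As an illustration of that yielding idea (a sketch of cooperative time slicing, not React's actual scheduler):

```ts
// Illustrative sketch of time-sliced work, not React's real implementation.
// Do small units of work, yielding back to the browser every ~5ms so that
// input events, other scripts, and painting can happen in between.
async function performWork(units: Array<() => void>): Promise<void> {
  let deadline = performance.now() + 5;
  for (const unit of units) {
    unit(); // one small chunk of work
    if (performance.now() >= deadline) {
      // Yield: schedule a macrotask and let the browser do whatever it needs.
      await new Promise<void>(resolve => setTimeout(resolve, 0));
      deadline = performance.now() + 5;
    }
  }
}
```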

12:54 Another thing that it does is it has this baked in concept of priorities. They're like threads. They're not truly like threads, but it's a way that React can prioritize different categories of work.

13:08 Let's say that I'm rendering a page like a tab container. I want to render the tab that's currently selected at the highest priority, but let's say I start rendering the tree, and then I reach a component that uses the Suspense API to say, "I need some data before I can do anything else." It throws a promise.

13:25 I could wait on that IO to send a network request and come back, and then resume my rendering, but that's potentially wasting some amount of CPU cycles rendering nothing. Instead, React concurrent mode has this built-in sense of other scheduled work that's maybe at a lower priority.

13:43 Maybe I've told it to pre-render the other tabs so that they're going to be faster to display when the user clicks them. While I'm waiting on IO for the higher priority work, I can start working on this idle priority stuff in the background, and then yield as soon as I'm ready to come back to the higher priority stuff.
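For a rough picture of how that tab example looks in application code, here's a sketch using Suspense plus the transition API that later shipped in stable React (startTransition); TabContent is a made-up, data-fetching component that may suspend:

```tsx
import React, { Suspense, startTransition, useState } from 'react';

// Hypothetical data-fetching component that may throw a promise (suspend).
declare function TabContent(props: { tab: string }): JSX.Element;

function Tabs({ tabs }: { tabs: string[] }) {
  const [selected, setSelected] = useState(tabs[0]);

  return (
    <>
      {tabs.map(tab => (
        <button
          key={tab}
          onClick={() => {
            // Mark the tab switch as a lower-priority transition: React can
            // prepare the next tab in the background, keep the current UI
            // interactive, and set the work aside if something urgent arrives.
            startTransition(() => setSelected(tab));
          }}
        >
          {tab}
        </button>
      ))}
      <Suspense fallback={<p>Loading…</p>}>
        <TabContent tab={selected} />
      </Suspense>
    </>
  );
}
```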

14:01 Similarly, if I'm rendering something, let's say you click a few things. Maybe I'll start rendering something, and it's not done. I'm not blocking on IO, though. Then the user clicks something, and it's a higher-priority interaction.

14:16 Then, React can do a couple of things. It can throw away what it was working on and take the higher priority thing, or it could just set what it was working on aside and process the higher priority update, and then go back.

14:26 There's a lot of interesting cooperative scheduling stuff that it does that if you were using Chrome's built-in performance tab to debug, it would look like it's all React. It's all JavaScript.

14:40 You don't have insight into this because the performance profiler, unlike the debugger...The debugger, when you hit a breakpoint, shows you all the variables in scope, and you can inspect things and interact in the console.

14:52 The performance panel doesn't do that. It just shows you call stacks, how long each function took to execute, and at what time it executed. Then, you can see things like, "Oh, there was a click, or a keyboard event, or a network event." That's it. You don't have the state of your application at that time, so it's hard to reason about.

15:10 We've been working on a lot of tooling over the past couple of years. While we have been rolling out and testing these new APIs inside of Facebook, we've noticed the gaps where product developers would come to us and say, "This part of the page is taking a long time to render. I don't know why."

15:24 We'll sit down and look at it with them, and then we'll say, "Here is why, but I only know this because I know the code well that's in this part of React. There's no way you could've known this." Then, we work backwards. What kind of tool could we have built to make that easier?

15:38 One of the first things we built -- it's been a while now -- was this profiler you mentioned. The profiler shows you snapshots of where React committed work to the DOM, for instance.

15:50 React does work in two phases. It has a render phase, which is when it calls render on your class components, or when it runs your function components, and you do your work and return the child components, and so forth.

16:05 That work is what the new concurrent mode will do in that yielding, time-sliced way. The reason is that work is generally the longest-running work; it takes the longest to process. Also, if an error is thrown, or an interruption comes in at a higher priority, that work can be set aside or thrown away entirely with very little cost.

16:27 Then, while React is doing this rendering, it's building up a small set of instructions it'll need to apply to the DOM to change it based on your rendering. When it's done, we enter in what's called the commit phase, which is when we actually mutate the DOM.

16:45 It's important that we don't mutate the DOM during render because, if we have an error, the DOM is going to be in a broken, half-mutated state. There are these two phases. The commit phase is when we actually change the DOM or the native view.

16:58 The profiler shows you each of these commit phases. It shows when it was committed, which components actually resulted in changes that were committed, and then it lets you drill down more into the state, which is something that the built-in browser profilers don't do.

17:14 It'll say, "Oh, this component rendered and committed because this prop changed, or because its state or its hooks changed." It gives you a little bit more insight into that. It lets you know things like, "This commit took a long time because this part of the tree took a long time to render."

17:33 Then you can drill in and say, "Why? Was something slow? Should it have been memoized? Did it even need to render at all?" Maybe you could use the shouldComponentUpdate-type APIs to prevent the re-render. It's an additional set of tools that go along with the browser profiler for diagnosing that stuff.
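For reference, the memoization he's alluding to looks roughly like this (ExpensiveList and its props are made up):

```tsx
import React, { memo } from 'react';

// Hypothetical component: React.memo skips re-rendering when props are
// shallowly equal, the function-component analogue of shouldComponentUpdate.
const ExpensiveList = memo(function ExpensiveList({ items }: { items: string[] }) {
  return (
    <ul>
      {items.map(item => (
        <li key={item}>{item}</li>
      ))}
    </ul>
  );
});

export default ExpensiveList;
```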

17:53 I've been talking for what feels like half an hour.

17:56 [laughter]

17:56 [crosstalk]

Brian: 17:56 and you jump in.

17:57 [laughter]

Kent: 17:57 That was awesome. I didn't expect to get a conversation about concurrent mode and the impact. That makes total sense. If React is doing some work, yielding to the browser, doing some more work, I can imagine that the flame graph of the DevTools, like the Chrome DevTools, would just be this choppy thing of React doing stuff.

18:20 The React DevTools can kind of give a more cohesive "here's the thing that React is working on" in between all of those things.

Brian: 18:29 The flame graph is another thing. It used to be, with React 15, that rendering was recursive. We also used the User Timing APIs to mark a flame graph above, in the User Timing section of the profiler, such that if you were profiling and you saw a really wide function call on the call stack, you could reason about where you were in the React tree based on the call stack above you.

18:58 With React 16-plus, in order to support these sorts of concurrent mode APIs, we don't use recursion anymore; we iteratively walk through the tree. We're only ever rendering one component deep at a time. When we're done with it, we reset the current fiber that we're working on, and we work on another thing.

19:17 What this means for the flame graph in your profiler is that it's very short. It's a couple of React functions or ReactDOM functions, and then it's one of your functions, and then it steps back up. Then it's one of your functions.

19:29 When you're using it, if you spend a lot of time and zoom in, you can find your component calls. You can see this one took this amount of time but you have to spend a lot of time to drill in far enough to see it. Once you drill in, you don't have that context.

19:47 The profiler was optimized to give you all of that at a glance. We use texture, like solid color versus diagonal stripes, so you can see at a glance that this didn't render at all versus this did render. As a product developer, whether or not ReactDOM is efficient is something you can't control.

20:11 You could contribute to the GitHub repo, but when you're optimizing your app, that's a couple of steps removed. What the profiler does is focus only on your components. The times it reports, for example, are all time spent in your components and your functions, because those are the things you can immediately affect.

20:28 All of those views were written to emphasize that and de-emphasize what's in the native browser's Performance tab, where the majority of the call stack is our ReactDOM code, which you don't know, you're not familiar with, and you can't change.

Kent: 20:45 Sure. One thing regarding performance that I talk a lot about in Epic React is that the secret to improving performance is running less code. Getting a focus on "here are the things that your code is doing" helps us avoid the noise of what the Chrome Developer Tools give us.

21:13 Another thing that I wanted to talk about a little bit is looking at the actual numbers like the amount of time especially in the React dev tools, and thinking, "Well, this is taking 20 milliseconds, that's greater than 16, so this is slow."

21:33 That's important except one thing that I want to just talk about a little bit is the difference between production mode and development mode. My mind is in a jumble. During development mode, you can't really look at those times as what the user is going to experience.

21:58 If you do want to use production mode, which is what the user will experience, then you can't use the DevTools without flipping on a little switch. Can you talk about why that's important? What's the difference between development, production, and profiling? When should those millisecond numbers actually count for people?

Brian: 22:21 There are three builds of ReactDOM, and of React. We have a development build, a production build, and a profiling build. The development build is the thing that you're generally running locally when you're writing your code.

22:35 It does a lot of additional things for you that try to point out problems that would eventually be pushed to production if you didn't fix them. What this means is we have a lot of code in React itself that we have dev-mode conditional gates wrapped around.

22:55 Then we do things that strip that code out in production. Among the things we do, we'll do a lot of extra validation of things that you're passing into functions. We will, in some contexts, call render or call a lifecycle method twice in a row to tease out side effects.

23:12 Among the additional things we do, we'll check return types, for instance, from a render function. If it's undefined, we'll warn, because maybe that means you forgot a return statement, or maybe it means you're returning something you thought was a component, but your import was wrong.
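The general shape of those dev-only gates, as a generic sketch rather than React's actual source, is a check that bundlers and minifiers can strip from production builds:

```ts
// Generic sketch of a dev-only validation. Bundlers replace
// process.env.NODE_ENV with a literal, so minifiers can remove the whole
// block from production bundles; only development users pay for the check.
function warnOnUndefinedRender(result: unknown, componentName: string): void {
  if (process.env.NODE_ENV !== 'production') {
    if (result === undefined) {
      console.error(
        `${componentName} returned undefined. Did you forget a return ` +
          'statement, or import the wrong component?'
      );
    }
  }
}
```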

23:26 There are a lot of things we check for because, over the years, bugs have been reported to us and we've said, "This is a bug we could have warned about." Dev mode is very heavy. It's fine if you're running a fancy new developer laptop.

23:40 It's plenty fast for you, but it's definitely not something you would want to deploy to your end users, because they're not going to get any value; it's strictly going to waste their battery. That's dev mode. You can do some profiling in dev mode if you want to, for example, just to check whether something is re-rendering when it shouldn't; that's going to be the same.

24:02 You shouldn't pay much attention to the millisecond values, because dev mode is definitely slower. It depends on the type of application; we can't say it's exactly 2X slower, but it's probably on that order of magnitude. It's much slower.

Kent: 24:20 I think those milliseconds can be useful if you think about them relative to each other. Within here, this is relatively twice as long as this and...

Brian: 24:29 Yeah, I want to circle back to that relative thing, because that's super relevant. The production bundle is the optimized one: we strip out all of those dev-only checks, we minify the code, we run it through Google's Closure Compiler to do a bunch of additional inlining and stuff to make it as small and as fast as we can.

24:49 This assumes you've fixed all those potential bugs we warned you about. In production mode, now we just want to make it run as fast as possible for the end users. Then this profiling bundle is a relatively new addition; it's probably two years old at this point, but it's [inaudible] .

Kent: 25:11 It is to me too, but it's been around for a while.

Brian: 25:15 It's the production bundle with everything stripped out, then a tiny bit of additional stuff added on top, which are the APIs that we need to support the profiler. The Profiler API is this thing you can wrap around parts of your application tree, and React will tell you each time it commits. We'll call a callback.

25:36 You give it the callback, and we'll say, "Here are some numbers about how long this subsection of your app took to render." You can put these all throughout your app if you want, one at the top and some inside, and you can record these to your server if you want. In the production bundle, this Profiler API does nothing.

25:51 It just passes through, like the Fragment API would; it passes through its children. In the dev and profiling bundles, it collects times and reports them. That's the difference. Circling back to the DevTools, you can also use the DevTools profiler with the dev and profiling bundles to do all the stuff we talked about earlier. In production, the profiler is disabled.
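Concretely, that wrapper is the React.Profiler component with an onRender callback; a minimal usage looks like this (Sidebar and the logging are placeholders):

```tsx
import React, { Profiler } from 'react';

// Placeholder component for illustration.
declare function Sidebar(): JSX.Element;

// Called after each commit of the wrapped subtree (dev and profiling builds
// only; in the plain production build the Profiler just renders its children).
function onRender(
  id: string,              // the "id" prop of the Profiler that committed
  phase: string,           // 'mount' or 'update'
  actualDuration: number,  // time spent rendering this commit
  baseDuration: number,    // estimated time to render the subtree from scratch
  startTime: number,
  commitTime: number
) {
  console.log({ id, phase, actualDuration, baseDuration, startTime, commitTime });
}

function App() {
  return (
    <Profiler id="Sidebar" onRender={onRender}>
      <Sidebar />
    </Profiler>
  );
}
```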

26:14 You mentioned something about milliseconds and how it's all relative. I wanted to call that out because it definitely is relative, in a couple of ways. People that are developers are probably running pretty fast hardware.

26:33 It's important that you don't look at the profiler and think, "Well, this is faster than 16 milliseconds, so I'm good." You are probably good, but maybe someone that's running on a mobile phone or a slower laptop isn't.

26:49 That's why, in the React profiler UI, we don't show...People have in the past asked us to add indications of good and bad or safe and unsafe, and we can't, because we don't know enough to make that call. Instead, all of the colorings and widths and everything are on a gradient that's relative to the rest of your app.

27:15 We can say the orange part of the tree here is the slowest part and the blue part is the fastest part, but maybe they're both fast enough, or maybe even the fastest part is too slow. We rely on you to make that judgment with the additional context that you have.

27:29 There is something that we're working on inside of Facebook that's pretty cool. I think it's probably many months out. This is probably a 2021 thing; I should just put it that way and leave it be. The browser has additional metrics that aren't exposed to JavaScript yet.

27:51 We have some builds internally that we've played with that expose one of these metrics, which is the instruction count. When JavaScript does things, what the browser does ultimately boils down to instructions.

28:02 The browser can track the number of these instructions between periods of time. The costs of these instructions differ a little bit depending on what you're doing. It is possible for you to assign a rough weight to an average instruction.

28:18 We have some builds of Chromium internally that provide an API for this: just like you can call performance.now(), you can call something like performance.instructionCount() and get the current number of instructions. This allows the profiler to take a delta between two points in time. I have a build of DevTools internally which uses this instruction-count API instead of time.
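The mechanics are the same shape as ordinary timing code; here's a sketch, with the caveat that the instruction-count property name is a guess, since that API is internal and experimental:

```ts
// performance.now() is real; the instruction-count counterpart described here
// is an internal, experimental Chromium API, so this property name is made up.
const perf = performance as Performance & { instructionCount?: () => number };

function measure<T>(work: () => T) {
  const t0 = perf.now();
  const i0 = perf.instructionCount?.();
  const result = work();
  const i1 = perf.instructionCount?.();
  return {
    result,
    elapsedMs: perf.now() - t0,
    // Instruction deltas vary less across runs and machines than wall-clock
    // time, which is what makes them attractive for regression tests.
    instructions: i0 !== undefined && i1 !== undefined ? i1 - i0 : undefined,
  };
}
```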

28:40 We have more work to do. We have some folks working on it that aren't on the React team, but are on our browser speed teams internally. There's some more work to do to remove noise. A big problem when you're profiling is you want to make repeated runs as similar as possible and that's really hard to do.

29:01 Removing a bunch of noise and getting these figures more reliable. The idea is sort of twofold. One, once we've done this, we think that we can have, with pretty high confidence, automated tests that run and use the React profiler to point out regressions, in either React itself or in application code, when something changes.

29:21 We can be like, "Oh, the number of instructions jumped up by 20 percent." This is a pretty solid signal that you've got a regression, and maybe that's fine. Maybe you added a huge new section of UI, maybe you didn't.

29:33 A neat thing in the context of the React profiler, though, is that, assuming we're able to land this in Chrome proper, which is the plan eventually, this would allow us to use this API instead. Right now, the profiler uses performance.now() or falls back to Date.now().

29:53 We could have it use this new profiling API if it was available. We could say, given that the average cost of an instruction is such-and-such, here's what the profile that you just recorded looks like on a low-end phone, or here's what it looks like on a high-end laptop or a medium laptop, and actually convert those instruction counts to millisecond values.

30:20 At which point we could definitively say this part of the application is bad and you need to fix it, or this part is good. We'd have high confidence that it's good, even on low-end hardware, because we know the number of instructions it executes.

30:31 This is something we'll probably be talking more about next year, but I'm really excited about it. That would allow us to do a lot more in the profiler than just say, "Here's a bunch of information, make sense of it yourself based on what I showed you."

Kent: 30:45 That's very interesting. Do you think that you would even be able to make that relative so that it would be useful even during development mode so you could say, "Because you're in development mode, we expect it to have more instructions," or something?

Brian: 30:59 It's hard to say exactly, because dev mode is still going to be generating a lot more instructions already from the things it's doing. The short answer is that we probably would still want you to use the profiling bundle. There are some parts of dev mode where, for instance, we call your functions twice to tease out side effects.

31:23 Sometimes we're able to make the second call, or the first call, outside of our profiling scope, but in some places, between where we start and stop profiling, there will be some redundant calls inside that range.

31:37 Dev mode is probably still not going to be great to use for anything other than spotting unexpected cascading updates or unexpected problems with memoization, things like that. If you want to focus on time, you'd still want to use the profiling bundle.

Kent: 31:55 Cool. Just as we're wrapping up here, I want to share my recommended approach for people who want to identify performance bottlenecks, and I'd love to hear your opinion on it.

32:11 What I tell people is, "Don't look at the time. Don't benchmark or anything in development mode. Build it in production mode. It's better if you actually do the profiling in production, exactly how your user would experience it." For that to work, you have to have the profiler enabled so that you can actually pull up the profiler.
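One way to ship the profiler-enabled build to production with webpack is to alias in React's profiling entry points; here's a sketch (these module paths are what React 16/17 ship, so double-check your React version and bundler):

```ts
// webpack.config.ts (sketch): serve the profiling build of ReactDOM in an
// otherwise-production bundle so the DevTools Profiler works in production.
import type { Configuration } from 'webpack';

const config: Configuration = {
  mode: 'production',
  resolve: {
    alias: {
      'react-dom$': 'react-dom/profiling',
      'scheduler/tracing': 'scheduler/tracing-profiling',
    },
  },
};

export default config;
```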

32:41 Then, in addition to that, if you can get your hands on a device that's the lowest common denominator device of your users and do the profiling there, then that is perfect. Get the device that your users are using, hit production on it, and profile there with the profiling DevTools enabled.

33:02 If you can't do that, you can simulate it with the DevTools, but always make sure you're throttling so you can see what the real experience is like. Do you have any other additional ideas around that?

Brian: 33:14 Yeah, that's all good advice. Also, depending on what your budget is and what your hardware is, testing on different networks can be useful.

33:25 Maybe don't test it on your corporate Internet, because your connection to your APIs is probably going to be fast, but maybe your office has an external cable modem you can wire up to, or you can go sit in a Starbucks somewhere, post-COVID, and try it out. That's definitely an important consideration.

33:46 Testing in different browsers. That's another consideration there, especially if you're getting into the nitty-gritty of fine-tuning optimizations. There's still a couple of different browser engines left, so you want to consider them all because they each optimize things differently, but that's good general advice.

Kent: 34:01 One other thing that I wanted to ask you about is how Facebook does performance monitoring. Because I know that you use the profiler, but I pulled up the DevTools and I don't think that I've ever gotten the bundle that has the profiler enabled. I asked you about this once and you mentioned how that works at Facebook.

Brian: 34:20 That's a good thing. I should have mentioned that earlier. It's true that you can only use the profiling APIs in the profiling bundle. That bundle is much faster than dev, but it is still strictly slower than the production bundle because we're doing some additional things.

34:36 We're doing very minimal additional things, but we are calling performance.now() and storing some values and doing some addition and some minor things. If you're not using that information, it's not a good idea to have your end users paying that cost for nothing.

34:52 What we do at Facebook is we have a framework from which we can run experiments on a server. Then the different JavaScript bundles we serve down to someone coming to Facebook will depend on the experiments they're in.

35:07 This lets us do all sorts of cool things in terms of A/B tests and things like that, but it also lets us pick at random a small percentage of our users, which is still a big bucket of users, that get the profiling version of React.

35:22 For those users, we'll log the time it takes to render certain parts of the page. Basically, 99 percent of users will get the faster production bundle, and then some small percentage of users will get the profiling bundle. From that information, we can still see regressions or improvements over time with the variations and stuff.

Kent: 35:45 Do you have an internal tool for graphing out this time series data or do you...?

Brian: 35:50 Yeah, lots of tools for that.

Kent: 35:56 Very cool. The React Profiler, I can never remember if it's the Profiler or Profile component, but that component, [laughs] where do you typically see people getting value out of it as far as positioning? Where do you typically put those?

36:14 Of course, you're not going to have just only one for the whole app. You want it to be scoped to somewhere. Maybe actually you can speak to why you might want to scope it.

Brian: 36:24 I don't know if there is a definite recipe for where you should put it, but I will say that one place you could put it would be if you're using a router and maybe you want to wrap your pages. If you have an app that has a chat pop up window or something, maybe you'd wrap that small part of the app.

36:46 The thing that you would want to use to make this decision is the fact that...There are two profilers. There's the DevTools extension profiler in your browser, which you run on your app, but only your app. That profiler gives you fine-grained information about every component in your React app.

37:02 Then there's the profiling API, the React Profiler component, that you put into your source code, which gives you summary information for everything and anything it wraps. If you were to use only one profiler in your whole app and put it around the top of the application, that would tell you every time the DOM was mutated, how long it took to render, and how long it took to do the mutation.

37:26 If you saw a regression at a certain point in time and you thought, "I wonder what got slower," then "something in your app" is all that we could tell you.

Kent: 37:36 [laughs]

Brian: 37:37 When you're hunting down this sort of stuff, it might be good as a sanity check to put one at the root of your app just to check for an overall regression. It might be slightly better to put one in your application routing code around each route view. This way, you can see like, "The profile page had a regression but everything else is probably fine."

38:02 It is a bad idea to wrap every single component in a profiler, even though it would give you the most information, so the sweet spot is somewhere in between. It's probably closer to the side of putting something around each route.

38:13 Then maybe if you have something special in your page, like a chat or an interactive map component, or whatever it is, you could also consider wrapping that too to give you a little more fine-grained signal. But we leave that up to folks to decide based on the characteristics of their app.

Kent: 38:32 That makes a lot of sense. One thing that I show in Epic React is we wrap the whole app, but we only report on the mount phase. We just ignore updates and say, "This is how long it took for the user to see anything for your app."
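A sketch of that mount-only reporting (App and reportTiming are placeholders):

```tsx
import React, { Profiler } from 'react';

// Placeholders for illustration.
declare function App(): JSX.Element;
declare function reportTiming(name: string, ms: number): void;

function Root() {
  return (
    <Profiler
      id="App"
      onRender={(id, phase, actualDuration) => {
        // Only report the initial mount; ignore update commits.
        if (phase === 'mount') {
          reportTiming(`${id}-mount`, actualDuration);
        }
      }}
    >
      <App />
    </Profiler>
  );
}
```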

38:50 I don't think that it would be a good idea to just wrap the whole app and report on every single one of those. I like the suggestion that you made. That was good. Great. We've been chatting for a while. Is there anything else that we didn't talk about that you want to bring up before we wrap up?

Brian: 39:08 Yeah. I'll say that we're working on a new profiler that right now is a standalone app that has been deployed to [inaudible] . If I can give the one-minute pitch of this app, I'll do that.

Kent: 39:20 Yeah.

Brian: 39:21 Just like there were some gaps I mentioned in the built-in profiling tools, there are still some gaps when you're digging in with the React profiling tool. If you say, "This page took a long time, this transition took a long time to render. Let's look at the profiler."

39:37 Then we see, "There's nothing particularly slow in here. I wonder why it took so long," and you're still missing the context in between. We're working on a new profiler we're calling the scheduling profiler. It will eventually be integrated inside of the DevTools extension, but right now it's a standalone app.

39:55 The way you use it is you record in Chrome's Performance tab; you just use the built-in Performance tab to record, assuming you're running a dev or profiling bundle. Then you do some things, and when you're done, you stop, you export that JSON, and then you import it into this standalone app.

40:13 What it does is it renders a flame graph over time that looks very similar to the built-in browser's Performance profile. Except above that flame graph are going to be some boxes and dots that show when work was scheduled with React, when state updates were done, when components suspended, and then the flame graph for React components that were rendering, and when React committed and stuff.

40:39 It shows all of this by priority, these threads that we were talking about earlier. This is cool because using this, you can see, for instance, "This commit took a long time because a bunch of things suspended and I had to restart," or, "because something suspended and there was a gap," maybe I was working on idle stuff but there was a gap while it was just blocked by IO, "and the browser did nothing."

41:04 Or, "This took a long time because there was a huge chunk of JavaScript that was slow that was outside of React." Maybe I have an event handler somewhere that's totally unrelated to React. This gives you a much nicer high-level view.

41:18 This thing doesn't show you props or state. There's some missing context in this, too. That's why I think it will eventually be nice to have it integrated and paired with the other one, but this is a really exciting tool.

41:30 We've already been using it a little bit inside of Facebook to find some cascading update type bugs, but it's nice because it shows you and lets you drill down on exactly what happens. If anyone's interested in seeing this, I've been posting some videos and pictures of it on my Twitter handle.

41:50 Also some links so you can play with the actual live profiler. I posted a link to a sample JSON profile and the profiler itself so you can play with it. This is one of the things I'm excited about. We technically already released an early version of it, but hopefully, we'll do a release of the DevTools extension within the next couple of months that has it integrated more properly.

42:10 At that time, you wouldn't need to start and stop the browser performance recording yourself; React DevTools could do that. Right now it's a little manual, but it's a super valuable source of information too.

Kent: 42:22 That's very cool. I had no idea. I guess I missed your tweets about it. I'm glad that you brought that up. That's awesome. There's one other thing that I wanted to ask your opinion on, and that is, how fast is React? Should everybody have to optimize every component in their application or is it more surgical than that?

Brian: 42:43 This is like a trick question at the end. React is pretty fast. I will say that React wasn't optimized and created with low-end devices in mind, so truly low-end embedded applications and things like that. There are some libraries like Preact that have a similar API and would probably be better choices if you're targeting low-end...

Kent: 43:09 Is that mostly for memory consumption, just the size of the library?

Brian: 43:14 Can be both. Can be memory and can also be...it's probably mostly memory. That being said, React has a lot of optimizations built-in. Stepping away from React itself, generally, the industry-standard advice for this thing is, avoid prematurely optimizing stuff.

43:54 I personally don't get caught up on avoiding every unnecessary render, because sometimes you might do more work trying to memoize a thing than you would just letting it render once in a while.

43:54 If a component renders and the children that it returns are the same or even slightly different, there are a lot of scenarios in which a component can render where React will itself know to bail out because no DOM attributes changed or nothing was added or removed from the child list ultimately.

44:15 There are a lot of optimizations for that sort of redundant work built in. It's probably a good idea to use the profiling tools. If you see something slow, then you definitely want to say, "Can I make this faster? Can I avoid doing it entirely on an update?"

"44:34 Can I make it faster" because you have to do it at least once, like your mount scenario earlier. We want to check that. If you can make it faster, you're going to make the time to interactive better for your users, which is strictly a win. If you can't make it faster, can you at least avoid doing it unnecessarily again? I don't think there's a hard-and-fast rule for it.

Kent: 44:57 That's great, and it's validating. That's the approach that I suggest to people: "Fix the slow render before you fix the re-render." Maybe you can fix both, but do that in order, and make sure you measure before and after so that your optimizations didn't make it slower.

Brian: 45:16 I will maybe point out, this might be getting too into the weeds, but the profiler API gives you two times. [laughs] You're probably like, "Shut up. Don't talk about that." It gives you actual duration and tree duration. These times are slightly different, and they're explained in the documentation, so I'd suggest folks just go check them out.

45:35 Basically, the tree duration is the best guess estimate at what it would take to render the entire tree as it is right now if it was the first render. The actual duration is how much time it actually took to render the whole tree or a part of the tree in this commit.

45:55 If you render a list of 10 things and each takes, I don't know, 100 milliseconds, maybe you have a second's worth of actual duration. Then, if you re-render and only two of them render and the rest of them bail out, your actual duration might be 200 milliseconds, but your tree duration would still be 10 times that, or one second, whatever.
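Worked out with those hypothetical numbers:

```ts
// Hypothetical list of 10 items at ~100ms each, as in the example above.
// Initial mount (everything renders):
//   actualDuration ≈ 10 * 100 = 1000ms, baseDuration ≈ 1000ms
// Update where 2 items re-render and 8 bail out (e.g. memoized):
//   actualDuration ≈ 2 * 100 = 200ms, baseDuration still ≈ 1000ms
// A big gap between the two on updates suggests memoization is paying off;
// near-equal values suggest little or nothing is bailing out.
```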

46:16 These numbers let you tease things apart if you're getting to the level where you're trying to think, "Which should I optimize? Am I trying to avoid re-rendering, or am I trying to make the initial mount faster?" Those figures give you slightly different information; depending on which thing you're looking to optimize, you can use those numbers to decide where to drill in.

46:37 This is the profiling API and the profiling extension in DevTools. Both will give you these separate figures. I'd say, in most cases, people are probably just interested in the actual duration one. That's the one that's reported first. You'll have that second bit of information in there if you're really diving into the weeds.

Kent: 46:56 That actually makes so much sense. You have the actual duration, the other one's called base duration. Is that right?

Brian: 47:02 Yeah, tree/base duration. [laughs] Base duration, yeah.

Kent: 47:07 I'm glad that you mentioned that because, until now, I wasn't sure I understood that. I knew that it helps you know whether there was an opportunity to memoize stuff, or whether memoization might be able to help, but now it makes sense.

Brian: 47:24 You can infer that based on the delta between the two. If they're always the same, then you have no memoization, unless it's a mount. There are a lot of things you can tease out. I feel like, to actually fully understand it, you'd probably have to read our RFC, because it's an unusual type of number, the way that we do it, but it could be a useful source of information if you're...

Kent: 47:48 Really in the weeds.

Brian: 47:49 Yeah.

Kent: 47:51 This is a super interesting conversation, Brian. Thank you so much for giving me some of your time. Continue to enjoy the wonderful work you're doing on the DevTools. If there's something that people want to ask you, or if they want to just reach out to you and follow up to keep up on the stuff you're working on, where should they go?

Brian: 48:14 I'm on Twitter, I'm pretty active. My Twitter handle is brian, B-R-I-A-N, _d_vaughn, V-A-U-G-H-N. I get a lot of React questions there. You're welcome to ask them. The only thing I ask is, don't send them as DMs. Send them as tweets. That way other people can find them too. It scales better.

48:36 If I get the same question a bunch, I can just link rather than having to copy-paste. That's good. GitHub is another obvious one. Normally, I check React issues on GitHub every day. I haven't checked it in the week or so that I've been moving, but normally it's something I load up every morning and again in the afternoon just to look for anything that's in APIs that I...Of course, if you tag bvaughn on GitHub, I get an email notification. That's fine, too.

Kent: 49:04 Awesome. Thanks so much. Really appreciate your time, and we'll see everybody later. Bye.