As you all probably know by now, Vercel and I broke up. I still use them for a lot of the things I'm shipping, but they are no longer a channel sponsor. That means I can talk about things they might not have wanted me to talk about in the past, and today we're talking about a big one: how to not have a crazy Vercel bill.

I see a lot of fear around how expensive Vercel is and these terrible bills that float around online. I've had the pleasure of auditing almost all of them, as in I've dived into the code bases that caused these huge bills, and I've learned a ton about how to use Vercel right and, more importantly, the ways you can use it wrong. So I did something I have not done before: I built an app to showcase just how bad things can be. I did all of the stuff that I have seen cause these big Vercel bills, and we're going to go through and fix them so that you can find them in your own code base and prevent one of these crazy bills. As I mentioned before, Vercel has nothing to do with this video; they did not sponsor it. But we do have a sponsor, so let's hear from them really quick.
This seems like the most innocent thing in the world: you put a video in the public directory, you put it in a video tag, and then you go to your website. Now the video is playing. This is great, right? Totally safe, fine. Except that Vercel's infra is expensive for bandwidth. I know people look at it, compare it to things like Hetzner, and go, wow, Vercel charges so much for bandwidth. The reason is that everything you put in this public directory gets thrown on a CDN, and good CDNs are expensive. The reason you'd want things on a CDN is that stuff like a favicon, which is really, really small, is really, really beneficial to have close to your users. Even Cloudflare acknowledged this when they built R2, because R2, despite being cheaper for hosting files, is much, much slower than the CDN here. Because of that, putting stuff in this folder is expensive, and if it's something you can't reasonably respond with in a single chunk of a request, it shouldn't go in here. My general rule: if it's more than about 4 kilobytes, do not put it in here.

If you want the easiest place to put it instead, small self-plug: throw it on UploadThing. I'm going to go to my dashboard, create a static asset host: create app, files, upload. Go to the public folder, grab, drop, upload. Now all I have to do is copy the file URL, go back here, and just swap the source out. That's it. We just potentially saved ourselves from a very, very expensive bill, because we don't charge for egress on UploadThing. So instead of potentially spending thousands of dollars, go drag and drop it into UploadThing and spend zero instead. You can also throw it on S3 or R2 or other products all over the internet, but this is the one we built for Next devs, and it makes avoiding these things hilariously
easy. On the topic of assets, though, there is one other edge case I see people running into, and I made a dedicated example for it. This page grabs a thousand random Pokemon sprites, and there are a lot of them, and they take quite a bit to load. It is doing something right that I think is really, really important: we're using the Next.js Image component. This is awesome, because if we were using our own images, like the ones we had put in public, instead of serving the 4 megabyte Theo face it could compress it all the way down to like three kilobytes, depending on the use case. But the way Vercel bills the image optimizer is really important to note. By default on a free plan on Vercel you get a thousand image optimizations, but then they cost $5 per thousand. You get 5,000 for free on the Pro tier, but that $5 per thousand optimizations is not cheap, and we made a couple of mistakes in this implementation.

One is that we are referencing files that are already really small. The ones we're grabbing from this GitHub repo, PokeAPI, are already small files; they don't really need to be optimized. It's nice to have them on the Vercel CDN, but it's not necessary. The much bigger mistake we made is how we indicate which images we're cool with optimizing. You'll see here that we're allowing any path from githubusercontent.com, so if other people are hosting random images on GitHub, they could use your optimization endpoint to generate tens of thousands of additional image optimizations.

I want to be clear about what an image optimization is. If you re-render these below at a different size, say we change this to 200, a lot of platforms will bill you separately for the different optimizations: if we make a version of this image that's 1,000 pixels wide and tall and a version that's 200, you would pay for both. On Vercel, though, you're only paying based on the unique URLs. The important thing to make sure you do right here is configure the pathname to be more restrictive. The quick fix for this one is pretty simple: you include more of the URL. So we go here and say /PokeAPI/sprites/master/**, and now this app will only optimize images that come from PokeAPI. So as long as this repo isn't compromised, you're good. This also goes
for UploadThing, by the way. If you just put utfs.io here, which a lot of people do, you've set it up so any image on UploadThing is optimizable through your app. What you want to do is use the /a/-style URLs, because these URLs let you specify an ID that's unique to your app. So in the example I gave earlier, if we were to use UploadThing as the original host, the app ID is just this part right here, and now we can only optimize images if they are coming from my app, because this is the path for files from my app, and you cannot serve files from other people's apps if you put the app ID in the pattern like this. So if you're using UploadThing and you're also using the Next.js Image component to optimize the images on UploadThing, please make sure you do it this way. As for changing the URLs over: the API will start doing these by default soon, but if you're doing this early enough that that hasn't happened, copy the file URL, grab this part, and put it after here.

So if we wanted to put this optimized image on the homepage, let's import the Image component from next. The source will be https://utfs.io/a/... did it get that correct from my config? It did. Look at that, good job Cursor. Now that we've done this, I can grab an optimized image from my host, which is UploadThing. You don't have to pay for somebody potentially going directly to that URL, because we eat that with UploadThing, and users are now getting a much more optimized image sent down to them instead of the giant 10 megabyte thing you might be hosting with UploadThing. And you don't have to worry about users abusing it, because if they don't have the file in your service, they can't generate an optimized image.

This covers a comically large amount of the bills and concerns I've seen, so make sure you're doing this: optimize your images, especially if you're still putting them on Vercel for some reason, and ideally take every single asset you have that is larger than a few kilobytes and throw it on a real file host. Vercel's goal is to make things really fast when you put them in the public folder, because if you put something like an SVG or a favicon there, it needs to go really quick, which makes it more expensive. But you can even use Vercel's Blob product, which is similar to UploadThing, R2, S3, all of those, and it immediately wipes these costs out. Ideally they would introduce something in the build system that flags when you have large files here and the potential risk; I might even make an ESLint plugin that does this in the future. But for now, just make sure you're not embedding large assets in a way that gets them hosted on Vercel. Thing one complete.

Okay, that's just bandwidth; serverless is so expensive that you've got to make that cheap too. Let's get to it.
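Before moving on, here's the restriction from the image section above, sketched as a next.config.ts. The PokeAPI path matches the sprites repo discussed; YOUR_APP_ID is a placeholder for your own UploadThing app ID, not a real value.

```typescript
// next.config.ts (illustrative sketch of the remotePatterns restriction)
import type { NextConfig } from "next";

const nextConfig: NextConfig = {
  images: {
    remotePatterns: [
      {
        protocol: "https",
        hostname: "raw.githubusercontent.com",
        // A pathname of "/**" would let anyone's GitHub-hosted images burn
        // your optimization quota; scope it to the one repo instead:
        pathname: "/PokeAPI/sprites/master/**",
      },
      {
        protocol: "https",
        hostname: "utfs.io",
        // The /a/<appId>/ style URL scopes optimization to your app's files
        pathname: "/a/YOUR_APP_ID/**",
      },
    ],
  },
};

export default nextConfig;
```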
Let's say you made a blog, and you have a data model that includes posts, comments, and of course users. Both posts and comments reference users, and you can see how one might write a query that gets all of these things at once. But let's say you started with just posts, and you made an endpoint that returns the posts. Then you added comments, so you added a call to your DB to get the comments. Then you added users, so you added a bunch of calls to grab the right user info for all the things you just did. You might end up with an API that looks a little something like this.

Hopefully y'all can quickly see the problem here; I was surprised when one of those companies with a really big bill could not. The problem is we do this blocking call, ctx.db.post.findFirst, to get the post. Then we have the comments, which we get using the post ID. Then we have the author, which we also get using the post ID (well, the post's user ID). Then we get the users in the comments by taking all the comments, selecting out the user IDs, and selecting the users where those match.
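Sketched with a mocked, Prisma-style client (the models and data here are stand-ins, not the app's actual code), the waterfall looks like this:

```typescript
// A mocked, Prisma-style client so the shape of the endpoint is visible.
// All models and data here are stand-ins for the real schema.
const db = {
  post: {
    findFirst: async () => ({ id: 1, userId: 10, title: "hello" }),
  },
  comment: {
    findMany: async (_q?: unknown) => [{ id: 5, postId: 1, userId: 11 }],
  },
  user: {
    findFirst: async (_q?: unknown) => ({ id: 10, name: "author" }),
    findMany: async (_q?: unknown) => [{ id: 11, name: "commenter" }],
  },
};

// The waterfall: every await finishes before the next query even starts,
// so four 50ms queries cost 200ms of blocked compute.
async function getPostPage() {
  const post = await db.post.findFirst();
  const comments = await db.comment.findMany({ where: { postId: post.id } });
  const author = await db.user.findFirst({ where: { id: post.userId } });
  const commenters = await db.user.findMany({
    where: { id: { in: comments.map((c) => c.userId) } },
  });
  return { post, comments, author, commenters };
}
```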
This is really, really bad. It is hilariously bad, because let's say your database is relatively fast and each of these only takes 50 milliseconds to complete: blocking for 50 milliseconds, blocking for another 50, blocking for another 50, blocking for another 50. That is 200 milliseconds minimum of compute for what should probably be a single query.

The dumb quick fix is to take things that can happen at the same time and do them at the same time. We can grab the comments and the author at the same time. A quick way to do this: make the comments a promise and don't block on it, make the author a promise and don't block on it. Now these are both going at the same time, and when we need the comments here, which we do, const comments = await commentsPromise, and now we have them. At the very, very least we took these two queries and allowed them to run at the same time. But we can do much better than this; that was a real quick hack fix. If you don't have dependencies, meaning none of these queries share data, you could just run all of them at once in a Promise.allSettled. But ideally we would use SQL. I could write this myself; instead we're going to tell Cursor to change this code so a single Prisma query is made that gets all of the data in a single pass using relations.
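A single relational query along those lines might look like this. Model and field names (post, user, comments, createdAt) are assumptions about the schema, and `db` stands in for the Prisma client:

```typescript
// One relational query instead of four sequential ones. `db` is passed in
// here only so the sketch stays self-contained; in the real app it would be
// the Prisma client from context.
async function getLatestPostWithEverything(db: {
  post: { findFirst: (query: unknown) => Promise<unknown> };
}) {
  return db.post.findFirst({
    orderBy: { createdAt: "desc" },
    include: {
      user: true, // the post's author
      comments: {
        include: { user: true }, // each comment along with its commenter
      },
    },
  });
}
```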
Look at that. Hilariously simpler: db.post.findFirst, order by descending, but we're also telling it to include the user, because that's the author, as well as the comments, and in those comments we also want to include the user. So we get all of the data back directly here. Here it's cleaning things up, because the data model I actually had for this was garbage, but honestly, when we get this back we have the post, which has the user in it (which is the author; I should probably have named that properly, whatever, too late now), and we have the comments, which have their users as well as the comment data, all in one query. This means this request takes four times less time to resolve.

And I kid you not, in one of those massive Vercel bills I saw, requests were taking over 20 seconds, and the average request had over 15 blocking Prisma calls, most of which didn't need data shared with each other. So a single Promise.all cut their request times down by like 90%, and then using relations cut it down another 5% or so. I got the runtime down, in an Uber, in 30 minutes, from over 20 seconds (the requests were often timing out) to like two, without even being able to run the code. You need to know how to use a database.

One of my spicy takes is that Vercel's infrastructure scales so well that writing absolute garbage code like that can still function. If you were using a VPS and the average request took 20 seconds to resolve, I don't care how good VPSes are: you wrote something terrible, and your bill is still going to suck, or users are going to get a lot more timeouts or requests bouncing because the server is too busy doing all of this stuff.

Vercel did just add a feature to make the issue here slightly less bad, which is their serverless servers announcement; check out my dedicated video on it if you want to understand more. The tl;dr is that when something is waiting on external work, other users can make requests on the same Lambda, so each request isn't costing you money. Because if these DB calls took 20 seconds, every user going to your app was costing you 20 seconds of compute; with the new concurrency model, at the very least, while you're waiting on external data other users can be doing other things. So it reduces the bill there a little bit, and by a little bit I mean half or more sometimes, so it is a big deal, especially if you have long requests, like if you're calling an external API to do generation, for example. That's a very good use case for something like this: if you're waiting 20-plus seconds for an AI to generate something for your users, paying for 20 seconds of waiting for every single user sucks, and this helps a ton there. There are other things we can
do to help with that, though. One of those things (I didn't take the time to put it in here) is queuing. Instead of having your server wait for that data to come back, you can throw it in a queue and have the service that's generating your stuff update the queue when it's done. There are lots of cool services for this. Inngest is one of the most popular; I've had a really good experience with them. They let you create durable functions that will trigger the generation, then die, and then, when the generation is done, trigger again to update your database. Really cool, and it avoids those compute moments entirely. Another one I've been talking with more is trigger.dev: open source background jobs with no timeouts. This lets you do one of those steps where you're waiting a really long time for DALL·E to generate something without having to pay for all of the time your service would spend sitting there while the thing is being generated. So if you do have requests that have to take long amounts of time, you should probably throw those in a queue of some form instead of just letting your servers eat all of that cost.

These solutions all help a ton, be it a queue or the concurrency stuff Vercel is shipping. At the very least, you should go click the concurrency button, because it's one click and it might save you 80% of your bill. But all of the things I just showed assume that the compute has to be done, and you don't always have to do the compute. Sometimes you can skip it.
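Before moving on, the queue pattern described above can be sketched abstractly like this. None of these names are a real SDK's API (Inngest and trigger.dev each have their own); it only shows the shape: the request handler enqueues and returns immediately, and a worker outside the request lifecycle eats the slow wait and writes the result back.

```typescript
type Job = { id: string; prompt: string };

const queue: Job[] = [];
const results = new Map<string, string>(); // stand-in for your database

// 1. The request handler: no long-lived compute, just enqueue and return.
function requestGeneration(id: string, prompt: string) {
  queue.push({ id, prompt });
  return { status: "queued", id };
}

// Placeholder for the real external call (the 20-second AI generation).
async function slowGeneration(prompt: string) {
  return `image for: ${prompt}`;
}

// 2. The worker: triggered later, does the generation, updates the database.
async function runWorker() {
  const job = queue.shift();
  if (!job) return;
  const generated = await slowGeneration(job.prompt);
  results.set(job.id, generated);
}
```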
Let's say, theoretically, this query took a really long time; it didn't take 100 milliseconds, maybe it takes 10 seconds. But also, the data it resolves doesn't change much. We can call things a little bit differently: const cachedPostCall = unstable_cache(...). A stable version of this is coming very soon, as long as Vercel gets their stuff together before Next Conf. Here I need to import db. Now we have this function cachedPostCall. I should name it better, because post call has a specific meaning: cachedBlogPostFetcher. Now, with this special cachedBlogPostFetcher function, the first time it's called it actually does the work, but from that point forward all of the data is cached and you don't have to do the call again. So if this call took 10 seconds, now it's only going to take 10 seconds the first time. This is a huge win, because now future requests are significantly cheaper, and if you can find the points in your app where things take a long time and don't change much: huge win.

But they do change sometimes, and it's important to know how to deal with that. So let's say we have a leaveComment procedure, where a user creates a comment: ctx.db.comment.create, and we create this new comment. Let's not return just yet, though; we'll await this, const comment = await that. Now this old cache is going to be out of date, and it's not going to show the comment, because this page got fetched earlier. That's pretty easy to fix: all you have to do is, down here, revalidateTag("post"). Since we called revalidateTag with this tag, Vercel is smart enough to know, okay, this cache is invalid now, so the next time somebody needs the data we're going to have to call the function again. But now you only have to call this query, which we are pretending is very slow, once per comment. When a user leaves a comment you run this heavy query, but when a user goes to the page you don't have to, because the results are already in the cache. We've just changed the model from "every request requires this to run" to "every comment being made requires it to run", but then nobody else has to deal with it from that point
forward. Huge change. A common one I see is users who are calling their database to check the user and get user data on every single request. That is a database call blocking at the start of every single request you do. If instead you cache that user data, most of those requests become instantaneous instead of blocking on a DB call. Huge change. This will not only make your bill cheaper, it'll also make the website feel significantly faster, because you don't have to wait for a database call to generate this information.

I'm seeing some confusion about unstable_cache, so I want to call a few things out. This cache isn't running on the client at all; the client has no idea about any of this. Things like React Query, things like stale-while-revalidate, all of that stuff is, for the most part, client-side things to worry about. This is the server. The server is making a call to your database to get this data, and when you wrap it with unstable_cache you are telling the server: hey, once this has been done, you don't have to call the database anymore, you can just take the result. This is kind of just a wrapper that stores the result in a KV store in Vercel's data center, or wherever else if you implement it yourself. You could do this yourself by writing the function a little differently; I'll show you what the DIY version would look like. DIY cached blog posts: the first thing we have to do is check our KV, so I'm assuming we have a KV. const kvResult = await kv.get("posts") (I don't actually have a KV in here, so ignore the fact that it's going to type-error). If kvResult, return kvResult; otherwise we do the compute, we set the result, and then we return it.
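Put together, a runnable sketch of that DIY version, with an in-memory Map standing in for the real KV store (Vercel KV, Redis, etc.):

```typescript
const kv = new Map<string, unknown>();

let computeCalls = 0; // instrumentation so the cache behavior is visible

// Pretend this is the slow 10-second query.
async function fetchPostsFromDb() {
  computeCalls++;
  return [{ id: 1, title: "hello" }];
}

async function diyCachedBlogPosts() {
  // 1. Check the KV first
  const cached = kv.get("posts");
  if (cached) return cached as { id: number; title: string }[];
  // 2. Cache miss: do the compute
  const posts = await fetchPostsFromDb();
  // 3. Store the result so future calls skip the database entirely
  kv.set("posts", posts);
  return posts;
}
```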
This is effectively what Vercel's cache is doing; they have some niceties that make it easier to interact with and invalidate things. I've DIY'd things like this so often in my life; Vercel gave us some syntax sugar for it, but you can DIY it yourself if you want. I could rewrite the unstable_cache function and just throw things in a KV if I wanted to. But this is using a store in the cloud to cache the result of what this function returns, so you don't have to call it again if you already have the result. As you see here, if we have the result we just return it from the KV; otherwise we run the other code again. I think that helps clarify that one.

All that said, if you know anything about blog posts, you might be suspicious of this example in the first place, because you shouldn't have to make an API call to load a blog post; you should be able to just open the page and have the blog post. And here's another one of those common failures I see. You might have even noticed it earlier if you were paying close enough attention: see this export const dynamic = "force-dynamic" call here.
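That flag is a route segment config export at the top of the page file:

```typescript
// This line forces a server render on every single request:
export const dynamic = "force-dynamic";

// For a page with no per-user data, delete the line above entirely, or use
// time-based revalidation instead (regenerate at most once per hour):
// export const revalidate = 3600;
```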
This forces the page that I'm on to be generated every time a user goes to it. This page doesn't have any user-specific data. We have this API call, but it doesn't use any user-specific data. We have this void getLatest.prefetch() call, which allows data to be cached when things load on the client side; we don't even need that, we can kill it. Nothing on this page is user-specific, so loading it shouldn't require any compute at all, but because we set it to be dynamic, it will, and this whole page is going to require compute to run on your server every time someone goes to it.

If you have pages that are mostly static, like a terms of service page, a blog, docs, all of those things, it's important to make sure the pages being generated are static. Thankfully, Vercel makes this relatively easy to check: if you run a build, they will show you all of these details in the output. And they don't just show it when you run the build locally; I can also go to my Vercel deployments and take a look. So we'll hop into QuickPic, which is a service I just put out, and in here we can look at the deployment summary and see what got deployed in what ways: the static assets, the functions, and the ISR functions, and it tells you which does what. The more important version, which in my opinion is a little easier to understand, is in the build output: it shows you each route and what it means. The circle means static, the ƒ means dynamic, and you want to make sure all of your heavy things, like your pages that should be static, are static, because you want the user receiving pre-generated HTML; you don't want a server spinning up to generate the same HTML for every user on every page. Back to image
optimization for a sec, because I know I showed you how to use it right, and as long as you have fewer than 5,000 images, honestly, you should probably use their stuff; it is very good and very convenient. But despite my being pretty happy with the experience of using the Next.js Image component on Vercel, once you break 5,000 images the price gets rough. That's why the example loader configurations page is pretty useful here. Frankly, I'm not happy with either the pricing or the DX around any of these other options, but they are significantly cheaper if you want to use them. Sometimes, anyway; sometimes they're more expensive, but for the most part the options here are cheaper. They have their own gotchas; I've been to hell and back with Cloudflare's, to the point where I'm putting out my own thing in the near future. If you need to save money now, take a look through this list and find the thing that fits your needs best.

But in the future, image.engineering is going to be a thing. I am very excited about this project I've been working on in the background for a while. If you look closely enough at the URLs on pic thing, you'll see that all of the URLs on this page are being served by image.engineering already. We're dogfooding it, we're really excited about what it can do, and in the near future you'll be able to use it too. So for now, if you need to get something cheap ASAP, go through the list here; if this video has been out for long enough, maybe just check the pinned comment, where I'll have something about image.engineering if it's ready. But for now: use Vercel until you break 5,000 images; if the bill gets too bad, consider moving to anything in this list; and keep an eye out for when our really, really fast and cheap solution is ready to go, which will be effectively a drop-in and have some really cool benefits as well. So yeah. One last thing: there's a
tab in everyone's Vercel dashboard, on everyone's Vercel deployments, that seems very innocent: Analytics. You will notice that I do not have it enabled; there's a reason for that. These analytics events are not product analytics. If you're not familiar with the distinction: product analytics are how you track what a user does on your site, so if you want to see which events a specific user had, that's product analytics, tracking the journey of a user. If you want to know which pages people are going to, if you want a count of how many people go to a specific page, that is web analytics. Web analytics is like the old Google Analytics type stuff; product analytics is things like Amplitude and Mixpanel, the tools that let you track what users are specifically doing.

My preference for how to set this up is to use PostHog, and thankfully they made a huge change to how they handle anonymous users. (They also made a really useful change to their site, the mode that hides all of the crap; it makes things much nicer for videos, so thank you to them for that.) What we care about here is the new pricing, where it is 0.005 cents per event, and that is for the most expensive ones, and the first million are free. So you get a million free events; the next million are at this price, but if you're doing a million events you're probably doing two million events, so this is the fairer number. We're going to take this number and compare it here: that is, at 100,000 events times this, $343 versus 14 bucks. Pretty big deal there. Interesting: apparently the web analytics plus product has a cap on how many events you can do a month, even on the Pro tier; Enterprise can work around it, but 20 million events is a pretty hard cap, and we could get close to that with UploadThing. So yeah, not my favorite, certainly not at the $14 per 100,000 events pricing, and certainly not for 50 bucks a month. Generally, I recommend not using the Vercel analytics, but if they do get cheaper in the future I'll be sure to let you guys know so you can consider it. One last thing: if you
are still concerned about the bill, I understand; the thought of having some massive multi-thousand-dollar bill out of nowhere is terrifying. They have a solution for that too: spend management. You can set up a spend limit in your app if you are concerned about the price getting too expensive. Go into the spend management tab in Billing and specify that you only want to be able to spend up to this much money, and even manage when you get notifications. So if you are concerned that usage will get to a point where you have a really high bill: there you go, bill handled. It does mean your service will go down, so there's a catch, but the reason this happens is either you implemented things really wrong or your service went super viral.

For what it is worth, I have never enabled this, because the amount of compute each request costs for us is hilariously low. Even when we were being DDoSed, the worst bill somebody could generate was like 80 bucks, after spamming us for hours straight with millions of requests, because they found one file on one of our apps that was like 400 kilobytes. So if you build things well, you almost certainly won't have problems. My napkin math suggested that for us to have a $100,000-a-month bill, we'd have to have a billion users. So you're probably fine, but if you are the nervous type, I understand: go hit the switch.

I hope this was helpful. I know a lot of y'all are scared about your Vercel bills, but as long as you follow these basic best practices you can keep them really low. Our bill has been like $10 a month for a while, not counting seats; it's not a big deal. Highly recommend taking advantage of these things and continuing to use things like Vercel. All of these tips apply other places too; it's not just Vercel. You can use these same things to be more successful on Netlify, Cloudflare, or any other serverless platform, and these things will also speed up your apps if you're using a VPS. Build your apps in a way that you understand, and try your best to keep the complexity down. In the end, the bill comes from the things you shouldn't be doing. Until next time, peace nerds.