Hey, it's Matt with Replit here, and if you're trying to deploy an application, you're in the right place, because today we're going to talk about the four different types of deployments we have on Replit. We're going to go over each type in detail, talk about which apps make the most sense for each type, and in cases where you could go either way, specifically with Reserved VM or Autoscale deployments, we'll discuss the pros and cons of each approach, including things you could do with one that might not actually make sense. First I'm going to walk through the four types, and then I'm going to present a decision tree, which you'll be able to find in the links below. It's exactly the same logic I use when I'm figuring out how to deploy my own applications. My goal is that by the end of this video you're pretty confident about how to deploy any project you're cooking up, and I'll link to some other tutorials that might help you with more complex deployments. Let's jump right into it.
Our first type of deployment is a static deployment, and the name is pretty self-descriptive: static deployments are called static because they compile to a set of static files, often HTML or JavaScript files. When I say static, I just mean they're run by the client, executed by your browser, and browsers most commonly serve HTML files and execute JavaScript files. Something like running Streamlit is not an example of a static deployment, because in order to serve a Python application like Streamlit you have to run streamlit run main.py; you're actively running a command against Python, which doesn't live in the browser.

Now, you'd be very surprised at the different types of things you actually can run in the browser, because browsers support running JavaScript client side. For example, some really cool demos with Three.js (I've shared similar demos), and some really cool new projects from the Transformers.js library that involve running LLMs client side, are all becoming more possible. So don't throw shade at static deployments for not having a server component. What a static deployment is, though, is a set of pre-compiled static files, and the thing about that is they're very fast to deploy; you can deploy them in quite literally seconds on Replit. If you're looking for a parallel here, maybe you're still trying to bridge the gap as to what a static deployment is, think GitHub Pages: if you can deploy it on GitHub Pages, you can also deploy it as a static deployment on Replit.

Our requirements for static deployments are a public directory and a build command. The public directory is where the output of the build command goes; for frameworks like Vite that's dist, and for some other frameworks it might be build. It's basically: if you run something like npm run build, what directory do those static files get output to? The build command follows from that: npm run build would be your build command if that's what builds into the directory. Pretty straightforward.
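To make the build-command/public-directory relationship concrete, here's a toy Python "build" step. A real static deployment would usually run a framework build (for example, npm run build emitting to dist); this stand-in just writes one HTML file so the input/output relationship is visible. The file names here are illustrative, not a Replit requirement.

```python
# Toy illustration of how the build command and public directory relate.
# The "build" just writes one HTML file into the output directory, which
# is the role dist/ or build/ plays for a real framework.
from pathlib import Path

def build(output_dir: str = "dist") -> Path:
    """'Compile' the site into a directory of static files."""
    out = Path(output_dir)
    out.mkdir(parents=True, exist_ok=True)
    index = out / "index.html"
    index.write_text("<!doctype html>\n<h1>Hello, static deployment</h1>\n")
    return index

if __name__ == "__main__":
    # Build command: python build.py    Public directory: dist
    print(build())
```

Whatever your framework's equivalent of build() emits is what gets served; the deployment never runs your code again after the build.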
Static deployments are really great; my blog is deployed as a static deployment. But they only serve a limited set of use cases: if you have a server, if you have something that's actively running a framework, this is not the deployment for you.

That brings us to our next deployment type, which is also pretty simple and straightforward: scheduled deployments. Scheduled deployments, as the name would suggest, are scripts that run on a schedule. They're great for fetching, scraping, or updating data, and for any number of things you need executed on a set cadence, like sending a message to a Slack channel. The great part about scheduled deployments is that they execute once and then they're off, so if you think about how much compute you're using, it's likely lower than if you ran something constantly, because scheduled deployments are executed and then shut down. They don't really have a UI; no UI would be available to the user. So if you need a scheduled job as part of some larger application, you might think about multiple deployments, or folding that scheduled job into another type of deployment. If you're familiar with cron jobs, you can think about a scheduled deployment like a cron job; we actually use cron syntax when defining the schedule. If you're not familiar with that, don't worry, because we compile natural language to that syntax: we make it really easy for you to define exactly how often you want the thing to run, or you can write cron syntax directly because you know what that is.
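For reference, standard cron expressions have five fields: minute, hour, day of month, month, and day of week. Here are a few common schedules and their cron equivalents as a small sketch (the natural-language phrasings are mine, just to mirror how you'd describe the schedule before it's compiled to cron):

```python
# Standard five-field cron syntax: minute hour day-of-month month day-of-week.
# A few common schedules, natural language -> cron expression:
SCHEDULES = {
    "every 20 minutes":              "*/20 * * * *",
    "every day at 9am":              "0 9 * * *",
    "every Monday at 6:30pm":        "30 18 * * 1",
    "midnight on the 1st of the month": "0 0 1 * *",
}

def fields(expr: str) -> list[str]:
    """Split a cron expression into its five fields."""
    return expr.split()

if __name__ == "__main__":
    for desc, expr in SCHEDULES.items():
        print(f"{desc:35} -> {expr}")
```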
Requirements for a scheduled deployment: a build command, which is completely optional, and a run command. For example, if you wrote a Python script in main.py that went out and fetched data, your run command would be python main.py. Then we have a timeout: if you're running some kind of fancy script that doesn't necessarily finish every time, or you want it terminated after a certain number of minutes or seconds, you can provide an optional timeout. But really, all you need for this one is a run command. It's pretty straightforward; you're just running a script.
So now we're getting into the two more complex types of deployments, or I'd call them complex because it's easy to mix them up. The first of these is the Reserved VM, which stands for reserved virtual machine; again, the name is very descriptive. It's an always-on machine running in the background that you deploy your app to. So if I had main.py and I wanted it to run continuously, I'd deploy it to a Reserved VM: it's going to run and execute python main.py, and it's going to keep that command alive until I tell it not to.

On a Reserved VM the machine is always on. Because the machine stays the same size, you define the size of the machine when you deploy, and it's always on, you're going to have a fixed cost: it's going to cost a certain amount of money every month, you know exactly what that is, and it's not going to change. Now, there are downsides to that. If you define a certain amount of resources and your app blows up, say you build an API, deploy it, and the API blows up, or it's doing something really demanding and you didn't define enough resources, your users are probably going to experience degraded performance, and that's not going to be good. So you have to be careful when you're allocating resources, to be sure you're giving your Reserved VM enough to execute the jobs you want it to.

The requirements for a Reserved VM are an optional build command, a run command (you do need the run command), and an app type: you're defining either a web server or a background worker, which just determines whether your app has an exposed front end. For example, if you wanted to deploy a Discord bot or a Slack bot, those would be background workers because they don't have a UI. They're actually really good choices for Reserved VMs, because if you think about most of those bots, they need to listen constantly: a user puts a slash command in Slack or a command in Discord, that command pings your server, and the server does something and responds to the command. Those servers need to be always on, listening for those commands, so Reserved VMs are really great for that, and often the computation or work those bots do isn't super intensive. Now, if your Slack bot isn't responsive to the user and is just sending a message every 20 minutes or every day, that could be a scheduled deployment as well, and it would be cheaper that way; but if you need something that's always on, you want to go with a Reserved VM. Finally, there's a port configuration. This is mostly optional, because if your app runs in Replit we're going to configure the port for you, so you don't really need to worry about that unless you're running into problems deploying the app, or you haven't actually run the app in Replit or configured the port. Assuming you set everything up correctly, this shouldn't really be required.
This brings me to our final type of deployment: Autoscale deployments. Autoscale deployments are scaling servers; they can scale up or down to accommodate varying amounts of traffic to your application. Think about it like this: maybe you build an app and you're trying it out, you don't know if anybody's going to use it, and you post it on X, the platform formerly known as Twitter, and it goes viral, and a million people are coming to view it. If you configure Autoscale deployments properly, the number of machines scales up automatically as people come to your site. And if you select the proper machine resources, which is a bit of a trial-and-error thing based on what your app does, there shouldn't really be any interruption as all these people come to your application, because we're going to spawn multiple instances and scale the application up to accommodate all those people. Autoscale deployments are really powerful that way.

The flip side is what happens if nobody uses your application, which, honestly, is most stuff I post on X. If I don't need to be worried about anything going viral, I do need to be worried about posting all this stuff and getting billed for it while nobody views it and it runs all the time. If I spawned a bunch of Reserved VMs, they're always on, running all the time. But with Autoscale deployments, if nobody looks at your application, it scales down to zero, which means it just kind of shuts off. It's like sleep mode: the way your laptop goes to sleep, your deployment goes to sleep. What that means is that if nobody's looking at your application and then one person comes to see it, there might be a small amount of time while it wakes up, very much like a computer waking up. But often it's one or two seconds, nearly unnoticeable, and then it stays warm for quite some time while other people use the application. That's really great, because it means I'm not getting billed while it's asleep, which is great for cost purposes. There are some pros and cons to that, and we'll talk about those a little later in the video.

For an Autoscale deployment you optionally need a build command, you need a run command, obviously, and you need a port configuration; again, the port configuration is likely already set. Also note that there are many more machine configuration options when you're deploying via Autoscale, because you can pick the maximum number of machines we scale to horizontally, for example one machine or five, as well as the actual power of each machine being spawned. Pay attention to that when you're building your application; it's largely dependent on what your app is doing, so I'll leave that to you.
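The one design constraint horizontal scaling imposes is statelessness: any instance must be able to answer any request. Here's a minimal sketch of a server with that property; reading the port from a PORT environment variable is a common convention I'm assuming here for illustration, not something this video specifies.

```python
# Sketch of a stateless HTTP server suited to autoscaling: no request
# depends on in-memory or on-disk state, so any instance can answer it.
import json
import threading
import urllib.request
from http.server import BaseHTTPRequestHandler, HTTPServer

def make_body(path: str) -> bytes:
    """Build the response for one request using no shared state."""
    return json.dumps({"path": path, "ok": True}).encode()

class Handler(BaseHTTPRequestHandler):
    def do_GET(self):
        body = make_body(self.path)
        self.send_response(200)
        self.send_header("Content-Type", "application/json")
        self.send_header("Content-Length", str(len(body)))
        self.end_headers()
        self.wfile.write(body)

    def log_message(self, *args):  # keep the demo output quiet
        pass

if __name__ == "__main__":
    # Demo: bind an ephemeral port, serve exactly one request, and exit.
    # A deployed app would bind the configured port (often the PORT env
    # var) and call serve_forever() so the run command never returns.
    server = HTTPServer(("127.0.0.1", 0), Handler)
    threading.Thread(target=server.handle_request).start()
    url = f"http://127.0.0.1:{server.server_address[1]}/hello"
    with urllib.request.urlopen(url) as resp:
        print(resp.read().decode())
```

If this server kept a counter or a session dict in memory, each scaled instance would have its own copy, which is exactly the kind of state you'd move to a database instead.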
Now, one additional thing before we dive into my decision tree: I want to talk a bit about Autoscale versus Reserved deployments, because this is something that's confusing. It was confusing to me when I first learned about the concept, so I want to try to break it down. If you're saying, "Hey, I know this thing isn't static, I know it's not a scheduled deployment, it should probably be Autoscale or Reserved, I just really don't know which one," hopefully the cost-certainty and wake-up explanations can help you decide; but if not, I'm going to break it down a little more. Again, Autoscale deployments are really for scaling servers, and they have a variable cost, while Reserved deployments are always on, they can be a web server or a background worker, and they have a fixed cost, so you get cost certainty with those. We're going to talk a bit more in depth, technically, about what that looks like, but the analogy I like to use here is a Nest thermostat versus a greenhouse.

If you have an apartment (I'm in an apartment now, what a coincidence), you probably want a smart thermostat, because your smart thermostat keeps the apartment cool when you're there, or keeps it warm when you're there, and when you leave it saves energy, and that saves you money. Especially in California, energy prices are crazy here; what's the deal? Anyway, your smart thermostat is going to be intelligent and save you money. That means that when nobody is using the space, you're saving money; when people are using it, you're spending money, but that's the whole point: to be comfortable. If I had 40 people in here (this assumes I have friends; for the sake of this video, let's just assume I have some friends), if I managed to convince 40 people to spend time with me, this apartment would probably get pretty warm, and my Nest thermostat would kick into gear and cool the apartment down. That's like an Autoscale deployment: as more people come in, you're going to kick up those servers, they're going to spin up multiple instances, and the workload shouldn't be interrupted at all. In that sense it's really great.

Now, you're probably saying, "Okay Matt, that sounds perfect, why the heck would I ever want the alternative, a Reserved VM?" Well, I also have plants in here; you might not be able to see them in the frame, but this is one such plant. If you think about the optimal way to grow plants, you don't want the temperature going all over the place; you want the temperature to be stable, and that's the whole concept of a greenhouse. So for other things, you want a constant temperature. For example, you have a Slack bot or a Discord bot: it just needs to be running all the time, and you don't want it to scale down to zero, because it's actively watching for some message. Similarly, if you have an API, the whole idea behind an API, an application programming interface, is that at any time someone could call it. Maybe you have another application that's calling the API; maybe you're using something like Next.js, which has both client- and server-side code (Next.js uses APIs if you dig into the App Router and things like that). Those APIs have to be running all the time; otherwise, the client might try to call the API while an Autoscale deployment is starting to spin up from zero, and there's some latency there, and that's a non-optimal experience. So, the analogy I like to use: Autoscale deployments are kind of like a Nest thermostat, they're going to keep you cool even if there's a lot of stuff going on; and sometimes you need a constant temperature, so Reserved VMs are more like a greenhouse. Okay, that's my nerdy analogy that I just really love for some reason. Now we're going to get into the decision tree; if you're still watching this video, I assume you have more questions, so this will cover basically every scenario I could come up with when you're trying to deploy.
00:13:43
my decision tree this is how I think
00:13:46
about building on repet we're going to
00:13:47
start with the simpler Solutions first
00:13:49
and we're going to dig into the more
00:13:50
complex stuff you're deploying an app or
00:13:52
website yeah actually no I'm not
00:13:55
deploying an app or website what am I
00:13:56
doing though I am building an API if
00:13:59
you're building an API just deploy a
00:14:01
reserve VM it's always on people will be
00:14:03
able to access your API um you just need
00:14:05
to Define uh the parameters you'd want
00:14:08
to use a reserve VM and not an Autos
00:14:10
scale deployment because of that cold
00:14:11
start uh problem I described earlier
00:14:14
where if nobody's using your API it's
00:14:15
going to shut off and you don't want
00:14:17
that if you're deploying an API if
00:14:19
you're deploying a bot I would put Bots
00:14:21
under the category of an API um or other
00:14:24
type of infra like that that has to
00:14:26
listen for some event Reserve easy
00:14:29
answer um also no I'm running a script
00:14:32
we talked about this scheduled jobs if
00:14:34
you're running a script it's not a bot
00:14:36
it's not a website you just want to run
00:14:37
something on a Cadence it doesn't need
00:14:38
to be running all the time it doesn't
00:14:40
have to accept user input it just needs
00:14:42
to run every day at a certain time you
00:14:45
deploy a scheduled job okay let's get to
Okay, let's get to the more interesting part of this decision tree. You're deploying an app or website? Yeah? All right, cool. Let's start on the right here: my app needs a server or program that's running continuously, for example if I'm running something like python main.py or streamlit run main.py or an npm command, or you're running something like Next.js; something is being continuously run. Now, there are a couple of options here, which is kind of what I was talking about earlier with the Autoscale-versus-Reserved comparison; same thing here. You could deploy this thing as an Autoscale deployment, you could deploy it as a Reserved deployment, and there's another solution we're going to talk about in a little bit as well. First option: I want to be able to scale up quickly to handle large amounts of requests, and I'm okay with variable cost and minimal warm-up time (you might already know what I'm referring to here). The other option: my app needs to be running constantly with access to consistent resources, and I want cost certainty.

Okay, we're going to assume my hypothetical app is on the left. If it's on the left and it meets these criteria, maybe it's a server using HTTP, HTTP/2, WebSockets, or gRPC, or I want to try out multiple ideas without spending too much (this is kind of like the X example I mentioned: I could post 100,000 apps, and if nobody uses any of them, I'm not getting billed for them), then you'd want an Autoscale deployment.
Cool. Now flip that over to the other option: maybe I want a long-lived connection, I want to deploy a bot, or I want to run a background activity outside of request handling; so in addition to just handling user requests, I want to be running something in the background, I want this thing to be constantly doing something. My app is not a server, or my app does not tolerate being restarted easily. One of the things about Autoscale deployments is that if it scales down to zero, your app is going to restart, and this gets us into persistent data within Repls. The Repl's storage is not persistent; it's actually a snapshot of your application taken when you click deploy. Think of all the files and directories in your app: when you click deploy, that's snapshotted and deployed. If you have something like a SQLite database in there, that SQLite database is not going to persist. That's why you'd need to use persistent data in the form of a Postgres database, a key-value database, or object storage, all of which we have options for. But if your app does not tolerate being restarted easily, Reserved VMs are a great choice for you, and all of these branches tie into Reserved VMs.
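The practical version of that persistence rule is a sketch like this: treat the deployment's filesystem as disposable and keep durable data behind a connection string. The DATABASE_URL variable is an assumption for illustration, standing in for whatever points at your Postgres, key-value, or object store.

```python
# Sketch of the persistence rule: the deployed filesystem is a snapshot,
# so a local SQLite file will not survive restarts or scale-downs.
# DATABASE_URL is a hypothetical env var standing in for your managed
# database's connection string.
import os

def choose_storage(env: dict) -> str:
    """Decide where durable data should live for a deployed app."""
    if env.get("DATABASE_URL"):
        return "external database (persists across restarts)"
    # Anything written to local files lives only until the next restart.
    return "local files only (ephemeral -- fine for a cache, not for data)"

if __name__ == "__main__":
    print(choose_storage(dict(os.environ)))
```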
Cool, let's continue down the left here; we're actually getting through this pretty well. My app is entirely client-side code: we've talked about this. If your app runs on the client, that can actually be some pretty cool stuff; you could have some pretty neat visualizations, and I've actually shared a few. It can be deployed on GitHub Pages, it can be deployed on Cloudflare Pages; it's a static site, so use a static deployment. Really fast, really cheap, really simple; easy solution there.
Okay, let's get to the fun one: my app is a mix of client-side and server-side code; for example, you have a front end and a back end. You actually have options, dog. What's up, dog? The easy solution, if you don't care: run it as a monorepo on a Reserved VM. What do I mean by that? Stuff all that jazz in one Repl and just run it: we're just going to spin up two services at the same time. You have to play around with this a bit; I have a tutorial on exactly how to do this, so rather than just taking my word for it, go watch the tutorial, which I'll link in the description below, and deploy it as a single Repl. That's the simple solution. If your app is something like a Next.js app that just kind of has those services baked in, it's actually really straightforward; it only requires one command.

Now, the more complex or more advanced solution would be to spin up two Repls and have them talk to each other. For example, say you have a server: you could build out a Repl that's a server and deploy it. Now you have a Repl that's a server, listening on some port at some address. The cool thing about development environments is that because a development environment is exposed to the web while you're developing, you can basically use that development environment as the server and develop both the server and the front end live, which is kind of neat. Then you deploy that server, and you deploy the front end separately. The advantage here is that you can segment the parts you're getting billed for: maybe I only want to be billed for this one Reserved VM, and I want to offload the traffic, all the people visiting my app. Most of it is client-side, so we want their client to handle most of the workload, and that way I'm not paying for it; but when they make calls to the server, when they're hitting the API, we return data dynamically. I have a tutorial on how to do that, on deploying a static front end with a dynamic back end, thus reducing the cost by offloading most of the work to the client and saving you money. Maybe a little more efficient, a different setup, slightly more complicated. That's kind of the gist of it.
But that's my decision tree. I'm going to make this available to you; I'm going to put it in our documentation. Hopefully that helps you decide how to go about choosing what type of deployment you want to share with the world, and hopefully this video has helped you understand the different types of deployments on Replit and when to use each type. Again, I'm Matt with Replit; this has been an introduction to the types of deployments we have and all of our options for taking your idea and putting it live on the internet. Until next time, peace.