00:05
Now we're back in the training room,
00:07
and I want to use this as an opportunity to show you Seervision tracking because of course,
00:12
it's really useful in a room like this where you might be speaking
00:15
to people live and also presenting to the far end.
00:19
Now, unlike a lot of other tracking systems,
00:22
Seervision does not just use face detection to track people.
00:25
It's actually using full body detection.
00:28
But rather than just take my word for it,
00:30
let me actually show you what Seervision is seeing right now.
00:34
You can see I've got 18 points of contact across my body that Seervision is following.
00:41
Now, this means that it's actually building a representation of me as a presenter in the system.
00:46
And it's not just relying on the face.
00:49
So now I can turn around. I could completely hide my face,
00:53
maybe wear a mask or something like that.
00:55
Seervision does not lose the presenter.
00:57
And even in situations like this where I'm actually occluded by furniture in the room,
01:02
Seervision just continues to work really, really nicely.
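To make that concrete, here's a minimal Python sketch of what a full-body skeleton representation might look like. The 18-point layout is an assumption borrowed from the common OpenPose/COCO convention, not Seervision's published data model:

```python
from dataclasses import dataclass

@dataclass
class Keypoint:
    name: str      # e.g. "left_shoulder"
    x: float       # normalised image coordinates, 0..1
    y: float
    visible: bool  # False when the point is occluded

# An 18-point skeleton similar to the OpenPose/COCO layout; the exact
# points Seervision uses are an assumption here.
KEYPOINT_NAMES = [
    "nose", "neck",
    "right_shoulder", "right_elbow", "right_wrist",
    "left_shoulder", "left_elbow", "left_wrist",
    "right_hip", "right_knee", "right_ankle",
    "left_hip", "left_knee", "left_ankle",
    "right_eye", "left_eye", "right_ear", "left_ear",
]

def still_trackable(skeleton: list[Keypoint]) -> bool:
    """A presenter stays trackable while enough *body* points are
    visible -- face points alone are not required, which is why
    turning around or wearing a mask does not break tracking."""
    face = {"nose", "right_eye", "left_eye", "right_ear", "left_ear"}
    body_visible = sum(kp.visible for kp in skeleton if kp.name not in face)
    return body_visible >= 4  # illustrative threshold, not Seervision's
```

The point is that the tracker reasons over the whole body, so losing the face points still leaves plenty of signal.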
01:06
Okay, let's talk in a bit more detail about some of the overlays on the presenter.
01:11
So we talked about the skeleton,
01:12
which of course is tracking the different points on my body here.
01:15
I've also got a bounding box around me which, as you can see, shows the size of me on the screen.
01:21
And then I've also got a couple of numbers here up in the corner.
01:25
Now one of these says 0.94, 0.93, that kind of thing.
01:29
It's changing quite a lot, but that is the confidence score, the likelihood that
01:33
I'm a human being according to Seervision.
01:37
Of course that means that non-human subjects are not going to be tracked.
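As a rough illustration of how that score can be used, here's a Python sketch that keeps only confident human detections. The dictionary shape and threshold are assumptions for illustration, not Seervision's API:

```python
# Each detection carries a class label and a confidence score like the
# 0.94 shown on screen.
def filter_humans(detections, min_confidence=0.5):
    """Keep only detections the model is confident are human."""
    return [d for d in detections
            if d["class"] == "person" and d["confidence"] >= min_confidence]

detections = [
    {"class": "person", "confidence": 0.94, "bbox": (120, 40, 180, 420)},
    {"class": "chair",  "confidence": 0.88, "bbox": (300, 200, 90, 110)},
]
print(filter_humans(detections))  # only the person is kept for tracking
```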
01:42
Now the other number is a unique identifier given to each presenter in the scene.
01:47
Now if I bring in another presenter,
01:49
my colleague Jasper here,
01:51
you can see that Jasper has got a different number.
01:56
And these numbers are assigned dynamically.
01:59
Now the good thing about this is because I am being tracked right now,
02:03
if Jasper and I were to walk in front of or behind each other,
02:06
Seervision is just going to keep on tracking
02:08
me, because it recognizes me as being distinct from Jasper,
02:12
not something that happens with a lot of other tracking systems.
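A toy sketch of the idea in Python: each presenter gets a stable, dynamically assigned ID, so identities survive a crossing. Real trackers re-match detections to existing tracks by appearance and motion; the string keys here are purely illustrative:

```python
import itertools

class PresenterRegistry:
    """Hands out a unique, dynamically assigned ID per presenter."""

    def __init__(self):
        self._next_id = itertools.count(1)
        self._ids = {}

    def id_for(self, presenter_key: str) -> int:
        # A known presenter keeps their number; a new one gets the next.
        if presenter_key not in self._ids:
            self._ids[presenter_key] = next(self._next_id)
        return self._ids[presenter_key]

registry = PresenterRegistry()
print(registry.id_for("presenter_A"))  # 1
print(registry.id_for("presenter_B"))  # 2 -- Jasper gets a different number
print(registry.id_for("presenter_A"))  # still 1 after crossing paths
```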
02:15
So we've seen how Seervision understands a presenter,
02:19
but how do we actually initiate tracking?
02:21
Right now the camera's just in a stationary view.
02:24
I'm moving around, clearly Seervision is detecting me.
02:27
But we're not actually tracking just yet.
02:30
That is where trigger zones come in.
02:32
And you can see right now we have a green zone over here and a purple zone over here.
02:38
What a trigger zone is typically used for is to start tracking.
02:41
So I'm going to walk into a zone right now.
02:44
Let's go over here and we can see that this zone has been set up to
02:48
actually recall a kind of half body shot.
02:53
Now, as I move around,
02:54
I might go over to this zone instead.
02:55
And this zone has been set to change that to a full body shot.
02:59
So you can see that depending on what zone you go in,
03:02
you can start tracking,
03:03
but you can also change the type of tracking which is being performed as well.
03:08
It's all saved in something called a container,
03:11
which contains the parameters of the tracking,
03:14
and different containers can be assigned to different trigger zones.
03:17
For example, if I step into the green zone once again,
03:20
you can see it’s recalled the container called medium shot
03:24
and Seervision is now framing me to match the stick person that you can see on the container.
03:29
Of course you can adjust the size
03:31
and the position of the stick person to create different types of tracking.
03:36
But if I move over to the purple zone,
03:39
we'll see that it can recall a very specific type of container,
03:42
a full body shot where I'm framed on the right hand side of the screen.
03:46
And now as I move around, Seervision is still going to keep me tracked.
03:50
But it's going to bring me back to the position set by the container.
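Conceptually, a container is just a named bundle of framing parameters, and trigger zones map onto containers. A speculative Python sketch, with field names and the zone-to-container mapping assumed from the demo above:

```python
from dataclasses import dataclass

@dataclass
class Container:
    """The saved parameters of a shot (field names are assumptions)."""
    name: str
    shot: str               # "medium" or "full_body"
    horizontal_bias: float  # 0.5 = centred, higher = framed to the right

# Assumed zone-to-container mapping, mirroring the demo above.
ZONE_CONTAINERS = {
    "green":  Container("medium shot", "medium", 0.5),
    "purple": Container("full body right", "full_body", 0.8),
}

def on_zone_entered(zone: str) -> Container:
    container = ZONE_CONTAINERS[zone]
    print(f"Recalling container '{container.name}': {container.shot} shot, "
          f"horizontal bias {container.horizontal_bias}")
    return container

on_zone_entered("green")   # half/medium body shot
on_zone_entered("purple")  # full body, framed to the right-hand side
```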
03:54
But how do we stop tracking?
03:55
Well, that's done using something called the tracking zone,
03:58
which tells the system when a presenter who is being tracked leaves the area.
04:03
So you can see right now we have a light grey area running left to right across this whole presentation area.
04:08
That's the tracking zone.
04:09
And when we leave that zone
04:11
while tracking is happening,
04:13
the camera will go back to the home position.
04:16
Stepping into the trigger zone,
04:18
tracking begins and then I'm going to walk out of the tracking zone
04:21
and you will see that as soon as I completely leave,
04:24
the camera goes back to the home position.
04:31
So there you have some of the key terms in Seervision:
04:34
you've got the trigger zone, and when the presenter enters the trigger zone,
04:37
a container is recalled which begins tracking.
04:41
Then when the presenter leaves the tracking zone,
04:43
a static position container is recalled,
04:46
which takes the camera home.
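That flow boils down to a small state machine. Here's a minimal Python sketch of it, using assumed event names:

```python
from enum import Enum, auto

class CameraState(Enum):
    HOME = auto()      # camera parked in its static home position
    TRACKING = auto()  # camera following the presenter per the container

def step(state: CameraState, event: str) -> CameraState:
    # Entering a trigger zone recalls a container and starts tracking;
    # leaving the tracking zone recalls the static home container.
    if state is CameraState.HOME and event == "entered_trigger_zone":
        return CameraState.TRACKING
    if state is CameraState.TRACKING and event == "left_tracking_zone":
        return CameraState.HOME
    return state

state = CameraState.HOME
for event in ["entered_trigger_zone", "left_tracking_zone"]:
    state = step(state, event)
    print(f"{event} -> {state.name}")
```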
04:48
But how does this actually interface with Q-SYS?
04:51
Well, to understand how Seervision integrates with Q-SYS,
04:54
let's start off with a basic video system.
04:57
We've got a Q-SYS Core,
04:59
an NC Series camera and of course our USB bridge.
05:03
The Core is controlling those peripherals
05:05
and the video from the camera is going directly to the USB bridge.
05:10
So how does Seervision come into this?
05:12
Well, we're going to add in a Seervision server,
05:15
and that server is also going to receive the same camera stream.
05:19
It’s performing computer vision analysis on that to understand where presenters are in the scene
05:24
and for tracking it controls the camera directly,
05:28
but it's the server's integration with the Q-SYS Core
05:31
which completes this solution.
05:33
Rather than the server taking trigger zone events and tying those directly to tracking containers,
05:38
it sends all of the trigger events to a Q-SYS Core,
05:42
which is running the Seervision plugin,
05:44
and it's there that decisions are made about which trigger events result in which kind of tracking.
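The real decision logic runs inside the Q-SYS plugin; here's a rough Python sketch of that event-to-container dispatch (event and container names assumed, not the plugin's actual API):

```python
# Mapping from trigger events (sent by the Seervision server) to the
# container the Core should recall. All names here are assumptions.
CONTAINER_FOR_TRIGGER = {
    "zone_green_entered": "medium shot",
    "zone_purple_entered": "full body right",
    "tracking_zone_left": "home position",
}

def handle_trigger_event(event: str, recall_container) -> None:
    container = CONTAINER_FOR_TRIGGER.get(event)
    if container is None:
        return  # event not mapped to any tracking behaviour
    recall_container(container)

# The callback stands in for the plugin telling the server what to do.
handle_trigger_event("zone_green_entered",
                     lambda name: print(f"Core -> server: recall '{name}'"))
```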
05:51
Of course, what we've talked about so far covers only basic presenter tracking.
05:55
But throughout this training you're going to learn how to really finesse that tracking
05:59
and then start to build upon it,
06:01
adding multiple cameras and also working with multiple presenters.
06:05
In the next section,
06:05
we're going to have a look at the Seervision hardware
06:07
and software before we integrate it with Q-SYS.
06:11
I'll see you there.