pull/9/head
Louis DUFOUR 1 year ago
commit 56adc0493c

@@ -1,117 +1,33 @@
**Home**
| [Exercise 1 - Kinect Streams](./exo1_subject.md)
| [Exercise 2 - Introduction](./exo2_subject.md)
| [Exercise 2 part 1 - Postures](./exo2_1_subject.md)
| [Exercise 2 part 2 - Gestures](./exo2_2_subject.md)
| [Exercise 2 part 3 - Mapping](./exo2_3_subject.md)
| [Exercise 3 - Free App](./exo3_subject.md)
# Kinect Project
# Kinect Exercises
## Introduction
This project aims to develop a desktop application that uses the various sensors of the Kinect. It seeks to show how the Kinect can be used in interactive and immersive applications.
- [Kinect Exercises](#kinect-exercises)
- [Kinect sensor](#kinect-sensor)
- [What's inside the Kinect sensor?](#whats-inside-the-kinect-sensor)
- [Software and SDK](#software-and-sdk)
- [Course of the practical work](#course-of-the-practical-work)
- [Subjects](#subjects)
- [Evaluation](#evaluation)
## Team
- Louis DUFOUR
- Johan LACHENAL
## Technologies Used
- **Programming Language**: C#
- **Framework**: Windows Presentation Foundation (WPF)
The goal of this resource is to go deeper into *Object Oriented Programming* through the use of a (formerly) innovative device (**Kinect**) and simple gesture detection algorithms.
It is made of three exercises:
- the first one aims at retrieving information from the **Kinect** (4 hours)
- the second one aims at writing a gesture detection algorithm (4 hours)
- and the last one is a free-subject exercise where you are asked to develop a simple app building on the first two.
## Prerequisites
- **Hardware**: A Kinect is required to test the project.
- **Software**: Visual Studio to run and develop the WPF project.
This document is a quick introduction to the **Kinect** sensor.
## Installation and Setup
1. **Clone the repository**: Clone the GitHub repository with `git clone [repository-url]`.
2. **Open with Visual Studio**: Launch Visual Studio and open the cloned project.
3. **Install the dependencies**: Make sure all required libraries are installed.
4. **Connect the Kinect**: Plug your Kinect into your computer.
# Kinect sensor
## Usage
- **Starting**: To start the application, run the project from Visual Studio.
- **Interacting with the Kinect**: Follow the on-screen instructions to interact with the application using the Kinect sensors.
**Kinect** is a device designed by Microsoft, initially for the **Xbox360** and **Windows**, and then also for **Xbox One**.
Its purpose is to allow controlling applications (mainly games) without any physical controller, using your own body. It is thus a **Natural User Interface**.
## Our Progress
- Body Stream **OK**
- Color Stream **OK**
It was first sold in late 2010 (v1), with a Windows version in February 2012; the v2 sensor followed with the Xbox One, and its Windows version in July 2014. Production ended in 2017.
Since 2019, its successor, the **Azure Kinect**, has been available. I have not tested it yet.
> Tip 💡
> The word **Kinect** is a portmanteau of *kinetic* (movement) and *connect*.
## What's inside the Kinect sensor?
The **Kinect** device contains various sensors.
It is a horizontal bar with:
- an infrared projector,
- cameras (RGB camera, time-of-flight camera),
- a depth sensor,
- a special microchip generating a grid to allow position calculation,
- a motorized pivot to automatically aim the bar towards the users,
- microphones.
It allows:
- full body 3D motion capture (skeleton tracking)
- facial recognition
- voice recognition
## Software and SDK
If you want to use the **Kinect** sensor v2, you need to use the [**Kinect for Windows SDK 2.0**](https://www.microsoft.com/en-us/download/details.aspx?id=44561). You can then use C# (XAML), C++ or VB.NET.
However, due to its release date, it cannot be used with .NET Core or MAUI: you have to use a **.NET Framework** project to develop for the **Kinect**.
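
If you have never touched the SDK, here is a minimal sketch (not part of the exercises; `SensorCheck` is just an illustrative name) that opens the default sensor and reports its availability. It assumes a .NET Framework console project referencing the `Microsoft.Kinect` assembly installed by the SDK.

```csharp
using System;
using Microsoft.Kinect;

class SensorCheck
{
    static void Main()
    {
        // GetDefault() never returns null, even when no sensor is plugged in;
        // availability is reported asynchronously after Open()
        KinectSensor sensor = KinectSensor.GetDefault();
        sensor.IsAvailableChanged += (s, e) =>
            Console.WriteLine(e.IsAvailable ? "Kinect available" : "Kinect not available");
        sensor.Open();

        Console.WriteLine("Press Enter to quit...");
        Console.ReadLine();
        sensor.Close();
    }
}
```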
# Course of the practical work
You have only one hour of introduction in a lecture theater, and then 14 hours of practical work. This can be done only in room **C21**, where the **Kinect SDK** is installed.
> ⚠️ Available material ⚠️
> Due to the small number of **Kinect** sensors owned by IUT Clermont Auvergne, it is not possible to borrow one for the whole period, a weekend, or a night. If you want to work with the sensor outside of the practical work, you can borrow one during the day, provided it is not being used by another group; all sensors must be returned at the end of the day.
You have three exercises to complete. It is advised to spend:
- 4 to 6 hours for the first one,
- 4 to 6 hours for the second one,
- 2 to 6 hours for the third one.
Some parts of the exercises are mandatory to access the next exercise, some are not.
It is advised to organize yourself as follows:
- before the practical work:
  - study the advised themes via the resources given in the subjects,
  - prepare your questions if you have any;
- during the practical work:
  - carry out the different tasks,
  - validate your newly acquired skills with your teacher;
- after the practical work:
  - update and analyze your *to do list*,
  - check the skills that have not yet been validated with your teacher.
# Subjects
- [Exercise 1 - Kinect Streams](./exo1_subject.md)
- [Exercise 2 - Introduction](./exo2_subject.md)
- [Exercise 2 part 1 - Postures](./exo2_1_subject.md)
- [Exercise 2 part 2 - Gestures](./exo2_2_subject.md)
- [Exercise 2 part 3 - Mapping](./exo2_3_subject.md)
- [Exercise 3 - Free App](./exo3_subject.md)
# Evaluation
You will be evaluated three times (once for each exercise), during the practical work sessions.
To help you, you are given a *skill list* that you will fill in by validating skills with your teacher.
To obtain a validation, you will have to prove your command of a skill to your teacher through an oral interview or a test.
The final grade is composed of all the skills validated by the teacher by the end of the last practical work session.
Exercise | Subject | Coefficient
--- | --- | ---
1 | Kinect Streams | 7
2.1 | Gesture Recognition (Bases & Postures) | 3
2.2 | Gesture Recognition (Dynamic Gestures) | 2
2.3 | Gesture Recognition (Mapping) | 2
3 | Free App | 7
---
**Home**
| [Exercise 1 - Kinect Streams](./exo1_subject.md)
| [Exercise 2 - Introduction](./exo2_subject.md)
| [Exercise 2 part 1 - Postures](./exo2_1_subject.md)
| [Exercise 2 part 2 - Gestures](./exo2_2_subject.md)
| [Exercise 2 part 3 - Mapping](./exo2_3_subject.md)
| [Exercise 3 - Free App](./exo3_subject.md)
---
Copyright © 2023-2024 Marc Chevaldonné
## Known Issues and Solutions
No issues encountered so far.

@@ -26,6 +26,8 @@
<Button Content="Filtre 5" Margin="5"/>
</StackPanel>
<Image Grid.Row="2" Source="{Binding KinectStream.Bitmap}" />
<Image Grid.Row="2" Source="{Binding InfraredBitmap}" />
<Canvas Grid.Row="2" x:Name="skeletonCanvas" />
</Grid>
</Window>

@@ -23,27 +23,277 @@ namespace WpfApp
/// </summary>
public partial class MainWindow : Window
{
private KinectSensor kinectSensor = null;
// Color sensor attributes
private ColorFrameReader colorFrameReader = null;
private WriteableBitmap colorBitmap = null;
// Body sensor attributes
private BodyFrameReader bodyFrameReader = null;
private Body[] bodies = null;
// Depth sensor attributes
private DepthFrameReader depthFrameReader = null;
private WriteableBitmap depthBitmap = null;
private byte[] depthPixels;
// Infrared sensor attributes
private InfraredFrameReader infraredFrameReader = null;
private byte[] infraredPixels;
private WriteableBitmap infraredBitmap;
// Public properties for data binding
public WriteableBitmap ColorBitmap
{
get { return this.colorBitmap; }
}
public WriteableBitmap DepthBitmap
{
get { return this.depthBitmap; }
}
public WriteableBitmap InfraredBitmap
{
get { return this.infraredBitmap; }
}
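// (Added note) These read-only properties raise no change notification
// (no INotifyPropertyChanged), so the bitmaps must be created before the
// DataContext is set for the XAML bindings to see them; the constructor
// below therefore sets the DataContext only after initialization.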
public MainWindow()
{
InitializeComponent();
// Initialize the Kinect
this.kinectSensor = KinectSensor.GetDefault();
/* Color sensor
// Open the color frame reader
this.colorFrameReader = this.kinectSensor.ColorFrameSource.OpenReader();
// Frame description for the color images
FrameDescription colorFrameDescription = this.kinectSensor.ColorFrameSource.CreateFrameDescription(ColorImageFormat.Bgra);
// Create the bitmap used to display the image
this.colorBitmap = new WriteableBitmap(colorFrameDescription.Width, colorFrameDescription.Height, 96.0, 96.0, PixelFormats.Bgr32, null);
// Handle the FrameArrived event for the color stream
this.colorFrameReader.FrameArrived += this.Reader_ColorFrameArrived;
*/
// Initialize the BodyFrameReader
this.bodyFrameReader = this.kinectSensor.BodyFrameSource.OpenReader();
this.bodyFrameReader.FrameArrived += this.Reader_BodyFrameArrived;
// Initialize the bodies array
this.bodies = new Body[this.kinectSensor.BodyFrameSource.BodyCount];
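// (Added note) BodyCount is 6 on the Kinect v2: up to six bodies can be tracked simultaneously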
/* Depth sensor
// Initialize the DepthFrameReader
this.depthFrameReader = this.kinectSensor.DepthFrameSource.OpenReader();
this.depthFrameReader.FrameArrived += this.Reader_DepthFrameArrived;
FrameDescription depthFrameDescription = this.kinectSensor.DepthFrameSource.FrameDescription;
// Initialize depthPixels to store the data of each pixel
this.depthPixels = new byte[depthFrameDescription.Width * depthFrameDescription.Height];
// Initialize depthBitmap to display the depth data
this.depthBitmap = new WriteableBitmap(depthFrameDescription.Width, depthFrameDescription.Height, 96.0, 96.0, PixelFormats.Gray8, null);
*/
// Initialize the InfraredFrameReader
this.infraredFrameReader = this.kinectSensor.InfraredFrameSource.OpenReader();
this.infraredFrameReader.FrameArrived += this.Reader_InfraredFrameArrived;
FrameDescription infraredFrameDescription = this.kinectSensor.InfraredFrameSource.FrameDescription;
// Initialize infraredPixels to store the data of each pixel
this.infraredPixels = new byte[infraredFrameDescription.Width * infraredFrameDescription.Height];
// Initialize infraredBitmap to display the infrared data
this.infraredBitmap = new WriteableBitmap(infraredFrameDescription.Width, infraredFrameDescription.Height, 96.0, 96.0, PixelFormats.Gray8, null);
// Set the DataContext now that the bitmaps exist, so the XAML bindings can resolve
this.DataContext = this;
// Open the Kinect
this.kinectSensor.Open();
}
private void Reader_ColorFrameArrived(object sender, ColorFrameArrivedEventArgs e)
{
using (ColorFrame colorFrame = e.FrameReference.AcquireFrame())
{
if (colorFrame != null)
{
FrameDescription colorFrameDescription = colorFrame.FrameDescription;
using (KinectBuffer colorBuffer = colorFrame.LockRawImageBuffer())
{
this.colorBitmap.Lock();
// Check whether the image size has changed
if ((colorFrameDescription.Width == this.colorBitmap.PixelWidth) && (colorFrameDescription.Height == this.colorBitmap.PixelHeight))
{
colorFrame.CopyConvertedFrameDataToIntPtr(
this.colorBitmap.BackBuffer,
(uint)(colorFrameDescription.Width * colorFrameDescription.Height * 4),
ColorImageFormat.Bgra);
this.colorBitmap.AddDirtyRect(new Int32Rect(0, 0, this.colorBitmap.PixelWidth, this.colorBitmap.PixelHeight));
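// (Added note) AddDirtyRect marks the back buffer region as changed so WPF re-renders it on Unlock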
}
this.colorBitmap.Unlock();
}
}
}
}
// Make sure the readers and the Kinect sensor are properly closed when the window closes
private void MainWindow_Closing(object sender, System.ComponentModel.CancelEventArgs e)
{
if (this.colorFrameReader != null)
{
this.colorFrameReader.Dispose();
this.colorFrameReader = null;
}
if (this.bodyFrameReader != null)
{
this.bodyFrameReader.Dispose();
this.bodyFrameReader = null;
}
if (this.depthFrameReader != null)
{
this.depthFrameReader.Dispose();
this.depthFrameReader = null;
}
if (this.infraredFrameReader != null)
{
this.infraredFrameReader.Dispose();
this.infraredFrameReader = null;
}
if (this.kinectSensor != null)
{
this.kinectSensor.Close();
this.kinectSensor = null;
}
}
private void Reader_BodyFrameArrived(object sender, BodyFrameArrivedEventArgs e)
{
using (var bodyFrame = e.FrameReference.AcquireFrame())
{
if (bodyFrame != null)
{
bodyFrame.GetAndRefreshBodyData(this.bodies);
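// (Added note) GetAndRefreshBodyData fills the pre-allocated array in place;
// the Body objects are reused from frame to frame, so no per-frame allocation occurs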
skeletonCanvas.Children.Clear(); // Clear the canvas before drawing
foreach (var body in this.bodies)
{
if (body.IsTracked)
{
// Draw the skeleton
DrawSkeleton(body);
}
}
}
}
}
private void DrawSkeleton(Body body)
{
foreach (JointType jointType in body.Joints.Keys)
{
Joint joint = body.Joints[jointType];
if (joint.TrackingState == TrackingState.Tracked)
{
// Convert the joint coordinates into screen coordinates
Point point = new Point();
ColorSpacePoint colorPoint = this.kinectSensor.CoordinateMapper.MapCameraPointToColorSpace(joint.Position);
point.X = float.IsInfinity(colorPoint.X) ? 0 : colorPoint.X;
point.Y = float.IsInfinity(colorPoint.Y) ? 0 : colorPoint.Y;
// Draw the joint
DrawJoint(point);
}
}
// Bones could be drawn here as well (see the DrawBone sketch below)
}
private void DrawJoint(Point point)
{
Ellipse ellipse = new Ellipse
{
Width = 10,
Height = 10,
Fill = new SolidColorBrush(Colors.Red)
};
Canvas.SetLeft(ellipse, point.X - ellipse.Width / 2);
Canvas.SetTop(ellipse, point.Y - ellipse.Height / 2);
skeletonCanvas.Children.Add(ellipse);
}
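// (Added sketch, not in the original code) One possible way to draw the bones:
// DrawBone is a hypothetical helper using the same CoordinateMapper projection
// as DrawSkeleton above; call it from DrawSkeleton for each joint pair, e.g.
// DrawBone(body, JointType.ShoulderLeft, JointType.ElbowLeft);
private void DrawBone(Body body, JointType startType, JointType endType)
{
Joint start = body.Joints[startType];
Joint end = body.Joints[endType];
// Only draw when both joints are tracked, to avoid jitter from inferred positions
if (start.TrackingState != TrackingState.Tracked || end.TrackingState != TrackingState.Tracked)
{
return;
}
ColorSpacePoint p1 = this.kinectSensor.CoordinateMapper.MapCameraPointToColorSpace(start.Position);
ColorSpacePoint p2 = this.kinectSensor.CoordinateMapper.MapCameraPointToColorSpace(end.Position);
Line bone = new Line
{
X1 = float.IsInfinity(p1.X) ? 0 : p1.X,
Y1 = float.IsInfinity(p1.Y) ? 0 : p1.Y,
X2 = float.IsInfinity(p2.X) ? 0 : p2.X,
Y2 = float.IsInfinity(p2.Y) ? 0 : p2.Y,
Stroke = new SolidColorBrush(Colors.Green),
StrokeThickness = 3
};
skeletonCanvas.Children.Add(bone);
}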
private void Reader_DepthFrameArrived(object sender, DepthFrameArrivedEventArgs e)
{
using (DepthFrame depthFrame = e.FrameReference.AcquireFrame())
{
if (depthFrame != null)
{
FrameDescription depthFrameDescription = depthFrame.FrameDescription;
// Create an array to store the depth data
ushort[] depthData = new ushort[depthFrameDescription.LengthInPixels];
depthFrame.CopyFrameDataToArray(depthData);
// Process the depth data
ProcessDepthFrameData(depthData, depthFrameDescription.LengthInPixels, depthFrame.DepthMinReliableDistance, depthFrame.DepthMaxReliableDistance);
// Update the depth bitmap
this.depthBitmap.WritePixels(
new Int32Rect(0, 0, depthFrameDescription.Width, depthFrameDescription.Height),
this.depthPixels,
depthFrameDescription.Width,
0);
}
}
}
private void ProcessDepthFrameData(ushort[] depthData, uint depthFrameDataSize, ushort minDepth, ushort maxDepth)
{
// Convert the depth data to grayscale
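// (Added note) depth % 256 wraps every 256 mm and produces visible banding;
// a linear mapping such as (depth - minDepth) * 255 / (maxDepth - minDepth)
// is a common alternative if a smooth gradient is preferred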
for (int i = 0; i < depthFrameDataSize; ++i)
{
ushort depth = depthData[i];
this.depthPixels[i] = (byte)(depth >= minDepth && depth <= maxDepth ? (depth % 256) : 0);
}
}
private void ProcessInfraredFrameData(ushort[] frameData, uint frameDataSize)
{
// Convert the infrared data to grayscale
for (int i = 0; i < frameDataSize; ++i)
{
// Convert the 16-bit infrared value to an 8-bit intensity by keeping the high byte
byte intensity = (byte)(frameData[i] >> 8);
this.infraredPixels[i] = intensity;
}
}
private void Reader_InfraredFrameArrived(object sender, InfraredFrameArrivedEventArgs e)
{
using (InfraredFrame infraredFrame = e.FrameReference.AcquireFrame())
{
if (infraredFrame != null)
{
FrameDescription infraredFrameDescription = infraredFrame.FrameDescription;
// Create an array to store the infrared data
ushort[] infraredData = new ushort[infraredFrameDescription.LengthInPixels];
infraredFrame.CopyFrameDataToArray(infraredData);
// Process the infrared data
ProcessInfraredFrameData(infraredData, infraredFrameDescription.LengthInPixels);
// Update the infrared bitmap
this.infraredBitmap.WritePixels(
new Int32Rect(0, 0, infraredFrameDescription.Width, infraredFrameDescription.Height),
this.infraredPixels,
infraredFrameDescription.Width,
0);
}
}
}
}
}
