General / SFML and FFMpeg Tutorial English Translation
« on: April 16, 2010, 11:53:18 am »
The original tutorial is in French and is at the following link:
http://www.sfml-dev.org/wiki/fr/tutoriels/integrervideo
I translated it into English using the Google language tool. I know nothing about French except the word "Bonjour", so it's a raw Google translation. Please feel free to improve it.
--------------------------------------------------------------------------
/*
SFML does not have a module to load, save, convert or play video. For that there is a C library called FFmpeg. It was originally designed to work with SDL, or with OpenGL and GLUT, mostly under Linux, but it can also be compiled on Windows, Mac OS and BSD and used with other libraries, as long as you know how to manipulate the pixels of each frame. This tutorial does not explain how to compile FFmpeg, only how to load a video (.mpg, .avi, ...) with SFML. It is based on FFmpeg version 0.4.8. Playing back the sound is not covered here.
Getting Started
Here are the basic elements we will need for our program: the include files for SFML and FFmpeg. FFmpeg is written in C and our project is in C++, so we have to tell the compiler, hence the extern "C" block.
*/
#include <SFML/Graphics.hpp>
extern "C"
{
#include <libavformat/avformat.h>
#include <libavcodec/avcodec.h>
}
/*
First we declare everything related to SFML. You are free to structure the code more cleanly; for this tutorial I am not worrying much about form.
*/
sf::RenderWindow App;
sf::Image im_video;
sf::Sprite sp_video;
sf::Uint8 *Data;
/*
im_video will receive the pixel array of each frame of the video. sp_video is the SFML sprite used to draw our video on screen. Finally, Data is the pixel array without which we could not transfer the video to SFML. Now let's declare everything related to FFmpeg.
*/
AVFormatContext *pFormatCtx;
int videoStream;
int iFrameSize;
AVCodecContext *pCodecCtx;
AVFrame *pFrame;
AVFrame *pFrameRGB;
uint8_t *buffer;
AVPacket packet;
/*
These variables will be explained as they are used in the rest of this tutorial.
Loading a video
The first thing to do is open the video file and prepare it for reading. For that we will write a function init_video(char* filename) which returns an integer. Again, once you understand how this program works, you are free to restructure the code in a more object-oriented way.
*/
int init_video(char* filename);
void display();
void close_video();
int init_video(char* filename)
{
AVCodec *pCodec;
/* We declare a pointer pCodec which will receive the codec used to decode the video file. */
av_register_all();
if(av_open_input_file(&pFormatCtx, filename, NULL, 0, NULL)!=0)
{
fprintf(stderr, "Unexisting file!\n");
return -1;
}
if(av_find_stream_info(pFormatCtx)<0)
{
fprintf(stderr, "Couldn't find stream information!\n");
return -1;
}
dump_format(pFormatCtx, 0, filename, 0);
/*
We first call av_register_all(), which registers all the formats and codecs FFmpeg is able to read. Then we open the file, checking at the same time that it exists. The pointer pFormatCtx receives the video, and av_find_stream_info(pFormatCtx) looks for its streams (images, sound, ...). Both calls also report any errors. dump_format() prints information about the file to the console, which is handy for debugging.
*/
videoStream=-1;
for(int i=0; i<(pFormatCtx->nb_streams); i++)
{
if(pFormatCtx->streams[i]->codec.codec_type==CODEC_TYPE_VIDEO)
{
videoStream=i;
break;
}
}
if(videoStream==-1)
return -1;
pCodecCtx=&pFormatCtx->streams[videoStream]->codec;
/*
pFormatCtx->streams is an array of pointers. We search it for the video stream, then we point pCodecCtx at that stream's codec context, ready to start at the beginning of the film.
*/
pCodec=avcodec_find_decoder(pCodecCtx->codec_id);
if(pCodec==NULL)
{
fprintf(stderr, "Unsupported codec!\n");
return -1;
}
if(avcodec_open(pCodecCtx, pCodec)<0)
return -1;
iFrameSize = pCodecCtx->width * pCodecCtx->height * 3;
/*
To finish this first part, we look up the codec and check whether FFmpeg supports it. Finally, we open it with avcodec_open(). iFrameSize stores the total size of one video frame, which we will need for display.
Now that the file is open, the codec is checked and the video stream is found, we still have to store the video's image data before we can finally show our film on screen.
*/
pFrame=avcodec_alloc_frame();
pFrameRGB=avcodec_alloc_frame();
if(pFrameRGB==NULL)
return -1;
/*
pFrame is prepared to store our video in its native YUV format (one luminance and two chrominance components). Then we prepare pFrameRGB to store the video in RGB, the format SFML works with.
*/
int numBytes=avpicture_get_size(PIX_FMT_RGB24, pCodecCtx->width, pCodecCtx->height);
buffer=(uint8_t *)av_malloc(numBytes*sizeof(uint8_t));
avpicture_fill((AVPicture *)pFrameRGB, buffer, PIX_FMT_RGB24,
pCodecCtx->width, pCodecCtx->height);
/*
With numBytes we get the number of bytes of one RGB24 image at the video's dimensions, and we allocate the buffer accordingly. Then the buffer is attached to pFrameRGB.
*/
Data = new sf::Uint8[pCodecCtx->width * pCodecCtx->height * 4];
return 0;
}
/*
Finally, we allocate Data, the pixel array that will serve as a bridge between FFmpeg and SFML. That concludes our function init_video(char* filename).
Reading and drawing the video
We will create a function display() responsible for reading the video and drawing one image on each pass through the main loop.
*/
void display()
{
int frameFinished;
if (av_read_packet(pFormatCtx, &packet) < 0)
{
close_video();
//exit(0);
}
if(packet.stream_index==videoStream)
{
/*
We read from the video, and if we have reached the end we simply close it and leave, for example. The condition is there because packet.stream_index is not always equal to videoStream. I did not have it at first, and drawing inside that condition caused the picture to flicker :)
*/
avcodec_decode_video(pCodecCtx, pFrame, &frameFinished,
packet.data, packet.size);
if(frameFinished)
{
// Convert the image from its native format to RGB
img_convert((AVPicture *)pFrameRGB, PIX_FMT_RGB24,
(AVPicture*)pFrame, pCodecCtx->pix_fmt, pCodecCtx->width,
pCodecCtx->height);
}
/*
First we decode an image from the video; then, if a complete frame has been decoded, we convert it to the RGB format chosen earlier.
*/
int j = 0;
for(int i = 0 ; i < (iFrameSize) ; i+=3)
{
Data[j] = pFrameRGB->data[0][i];
Data[j+1] = pFrameRGB->data[0][i+1];
Data[j+2] = pFrameRGB->data[0][i+2];
Data[j+3] = 255;
j+=4;
}
im_video.LoadFromPixels(pCodecCtx->width, pCodecCtx->height, Data);
/*
We transfer the image to SFML through our pixel array and the sf::Image method LoadFromPixels.
*/
}
// Dessiner l'image sur le tampon de l'écran
//Draw the image on the screen buffer
App.Draw(sp_video);
}
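/*
For readability, the RGB-to-RGBA copy done in the loop above can be factored into a small helper. This is only a sketch: the name rgb24_to_rgba is my own, not part of FFmpeg or SFML; it does exactly the same work as the loop in display(), and you would call it as rgb24_to_rgba(pFrameRGB->data[0], Data, pCodecCtx->width * pCodecCtx->height).
*/
void rgb24_to_rgba(const uint8_t* src, sf::Uint8* dst, int pixelCount)
{
    for(int p = 0; p < pixelCount; ++p)
    {
        dst[4*p]     = src[3*p];     // red
        dst[4*p + 1] = src[3*p + 1]; // green
        dst[4*p + 2] = src[3*p + 2]; // blue
        dst[4*p + 3] = 255;          // alpha: fully opaque
    }
}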
/*
We draw outside the condition, for the reason you now know :)
Closing the video
Let's create a function close_video(), in which we simply release the variables used by FFmpeg.
*/
void close_video()
{
// Free the packet allocated by av_read_packet
av_free_packet(&packet);
// Free the RGB image
av_free(buffer);
av_free(pFrameRGB);
// Free the YUV image
av_free(pFrame);
// Close the codec
avcodec_close(pCodecCtx);
// Close the video file
av_close_input_file(pFormatCtx);
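// Not in the original code: Data was allocated with new[] in init_video()
// and never released, so we also free it here. Resetting the pointer keeps
// an accidental second call to close_video() harmless for this buffer.
delete[] Data;
Data = NULL;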
}
/*
The main program
To complete our program, we put all of the above together with SFML. Once compiled, if all goes well, you should be enjoying your first video, SFML style :)
*/
int main()
{
// Our function to initialize the video
if ( init_video("test.avi") == 0 )
{
// Basic SFML code
App.Create( sf::VideoMode(pCodecCtx->width*2, pCodecCtx->height*2, 32),
"Video avec SFML et FFMpeg"
//"Video with SFML and FFMpeg"
);
// Create our image, white for example
im_video.Create(pCodecCtx->width, pCodecCtx->height, sf::Color(255,255,255,255));
// I like to leave smoothing off; it depends on the quality of the video :)
im_video.SetSmooth(false);
// We create our sprite
sp_video.SetImage(im_video);
// You can use the sprite's features on the video, such as scaling,
// just as you would on a simple still image
// The main loop
bool Running = true;
while (Running)
{
// Events
sf::Event Event;
while (App.GetEvent(Event))
{
if (Event.Type == sf::Event::Closed)
Running = false;
if ((Event.Type == sf::Event::KeyPressed) && (Event.Key.Code == sf::Key::Escape))
Running = false;
}
// Our function that reads and draws the video
display();
// Display everything
App.Display();
App.SetFramerateLimit(50);
}
// Our function to close the video
close_video();
return EXIT_SUCCESS;
}
return EXIT_FAILURE;
/*
I used SetFramerateLimit(50); if you have a better approach, no problem. The 50 is twice 25 frames per second: I noticed that avcodec_decode_video(pCodecCtx, pFrame, &frameFinished, packet.data, packet.size) only produces a finished image every other call. No images are lost, the function just behaves that way, I cannot tell you more :) So setting the limit to 50 frames displays the video at the equivalent of 25 frames per second.
I hope I have said enough to let you integrate video into your own programs. Sure, it's a silent film for now, but you have to start somewhere :)
*/
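/*
If your video is not 25 frames per second, you can compute the limit instead of hard-coding 50. A minimal sketch, assuming you already know the source frame rate (for example by reading it off the dump_format() output); the factor of two matches the observation above that a finished frame only comes out of avcodec_decode_video() every other call.
*/
void set_playback_rate(sf::RenderWindow& window, unsigned int sourceFps)
{
    // Twice the source rate, because only every other decode call finishes a frame here.
    window.SetFramerateLimit(sourceFps * 2);
}
// Usage (sourceFps = 25 is an assumed value for test.avi): set_playback_rate(App, 25);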