Michał Kazimierz Kowalczyk

weblog

Import your Skype account data on MacOS


There are moments in your life when you want to start everything once again... One of them is installing MacOS on a new hard drive. You can use Migration Assistant to import a lot of things, but... if you don't want to use it, you will find recovering your personal Skype data (chat histories, contacts, favorites etc.) really hard.

I found on this website: https://support.skype.com/en/faq/FA181/how-do-i-back-up-my-contact-list-on-mac-os-x the information that Skype data is stored in:

/Users/---your-account-name---/Library/Application Support/Skype

Unfortunately this guide is not up to date (neither for my operating system, Mountain Lion (10.8.2), nor for my Skype 6 (6.0.0.2968)). In my case, to achieve my goal, I needed to replace the file main.db (which is placed in a directory named after my Skype identifier) with the equivalent file from a copy of my files. Below I present a simple guide on how to do it.

On newer versions of MacOS the Library directory is hidden, so I suggest using Terminal (Applications/Utilities/Terminal) to access it.

First, get root privileges. Type in the Terminal:

sudo su

Then type your administrator password. You should see something like this:

sh-3.2#

Now, go to Library/Application Support:

cd /Users/---your-user-account-name---/Library/Application\ Support/

If you haven't run Skype yet, there will be no Skype directory in Application Support (you can check by typing "ls"). Once you have run Skype, you can be sure the Skype directory exists. You can get into it by typing:

cd Skype

Now, if you have logged in to your Skype account, typing "ls" will show a directory named after your Skype identifier. If you don't see it, it means you haven't logged in to your account in Skype yet. After you have done so, quit the Skype application and get into that directory:

cd ---your_skype_identifier---

Now, type "ls -l". This will show you list of files inside this directory. To import your data you need to simply replace the file "main.db" with equivalent file from your user data copy. You can do it by:

cp ---path-to-copy-of-Users-directory---/---your-user-account-name---/Library/Application\ Support/Skype/---your-skype-identifier---/main.db main.db

The last thing to do is to change the owner of the new copy of the main.db file. Type:

chown ---your-user-account-name---:staff main.db

Now, after you run Skype and log in to your account, you should see all the personal data from your copy. (-:
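
For reference, here is the whole procedure condensed into a few commands (a sketch only; the --- placeholders must of course be replaced with your own names and paths):

sudo su
cd /Users/---your-user-account-name---/Library/Application\ Support/Skype/---your-skype-identifier---/
cp ---path-to-copy-of-Users-directory---/---your-user-account-name---/Library/Application\ Support/Skype/---your-skype-identifier---/main.db main.db
chown ---your-user-account-name---:staff main.db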

MKK


Necessitas: using OpenGL ES 2.0


Once, I had to work with a QGLWidget. I needed to display something on my tablet at high speed. To obtain hardware acceleration I figured I would probably have to use OpenGL, which I wasn't familiar with.

Necessitas supports OpenGL (as announced by Bogdan Vatra here). But we need to be more specific: Necessitas currently (version 0.3.4) supports OpenGL ES 2.0. This fact really narrows down all the knowledge about OpenGL (articles, tutorials, examples) that you can find on the Internet. Someone found a way of using OpenGL ES 1.0 (more information here), but he claims it is not the optimal solution. That's why I decided to use OpenGL ES 2.0.

After a long time spent searching for a good example of how to use OpenGL ES 2.0 in Qt projects, I found this example. To make it easier for you, I provide it as a Necessitas project. I have to admit that in my opinion it is not the best example to learn from, but it is still a good presentation of the technology.

The problem with this example (besides the fact that it's quite complicated) is that it is not fully compatible with Necessitas. I can't say it won't work perfectly everywhere, but in my case there were some problems. The biggest one was displaying user interface elements together with the rendered image: they are just invisible.

After reading a lot of articles to understand how it works, I created my own, lightweight example. You can see that this is a typical Necessitas project; the previous one was created just for Qt.

On the form you can find a QDial and a QWidget promoted to QGLCanvas. In MainWindow's constructor you can see that the valueChanged(int) signal of the QDial is connected to setAngle(int), a slot of QGLCanvas. And that's the end of the non-GL things.
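
In Qt 4 syntax such a connection looks roughly like this (assuming the widgets are called dial and glCanvas in the generated ui object; the names in the actual project may differ):

    connect (ui -> dial, SIGNAL (valueChanged (int)), ui -> glCanvas, SLOT (setAngle (int)));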

My goal is not to write a tutorial about OpenGL ES 2.0 but to give you some knowledge to start with. So, let's start with the important notions:

Vertices describe how our 3D object is built. In general, each vertex is a triple of coordinates (x, y, z). Sequences of vertices create an object. In OpenGL ES 2.0 we are limited to a maximum of 3 vertices per face (so we can create triangles but not quads). Read more about vertices.

In our case, we create a flat rectangle. We need 2 triangles, each of them consisting of 3 vertices, and each vertex contains 3 coordinates, so we need 18 coordinates to describe it.
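
As an illustration (the exact coordinate values depend on the size and position of your rectangle, so treat these numbers as an example only), such a rectangle could be described like this:

    // two triangles (6 vertices, 3 coordinates each = 18 floats) forming a rectangle
    GLfloat vertices[] = {
        -1.0f, -1.0f, 0.0f,   // first triangle
         1.0f, -1.0f, 0.0f,
         1.0f,  1.0f, 0.0f,

        -1.0f, -1.0f, 0.0f,   // second triangle
         1.0f,  1.0f, 0.0f,
        -1.0f,  1.0f, 0.0f
    };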

The next thing is texture coordinates. They tell us how the object will be textured. Read more...

In some situations we need normals (normal vectors). They are vectors perpendicular to the faces. Read more...

Now, let's take a look at shaders. Shaders are small programs that do the calculations for the rendering process. Read more...

In the case of OpenGL ES 2.0, there is a dedicated shading language. You can find its specification here.

And, last but not least, uniforms. Uniforms are variables used to pass data from your OpenGL application to the shader programs. Read more...

In our case, a uniform is, for example, a QMatrix4x4 object. We use it as a parameter for a shader: through matrix transformations we rotate the rendered object.
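
For example, passing such a matrix to a shader with QGLShaderProgram could look like this (the uniform name "matrix", the shader program object name and the transformation values are just an illustration, not taken from the example project):

    QMatrix4x4 matrix;
    matrix.perspective (60.0f, 4.0f / 3.0f, 0.1f, 100.0f);
    matrix.translate (0.0f, 0.0f, -2.0f);
    matrix.rotate (angle, 0.0f, 1.0f, 0.0f);

    // "matrix" must match the uniform name declared in the vertex shader
    program.setUniformValue ("matrix", matrix);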

How is a typical QGLWidget built? The most important methods that we use are initializeGL () and paintGL (). As you can guess, in the first one we do everything that needs to be done before we draw anything (for instance we define shaders). In the second one we describe what and how we will draw.

Each OpenGL statement used for drawing should be placed between beginNativePainting () and endNativePainting () - methods of QPainter.

Before drawing the rectangle, we call:

glClear (GL_COLOR_BUFFER_BIT | GL_DEPTH_BUFFER_BIT);

In this way we clear the previous rectangle that we drew.

To execute the paintGL () method we need to call:

updateGL ();

You will also find this quite important line in the constructor:

setAutoBufferSwap (false);

Without it the display flickers.
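
Putting all of these pieces together, a minimal skeleton of such a widget could look roughly like this (a sketch only, not the code of my example project; the class name, the trivial shader sources and the drawing commands are placeholders):

#include <QGLWidget>
#include <QGLShaderProgram>
#include <QPainter>

class SimpleGLWidget : public QGLWidget {
public:
    SimpleGLWidget (QWidget *parent = 0) : QGLWidget (parent) {
        setAutoBufferSwap (false);              // we swap buffers manually at the end of paintGL ()
    }

protected:
    void initializeGL () {
        // compile and link the shaders once, before anything is drawn
        program.addShaderFromSourceCode (QGLShader::Vertex,
            "attribute highp vec4 vertex;\n"
            "void main () { gl_Position = vertex; }");
        program.addShaderFromSourceCode (QGLShader::Fragment,
            "void main () { gl_FragColor = vec4 (1.0, 0.0, 0.0, 1.0); }");
        program.link ();
    }

    void paintGL () {
        QPainter painter (this);
        painter.beginNativePainting ();         // raw OpenGL calls go between these two calls

        glClear (GL_COLOR_BUFFER_BIT | GL_DEPTH_BUFFER_BIT);
        program.bind ();
        // ... set uniforms and attributes, call glDrawArrays () here ...
        program.release ();

        painter.endNativePainting ();
        swapBuffers ();                         // needed because of setAutoBufferSwap (false)
    }

private:
    QGLShaderProgram program;
};

Calling updateGL () from outside (for example from the setAngle (int) slot) triggers paintGL () again with the new state.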

That's all for now. I hope it's sufficient to understand my example. If you have any questions, comment on this article or contact me. If you find some better sources of knowledge about the notions that I presented here, please share them.

 

--- update ---

Claude LRV found that to use OpenGL in your project, you need to update the Package Configuration settings: check the QtOpenGL library in Project -> Run Settings -> Package Configuration -> Libraries. Interestingly, in my case it happens automatically every time (I'm using Necessitas for MacOS and I guess this is the reason). To learn more, see the comments.

MKK


Necessitas: solution for lack of Camera support


In this article I will present my solution for using the Camera in Necessitas (Qt on Android) projects. In the current version of Necessitas (0.3.4) the Camera is not supported. I was looking for existing solutions, but the only one I found, a patch for using MultimediaKit (proposed by Dr. Juan Manuel Sáez), was related to old versions of Necessitas. I tried to use it, but it seems that old versions are now unstable (NecessitasQtCreator crashes while building the project).

I decided to use JNI to get access to the Camera (with a little help from Java (-: ) in a Qt application.

I created two classes, both named CameraSupport:

  1. in Java, to get every frame using Android API,
  2. in C++, to get frames from Java and retrieve them.

The first class is really simple and doesn't contain anything special (maybe the use of additional buffers is somewhat untypical). To build it we need Android SDK level 11 (or higher). If you prefer to use lower SDK levels, you can simply delete the lines containing preferredPreviewSize.

The second class is a little bit more complicated, because:

  1. it uses JNI to communicate with the Java class,
  2. the Android SDK delivers frames in YUV format, so I need to decode them to get RGB values,
  3. I'm using a Samsung Galaxy Tab 10.1 with a dual-core processor, so I wanted to take advantage of it by dividing the YUV-to-RGB conversion between two threads.

I briefly explained how C++ and Java classes can communicate in my previous article. In this article I will use that solution.

I was looking for a long time for an optimal way of decoding a YUV image to RGB. I was using the formulas I found on Wikipedia (I chose the bitwise version). To speed things up I did all the calculations in advance and put the results into arrays, but I was still using a clamping function. I found a nice idea for improving that in the article Optimizing YUV-RGB Color Space Conversion Using Intel's SIMD Technology by Étienne Dupuis. He suggested removing the clamping function and using instead a predefined array with a proper number of zeros at the beginning and 255s at the end. That was the end of my optimization. If anyone has an idea how to make it better, please share it.
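
To give an idea of the clamp-table trick, here is a simplified single-pixel sketch (not the exact code of my class; the table size and the integer formulas, taken from the Wikipedia page, are only an illustration, and the >> 8 on a negative intermediate value assumes arithmetic shift, which common compilers provide):

#include <stdint.h>

// Clamp table: CLAMP_OFFSET zeros, then 0..255, then CLAMP_OFFSET copies of 255.
// Indexing with (value + CLAMP_OFFSET) replaces an if/else clamp with a single array read.
// The offset only has to cover the most negative and the largest intermediate values.
static const int CLAMP_OFFSET = 512;
static uint8_t clampTable[CLAMP_OFFSET + 256 + CLAMP_OFFSET];

static void initClampTable ()
{
    for (int i = 0; i < CLAMP_OFFSET; i++)
        clampTable[i] = 0;
    for (int i = 0; i < 256; i++)
        clampTable[CLAMP_OFFSET + i] = (uint8_t) i;
    for (int i = 0; i < CLAMP_OFFSET; i++)
        clampTable[CLAMP_OFFSET + 256 + i] = 255;
}

// One pixel; the real class of course works on whole frames and uses precalculated arrays.
static void yuvToRgb (int y, int u, int v, uint8_t &r, uint8_t &g, uint8_t &b)
{
    int c = y - 16;
    int d = u - 128;
    int e = v - 128;

    r = clampTable[CLAMP_OFFSET + ((298 * c + 409 * e + 128) >> 8)];
    g = clampTable[CLAMP_OFFSET + ((298 * c - 100 * d - 208 * e + 128) >> 8)];
    b = clampTable[CLAMP_OFFSET + ((298 * c + 516 * d + 128) >> 8)];
}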

I guess this is the first time in my life I felt the need to thread something (yes, I'm quite young (-: ). You can find the result of my work in the C++ class. I designed it to work with an optimal number of threads. I discovered that, because of pre-emption, one thread can take much longer to do its job than the other. My solution does not divide the data into 2 equally sized portions (in the case of dual-core processors), but gives each thread smaller portions; when a thread finishes its portion and there is still some data left, it takes the next one (see the sketch below). Of course, if you have a better solution or you find some error, please contact me. (-:
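
Stripped of all the YUV details, the idea looks roughly like this (a sketch only; the real CameraSupport class keeps its own buffers and row ranges, and the class and member names here are placeholders):

#include <QThread>
#include <QAtomicInt>
#include <QtGlobal>

// Each thread repeatedly grabs a small portion of rows from a shared counter.
// A thread that gets pre-empted simply ends up processing fewer rows overall.
class ConverterThread : public QThread {
public:
    ConverterThread (QAtomicInt *nextRow, int totalRows, int chunk)
        : nextRow (nextRow), totalRows (totalRows), chunk (chunk) {}

protected:
    void run () {
        for (;;) {
            int first = nextRow -> fetchAndAddOrdered (chunk);   // take the next portion
            if (first >= totalRows)
                break;
            int last = qMin (first + chunk, totalRows);
            for (int row = first; row < last; row++) {
                // convert one row of the YUV frame to RGB here
            }
        }
    }

private:
    QAtomicInt *nextRow;
    int totalRows;
    int chunk;
};

You would create two such threads (or as many as there are cores), start () them and wait () for both; the chunk size is a compromise between the overhead of grabbing a portion and how evenly the work gets balanced.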

I won't analyse all my code here; I will just show you how to use it.

First, copy these files (you can find the sources here) into your project (to the following paths):

  1. PROJECT_NAME/android/src/pl/ekk/mkk/necessitas/CameraSupport.java
  2. PROJECT_NAME/camerasupport.cpp
  3. PROJECT_NAME/camerasupport.h
  4. PROJECT_NAME/JavaClassesLoader.cpp (if you don't have it already)

Second, add existing files to your Necessitas project.

Third, place this code in the header file of your MainWindow class (or another one):

#include <QTimer>

class CameraSupport;

#define MILISECONDS_FOR_REFRESH 1
#define WIDTH 640
#define HEIGHT 480

In my case, I use frames with a resolution of 640 x 480. On my tablet this works at 30 FPS. If you want a different resolution, just change the values above. In the Application Output, at the moment of Camera initialization, you can find a line like this:

V/CAMERA SUPPORT(30977): Preferred preview size - 1024x768

which tells you the preferred size of a frame (remember that you need Android SDK 11 or higher to get this information).

In the private section add this code:

    QTimer *timer;
    QImage *frame;

    CameraSupport *cameraSupport;
    bool repaint;

    unsigned int totalFrames;
    unsigned long long *frameTime;
    unsigned int currentFrame;
    unsigned int fps;

    unsigned long long frameCounter;
    clock_t time1, time2;

First, we need a QTimer to fetch frames at a proper frequency. Of course, in the case of big frames, retrieving takes so long that the extra delay between frames can be really small. Still, in the case of small images it's good to set this interval longer.

Second, we need a QImage to use the frame data in the Qt application. If you prefer other image containers, you can use them as well, as long as they support loading data from an RGBA array.

Third, we need a CameraSupport object to ask for new frames and, if there is one, to fetch it.

To avoid painting the same frame twice I use the repaint flag.

Below the repaint variable you can find some diagnostic variables used to calculate FPS, measure the execution time of parts of the code, etc.

Now, let's add to the protected section:

    void paintEvent (QPaintEvent *event);

We will use this method for displaying frames.

The last thing you need to add to your header file is a slot for the QTimer:

private slots:
    void updateFrame();

Now, let's modify the cpp file. In the place where you want to start using your camera (in my case the constructor), put this code:

    timer = new QTimer (this);
    connect (timer, SIGNAL (timeout ()), this, SLOT (updateFrame ()));
    timer -> start (MILISECONDS_FOR_REFRESH);

    frame = 0;

    cameraSupport = new CameraSupport (WIDTH, HEIGHT);

    repaint = false;

    totalFrames = 1000 / MILISECONDS_FOR_REFRESH;
    frameTime = new unsigned long long[totalFrames];
    for (unsigned int i = 0; i < totalFrames; i++)
        frameTime[i] = 0;
    currentFrame = 0;
    fps = 0;

    frameCounter = 0;
    time1 = time2 = 0;

I guess this code is quite clear; there is nothing special to explain. So now, let's define the updateFrame () slot:

void MainWindow::updateFrame (){
    clock_t start = clock ();
    bool result = cameraSupport -> UpdateFrame ();
    clock_t stop = clock ();

    if (result){
        if (frame != 0){
            delete frame;
            frame = 0;
        }
        frame = new QImage ((unsigned char *)cameraSupport -> GetRGBA (), WIDTH, HEIGHT, QImage::Format_ARGB32_Premultiplied);
        repaint = true;

        time1 += stop - start;
        update (0, 0, WIDTH, HEIGHT);

        frameTime[currentFrame] = clock();

        fps++;
        while (frameTime[currentFrame] - frameTime[(currentFrame - fps) % totalFrames] > CLOCKS_PER_SEC)
            fps--;

        if (currentFrame < totalFrames - 1)
            currentFrame++;
        else
            currentFrame = 0;
    }
}

The UpdateFrame () method of the CameraSupport class returns true if a new frame was loaded and false otherwise. To create a QImage with the new frame we just use the GetRGBA () method of the CameraSupport class. Notice that I use the format QImage::Format_ARGB32_Premultiplied for my QImage. In my case, I display frames on the screen using QPainter, and the format I chose makes some QPainter operations faster than QImage::Format_ARGB32.

Now, let's define paintEvent (.):

void MainWindow::paintEvent (QPaintEvent *event){
    if (!repaint)
        return;
    repaint = false;

    if (frame){
        time2 -= clock ();
        QPoint topLeft = event -> rect ().topLeft ();

        QPainter displayPainter (this);
        displayPainter.drawImage (topLeft, *frame);

        time2 += clock ();

        frameCounter++;
        qDebug () << frameCounter << ": " << time1 / frameCounter << ", " << time2 / frameCounter << ". FPS: " << fps ;
    }
}

We must remember to clean up the memory that we allocated! In my project, you can find these lines in the destructor:

    delete cameraSupport;

    if (frame != 0)
        delete frame;
    frame = 0;

    delete []frameTime;

If you are using the JavaClassesLoader function from my previous article, add these lines to it:

    {
        const char* className = "pl/ekk/mkk/necessitas/CameraSupport";
        jclass clazz = env -> FindClass (className);
        if (!clazz){
            __android_log_print (ANDROID_LOG_FATAL,"Qt", "Unable to find class '%s'", className);
            return JNI_FALSE;
        }
        jmethodID constr = env -> GetMethodID(clazz, "<init>", "()V");
        if (!constr){
            __android_log_print (ANDROID_LOG_FATAL,"Qt", "Unable to find constructor for class '%s'", className);
            return JNI_FALSE;
        }
        jobject obj = env -> NewObject (clazz, constr);
        cameraSupportClassPointer = env->NewGlobalRef(obj);
    }

Also, add this line outside the function:

jobject cameraSupportClassPointer;

The variable's name is important because it is used in the CameraSupport class. If you are using another solution to load Java classes, remember to create this object.

Now your project should work with the Camera on Android! But there is still something to do. You may notice that painting the QImage takes a lot of time. To make it faster we can take advantage of hardware acceleration by using OpenGL. Simply add this line to your *.pro file:

QT_GRAPHICSSYSTEM = opengl

And that's all!

Now, the question is: is it possible to make it better? I guess it is!

The first thing I would change is the way of displaying frames. This inconspicuous operation takes a lot of time. I found an article, How to get faster Qt painting on N810 right now, which could be helpful for you.

Another idea is to use your own version of QPaintEngine. Some clues can be found here: QGLWidget and hardware acceleration?.

I guess there is not much that can be done with the YUV to RGB conversion. Maybe there is a better algorithm which does fewer read/write memory operations. I was thinking about setting two pixels at a time (by using unsigned long long int* instead of unsigned int*), but that would only help on 64-bit architectures. You can always write some part of your code in assembler. If you want to look for more savings, you can try to find a better way of getting the YUV data from Java.

Last night I found a technology I had never used before - OpenCL. Maybe it could also be useful for decreasing the time of the YUV to RGB conversion or for displaying the frame on the screen? Does anyone have some experience with this?

Last but not least - quality. If you want better quality video frames (after the YUV to RGB conversion), consider recalculating the precalculated arrays. It is really important to notice that a lot of algorithms require Y to be in the range <16, 235> and U, V to be in the range <16, 240> (YUV / YCbCr color component data ranges), while what you really get is <0, 255> for all components. You can read more here: About YUV VIDEO.

If you have some ideas, found some errors, or just found this article interesting, please share your opinion with me.

MKK


Necessitas: convenient solution for communication between Qt application and Java


This is my first article about Qt / Necessitas / Android / JNI, so if you find some error or have some ideas, please contact me. I haven't used these technologies before. Everything I present works with my software configuration: Qt 4.8.0, Necessitas 0.3.4, Android 3.2.

In this article I will present my solution for using Java in Necessitas projects. As most of you know, to use Java code in C++ applications and vice versa, you need to use the Java Native Interface (JNI). This technology is used in Necessitas projects and you can find it, for example, in the file qtmain_android.cpp located in: necessitas/Android/Qt/{{VERSION_OF_QT}}/armeabi-v7a/src/android/cpp. In my case, {{VERSION_OF_QT}} is 480. This file is included in your projects during the build process.

We can say that qtmain_android.cpp is the most important file for JNI in your project because it contains JNI_OnLoad(.). This function is the only one which is allowed to load any Java class, so it is the only place in your application where you can load your own classes. Of course, it's possible to load Java classes in other functions, but there you are limited to some of the classes from the java.* package.

I found an article (a little bit vague for me) which provides a solution for this problem: How to use Java from Qt/C++ in Necessitas.

It has no code example, but fortunately I found this: QT in Android -- Example for accessing the GPS Service.

In both cases, the authors change the qtmain_android.cpp file. This approach has three disadvantages:

  1. qtmain_android.cpp is common to all projects, so every one of your projects will require all the classes that you load there,
  2. you can use only one version of your class's constructor,
  3. qtmain_android.cpp is part of Necessitas and it will probably be changed in newer versions.

Let's deal with the first two disadvantages. It's hard to create one special class which would be good for all possible applications ((-: ), so we need to find another solution. Let's make small changes to the qtmain_android.cpp file.

First, add this line before the JNI_OnLoad(.) definition (of course outside the bodies of functions!):

extern int JavaClassesLoader (JNIEnv* env);

Second, put this code into the JNI_OnLoad(.) definition:

    if (!JavaClassesLoader (m_env)){
        __android_log_print (ANDROID_LOG_FATAL, "Qt", "Couldn't register user defined classes!");
        return -1;
    }

Notice that we use the m_env variable, so we need to put this code below the following line:

m_env = uenv.nativeEnvironment;

Third, we need to remove the static keyword from these two lines:

static JavaVM *m_javaVM = NULL;
static JNIEnv *m_env = NULL;

And that's all we need to change in the qtmain_android.cpp file.

Now, let's add the file which will load our Java classes. In NecessitasQtCreator choose File -> New File or Project -> (Files and Classes) C++ -> C++ Source File -> Choose... Set the Name to JavaClassesLoader.cpp, click Continue and then Done. Fill your file with this code:

#include <jni.h>
#include <android/log.h>

jobject classPointer;

int JavaClassesLoader (JNIEnv* env){
    {
        const char* className = "pl/ekk/mkk/necessitas/YourClassName";
        jclass clazz = env -> FindClass (className);
        if (!clazz){
            __android_log_print (ANDROID_LOG_FATAL, "Qt", "Unable to find class '%s'", className);
            return JNI_FALSE;
        }
        jmethodID constr = env -> GetMethodID (clazz, "<init>", "()V");
        if (!constr){
            __android_log_print (ANDROID_LOG_FATAL, "Qt", "Unable to find a constructor for class '%s'", className);
            return JNI_FALSE;
        }
        jobject obj = env -> NewObject (clazz, constr);
        classPointer = env -> NewGlobalRef (obj);
    }
    return JNI_TRUE;
}

I guess you can already see that to use multiple Java classes in your project, you just need to create another jobject classPointer, copy-paste the code of the JavaClassesLoader(.) function (without the last line!) and change the value of the className constant.

To use a constructor other than the one without arguments, you need to set a different method signature (see more here). To set the argument values, just add them to the NewObject (.) call.
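
For example, if your class had a constructor taking two int arguments (a hypothetical signature, not one of the classes from this article), the corresponding lines would become:

        // constructor taking two ints: signature "(II)V"
        jmethodID constr = env -> GetMethodID (clazz, "<init>", "(II)V");
        jobject obj = env -> NewObject (clazz, constr, 640, 480);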

But where should you place your Java file? In your project folder there is an android/src/ branch. This is the right location for your files. Notice that your classes are inside packages, so you need to create folder trees corresponding to the packages that you are using. After you copy your files you need to add them to the Necessitas project (Add Existing Files).

Now we can use your Java classes in your project. Open the C++ file in which you want to use JNI and put these four lines at its beginning:

#include <jni.h>
#include <QDebug>
extern JavaVM *m_javaVM;
extern jobject classPointer;

This gives us a way to use the variables from the previous files. To call JNI functions, you need to use this code:

JNIEnv* env;
if (m_javaVM -> AttachCurrentThread (&env, NULL) < 0){
    qCritical() << "AttachCurrentThread failed";
    return;
}

jclass applicationClass = env -> GetObjectClass (classPointer);

if (applicationClass){
    //Communication between C++ and JAVA!
}

m_javaVM -> DetachCurrentThread ();
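
Inside that if block you can, for instance, look up a method of your class and call it. A hypothetical example (the method name doSomething and its signature are made up; they are not part of the classes from this article):

    jmethodID method = env -> GetMethodID (applicationClass, "doSomething", "(I)V");
    if (method)
        env -> CallVoidMethod (classPointer, method, 42);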

That's all! Now you can use your Java classes in a more customized way! (-:

I hope I explained it quite clearly. If your C++ files cannot find the variables from the other files that we changed, please clean your project (Build -> Clean Project...).

If you still have some problem, I have attached a small example.

If you want, you can try to use #ifdef, #define and #endif in qtmain_android.cpp to easily turn the use of JavaClassesLoader(.) on and off in your projects.

But... we still have the third disadvantage... I have to admit that I didn't find a good solution for it. Here I present the results of my research; I hope that someone will find a good way to deal with it.

It is possible to access the Java Virtual Machine without changing the qtmain_android.cpp file (so we wouldn't need to do the third step).

The biggest problem is how to use the JavaClassesLoader(.) function outside the qtmain_android.cpp file, or in other words, outside the JNI_OnLoad(.) definition. You can find an explanation of this problem here: FindClass failed.

I tried to use this solution: Using JNI from a Native Activity, but unfortunately the FindClass method can't find java.lang.ClassLoader, so it seems to be useless for us.

Well, that's all for now. If anyone finds a solution for the third disadvantage or wants to discuss my solution for the first one, please contact me.

MKK


Hosted on eKK.pl.