
Comments (36)

saki4510t avatar saki4510t commented on August 14, 2024 3

Hi,
I'm not sure what you actually want to do, but if you just want to make a bitmap and set it to an ImageView (I don't recommend this approach though), please try the following.

Example: using usbCameraTest2 (one of the simplest samples),
add setFrameCallback just after startPreview (line 212):

mUVCCamera.startPreview();
mUVCCamera.setFrameCallback(mFrameCallback, UVCCamera.PIXEL_FORMAT_RGB565);

then add the following code at the end of MainActivity.java:

private final Object mSync = new Object();
private Bitmap tempBitmap;
private final IFrameCallback mFrameCallback = new IFrameCallback() {
    @Override
    public void onFrame(final ByteBuffer frame) {
        synchronized (mSync) {
            if (tempBitmap == null) {
                tempBitmap = Bitmap.createBitmap(UVCCamera.DEFAULT_PREVIEW_WIDTH, UVCCamera.DEFAULT_PREVIEW_HEIGHT, Bitmap.Config.RGB_565);
            }
            frame.clear();
            tempBitmap.copyPixelsFromBuffer(frame);
        }
        uvc_image.post(mUpdateImageTask);
    }
};

private Runnable mUpdateImageTask = new Runnable() {
    @Override
    public void run() {
        synchronized (mSync) {
            uvc_image.setImageBitmap(tempBitmap);
        }
    }
};

Of course, you need to add an ImageView to the layout file (activity_main.xml) and assign it to the uvc_image field like this:

private ImageView uvc_image;
@Override
protected void onCreate(final Bundle savedInstanceState) {
    super.onCreate(savedInstanceState);
...
    uvc_image = (ImageView)findViewById(R.id.callback_imageview);
...
}

The layout file looks like this:

<RelativeLayout xmlns:android="http://schemas.android.com/apk/res/android"
    xmlns:tools="http://schemas.android.com/tools"
    android:id="@+id/container"
    android:layout_width="match_parent"
    android:layout_height="match_parent"
    android:background="#ff000000"
    tools:context="com.serenegiant.usbcameratest.MainActivity"
    tools:ignore="MergeRootFrame" >

    <com.serenegiant.widget.UVCCameraTextureView
        android:id="@+id/UVCCameraTextureView1"
        android:layout_width="match_parent"
        android:layout_height="match_parent"
        android:layout_centerInParent="true"
        android:layout_gravity="center"
        android:background="#ff000000" />

    <ImageView
        android:id="@+id/imageView1"
        android:layout_width="wrap_content"
        android:layout_height="wrap_content"
        android:layout_alignTop="@+id/UVCCameraTextureView1"
        android:layout_alignBottom="@+id/UVCCameraTextureView1"
        android:layout_alignLeft="@+id/UVCCameraTextureView1"
        android:layout_alignRight="@+id/UVCCameraTextureView1"
        android:src="@drawable/border" />

    <ImageButton
        android:id="@+id/capture_button"
        android:layout_width="wrap_content"
        android:layout_height="wrap_content"
        android:layout_alignBottom="@id/UVCCameraTextureView1"
        android:layout_centerHorizontal="true"
        android:adjustViewBounds="false"
        android:background="@null"
        android:padding="3dp"
        android:scaleType="fitXY"
        android:src="@android:drawable/ic_menu_camera" />

    <ToggleButton
        android:id="@+id/camera_button"
        android:layout_width="wrap_content"
        android:layout_height="wrap_content"
        android:layout_alignTop="@id/UVCCameraTextureView1"
        android:layout_alignLeft="@id/UVCCameraTextureView1"
        android:text="ToggleButton" />

    <ImageView
        android:layout_width="160dp"
        android:layout_height="120dp"
        android:id="@+id/callback_imageview"
        android:layout_below="@+id/camera_button"
        android:layout_alignParentEnd="true"
        />

</RelativeLayout>

Run the sample and you will see two images (one on the original UVCCameraTextureView, the other on the newly added ImageView).
saki


filipetrocadoferreira avatar filipetrocadoferreira commented on August 14, 2024 1

And how can I show only the processed image? If I comment out

// camera.setPreviewDisplay(mPreviewSurface);

nothing is shown.


thecoder93 avatar thecoder93 commented on August 14, 2024 1

Hi, I had a similar issue with IFrameCallback. With my external webcam I see the video on my SurfaceView, but when I stream to Twilio I don't see anything.
After days of hard work I found a solution. The problem was in IFrameCallback: when the frame was updated, the byte array was always empty. So I added the following code (Kotlin):

private val started = AtomicBoolean(false)

private val frameCallback = IFrameCallback { frame ->
    if (started.get()) {
        val buffer = ByteArray(frame.capacity())
        frame.get(buffer)

        val nativeByteBuffer = allocateDirect(buffer.size)
        nativeByteBuffer.put(buffer)
        val videoBuffer: VideoFrame.Buffer = Rgba8888Buffer(nativeByteBuffer, UVCCamera.DEFAULT_PREVIEW_WIDTH, UVCCamera.DEFAULT_PREVIEW_HEIGHT)
        val videoFrame = VideoFrame(videoBuffer, 0, TimeUnit.MILLISECONDS.toNanos(SystemClock.elapsedRealtime()))
        capturerObserver?.onFrameCaptured(videoFrame)
    }
}

With this code I don't have any framerate delay problems, and the streaming to the Twilio room is good.

My UVC camera is set up as follows:

private fun startUvcCameraPreviewOnCameraThread(ctrlBlock: USBMonitor.UsbControlBlock?) {
    checkIsOnCameraThread()

    /*
     * Start the UVC Camera preview.
     */
    uvcCamera = UVCCamera()
    uvcCamera?.open(ctrlBlock)
    uvcCamera?.setPreviewSize(
        UVCCamera.DEFAULT_PREVIEW_WIDTH, // 640
        UVCCamera.DEFAULT_PREVIEW_HEIGHT, // 480
        UVCCamera.FRAME_FORMAT_YUYV
    )
    uvcCamera?.setPreviewDisplay(surface)
    uvcCamera?.startPreview()
    uvcCamera?.setFrameCallback(frameCallback, UVCCamera.PIXEL_FORMAT_RGBX)

    /*
     * Notify the capturer API that the capturer has started.
     */
    capturerObserver?.onCapturerStarted(true)
}

I used this webcam with 30 FPS:
Trust webcam, serial: 20200907, USB ver 2.0, speed: 480 Mbps

I want to pass the frame to another library that is compatible with the Android camera API. It takes YUV frames as a byte[].

How can I convert the ByteBuffer to a properly formatted yuv_image byte array?

Either Japanese or English is fine.

camera.setPreviewCallbackWithBuffer(new Camera.PreviewCallback() {
    @Override
    public void onPreviewFrame(byte[] yuv_image, Camera camera) {
    }
});

versus

mUVCCamera.setFrameCallback(new IFrameCallback() {
    @Override
    public void onFrame(final ByteBuffer frame) {
    }
}, UVCCamera.PIXEL_FORMAT_NV21);


saki4510t avatar saki4510t commented on August 14, 2024

Hi,

The Camera.PreviewCallback#onPreviewFrame callback method receives pixel data as a byte[], and its default pixel format is NV21 (YVU420SemiPlanar).
The IFrameCallback#onFrame callback method can receive pixel data in a specific pixel format as a (native) direct ByteBuffer. You can get it as a byte[] using the ByteBuffer#get(byte[] dst, int dstOffset, int byteCount) or ByteBuffer#get(byte[] dst) method.

Of course you need to allocate a byte[] array of the necessary size before calling #get, and you may need to call ByteBuffer#order(ByteOrder.nativeOrder()) before calling #get.
For example:

private int buf_sz = 4096;
private byte[] buf = new byte[buf_sz];
private final IFrameCallback mIFrameCallback = new IFrameCallback() {
    @Override
    public void onFrame(final ByteBuffer frame) {
        final int n = frame.limit();
        if (buf_sz < n) {
            // grow the buffer only when a larger frame arrives
            buf_sz = n;
            buf = new byte[n];
        }
        frame.get(buf, 0, n);
        ...
    }
};

buf is allocated as a field variable to minimize re-allocation in the loop.

I'm not sure what you want to do in your app, but IFrameCallback and its callback method are relatively slower than other ways of getting pixel data, so if you can use another way, I assume it is much better.

saki


zkdzegede avatar zkdzegede commented on August 14, 2024

Thank you for the example code. I was trying something similar to that, but I think I reallocated memory in the loop, so it was even slower.

I'm trying to send the frames to a library that uses a build of ffmpeg to stream over RTMP.

I don't yet have a good grasp of the C++ code to handle the frame in any way other than IFrameCallback. If I wanted to pass the data to the native layer as I stated above, what is the other way besides the slow IFrameCallback?

I guess I could modify your C code to call the ffmpeg library directly for streaming? Truth is, I had some issues building your JNI code, so I ended up just going with the pre-built version for now.

Or maybe make a custom MediaEncoder that will send to ffmpeg instead of a file?


saki4510t avatar saki4510t commented on August 14, 2024

Hi,

There are several ffmpeg implementations for Android. I don't know which implementation of ffmpeg you want to use, but ffmpeg is basically a native library, and I think one of the best ways is to pass frame data from the native library to ffmpeg. Or, if you use ffmpeg via the Java layer (JNI), you could pass frame data as a direct ByteBuffer, with a little modification of your ffmpeg JNI layer, instead of using byte[]; that will be much more efficient (because the direct ByteBuffer that IFrameCallback receives just wraps a native memory block).

The direct ByteBuffer in IFrameCallback is already copied from the original frame buffer, and if you use byte[], additional inefficient memory copies occur, like these:

  1. native memory block in direct ByteBuffer (IFrameCallback) => byte[] => direct ByteBuffer => ffmpeg
  2. native memory block in direct ByteBuffer (IFrameCallback) => byte[] => ByteBuffer => native memory block => ffmpeg
     (I assume an additional internal memory copy will occur in ffmpeg)

On the other hand, if you can directly pass the direct ByteBuffer from IFrameCallback to the JNI layer of ffmpeg, you get:

  3. native memory block in direct ByteBuffer (IFrameCallback) => ffmpeg
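For example, a minimal Java-side sketch of path 3 (my illustration, not code from this library; nativeSendFrame is a hypothetical JNI method into your ffmpeg wrapper, where the pixels are reachable without a copy via JNIEnv::GetDirectBufferAddress()):

// hypothetical JNI entry point into an ffmpeg wrapper
private native void nativeSendFrame(ByteBuffer frame, int size);

private final IFrameCallback mFfmpegCallback = new IFrameCallback() {
    @Override
    public void onFrame(final ByteBuffer frame) {
        // frame is a direct ByteBuffer wrapping a native memory block,
        // so no Java-side copy is needed (path 3 above)
        nativeSendFrame(frame, frame.limit());
    }
};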

If your target API level is >= 18, I think using MediaCodec as the encoder with a Surface, and passing the encoded frame data to ffmpeg, is a much more efficient way, provided MediaCodec supports your preferred codec.
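A rough sketch of that setup (my illustration, assuming API >= 18; the H.264 codec and bitrate are arbitrary choices): the encoder's input Surface receives the preview frames, and the encoded output buffers can then be drained and handed to ffmpeg.

private MediaCodec createSurfaceEncoder(final int width, final int height) throws IOException {
    final MediaFormat format = MediaFormat.createVideoFormat("video/avc", width, height);
    format.setInteger(MediaFormat.KEY_COLOR_FORMAT,
            MediaCodecInfo.CodecCapabilities.COLOR_FormatSurface);
    format.setInteger(MediaFormat.KEY_BIT_RATE, 1000000);
    format.setInteger(MediaFormat.KEY_FRAME_RATE, 30);
    format.setInteger(MediaFormat.KEY_I_FRAME_INTERVAL, 1);

    final MediaCodec encoder = MediaCodec.createEncoderByType("video/avc");
    encoder.configure(format, null, null, MediaCodec.CONFIGURE_FLAG_ENCODE);
    final Surface inputSurface = encoder.createInputSurface();
    // e.g. mUVCCamera.setPreviewDisplay(inputSurface); to route the preview into the encoder
    encoder.start();
    return encoder;
}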

saki.


hrstrand avatar hrstrand commented on August 14, 2024

Saki,

you mention in this thread

"I'm not sure what you want to do in your app, but IFrameCallback and it's callback method is relatively slower than other way to get pixel data and if you can use other way, I assume it is much better."

could you elaborate on what is the most efficient way of getting pixel data out on the Java side, so it can be worked with?

thanks,
Peter


saki4510t avatar saki4510t commented on August 14, 2024

Hi, as you can see in the do_capture_callback function in UVCPreview.cpp, an additional pixel format conversion and the creation of a DirectByteBuffer object are needed when calling IFrameCallback. This is the reason IFrameCallback is relatively slow. As far as I know, in most cases people just want to get the raw RGB(x) pixel bytes or the pixels as a bitmap, and using IFrameCallback for that is over-spec'd, because the pixel format conversion to RGB(x)/RGB565 is already executed for the preview display.
So for that purpose, in my experience, getting pixels using OpenGL|ES (the glReadPixels method) and/or TextureView#getBitmap()/#getBitmap(Bitmap) etc. is better and faster. Sample code is available in UVCCameraTextureView.java in usbCameraTest3 (some of it is commented out now); see the sketch below.
If you actually need IFrameCallback for your image processing, the RAW or YUV format is better, because the internal pixel format of this library is YUV. RAW is the same as YUV in the current implementation, but may be changed to the actual pixel format from the camera (YUV or MJPEG) in a future release.
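A minimal sketch of the TextureView route mentioned above (my illustration, not code from the samples; the method and field names are assumptions):

private Bitmap mBitmap;

// call this on the UI thread whenever you want the current preview frame
private void capturePreviewFrame(final TextureView textureView) {
    if (mBitmap == null) {
        mBitmap = Bitmap.createBitmap(
                textureView.getWidth(), textureView.getHeight(),
                Bitmap.Config.ARGB_8888);
    }
    // copies the current frame into mBitmap, reusing it across frames
    textureView.getBitmap(mBitmap);
    // ... process mBitmap here ...
}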

saki


btut avatar btut commented on August 14, 2024

Hi Saki,
I am using the do_capture_callback function in UVCPreview.cpp with OpenCV, but I can't find out what encoding the data field in uvc_frame_t has. If I try to get it into a Mat like this:

Mat myuv(frame->height, frame->width, CV_8UC1, (unsigned char *)frame->data);
Mat mgray;
cvtColor(myuv, mgray, COLOR_YUV2GRAY_420);

both Mats contain a gray image with the wrong dimensions. Please let me know how I can access the data field as a Mat with OpenCV.
Thank you!


saki4510t avatar saki4510t commented on August 14, 2024

Hi,
I'm not so familiar with OpenCV and I haven't tested this yet, so this is just an estimate.
I assume you may be using the wrong channel count; the source Mat should have 2 channels. So please try CV_8UC2 instead of CV_8UC1.

saki


btut avatar btut commented on August 14, 2024

Hi,
thanks for your answer, but 2 channels doesn't seem to be right either.
I found some documentation in the original uvclib that uses OpenCV, but it always converts the frame to RGB first. I don't want to convert to RGB, since I only need grayscale images.
I will look into this more deeply and get back if I find a solution.


btut avatar btut commented on August 14, 2024

Hi,
I didn't find any good CV type for the Mat, so I used an IplImage and converted that one to gray:

IplImage *pYuvImage;
pYuvImage = cvCreateImage(cvSize(frame->width, frame->height), IPL_DEPTH_8U, 2);
pYuvImage->imageData = (char *)frame->data;
cv::Mat mipl = cv::cvarrToMat(pYuvImage);
Mat mgray;
cvtColor(mipl, mgray, COLOR_YUV2GRAY_YUY2);

According to this article, no memory is copied during the conversion, so this should be pretty efficient.

Additional question: can I edit the image that will be shown in the preview from the do_capture_callback function?


saki4510t avatar saki4510t commented on August 14, 2024

Hi,

if you want to apply image processing/effects to the video images and show them on screen, I think it is better to do it in the do_preview function.

BTW, if you always need to convert images to grayscale, I think it would be better to do it directly from MJPEG without converting to YUV, because libjpeg/libjpeg-turbo supports JCS_GRAYSCALE, although Android's Surface does not support grayscale, so we need to convert to either RGB565 or RGBX to show on screen.


btut avatar btut commented on August 14, 2024

Hi,
In the callback function I get a YUYV422 frame and don't have to do any real conversion, since YUYV422 contains a gray channel that I only need to extract.
I see that do_preview is called only once, when the camera is connected, and not for every frame, so I cannot use that function to edit the preview. It looks like copyToSurface or draw_preview_one would be better places to implement the edits. I put some logging into those functions and saw that they are called twice for every execution of do_preview_callback. Why is that? Is the callback only called for every second frame?
Thanks for your help!


btut avatar btut commented on August 14, 2024

Hi,
the function draw_preview_one is perfect for my needs. It receives a YUYV image from the caller, from which I can easily extract the gray channel without causing a huge delay (see the sketch below).
Afterwards the image is converted into RGBX by the convert function, so that it can be displayed.
I use Haar cascades on the gray image and then draw onto the converted RGBX image, which is then copied to the screen.
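A minimal Java-side sketch of the same extraction (my illustration; btut does this in C++ in draw_preview_one): in YUYV422 every other byte is a luma sample (Y0 U Y1 V ...), so the gray channel can be pulled out with a simple strided copy.

private byte[] extractLuma(final ByteBuffer yuyv, final int width, final int height) {
    final byte[] gray = new byte[width * height];
    for (int i = 0; i < gray.length; i++) {
        gray[i] = yuyv.get(i * 2); // Y samples sit at even byte offsets
    }
    return gray;
}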


pricsko avatar pricsko commented on August 14, 2024

Hi, the code below works fine:

IplImage *pYuvImage;
pYuvImage = cvCreateImage(cvSize(frame->width, frame->height), IPL_DEPTH_8U, 2);
pYuvImage->imageData = (char *)frame->data;
cv::Mat mipl = cv::cvarrToMat(pYuvImage);

But how can I convert it back to the original buffer, so I can record the modified image?


btut avatar btut commented on August 14, 2024

Hi,
just apply whatever you want to do to converted->data. It will be copied to the screen afterwards.


pricsko avatar pricsko commented on August 14, 2024

So if I want to add a text overlay, I just do:

IplImage *pYuvImage;
pYuvImage = cvCreateImage(cvSize(callback_frame->width, callback_frame->height), IPL_DEPTH_8U, 2);
pYuvImage->imageData = (char *)callback_frame->data;
cv::Mat _yuv = cv::cvarrToMat(pYuvImage);
cv::putText(_yuv, text, textOrg, fontFace, fontScale, cv::Scalar(0, 0, 255), thickness, 8);
memcpy(callback_frame->data, _yuv.data, (int) (_yuv.total() * _yuv.channels()));

and leave the rest:

jobject buf = env->NewDirectByteBuffer(callback_frame->data, callbackPixelBytes);
env->CallVoidMethod(mFrameCallbackObj, iframecallback_fields.onFrame, buf);
env->ExceptionClear();
env->DeleteLocalRef(buf);
...

in do_capture_callback (as I don't want the text to appear on the preview, just in the recorded video).


pricsko avatar pricsko commented on August 14, 2024

However, I am modifying the original frame (not the converted one, called callback_frame) like this:

IplImage *pYuvImage = cvCreateImage(cvSize(frame->width, frame->height), IPL_DEPTH_8U, 2);
pYuvImage->imageData = (char *)frame->data;
cv::Mat _yuv = cv::cvarrToMat(pYuvImage);
std::string text = "Test text";
cv::putText(_yuv, text, textOrg, fontFace, fontScale, cv::Scalar(0, 0, 255), thickness, 8);
int callbackPixelBytes2 = (int) (_yuv.total() * _yuv.channels());
memcpy(frame->data, _yuv.data, callbackPixelBytes2);
_yuv.release();

with the rest of the code in do_capture_callback untouched from:

uvc_frame_t *callback_frame = frame;
if (mFrameCallbackObj) {
if (mFrameCallbackFunc) {...

and I am getting a negative image (BGR instead of RGB) in the output video. How can I solve it?


 avatar commented on August 14, 2024

Hi. I have the same issue. I read somewhere to use this to get a bitmap from IFrameCallback:

private final Bitmap usbBitmap = Bitmap.createBitmap(
        UVCCamera.DEFAULT_PREVIEW_WIDTH,
        UVCCamera.DEFAULT_PREVIEW_HEIGHT,
        Bitmap.Config.RGB_565);

then:

public void onFrame(ByteBuffer frame) {
    synchronized (usbBitmap) {
        usbBitmap.copyPixelsFromBuffer(frame);
    }
}

I get something in usbBitmap, but not a clear image. Here is the result:

[screenshot: IFrameCallback output on the left vs. SurfaceView on the right]

On the left side is what I get from the ByteBuffer frame; on the right is the expected result.
I tried every PIXEL_FORMAT option available, always with the same result.
Any idea how to get through this?


pricsko avatar pricsko commented on August 14, 2024

Use

mUVCCamera.setFrameCallback(mIFrameCallback, UVCCamera.PIXEL_FORMAT_RGB565);

instead of

mUVCCamera.setFrameCallback(mIFrameCallback, UVCCamera.PIXEL_FORMAT_NV21);

in your CameraHandler.java / handleStartRecording() method.


 avatar commented on August 14, 2024

I found my error: I was overriding it with setFrameCallback(mIFrameCallback, 0). Thanks for the help.


dknee avatar dknee commented on August 14, 2024

Hi Saki,

Great UVC Camera library.
I am using the do_capture_callback function in C++ and have a two-camera setup based on the usbCameraTest7 example. I have two separate callback functions, one for each camera, mIFrameCallback_L and mIFrameCallback_R, on the Java side. How do I distinguish the different cameras on the C++ side so that I can process the images uniquely? Apologies if this is a dumb question; I am a hardware guy and not a software programmer.


saki4510t avatar saki4510t commented on August 14, 2024

Hi,

usbCameraTest7 has a separate CameraHandler (and UVCCamera and native camera-access object) instance for each camera. So if you pass a separate IFrameCallback instance to each UVCCamera (via its CameraHandler), each IFrameCallback is called only for the camera related to that CameraHandler.
Currently only a CameraViewInterface instance can be passed to CameraHandler, but you can change CameraHandler#createHandler and the constructor of CameraThread so that you can also pass an IFrameCallback to the internal UVCCamera.

For example, you could modify CameraHandler like this (I haven't confirmed that this actually works, though):

public static final CameraHandler createHandler(final MainActivity parent, final CameraViewInterface cameraView, final IFrameCallback callback) {
    final CameraThread thread = new CameraThread(parent, cameraView, callback);
    thread.start();
    return thread.getHandler();
}
...
private static final class CameraThread extends Thread {
    ...
    private final IFrameCallback mCallback;
    private CameraThread(final MainActivity parent, final CameraViewInterface cameraView, final IFrameCallback callback) {
        super("CameraThread");
        mWeakParent = new WeakReference<MainActivity>(parent);
        mWeakCameraView = new WeakReference<CameraViewInterface>(cameraView);
        mCallback = callback;
        loadShutterSound(parent);
    }
...
    public void handleOpen(final USBMonitor.UsbControlBlock ctrlBlock) {
        if (DEBUG) Log.v(TAG_THREAD, "handleOpen:");
        handleClose();
        mUVCCamera = new UVCCamera();
        mUVCCamera.open(ctrlBlock);
        mUVCCamera.setFrameCallback(mCallback, ${PIXEL FORMAT AS YOU WANT});
        if (DEBUG) Log.i(TAG, "supportedSize:" + mUVCCamera.getSupportedSize());
    }
...
}
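Hypothetical usage for the two-camera case (the handler and view names are my assumptions, modeled on usbCameraTest7's left/right pairs):

mHandlerL = CameraHandler.createHandler(this, mUVCCameraViewL, mIFrameCallback_L);
mHandlerR = CameraHandler.createHandler(this, mUVCCameraViewR, mIFrameCallback_R);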

Or, if you want to pass frame data from each camera to another C/C++ library (or libraries) directly on the native side, you can just pass the frame data in do_capture_callback or do_preview to the other library (I assume it would be better to pass some identifier along with the frame data to determine which camera it came from).

with best regards,
saki


SAKSBP avatar SAKSBP commented on August 14, 2024

Hi saki,

I am using IFrameCallback with the help of usbCameraTest5.

private final IFrameCallback mIFrameCallback = new IFrameCallback() {
    @Override
    public void onFrame(final ByteBuffer frame) {
        Log.d("test", "test");
        synchronized (usbBitmap) {
            usbBitmap.copyPixelsFromBuffer(frame);
        }
        if (mVideoEncoder != null) {
            mVideoEncoder.frameAvailableSoon();
            mVideoEncoder.encode(frame);
        }
    }
};
The log inside onFrame is not executing. I'm totally new to this. Please help me.

Also, about the devices, I am using:
1. a Samsung J7
2. a Logitech camera


 avatar commented on August 14, 2024

SAKSBP, maybe you have to put the log call on the main thread, like this:

((Activity) getContext()).runOnUiThread(new Runnable() {
    @Override
    public void run() {
        Log.d("test", "test");
    }
});


SAKSBP avatar SAKSBP commented on August 14, 2024

thanks muCodes, but I didn't do it like that.

What I did is:
I enabled mUVCCamera.setFrameCallback(mIFrameCallback, UVCCamera.PIXEL_FORMAT_NV21); in the preview
and put the log in onFrame, which is not executing; mUVCCamera is also blank.
Please share code which uses IFrameCallback.


 avatar commented on August 14, 2024

I had the same issue: logs inside the onFrame method didn't show in Android Monitor unless I passed my message to the main thread and printed the log from there. In my case I do this:

private final IFrameCallback iFrameCallback = new IFrameCallback() {
        @Override
        public void onFrame(ByteBuffer frame) {
            if (pixels == null) {
                pixels = new byte[frame.limit()];
            }
            frame.get(pixels);
            Image barcode = new Image(UVCCamera.DEFAULT_PREVIEW_WIDTH, UVCCamera.DEFAULT_PREVIEW_HEIGHT, "Y800");
            barcode.setData(pixels);
            int result = qrScanner.scanImage(barcode);
            if (result != 0) {
                Log.e(this.getClass().getCanonicalName(), "Code detected!!!");

                SymbolSet syms = qrScanner.getResults();
                for (Symbol sym : syms) {
                    currCustomerRef = sym.getData();
                    showToast("barcode result " + sym.getData());
                    isScanned = true;
                }
            } else {
                Log.w(this.getClass().getCanonicalName(), "Could not decode the image :( :(");
            }
    }
};

···

private void showToast(final String text) {
    ((Activity) getContext()).runOnUiThread(new Runnable() {
        @Override
        public void run() {
            Toast.makeText(getContext(), text, Toast.LENGTH_LONG).show();
        }
    });
 }

···
The onFrame method is called, but no log is printed out.

In my case, instead of showToast("barcode result " + sym.getData()); I originally had Log.w(this.getClass().getCanonicalName(), "barcode result " + sym.getData());, but the message didn't show until I sent it to the main thread.


SAKSBP avatar SAKSBP commented on August 14, 2024

Hi muCodes,

It's working, thanks for your code.

Thank you very much.

// if you need frame data as a byte array on the Java side, you can use this callback method with UVCCamera#setFrameCallback
private final IFrameCallback mIFrameCallback = new IFrameCallback() {
    @Override
    public void onFrame(final ByteBuffer frame) {
        showToast("barcode result ");
        frame.clear();
        synchronized (bitmap) {
            bitmap.copyPixelsFromBuffer(frame);
        }
        uvc_image.post(mUpdateImageTask);
    }
};
private void showToast(final String text) {
    runOnUiThread(new Runnable() {
        @Override
        public void run() {
            Toast.makeText(getApplicationContext(), text, Toast.LENGTH_LONG).show();
        }
    });
}
private final Runnable mUpdateImageTask = new Runnable() {
    @Override
    public void run() {
        synchronized (bitmap) {
            uvc_image.setImageBitmap(bitmap);
        }
    }
};

The Toast is showing, but the ImageView is not changing.


saki4510t avatar saki4510t commented on August 14, 2024

@mucodes, @SAKSBP
Hi, this is strange behavior that I have never met. When I tested on several of my devices, I could see all the logs from the onFrame callback... it may depend on the device/Android version.

The callback method is called from a native pthread attached to the JavaVM. Currently I don't provide any arguments when calling AttachCurrentThread.
The code is as follows.

void *UVCPreview::capture_thread_func(void *vptr_args) {
    int result;

    ENTER();
    UVCPreview *preview = reinterpret_cast<UVCPreview *>(vptr_args);
    if (LIKELY(preview)) {
        JavaVM *vm = getVM();
        JNIEnv *env;
        // attach to JavaVM
        vm->AttachCurrentThread(&env, NULL);
        preview->do_capture(env);
        // detach from JavaVM
        vm->DetachCurrentThread();
        MARK("DetachCurrentThread");
    }
    PRE_EXIT();
    pthread_exit(NULL);
}

I'm not sure yet why it occurs, but I assume your issue may be solved by passing arguments to AttachCurrentThread. If you have time, could you try the following code?

void *UVCPreview::capture_thread_func(void *vptr_args) {
    int result;

    ENTER();
    UVCPreview *preview = reinterpret_cast<UVCPreview *>(vptr_args);
    if (LIKELY(preview)) {
        JavaVM *vm = getVM();
        JNIEnv *env;
        JavaVMAttachArgs args;
        args.name = "capture_thread";
        args.group = NULL;
        args.version = JNI_VERSION_1_6;
        // attach to JavaVM
        vm->AttachCurrentThread(&env, &args);
        preview->do_capture(env);
        // detach from JavaVM
        vm->DetachCurrentThread();
        MARK("DetachCurrentThread");
    }
    PRE_EXIT();
    pthread_exit(NULL);
}

saki


SAKSBP avatar SAKSBP commented on August 14, 2024

Hi saki,

I have tried this:
private final IFrameCallback mIFrameCallback = new IFrameCallback() {
    @Override
    public void onFrame(final ByteBuffer frame) {
        showToast("text");
        frame.clear();
        synchronized (bitmap) {
            bitmap.copyPixelsFromBuffer(frame);
        }
        uvc_image.post(mUpdateImageTask);
    }
};

but the bitmap is black.


SAKSBP avatar SAKSBP commented on August 14, 2024

It's working. Thank you, guys.


saikrishna76 avatar saikrishna76 commented on August 14, 2024

Hi Saki,

I want to apply face detection to the UVC camera using OpenCV. For this I am using IFrameCallback, and I can't work out how to pass the obtained frame to OpenCV face detection. Please help me out.
Here is a small snippet of how face detection is used in OpenCV:

std::vector<cv::Rect> faces;
face_cascade.detectMultiScale(mGrayImage, faces, 1.2, 2, 0 | CV_HAAR_SCALE_IMAGE, Size(20, 20));
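A minimal Java-side sketch of one way to wire this up (my illustration, assuming the OpenCV Android SDK is initialized and a Haar cascade file has been unpacked to a hypothetical cascadePath; the nv21 bytes come from IFrameCallback with PIXEL_FORMAT_NV21, as shown earlier in this thread):

import org.opencv.core.CvType;
import org.opencv.core.Mat;
import org.opencv.core.MatOfRect;
import org.opencv.core.Rect;
import org.opencv.core.Size;
import org.opencv.imgproc.Imgproc;
import org.opencv.objdetect.CascadeClassifier;

private CascadeClassifier faceCascade; // load once, e.g. new CascadeClassifier(cascadePath)

private void detectFaces(final byte[] nv21, final int width, final int height) {
    // NV21 is laid out as a (height * 3 / 2) x width single-channel image
    final Mat yuv = new Mat(height + height / 2, width, CvType.CV_8UC1);
    yuv.put(0, 0, nv21);
    final Mat gray = new Mat();
    Imgproc.cvtColor(yuv, gray, Imgproc.COLOR_YUV2GRAY_NV21);

    final MatOfRect faces = new MatOfRect();
    faceCascade.detectMultiScale(gray, faces, 1.2, 2, 0, new Size(20, 20), new Size());
    for (final Rect face : faces.toArray()) {
        // handle each detected face rectangle here
    }
}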


sappho192 avatar sappho192 commented on August 14, 2024

@filipetrocadoferreira Not a complete method, but try setting the size of the surface to 1dp × 1dp and placing it behind the ImageView; then you can see only the ImageView.


Sanjay2802 avatar Sanjay2802 commented on August 14, 2024

How can I use this with the following library, i.e. change the input from the phone's camera to a USB camera? (A sketch of one possible bridge follows the code below.)

package org.tensorflow.lite.examples.detection;

import android.Manifest;
import android.app.Fragment;
import android.content.Context;
import android.content.pm.PackageManager;
import android.hardware.Camera;
import android.hardware.camera2.CameraAccessException;
import android.hardware.camera2.CameraCharacteristics;
import android.hardware.camera2.CameraManager;
import android.hardware.camera2.params.StreamConfigurationMap;
import android.media.Image;
import android.media.Image.Plane;
import android.media.ImageReader;
import android.media.ImageReader.OnImageAvailableListener;
import android.os.Build;
import android.os.Bundle;
import android.os.Handler;
import android.os.HandlerThread;
import android.os.Trace;
import androidx.appcompat.app.AppCompatActivity;
import androidx.appcompat.widget.SwitchCompat;
import androidx.appcompat.widget.Toolbar;
import android.util.Size;
import android.view.Surface;
import android.view.View;
import android.view.ViewTreeObserver;
import android.view.WindowManager;
import android.widget.CompoundButton;
import android.widget.ImageView;
import android.widget.LinearLayout;
import android.widget.TextView;
import android.widget.Toast;
import androidx.annotation.NonNull;
import com.google.android.material.bottomsheet.BottomSheetBehavior;
import java.nio.ByteBuffer;
import org.tensorflow.lite.examples.detection.env.ImageUtils;
import org.tensorflow.lite.examples.detection.env.Logger;

public abstract class CameraActivity extends AppCompatActivity
implements OnImageAvailableListener,
Camera.PreviewCallback,
CompoundButton.OnCheckedChangeListener,
View.OnClickListener {
private static final Logger LOGGER = new Logger();

private static final int PERMISSIONS_REQUEST = 1;

private static final String PERMISSION_CAMERA = Manifest.permission.CAMERA;
protected int previewWidth = 0;
protected int previewHeight = 0;
private boolean debug = false;
private Handler handler;
private HandlerThread handlerThread;
private boolean useCamera2API;
private boolean isProcessingFrame = false;
private byte[][] yuvBytes = new byte[3][];
private int[] rgbBytes = null;
private int yRowStride;
private Runnable postInferenceCallback;
private Runnable imageConverter;

private LinearLayout bottomSheetLayout;
private LinearLayout gestureLayout;
private BottomSheetBehavior sheetBehavior;

protected TextView frameValueTextView, cropValueTextView, inferenceTimeTextView;
protected ImageView bottomSheetArrowImageView;
private ImageView plusImageView, minusImageView;
private SwitchCompat apiSwitchCompat;
private TextView threadsTextView;

@Override
protected void onCreate(final Bundle savedInstanceState) {
LOGGER.d("onCreate " + this);
super.onCreate(null);
getWindow().addFlags(WindowManager.LayoutParams.FLAG_KEEP_SCREEN_ON);

setContentView(R.layout.tfe_od_activity_camera);
Toolbar toolbar = findViewById(R.id.toolbar);
setSupportActionBar(toolbar);
getSupportActionBar().setDisplayShowTitleEnabled(false);

if (hasPermission()) {
  setFragment();
} else {
  requestPermission();
}

threadsTextView = findViewById(R.id.threads);
plusImageView = findViewById(R.id.plus);
minusImageView = findViewById(R.id.minus);
apiSwitchCompat = findViewById(R.id.api_info_switch);
bottomSheetLayout = findViewById(R.id.bottom_sheet_layout);
gestureLayout = findViewById(R.id.gesture_layout);
sheetBehavior = BottomSheetBehavior.from(bottomSheetLayout);
bottomSheetArrowImageView = findViewById(R.id.bottom_sheet_arrow);

ViewTreeObserver vto = gestureLayout.getViewTreeObserver();
vto.addOnGlobalLayoutListener(
    new ViewTreeObserver.OnGlobalLayoutListener() {
      @Override
      public void onGlobalLayout() {
        if (Build.VERSION.SDK_INT < Build.VERSION_CODES.JELLY_BEAN) {
          gestureLayout.getViewTreeObserver().removeGlobalOnLayoutListener(this);
        } else {
          gestureLayout.getViewTreeObserver().removeOnGlobalLayoutListener(this);
        }
        //                int width = bottomSheetLayout.getMeasuredWidth();
        int height = gestureLayout.getMeasuredHeight();

        sheetBehavior.setPeekHeight(height);
      }
    });
sheetBehavior.setHideable(false);

sheetBehavior.setBottomSheetCallback(
    new BottomSheetBehavior.BottomSheetCallback() {
      @Override
      public void onStateChanged(@NonNull View bottomSheet, int newState) {
        switch (newState) {
          case BottomSheetBehavior.STATE_HIDDEN:
            break;
          case BottomSheetBehavior.STATE_EXPANDED:
            {
              bottomSheetArrowImageView.setImageResource(R.drawable.icn_chevron_down);
            }
            break;
          case BottomSheetBehavior.STATE_COLLAPSED:
            {
              bottomSheetArrowImageView.setImageResource(R.drawable.icn_chevron_up);
            }
            break;
          case BottomSheetBehavior.STATE_DRAGGING:
            break;
          case BottomSheetBehavior.STATE_SETTLING:
            bottomSheetArrowImageView.setImageResource(R.drawable.icn_chevron_up);
            break;
        }
      }

      @Override
      public void onSlide(@NonNull View bottomSheet, float slideOffset) {}
    });

frameValueTextView = findViewById(R.id.frame_info);
cropValueTextView = findViewById(R.id.crop_info);
inferenceTimeTextView = findViewById(R.id.inference_info);

apiSwitchCompat.setOnCheckedChangeListener(this);

plusImageView.setOnClickListener(this);
minusImageView.setOnClickListener(this);

}

protected int[] getRgbBytes() {
imageConverter.run();
return rgbBytes;
}

protected int getLuminanceStride() {
return yRowStride;
}

protected byte[] getLuminance() {
return yuvBytes[0];
}

/** Callback for android.hardware.Camera API */
@Override
public void onPreviewFrame(final byte[] bytes, final Camera camera) {
if (isProcessingFrame) {
LOGGER.w("Dropping frame!");
return;
}

try {
  // Initialize the storage bitmaps once when the resolution is known.
  if (rgbBytes == null) {
    Camera.Size previewSize = camera.getParameters().getPreviewSize();
    previewHeight = previewSize.height;
    previewWidth = previewSize.width;
    rgbBytes = new int[previewWidth * previewHeight];
    onPreviewSizeChosen(new Size(previewSize.width, previewSize.height), 90);
  }
} catch (final Exception e) {
  LOGGER.e(e, "Exception!");
  return;
}

isProcessingFrame = true;
yuvBytes[0] = bytes;
yRowStride = previewWidth;

imageConverter =
    new Runnable() {
      @Override
      public void run() {
        ImageUtils.convertYUV420SPToARGB8888(bytes, previewWidth, previewHeight, rgbBytes);
      }
    };

postInferenceCallback =
    new Runnable() {
      @Override
      public void run() {
        camera.addCallbackBuffer(bytes);
        isProcessingFrame = false;
      }
    };
processImage();

}

/** Callback for Camera2 API */
@Override
public void onImageAvailable(final ImageReader reader) {
// We need wait until we have some size from onPreviewSizeChosen
if (previewWidth == 0 || previewHeight == 0) {
return;
}
if (rgbBytes == null) {
rgbBytes = new int[previewWidth * previewHeight];
}
try {
final Image image = reader.acquireLatestImage();

  if (image == null) {
    return;
  }

  if (isProcessingFrame) {
    image.close();
    return;
  }
  isProcessingFrame = true;
  Trace.beginSection("imageAvailable");
  final Plane[] planes = image.getPlanes();
  fillBytes(planes, yuvBytes);
  yRowStride = planes[0].getRowStride();
  final int uvRowStride = planes[1].getRowStride();
  final int uvPixelStride = planes[1].getPixelStride();

  imageConverter =
      new Runnable() {
        @Override
        public void run() {
          ImageUtils.convertYUV420ToARGB8888(
              yuvBytes[0],
              yuvBytes[1],
              yuvBytes[2],
              previewWidth,
              previewHeight,
              yRowStride,
              uvRowStride,
              uvPixelStride,
              rgbBytes);
        }
      };

  postInferenceCallback =
      new Runnable() {
        @Override
        public void run() {
          image.close();
          isProcessingFrame = false;
        }
      };

  processImage();
} catch (final Exception e) {
  LOGGER.e(e, "Exception!");
  Trace.endSection();
  return;
}
Trace.endSection();

}

@Override
public synchronized void onStart() {
LOGGER.d("onStart " + this);
super.onStart();
}

@Override
public synchronized void onResume() {
LOGGER.d("onResume " + this);
super.onResume();

handlerThread = new HandlerThread("inference");
handlerThread.start();
handler = new Handler(handlerThread.getLooper());

}

@Override
public synchronized void onPause() {
LOGGER.d("onPause " + this);

handlerThread.quitSafely();
try {
  handlerThread.join();
  handlerThread = null;
  handler = null;
} catch (final InterruptedException e) {
  LOGGER.e(e, "Exception!");
}

super.onPause();

}

@Override
public synchronized void onStop() {
LOGGER.d("onStop " + this);
super.onStop();
}

@Override
public synchronized void onDestroy() {
LOGGER.d("onDestroy " + this);
super.onDestroy();
}

protected synchronized void runInBackground(final Runnable r) {
if (handler != null) {
handler.post(r);
}
}

@Override
public void onRequestPermissionsResult(
final int requestCode, final String[] permissions, final int[] grantResults) {
super.onRequestPermissionsResult(requestCode, permissions, grantResults);
if (requestCode == PERMISSIONS_REQUEST) {
if (allPermissionsGranted(grantResults)) {
setFragment();
} else {
requestPermission();
}
}
}

private static boolean allPermissionsGranted(final int[] grantResults) {
for (int result : grantResults) {
if (result != PackageManager.PERMISSION_GRANTED) {
return false;
}
}
return true;
}

private boolean hasPermission() {
if (Build.VERSION.SDK_INT >= Build.VERSION_CODES.M) {
return checkSelfPermission(PERMISSION_CAMERA) == PackageManager.PERMISSION_GRANTED;
} else {
return true;
}
}

private void requestPermission() {
if (Build.VERSION.SDK_INT >= Build.VERSION_CODES.M) {
if (shouldShowRequestPermissionRationale(PERMISSION_CAMERA)) {
Toast.makeText(
CameraActivity.this,
"Camera permission is required for this demo",
Toast.LENGTH_LONG)
.show();
}
requestPermissions(new String[] {PERMISSION_CAMERA}, PERMISSIONS_REQUEST);
}
}

// Returns true if the device supports the required hardware level, or better.
private boolean isHardwareLevelSupported(
CameraCharacteristics characteristics, int requiredLevel) {
int deviceLevel = characteristics.get(CameraCharacteristics.INFO_SUPPORTED_HARDWARE_LEVEL);
if (deviceLevel == CameraCharacteristics.INFO_SUPPORTED_HARDWARE_LEVEL_LEGACY) {
return requiredLevel == deviceLevel;
}
// deviceLevel is not LEGACY, can use numerical sort
return requiredLevel <= deviceLevel;
}

private String chooseCamera() {
final CameraManager manager = (CameraManager) getSystemService(Context.CAMERA_SERVICE);
try {
for (final String cameraId : manager.getCameraIdList()) {
final CameraCharacteristics characteristics = manager.getCameraCharacteristics(cameraId);

    // We don't use a front facing camera in this sample.
    final Integer facing = characteristics.get(CameraCharacteristics.LENS_FACING);
    if (facing != null && facing == CameraCharacteristics.LENS_FACING_FRONT) {
      continue;
    }

    final StreamConfigurationMap map =
        characteristics.get(CameraCharacteristics.SCALER_STREAM_CONFIGURATION_MAP);

    if (map == null) {
      continue;
    }

    // Fallback to camera1 API for internal cameras that don't have full support.
    // This should help with legacy situations where using the camera2 API causes
    // distorted or otherwise broken previews.
    useCamera2API =
        (facing == CameraCharacteristics.LENS_FACING_EXTERNAL)
            || isHardwareLevelSupported(
                characteristics, CameraCharacteristics.INFO_SUPPORTED_HARDWARE_LEVEL_FULL);
    LOGGER.i("Camera API lv2?: %s", useCamera2API);
    return cameraId;
  }
} catch (CameraAccessException e) {
  LOGGER.e(e, "Not allowed to access camera");
}

return null;

}

protected void setFragment() {
String cameraId = chooseCamera();

Fragment fragment;
if (useCamera2API) {
  CameraConnectionFragment camera2Fragment =
      CameraConnectionFragment.newInstance(
          new CameraConnectionFragment.ConnectionCallback() {
            @Override
            public void onPreviewSizeChosen(final Size size, final int rotation) {
              previewHeight = size.getHeight();
              previewWidth = size.getWidth();
              CameraActivity.this.onPreviewSizeChosen(size, rotation);
            }
          },
          this,
          getLayoutId(),
          getDesiredPreviewFrameSize());

  camera2Fragment.setCamera(cameraId);
  fragment = camera2Fragment;
} else {
  fragment =
      new LegacyCameraConnectionFragment(this, getLayoutId(), getDesiredPreviewFrameSize());
}

getFragmentManager().beginTransaction().replace(R.id.container, fragment).commit();

}

protected void fillBytes(final Plane[] planes, final byte[][] yuvBytes) {
// Because of the variable row stride it's not possible to know in
// advance the actual necessary dimensions of the yuv planes.
for (int i = 0; i < planes.length; ++i) {
final ByteBuffer buffer = planes[i].getBuffer();
if (yuvBytes[i] == null) {
LOGGER.d("Initializing buffer %d at size %d", i, buffer.capacity());
yuvBytes[i] = new byte[buffer.capacity()];
}
buffer.get(yuvBytes[i]);
}
}

public boolean isDebug() {
return debug;
}

protected void readyForNextImage() {
if (postInferenceCallback != null) {
postInferenceCallback.run();
}
}

protected int getScreenOrientation() {
switch (getWindowManager().getDefaultDisplay().getRotation()) {
case Surface.ROTATION_270:
return 270;
case Surface.ROTATION_180:
return 180;
case Surface.ROTATION_90:
return 90;
default:
return 0;
}
}

@Override
public void onCheckedChanged(CompoundButton buttonView, boolean isChecked) {
setUseNNAPI(isChecked);
if (isChecked) apiSwitchCompat.setText("NNAPI");
else apiSwitchCompat.setText("TFLITE");
}

@Override
public void onClick(View v) {
if (v.getId() == R.id.plus) {
String threads = threadsTextView.getText().toString().trim();
int numThreads = Integer.parseInt(threads);
if (numThreads >= 9) return;
numThreads++;
threadsTextView.setText(String.valueOf(numThreads));
setNumThreads(numThreads);
} else if (v.getId() == R.id.minus) {
String threads = threadsTextView.getText().toString().trim();
int numThreads = Integer.parseInt(threads);
if (numThreads == 1) {
return;
}
numThreads--;
threadsTextView.setText(String.valueOf(numThreads));
setNumThreads(numThreads);
}
}

protected void showFrameInfo(String frameInfo) {
frameValueTextView.setText(frameInfo);
}

protected void showCropInfo(String cropInfo) {
cropValueTextView.setText(cropInfo);
}

protected void showInference(String inferenceTime) {
inferenceTimeTextView.setText(inferenceTime);
}

protected abstract void processImage();

protected abstract void onPreviewSizeChosen(final Size size, final int rotation);

protected abstract int getLayoutId();

protected abstract Size getDesiredPreviewFrameSize();

protected abstract void setNumThreads(int numThreads);

protected abstract void setUseNNAPI(boolean isChecked);
}
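A minimal hedged sketch of one possible bridge (my illustration; it assumes you add the UVCCamera/USBMonitor setup from the usbCameraTest samples, and it reuses previewWidth/previewHeight/rgbBytes from the CameraActivity above): request NV21 frames from UVCCamera and feed them into the same ImageUtils conversion the phone-camera path uses.

private final IFrameCallback mUvcFrameCallback = new IFrameCallback() {
    private byte[] nv21;

    @Override
    public void onFrame(final ByteBuffer frame) {
        final int n = frame.limit();
        if (nv21 == null || nv21.length < n) {
            nv21 = new byte[n]; // reuse the buffer across frames
        }
        frame.get(nv21, 0, n);
        // reuse the demo's NV21 -> ARGB conversion path:
        // ImageUtils.convertYUV420SPToARGB8888(nv21, previewWidth, previewHeight, rgbBytes);
        // then call processImage() as the Camera callbacks do
    }
};

// registered after startPreview():
// mUVCCamera.setFrameCallback(mUvcFrameCallback, UVCCamera.PIXEL_FORMAT_NV21);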


Sanjay2802 avatar Sanjay2802 commented on August 14, 2024

For the full code: https://github.com/tensorflow/examples/tree/66f60ebc3dd2e8527b7bbbb280fe0657d54f20f4/lite/examples/object_detection/android

