
basler / pypylon


The official python wrapper for the pylon Camera Software Suite

Home Page: http://www.baslerweb.com

License: BSD 3-Clause "New" or "Revised" License

Shell 0.71% Python 50.42% C++ 0.21% Batchfile 0.14% SWIG 48.51%
camera computer-vision machine-vision


pypylon's Issues

Is there a trigger feedback signal from the camera through GPIO?

Hi,

I am using a hardware trigger to send the frame start signal through GPIO. I wonder if there is a feedback signal from the camera to indicate that the frame has been read out and the camera is ready to receive the next frame start signal?

Thanks very much!
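
For readers with the same question: many Basler cameras can route internal status signals to an output line. Below is a minimal, hedged sketch of that idea; the parameter and signal names (LineSelector, LineMode, LineSource, "Line2", "ExposureActive") are assumptions that depend on the camera model, so check the camera documentation before relying on them.

from pypylon import pylon

# Connect to the first camera found.
camera = pylon.InstantCamera(pylon.TlFactory.GetInstance().CreateFirstDevice())
camera.Open()

# Route the "exposure active" signal to an output-capable line so external
# hardware can tell when the sensor is busy. Names below are assumptions.
camera.LineSelector.SetValue("Line2")
camera.LineMode.SetValue("Output")
camera.LineSource.SetValue("ExposureActive")  # high while the sensor is exposing

camera.Close()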

Simultaneous capturing on two acA1920-25uc cameras with software signals

Hi we have two acA1920-25uc USB cameras and we want to take a sequence of images simultaneously with the two cameras for stereo vision. The camera model should support the software trigger, signal pulse, trigger source and other related features according to the documentation:
https://docs.baslerweb.com/#t=en%2Ftrigger_software.htm
https://docs.baslerweb.com/#t=en%2Fsoftware_signal_pulse.htm
https://docs.baslerweb.com/#t=en%2Ftrigger_source.htm

According to the sample code, we can issue the software trigger on the camera:
https://github.com/basler/pypylon/blob/master/samples/grabusinggrabloopthread.py

We modified the code to execute the software trigger on the two cameras sequentially:

# Example of an image event handler.
class SampleImageEventHandler(pylon.ImageEventHandler):
    def __init__(self):
        super().__init__()
        self.grab_times = [0.0]

    def OnImageGrabbed(self, camera, grabResult):
        self.grab_times.append(time.time())
        if len(self.grab_times) % 50 == 0:
            logging.info("camera %s, CSampleImageEventHandler::OnImageGrabbed called.",
                         camera.DeviceInfo.GetSerialNumber())


def test_pylon_multiple_grab_loop_thread(num_cameras=None):
    num_cameras = parse_args(num_cameras)
    cameras = []
    handlers = []
    device_info_list = pylon.TlFactory.GetInstance().EnumerateDevices()
    for info in device_info_list[:num_cameras]:
        cam = pylon.InstantCamera(
            pylon.TlFactory.GetInstance().CreateFirstDevice(info))
        handlers.append(SampleImageEventHandler())
        cam.RegisterConfiguration(pylon.SoftwareTriggerConfiguration(),
                                  pylon.RegistrationMode_ReplaceAll,
                                  pylon.Cleanup_Delete)
        cam.RegisterConfiguration(ConfigurationEventPrinter(),
                                  pylon.RegistrationMode_Append,
                                  pylon.Cleanup_Delete)
        cam.RegisterImageEventHandler(handlers[-1],
                                      pylon.RegistrationMode_Append,
                                      pylon.Cleanup_Delete)
        cam.StartGrabbing(pylon.GrabStrategy_OneByOne,
                          pylon.GrabLoop_ProvidedByInstantCamera)
        cameras.append(cam)

    ind = 0
    while True:
        ind += 1
        t0 = time.time()
        for cam in cameras:
            # Execute the software trigger.
            # Wait up to 100 ms for the camera to be ready for trigger.
            if cam.WaitForFrameTriggerReady(
                    100, pylon.TimeoutHandling_ThrowException):
                pass
        for h in handlers:
            # all cameras should have been triggered the same number of times
            assert len(h.grab_times) == len(handlers[0].grab_times)
        last_grab_times = [h.grab_times[-1] for h in handlers]
        t1 = time.time()
        for cam in cameras:
            cam.ExecuteSoftwareTrigger()
        t2 = time.time()
        if ind % 20 == 0:
            logging.info("waiting for cameras took %.2f ms, trigger takes %.2f ms, "
                         "total %.2f ms, last_grab_times_span %.2f ms",
                         (t1-t0) * 1000, (t2-t1) * 1000, (t2-t0) * 1000,
                         (max(last_grab_times) - min(last_grab_times)) * 1000)

However, we find that each trigger takes 0.5ms, which means the cameras won't capture images at exactly the same time.

So our questions are:

  1. What's the right way to take a sequence of images simultaneously on two acA1920-25uc cameras (or other Basler USB3 cameras) with minimum time delay through software trigger, signal pulse, etc.?
  2. We understand that software signals might not occur at exactly the same time, so what's the expected time delay between images captured by the two cameras using this approach?

Thanks!

Using the software trigger on 4 different acA1920-40gc

Hello everybody,

I have 4 different acA1920-40gc cameras on the same PoE switch and I would like to trigger the cameras at the same time.

I first used an InstantCameraArray and then went through it, calling GrabOne(100) on each of the cameras, but this is not optimal and there is some delay.

What would be the best way to do it? I saw that there is a function called ExecuteSoftwareTrigger(), but the example only covers the single-camera case. How should I go through the InstantCameraArray and make sure all the cameras take a picture at the same time?

Thank you very much for your help,

Not able to save image

I am trying to save the image to disk, but I am getting a lot of errors. I tried to convert the array to an image and that doesn't work either. I tried running the samples and even those generate errors.

I am using Linux with a Basler dart camera.
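
For readers hitting the same problem, here is a minimal sketch of one way to save a grabbed frame, modeled on pypylon's save-image sample; the file name is a placeholder and PNG support is assumed:

from pypylon import pylon

camera = pylon.InstantCamera(pylon.TlFactory.GetInstance().CreateFirstDevice())
camera.Open()
camera.StartGrabbing(pylon.GrabStrategy_LatestImageOnly)

img = pylon.PylonImage()
with camera.RetrieveResult(2000, pylon.TimeoutHandling_ThrowException) as result:
    if result.GrabSucceeded():
        # Wrap the grab buffer in a PylonImage and write it to disk.
        img.AttachGrabResultBuffer(result)
        img.Save(pylon.ImageFileFormat_Png, "frame.png")  # placeholder path
        img.Release()

camera.StopGrabbing()
camera.Close()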

Frame acquisition delay

Hello,
We are using the Basler acA2040-55um camera and we are experiencing a delay of ~1 s during frame acquisition using PyPylon on Ubuntu 16.04, running this script https://github.com/basler/pypylon/blob/master/samples/grab.py
We experience the same result on both of the specified model cameras.
Using the Pylon Viewer, there is no delay.
Can you help us solve this issue?

Best regards,
Vadim Fintinari

Exception handling

The attribute 'GetDescription' is not present for all exceptions handled by genicam. E.g. when trying to grab an image while the camera is controlled by another application, a RuntimeException is thrown. If it is handled as in the samples via:

except genicam.GenericException as e:
    print("An exception occurred. ", e.GetDescription())

then Python throws an AttributeError instead.
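
A defensive pattern, sketched here (not an official recommendation): fall back to str(e), which works for every exception type, and only use GetDescription() when it exists.

from pypylon import pylon, genicam

camera = pylon.InstantCamera(pylon.TlFactory.GetInstance().CreateFirstDevice())
try:
    camera.Open()
    camera.StartGrabbing(pylon.GrabStrategy_OneByOne)
except genicam.GenericException as e:
    # Not every exception class carries a GenICam description,
    # so fall back to the plain string representation.
    description = e.GetDescription() if hasattr(e, "GetDescription") else str(e)
    print("An exception occurred:", description)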

Using PyInstaller to pack into an exe file results in no device being found.

Get the transport layer factory.

tlFactory = pylon.TlFactory.GetInstance()

Get all attached devices and exit application if no device is found.

devices = tlFactory.EnumerateDevices()

if len(devices) == 0:
    raise pylon.RuntimeException("No camera present.")

The above code runs normally.
But when it is packaged into an exe with PyInstaller, the exception "RuntimeException No camera present." is thrown.

Activating Automatic Image Adjustment using pypylon

Hello,
I really like the Automatic Image Adjustment feature from the Pylon Viewer:

[screenshot: the "Automatic Image Adjustment" button in the Pylon Viewer]

Is there a way to activate the same feature directly from pypylon? I looked quickly in the examples but I didn't find anything.

Thanks a lot for your time,

  • Lucas
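
One possible direction (not an authoritative answer): the Viewer's one-click adjustment is essentially a combination of the camera's auto functions, which can be run in "Once" mode from pypylon. The parameter names below (ExposureAuto, GainAuto, BalanceWhiteAuto) are assumptions and vary by model.

from pypylon import pylon

camera = pylon.InstantCamera(pylon.TlFactory.GetInstance().CreateFirstDevice())
camera.Open()

# "Once" runs each auto function until its target is reached, then switches
# itself back to "Off". Names are assumptions; check your camera's node map.
camera.ExposureAuto.SetValue("Once")
camera.GainAuto.SetValue("Once")
camera.BalanceWhiteAuto.SetValue("Once")  # color cameras only (assumption)

# The auto functions need frames flowing to converge, so grab a few images.
camera.StartGrabbing(pylon.GrabStrategy_LatestImageOnly)
for _ in range(20):
    camera.RetrieveResult(5000, pylon.TimeoutHandling_ThrowException).Release()
camera.StopGrabbing()
camera.Close()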

How to read out / set the parameters (or features) on pypylon?

As the title says, I want to get all the parameters of the camera.
I found some hints in #21;
however, my output of deviceinfo() cannot list the features, and I also cannot print ExposureTime. The error shows

Node not existing (file 'genicam_wrap.cpp', line 16600)

By the way, my camera model is acA2040-35gc.

Can anyone give me some suggestions or point me to a solution?
Thank you very much.
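
A guess at the cause, for anyone with the same symptom: on GigE ace models the exposure node is usually named ExposureTimeAbs (or ExposureTimeRaw) rather than ExposureTime, which would produce exactly this "Node not existing" error. A hedged sketch for probing which nodes a camera actually offers through the GenICam node map:

from pypylon import pylon, genicam

camera = pylon.InstantCamera(pylon.TlFactory.GetInstance().CreateFirstDevice())
camera.Open()
nodemap = camera.GetNodeMap()

# Probe a few candidate names; only read nodes that exist and are readable.
for name in ("ExposureTime", "ExposureTimeAbs", "ExposureTimeRaw", "Gain", "GainRaw"):
    try:
        node = nodemap.GetNode(name)
    except genicam.LogicalErrorException:
        print(name, "-> not present on this camera")
        continue
    if genicam.IsReadable(node):
        print(name, "=", node.GetValue())

camera.Close()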

Make pypylon available on the Python Package Index (PyPI)

Since you already suggest installation via pip for this package, I believe it would make sense to distribute the wheels directly via the Python Package Index (https://pypi.org/). That way users would no longer be required to figure out and download the correct wheel and install it manually; pip would take care of that. At the same time, users could use pip to update the package easily without having to manually check the releases page of this GitHub repository.

As far as I can tell there is no package with the name pypylon on PyPI yet, so the name should still be available to you.

Segmentation fault (core dumped)

A segmentation fault occurs whenever I try to assign individual cameras of an InstantCameraArray to their own objects and then access the object's properties or methods.

Here's example code to reproduce the error.

import logging
from pypylon import pylon
from utils.event_handlers import ConfigurationEventListener

MAX_CAMERAS = 2

formatter = logging.Formatter(
    '%(asctime)s:%(levelname)s:%(process)d:%(thread)d:%(module)s:%(funcName)s:%(lineno)s:%(message)s')
logger = logging.getLogger(__name__)


class Camera:

    def __init__(self, register_event_listener=False):
        tlFactory = pylon.TlFactory.GetInstance()
        # Get all attached devices and raise an error if no devices found
        devices = tlFactory.EnumerateDevices()
        if not devices:
            raise pylon.RUNTIME_EXCEPTION("No camera present.")

        # Create an array of instant cameras for the found devices and avoid
        # exceeding a maximum number of devices.
        cameras = pylon.InstantCameraArray(min(len(devices), MAX_CAMERAS))

        # Create and attach all Pylon Devices.
        for i, cam in enumerate(cameras):
            cam.Attach(tlFactory.CreateDevice(devices[i]))
            if register_event_listener:
                cam.RegisterConfiguration(ConfigurationEventListener(), pylon.RegistrationMode_Append,
                                          pylon.Cleanup_Delete)
            # Print the model name of the camera.
            logger.info("Initalizing device : {}".format(cam.GetDeviceInfo().GetModelName()))
            if "NIR" in cam.GetDeviceInfo().GetModelName().upper()[-3:]:
                self.nir = cam
            else:
                self.rgb = cam


if __name__ == '__main__':
    logging.basicConfig(level='DEBUG', format=formatter._fmt)
    cameras = Camera()
    
    # Segmentation fault (core dumped)
    print (cameras.nir.GetDeviceInfo().GetModelName())

Problem with image.GetArray() after using converter

I am looking to convert a pylon image into a compatible OpenCV image. I am working on Linux and installed 'pypylon-1.1.0+pylon5.0.11.10914-cp35-cp35m-linux_x86_64.whl'. To work with OpenCV, the example script gives the lines of code:

image = converter.Convert(grabResult)
img = image.GetArray()

However I get the following error:
AttributeError: 'PylonImage' object has no attribute 'GetArray'

The workaround we implemented is to manually convert to a numpy array using image.GetBuffer():
image = converter.Convert(grabResult)
img = numpy.frombuffer(image.GetBuffer(), dtype=numpy.uint8)
img = numpy.reshape(img,(image.GetHeight(), image.GetWidth(), 3))
This works very well.

Accessing user-defined values on Ace camera

There doesn't seem to be a way to get/set the user-defined values on an Ace camera using pypylon.

  1. Am I missing something?
  2. If this is indeed the case, then how hard would it be to add this functionality? (My organization might be willing to submit a PR here, but I'd like to know what we would be in for.)

NOTE: It looks like the underlying Pylon C++ SDK implements this functionality in the individual USB vs. GigE vs. ... Instant Camera classes, and I wonder why this isn't generalized, e.g., GenApi::IInteger& Basler_UsbCameraParams::CUsbCameraParams_Params::UserDefinedValue.
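
For what it's worth, a sketch of what access might look like through the generic node map, assuming the camera publishes SFNC-style UserDefinedValueSelector / UserDefinedValue nodes; these node names are assumptions and have not been verified against pypylon or a specific model.

from pypylon import pylon

camera = pylon.InstantCamera(pylon.TlFactory.GetInstance().CreateFirstDevice())
camera.Open()
nodemap = camera.GetNodeMap()

# SFNC-style user-defined scratch registers; both node names are assumptions.
selector = nodemap.GetNode("UserDefinedValueSelector")
value = nodemap.GetNode("UserDefinedValue")

selector.SetValue("Value1")  # choose which user register to address
value.SetValue(42)           # write
print(value.GetValue())      # read back

camera.Close()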

Possible bug with ExposureTime.Min changing

I'm using pypylon 1.3.1 with Anaconda Python 3.6.6 and Pylon 5.0.12.11830 64-bit on Windows 10.

I am seeing an issue when I query a USB Ace acA1920-155um camera for its minimum exposure time. I first configure an imager object that holds the camera object, which is created using:

self.camera = pylon.InstantCamera(pylon.TlFactory.GetInstance().CreateFirstDevice())

Then I store the minimum exposure time using (camera opening and closing not shown):

self.exposure_time_min = self.camera.ExposureTime.Min

Later, I rely on this value for a lower bound in a search routine, but when setting the exposure time to the stored minimum value (using self.camera.ExposureTime.SetValue) the camera raises an exception that self.exposure_time_min is too small. Here is the exception:

Value 20.000000 must be greater than or equal 21.000000. : OutOfRangeException thrown in node 'ExposureTime' while calling 'ExposureTime.SetValue()' (file 'FloatT.h', line 85)

This exception is only thrown the first time the program is run after plugging the camera into the computer. It is not thrown subsequently, until the camera is unplugged and plugged back in again. (Specifically, the initial minimum value is only returned as 20 us the first time, and 21 us subsequently, so my program only breaks on the first run after plugging in the camera.) This makes me think it is a bug somewhere in the Basler API, but perhaps the minimum exposure value is not truly fixed and changes depending on the circumstances?

Find List PixelFormat for Specific Camera

Hello,

As mentioned in #2, I am having trouble capturing images with my Ace acA1920-40gc. My biggest problem is that the camera is a color one, but I can't figure out how to get a color image; all the images are in black and white.

Following the instructions on ImagingHub, I figured out that I should change the PixelFormat of the camera before starting the recording:

# Simply get the first available pylon device.
first_device = pylon.TlFactory.GetInstance().CreateFirstDevice()
camera = pylon.InstantCamera(first_device)
camera.Open()

# Optional if you set it in Pylon Viewer
camera.PixelFormat = 'RGB8'

camera.StartGrabbing(pylon.GrabStrategy_LatestImages)

However, I get the following error:

An exception occurred.
Traceback (most recent call last):
  File "demo1.py", line 111, in detect_defects
    camera.PixelFormat = 'RGB8'
  File "/home/lucas/anaconda3/envs/demo1/lib/python3.6/site-packages/pypylon/pylon.py", line 4199, in __setattr__
    self.GetNodeMap().GetNode(attribute).SetValue(val)
  File "/home/lucas/anaconda3/envs/demo1/lib/python3.6/site-packages/pypylon/genicam.py", line 2611, in SetValue
    return _genicam.IEnumeration_SetValue(self, entry)
_genicam.InvalidArgumentException: Feature 'PixelFormat' : cannot convert value 'RGB8', the value is invalid. : InvalidArgumentException thrown in node 'PixelFormat' while calling 'PixelFormat.FromString()' (file 'Enumeration.cpp', line 132)

My guess is that RGB8 is definitely not the right value to use. Is there a list of the different PixelFormat values I can use with my camera?

Thanks in advance for your time.

Have a good day,
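
For anyone with the same question, a small sketch that prints the pixel formats a connected camera actually accepts. It assumes pypylon exposes an enumeration's valid entries through its Symbolics property (an assumption to verify against your installed version), and which entries appear depends entirely on the camera model.

from pypylon import pylon

camera = pylon.InstantCamera(pylon.TlFactory.GetInstance().CreateFirstDevice())
camera.Open()

# Symbolics lists the enumeration entries this particular camera supports,
# e.g. ('Mono8', 'BayerGB8', 'RGB8Packed', ...) depending on the model.
print(camera.PixelFormat.Symbolics)
print("current:", camera.PixelFormat.GetValue())

camera.Close()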

Changing the Pixel Format

Hello,
I've got 4 daA2500-14um cameras. I already managed to set auto exposure and gain to "Off". Now I want to change the pixel format from the default "Mono8" to "Mono12". The identifier for turning auto exposure off is a simple "Off", but the pixel format can't be changed to "Mono12" by cam.PixelFormat.SetValue("Mono12").
Can somebody tell me what the correct identifier for this action is?

Here is my code:

# imports
import os
os.environ["PYLON_CAMEMU"] = "3"
from pypylon import genicam
from pypylon import pylon
import sys
import numpy as np
import scipy.misc
import cv2

# preface
exitCode = 0
maxCamerasToUse = 4
countOfImagesToGrab = 1
try:

    # get the transport layer factory
    tlFactory = pylon.TlFactory.GetInstance()

    devices = tlFactory.EnumerateDevices()
    if len(devices) == 0:
        raise pylon.RUNTIME_EXCEPTION("No camera present")

    cameras = pylon.InstantCameraArray(min(len(devices), maxCamerasToUse))

    for ii, cam in enumerate(cameras):
        cam.Attach(tlFactory.CreateDevice(devices[ii]))

        print(cam.GetDeviceInfo().GetModelName(), "-", cam.GetDeviceInfo().GetSerialNumber())

        # start grabbing
        cam.StartGrabbing(pylon.GrabStrategy_LatestImageOnly)
        
        
        # set parameters
        # pixel format
        print(cam.PixelFormat.GetValue())
        cam.PixelFormat.SetValue("Mono 12")
        print(cam.PixelFormat.GetValue())
        # exposure time
        cam.ExposureAuto.SetValue("Off")
        cam.ExposureTime.SetValue(100.0)
        # gain
        cam.GainAuto.SetValue("Off")
        cam.Gain.SetValue(1.0)
        
        cam.Gain.SetValue(float(11))
        print(cam.Gain.GetValue())
        print(cam.ExposureTime.GetValue())

        grabResult = cam.RetrieveResult(5000, pylon.TimeoutHandling_ThrowException)
        cameraContextValue = grabResult.GetCameraContext()
        print("Grab succeeded: ", grabResult.GrabSucceeded())

        if grabResult.GrabSucceeded() == True:
            img = grabResult.GetArray()
            path = "D:\Python\PyBaslerMultiCam\PyBaslerMultiCam"
            filename = cam.GetDeviceInfo().GetSerialNumber()
            filetyp = "png"
            fullpath = os.path.join(path, filename + "." + filetyp)
            scipy.misc.imsave(fullpath, img)

        cam.StopGrabbing()

except genicam.GenericException as e:
    # error handling
    print("An exception occurred.", e.GetDescription())
    exitCode = 1

sys.exit(exitCode)
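
Two details in the snippet above may be the cause, offered as guesses: the code calls SetValue("Mono 12") with a space in the entry name, and it sets PixelFormat after StartGrabbing, while the format is normally only writable when the camera is not grabbing. A hedged sketch (whether "Mono12" exists at all depends on the camera model):

from pypylon import pylon

cam = pylon.InstantCamera(pylon.TlFactory.GetInstance().CreateFirstDevice())
cam.Open()

# Set the format before StartGrabbing and without a space in the entry name.
if "Mono12" in cam.PixelFormat.Symbolics:
    cam.PixelFormat.SetValue("Mono12")

cam.StartGrabbing(pylon.GrabStrategy_LatestImageOnly)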

Saving images grabbed from camera

Hello;

I'm trying to take images from my new Basler camera and save them to a certain folder. I can capture them, but I'm not quite sure how to save them. If anyone has any info on this topic, please let me know. Thank you.

Format converter not working

I'm trying to deBayer images received from a Basler camera. It seems pylon.ImageFormatConverter() should be able to do this.

However, even the sample in utilityimageformatconverter.py doesn't work for me.

Initially, when executed, it generates an error that looks like code rot:
AttributeError: 'PylonImage' object has no attribute 'Array'

Replacing the script's reference to image.Array with image.GetArray() resolves this, but the resulting output array is all 0s.

Any suggestions? Thanks!

Unable to grab images when accessing a remote camera

I am setting the 'IpAddress' property in device info as follows:

    ptl = factory.CreateTl('BaslerGigE')
    empty_camera_info = ptl.CreateDeviceInfo()
    empty_camera_info.SetPropertyValue('IpAddress', ip)
    camera_device = factory.CreateDevice(empty_camera_info)
    camera = pylon.InstantCamera(camera_device)
    # Print the model name of the camera.
    print("Using device ", camera.GetDeviceInfo().GetModelName())

I notice that if I am inside the same subnet as the camera, I am able to connect to the camera and grab images. But if I am outside the subnet of the camera, I am unable to grab images. I am still able to discover the camera though.

python3 grab_ip.py

Using device acA2440-20gc
An exception occurred.
Traceback (most recent call last):
File "grab_ip.py", line 50, in
grabResult = camera.RetrieveResult(5000, pylon.TimeoutHandling_ThrowException)
File "/usr/local/lib/python3.5/dist-packages/pypylon/pylon.py", line 2951, in RetrieveResult
return _pylon.InstantCamera_RetrieveResult(self, *args)
_genicam.TimeoutException: Grab timed out. The acquisition is not started. : TimeoutException thrown (file 'InstantCameraImpl.h', line 1048)

Do I need any other setting? I can provide tcpdumps of the communication between the camera and my pc.

run_coroutine doesn't work in "set a callback for hardware triggers from multiple cameras?" #12

Hi @asavpatel92

I tried the script that you provided above, and I am getting an error for the line
" return run_coroutine(self.callback, img=img, camera_model_name=camera.GetDeviceInfo().GetModelName(),
img_id=grabResult.GetID(), img_timestamp=grabResult.GetTimeStamp())"

'run_coroutine' is not defined
Am I missing another function or package?

Second question,
can you indicate where in the code you are setting the camera parameters for the hardware trigger?

Thank you
Regards
Merwan

Device can't be opened error.

I am having a strange problem. I got the camera running in Python after running it in the Pylon Viewer, but when I try to run the camera later, after unplugging and plugging it back in, I get the following error:

camera.Open()
  File "C:\devel\Anaconda3\envs\blink\lib\site-packages\pypylon\pylon.py", line 2764, in Open
    return _pylon.InstantCamera_Open(self)
SystemError: <built-in function InstantCamera_Open> returned NULL without setting an error

What would cause this? For some reason there is no camera instance to open?

How to grab image from multiple cameras synchronously

Hello @basler-oss

Inside your samples directory there is a grabmultiplecameras.py file which has the following comment:

    # Starts grabbing for all cameras starting with index 0. The grabbing
    # is started for one camera after the other. That's why the images of all
    # cameras are not taken at the same time.
    # However, a hardware trigger setup can be used to cause all cameras to grab images synchronously.
    # According to their default configuration, the cameras are
    # set up for free-running continuous acquisition.

My question is: how do I set up a hardware trigger? Could we use the software trigger solution found in grabusinggrabloopthread.py for that?

I tried using that, but it led to a longer execution time than the normal for loop shown in grabmultiplecameras.py.

I paste parts of our code here. Hope you guys can help us.

How we initialize all cameras


countOfImagesToGrab = 1
maxCamerasToUse = 2

# The exit code of the sample application.
exitCode = 0

# Init all camera
try :
    # Get the transport layer factory.
    tlFactory = pylon.TlFactory.GetInstance()

    # Get all attached devices and exit application if no device is found.
    devices = tlFactory.EnumerateDevices()
    if len(devices) == 0:
        raise pylon.RUNTIME_EXCEPTION("No camera present.")

    # Create an array of instant cameras for the found devices and avoid exceeding a maximum number of devices.
    cameras = pylon.InstantCameraArray(min(len(devices), maxCamerasToUse))

    for i, camera in enumerate(cameras):
        camera.Attach(tlFactory.CreateDevice(devices[i]))
        
        camera.RegisterConfiguration(pylon.SoftwareTriggerConfiguration(), pylon.RegistrationMode_ReplaceAll,
                                     pylon.Cleanup_Delete)
        camera.RegisterImageEventHandler(ImageEventPrinter(), pylon.RegistrationMode_Append, pylon.Cleanup_Delete)
        camera.GetDeviceInfo().SetPropertyValue("Expose time", '80')
        print("Expose time {}".format(camera.GetDeviceInfo().GetPropertyValue("Expose time")))
    
except genicam.GenericException as e:
    # Error handling
    print("An exception occurred. {}".format(e))
    exitCode = 1

Normal loop

now = datetime.datetime.now()

try:
    # Create and attach all Pylon Devices.
    for i, camera in enumerate(cameras):
        # Start grabbing
        camera.StartGrabbing(pylon.GrabStrategy_OneByOne, pylon.GrabLoop_ProvidedByInstantCamera)
        for i in range(countOfImagesToGrab):
            if not cameras.IsGrabbing():
                break
            grabResult = cameras.RetrieveResult(100, pylon.TimeoutHandling_ThrowException)
            img = grabResult.GetArray()
        
        # Stop grabbing
        camera.StopGrabbing()
        
except genicam.GenericException as e:
    # Error handling
    print("An exception occurred. {}".format(e))
    exitCode = 1

then = datetime.datetime.now()
execution_time = then - now

print("Execution time {} ms".format(execution_time.microseconds/1000))
Execution time 85 ms

grabusinggrabloopthread

now = datetime.datetime.now()

try:
    # Create and attach all Pylon Devices.
    for i, camera in enumerate(cameras):
        # Start grabbing
        camera.StartGrabbing(pylon.GrabStrategy_OneByOne, pylon.GrabLoop_ProvidedByInstantCamera)

        if camera.WaitForFrameTriggerReady(100, pylon.TimeoutHandling_ThrowException):
            grabResult = camera.ExecuteSoftwareTrigger();

        # Stop grabbing
        camera.StopGrabbing()
            
except genicam.GenericException as e:
    # Error handling
    print("An exception occurred. {}".format(e))
    exitCode = 1


then = datetime.datetime.now()
execution_time = then - now

print("Execution time {} ms".format(execution_time.microseconds/1000))
Execution time 157 ms

We want to use about 10 cameras, so is there any solution that would help us take 10 images within 200 ms?
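
On the hardware trigger part of the question, here is a hedged sketch of putting each camera into hardware-triggered mode so that a single electrical pulse on a shared line starts a frame on all of them at once. The trigger parameter names and "Line1" are assumptions that depend on the camera model and on how the trigger signal is wired.

from pypylon import pylon

tl_factory = pylon.TlFactory.GetInstance()
devices = tl_factory.EnumerateDevices()
cameras = pylon.InstantCameraArray(len(devices))

for i, cam in enumerate(cameras):
    cam.Attach(tl_factory.CreateDevice(devices[i]))
    cam.Open()
    # Fire FrameStart on the rising edge of the shared trigger line.
    cam.TriggerSelector.SetValue("FrameStart")
    cam.TriggerMode.SetValue("On")
    cam.TriggerActivation.SetValue("RisingEdge")
    cam.TriggerSource.SetValue("Line1")  # wired to the common pulse (assumption)

cameras.StartGrabbing(pylon.GrabStrategy_OneByOne)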

The acA3800-14uc can't adjust exposure in 1 us steps

Hi Sir

As the title says, I use the acA3800-14uc camera, and I ran the unit test "./pypylon/tests/pylon_tests/usb/calltest.py", but it failed.

The failure message was:
Traceback (most recent call last):
File "C:\Users\linka\Desktop\dip_aoi\pypylon_github\pypylon\tests\pylon_tests\usb\calltest.py", line 58, in test_exposure_time
self.assertEqual(cam.ExposureTime.Max, cam.ExposureTime.Value)
AssertionError: 1600000.0 != 1599990.0

I then hard-coded the change "cam.ExposureTime.Value+10.0", and it still failed.

The failure message was:
Traceback (most recent call last):
File "C:\Users\linka\Desktop\dip_aoi\pypylon_github\pypylon\tests\pylon_tests\usb\calltest.py", line 61, in test_exposure_time
self.assertEqual(1000, cam.ExposureTime.Value)
AssertionError: 1000 != 1015.0

I opened "pylon 5.1.0 Camera Software Suite Windows" to adjust camera exposure time.
The exposure time seems 35us unit of the time.

I checked the document "https://graftek.biz/system/files/2576/original/Basler_Ace_USB_3.0_Manual.pdf?1479057814"
According to that document, the acA3800-14uc has a 1 us exposure time increment, and the maximum value is 1600000.

Are these inconsistent, or did I install the wrong Windows driver?

Saving images from multiple cameras

Hi, I was able to save an image from a single camera. Can you point me in the right direction to do the same for multiple cameras? I am using grabmultiplecameras.py.

Image Conversion Error

Hello, I'm working on a project where we are taking images and converting them to be saved. However, every once in a while the camera will throw the error: "Pylon Error: Cannot convert image. The passed source image is invalid. : InvalidArgumentException thrown (file 'ImageFormatConverter.cpp', line 77)". When this error happens the camera does not save the taken image, and I'm not too sure why this is happening or how to fix it. I talked to Basler and they thought it might be a bandwidth issue; however, that is not the case. I optimized the bandwidth on the camera and it is still throwing this error. If anyone could give me some insight on this issue that would be great, thanks.

Unable to get color images using ImageFormatConverter

hi there,

I'm unable to run ImageFormatConverter on a GrabResult:

NotImplementedError: Wrong number or type of arguments for overloaded function 'ImageFormatConverter_Convert'. Possible C/C++ prototypes are: Pylon::CImageFormatConverter::Convert(Pylon::IReusableImage &,Pylon::IImage const &) Pylon::CImageFormatConverter::Convert(Pylon::IReusableImage &,Pylon::CGrabResultPtr const &)

Is this function still incomplete?

How to get frame exposure time?

How can I get the exposure time for a particular frame after the grab has finished? I can get the exposure time from the camera object, but if the camera exposure has changed by the time the grab finishes, it would be the wrong exposure time for that frame.
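
One possible approach, sketched under the assumption that the camera supports chunk data (many USB3 ace models do; the node names below are assumptions): have the camera embed the exposure time into each frame as a chunk, so the value read from the grab result belongs to that specific frame.

from pypylon import pylon

camera = pylon.InstantCamera(pylon.TlFactory.GetInstance().CreateFirstDevice())
camera.Open()

# Enable the exposure-time chunk; node names are assumptions.
camera.ChunkModeActive.SetValue(True)
camera.ChunkSelector.SetValue("ExposureTime")
camera.ChunkEnable.SetValue(True)

camera.StartGrabbing(pylon.GrabStrategy_LatestImageOnly)
grab = camera.RetrieveResult(5000, pylon.TimeoutHandling_ThrowException)
if grab.GrabSucceeded():
    # The exposure time actually used for this frame, in microseconds.
    print("frame exposure:", grab.ChunkExposureTime.GetValue())
grab.Release()
camera.StopGrabbing()
camera.Close()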

Configure pixel format, frame rate, bandwidth

I have a C++ program that I would like to convert to pypylon. In the C++ code I do the following:
cout << "Registering Event handlers" << endl;
_cameras[0].RegisterImageEventHandler( new CImageEventPrinter, RegistrationMode_Append, Cleanup_Delete);
_cameras[1].RegisterImageEventHandler( new CImageEventPrinter, RegistrationMode_Append, Cleanup_Delete);

    cout << "Setting pixel format" << endl;
    _cameras[0].PixelFormat.SetValue(Basler_UsbCameraParams::PixelFormat_BayerGR8);
    _cameras[1].PixelFormat.SetValue(Basler_UsbCameraParams::PixelFormat_BayerGR8);

    cout << "Disabling overlap mode" << endl;
    _cameras[0].OverlapMode.SetValue(Basler_UsbCameraParams::OverlapMode_Off);
    _cameras[1].OverlapMode.SetValue(Basler_UsbCameraParams::OverlapMode_Off);

    cout << "Setting frame rate" << endl;
    _cameras[0].AcquisitionFrameRate.SetValue(_desiredFPS);
    _cameras[1].AcquisitionFrameRate.SetValue(_desiredFPS);

    cout << "Setting bandwidth limit" << endl;
    _cameras[0].DeviceLinkThroughputLimitMode = Basler_UsbCameraParams::DeviceLinkThroughputLimitMode_On;
    _cameras[0].DeviceLinkThroughputLimit = _bandwidthLimit;
    _cameras[1].DeviceLinkThroughputLimitMode = Basler_UsbCameraParams::DeviceLinkThroughputLimitMode_On;
    _cameras[1].DeviceLinkThroughputLimit = _bandwidthLimit;

But I can't find a similar method of doing this in the Python code. I've tried the following:
self.cameras.PixelFormat = "BayerGR8"

This doesn't throw any errors, but I don't believe it is working correctly.
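
For comparison, here is a sketch of applying the same settings to each camera of an InstantCameraArray. Assigning cameras.PixelFormat = "BayerGR8" on the array object most likely just sets a plain Python attribute; the GenICam parameters live on the individual cameras, and each one has to be open before its nodes can be written. The frame rate and throughput numbers are placeholders, and AcquisitionFrameRateEnable is an assumption that may not exist on every model.

from pypylon import pylon

tl_factory = pylon.TlFactory.GetInstance()
devices = tl_factory.EnumerateDevices()
cameras = pylon.InstantCameraArray(len(devices))

for i, cam in enumerate(cameras):
    cam.Attach(tl_factory.CreateDevice(devices[i]))
    cam.Open()  # parameters are only reachable on an open camera

    cam.PixelFormat.SetValue("BayerGR8")
    cam.AcquisitionFrameRateEnable.SetValue(True)      # assumption, model-dependent
    cam.AcquisitionFrameRate.SetValue(30.0)             # desired FPS (placeholder)
    cam.DeviceLinkThroughputLimitMode.SetValue("On")
    cam.DeviceLinkThroughputLimit.SetValue(100000000)   # bytes/s (placeholder)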

Camera parameters (i.e. AcquisitionStatusSelector) not available

When consulting the C++ programming guide from https://docs.baslerweb.com some camera parameters are used to control the "single frame acquisition" of the camera. One of the parameters used is:
camera.AcquisitionStatusSelector.SetValue(AcquisitionStatusSelector_FrameTriggerWait);

This is used to check whether a trigger has been received and a frame is available. In pypylon, however, this attribute is not accessible. Is there a particular reason for this, or is there an alternative?
Thanks,
Ludwig

Bandwidth errors on Raspberry Pi

I'm running PyPylon on a Pi 3B+. I can capture images using the basic grab.py example; however, if I plug another camera in and acquire, I'm no longer able to capture images. For example, if I run this script:

import cv2
import time

cap = cv2.VideoCapture(0 + cv2.CAP_V4L2)

_, _ = cap.read()
time.sleep(10)

Even after the other camera has finished capturing (i.e. I wait for the delay), I get:

pi@raspberrypi:~/pypylon/samples $ python3 grab.py
Using device  daA1600-60uc
Error:  3791651083 The image stream is out of sync.
Error:  3791651083 The image stream is out of sync.
Error:  3791651083 The image stream is out of sync.
Error:  3791651083 The image stream is out of sync.
Error:  3791651083 The image stream is out of sync.
Error:  3791651083 The image stream is out of sync.
Error:  3791651083 The image stream is out of sync.
Error:  3791651083 The image stream is out of sync.
Error:  3791651083 The image stream is out of sync.
Error:  3791651083 The image stream is out of sync.

I've also seen payload discarded errors. It's definitely not the same camera, because the Dart doesn't enumerate as a V4L device. Is there some configuration I can change? Could this be due to an undervoltage condition on the Pi or something to that effect?

The script works perfectly well when run on its own.

Camera trigger to capture image

Hello:

I'm trying to get my camera to take an image every time a sensor is triggered. At the moment my setup has a laser sensor in front of the camera, and every time something triggers the sensor it should notify the camera to snap an image. What I need help with is how to take in the sensor input. I have a version of this in C# where I read the PLCamera.LineStatus value and just keep checking whether it is high or not; I'm just not quite sure how to do this in the Python version. Any and all help would be appreciated, thank you.
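
Two sketched options, both with model-dependent parameter names (LineSelector, LineStatus, TriggerSelector, TriggerMode, TriggerSource and "Line1" are assumptions): poll the input line from Python, mirroring the C# LineStatus approach, or let the sensor pulse trigger FrameStart directly inside the camera.

from pypylon import pylon

camera = pylon.InstantCamera(pylon.TlFactory.GetInstance().CreateFirstDevice())
camera.Open()

# Option 1: poll the line state from software.
camera.LineSelector.SetValue("Line1")
if camera.LineStatus.GetValue():
    print("sensor line is high")

# Option 2: configure a hardware trigger so the camera itself reacts to the pulse.
camera.TriggerSelector.SetValue("FrameStart")
camera.TriggerMode.SetValue("On")
camera.TriggerSource.SetValue("Line1")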

Support for IEEE1394 camera

Should pypylon work with an IEEE 1394 "FireWire" camera?
I have two Basler cameras, one GigE (acA2500-14gm) and one IEEE 1394 (scA1400-17fm). Both work in the Pylon Viewer (5.1.0.12681), but with pypylon I can only detect the GigE camera using tlFactory.EnumerateDevices() (in which only one camera gets enumerated) or TlFactory.GetInstance().CreateFirstDevice(). If I only connect the IEEE 1394 camera, it is not detected by pypylon either.
I have pypylon-1.3.1 with Python 2.7.13 and 64-bit Windows 7.

Camera parameters not accessible (Node is not readable. : AccessException thrown in node ...)

This may be a beginner's mistake, but I have installed PyPylon using the binaries from the release page (https://github.com/Basler/pypylon/releases) and I am able to run the "grab" sample fine. However, when I try to access the camera's parameters as described in the Basler C++ guide, I get an error message.

Example:
camera.AutoGainLowerLimit.GetMin()

Causes the message:

Node is not readable. : AccessException thrown in node 'nA0BB' while calling 'AutoGainLowerLimit.GetMin()' (file 'IntegerT.h', line 149)

I am using an acA1600-20um (USB 3.0) on Windows 10 with Spyder 3.2.8 and Pylon 5.0.12.

Any help is much appreciated
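
One common cause of this particular AccessException, offered as a guess, is that the camera was never opened: most nodes only become readable after Open(). A minimal check (the note about GigE models using AutoGainRawLowerLimit instead is also an assumption to verify):

from pypylon import pylon

camera = pylon.InstantCamera(pylon.TlFactory.GetInstance().CreateFirstDevice())
camera.Open()  # without this, many nodes report "Node is not readable"

# AutoGainLowerLimit is the USB/SFNC-2.x name; older GigE models typically
# expose AutoGainRawLowerLimit instead (assumption).
print(camera.AutoGainLowerLimit.GetMin())
camera.Close()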

Pictures not processed chronologically with grabmultiplecameras.py

Hello,

I am using the code sample grabmultiplecameras.py with 3 GigE cameras. It works well, as I can capture the images from all three cameras, but the pictures don't arrive in chronological order.

Is this something I should change in the software, or is this due to the Packet Delay parameter that I set to 5000 for each of the three cameras to have them work on the same switch?

Thanks a lot for your help,

OutputPixelFormat is not supported by the image format converter

Dear All,

I have a Basler camera (acA1600-20uc) which supports the following PixelFormats:

BGR 8
BGRA 8
Bayer 12
Bayer 8
Mono 8
RGB 8

I want to process the captured images with OpenCV, so I convert the image with the converter. The only format the converter accepts is PixelType_BGR8packed; otherwise I get this error:
Traceback (most recent call last):

  File "/home/terbe/rppg-online-python/run_application.py", line 26, in <module>
    converter.OutputPixelFormat = pylon.PixelType_BGR12packed
  File "/home/terbe/anaconda2/lib/python2.7/site-packages/pypylon/pylon.py", line 6347, in SetOutputPixelFormat
    return _pylon.ImageFormatConverter_SetOutputPixelFormat(self, pxl_fmt)
_genicam.RuntimeException: The set output format (36700187) is not a supported by the image format converter. : RuntimeException thrown (file 'ImageFormatConverterImpl.h', line 400)

For my work it is important to get a 12-bit depth image. Is that possible somehow? I attach the modified sample code below:

'''
A simple program for grabbing video from a Basler camera and converting it to an OpenCV image.
Tested on Basler acA1300-200uc (USB3, Linux 64 bit, Python 3.5)

'''
from pypylon import pylon
import cv2
import time

# connecting to the first available camera
camera = pylon.InstantCamera(pylon.TlFactory.GetInstance().CreateFirstDevice())
camera.Open()

camera.Width.Value = 1000
camera.Height.Value = 500
camera.OffsetX.Value = 200
camera.OffsetY.Value = 100
camera.ExposureTime.SetValue(10000)
camera.AcquisitionFrameRate.SetValue(20)

# Grabbing continuously (video) with minimal delay
camera.StartGrabbing(pylon.GrabStrategy_LatestImageOnly)
converter = pylon.ImageFormatConverter()

# converting to opencv bgr format
converter.OutputPixelFormat = pylon.PixelType_BGR12packed
converter.OutputBitAlignment = pylon.OutputBitAlignment_MsbAligned

while camera.IsGrabbing():
    startTime = time.time()
    grabResult = camera.RetrieveResult(5000, pylon.TimeoutHandling_ThrowException)

    if grabResult.GrabSucceeded():
        # Access the image data
        image = converter.Convert(grabResult)
        img = image.GetArray()
        cv2.namedWindow('title', cv2.WINDOW_AUTOSIZE)
        cv2.imshow('title', img)
        k = cv2.waitKey(1)
        if k == 27:
            break
    grabResult.Release()

    runningTime = (time.time() - startTime)
    fps = 1.0/runningTime
    print "%f  FPS" % fps

# Releasing the resource    
camera.StopGrabbing()
cv2.destroyAllWindows()

set a callback for hardware triggers from multiple cameras?

Hello,

I'm planning to create a server which waits for event triggers from multiple cameras and attaches different callbacks for different triggers in a multi-threaded environment.

For example, attach a callback after calling the StartGrabbing method for each camera, so I can grab images in parallel from multiple cameras. Is it possible to do that?

Thanks,
Asav

LogicalErrorException on genicam call

Hi,

When I ran callback.py from the samples directory in the repository, I got an error at line 18:

>>> genicam.Register(camera.GainRaw.Node, callback)
Traceback (most recent call last):
  File "<stdin>", line 1, in <module>
  File "/home/dmirecki/miniconda3/envs/test/lib/python3.5/site-packages/pypylon/pylon.py", line 3768, in __getattr__
    return self.GetNodeMap().GetNode(attribute)
  File "/home/dmirecki/miniconda3/envs/test/lib/python3.5/site-packages/pypylon/genicam.py", line 1472, in GetNode
    return _genicam.INodeMap_GetNode(self, Name)
_genicam.LogicalErrorException: Node not existing (file 'genicam_wrap.cpp', line 16600)

Do you have any idea what went wrong?

How to connect to a remote device which is not in the subnet

Hello:
I need to connect to a remote device. I have the camera's IP, but the device is not in the subnet, and I did not find "CBaslerGigEDeviceInfo", "IGigETransportLayer", or other GigE-related interfaces in pypylon.

So, how do I connect?

Cannot import pypylon.pylon on Windows when running as system account

As part of a CI workflow we had tests trying to import pypylon.pylon, which crashed the interpreter. I had no success trying to recreate the crash until I ran my command line as the system user (using sysinternals psexec: psexec -i -s cmd.exe); our CI agent also runs as the system user. Simply starting the python interpreter and calling 'import pypylon.pylon' crashes the interpreter process. Attaching a debugger showed an uncaught exception:

Unhandled exception at 0x00007FFE2D7AA388 in python.exe: Microsoft C++ exception: GenICam_3_0_Basler_pylon_v5_0::RuntimeException at memory location 0x00000000009DDEA0.

The last (non c runtime or kernel) call on the stack was to Log_MD_VC120_v3_0_Basler_pylon_v5_0.dll!00007ffe0cf6261d()

Anyway, not sure if this is a pypylon thing or Pylon in general, and also not sure if this is a major issue, but it seems things do not work as expected when run using a system user account. Note that there was no Pylon software installed on the system where this crashed (it's a test server).

Saving color image

Hi,

In my project I'm able to snap a picture and save it to a directory of my choosing; however, I would like the image to be saved as a color image, but it keeps saving in black and white. I looked this issue up and people suggested I add the line camera.PixelFormat.SetValue("BayerBG8") to be able to take color images. I looked at the camera manual and it does support "BayerBG8" as the color format and "Mono8" as the black-and-white option (camera: ac2500-20gc), but for some reason it still takes black-and-white images. Please help?!

Saving converted image

Hello all,
I am trying to save a color image, but after I convert the image from the raw "BayerBG8" format to an RGB pixel format it won't save. My question is: after I convert the image to an RGB pixel format, how do I save it? As of right now I am saving the image using (Python) Image.fromarray(result.Array).save(save_image_location + "tmp_img" + ".tiff"), but after I convert the image it throws an error saying "AttributeError: 'PylonImage' object has no attribute 'Array'". Is there a way I can call ImagePersistence.Save() to save the image, as in (C#) ImagePersistence.Save(ImageFileFormat.Tiff, path + "\\" + filename + ".tiff", image), in Python?
Camera is: ac2500-20gc
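
A sketch of one way around this: the object returned by the converter has no Array property, but its GetArray() method returns a numpy array that PIL can write. The output pixel format and file path below are placeholders.

from PIL import Image
from pypylon import pylon

camera = pylon.InstantCamera(pylon.TlFactory.GetInstance().CreateFirstDevice())
camera.Open()

converter = pylon.ImageFormatConverter()
converter.OutputPixelFormat = pylon.PixelType_RGB8packed  # PIL expects RGB order
converter.OutputBitAlignment = pylon.OutputBitAlignment_MsbAligned

camera.StartGrabbing(pylon.GrabStrategy_LatestImageOnly)
grab = camera.RetrieveResult(5000, pylon.TimeoutHandling_ThrowException)
if grab.GrabSucceeded():
    rgb = converter.Convert(grab)  # PylonImage: no .Array, use GetArray()
    Image.fromarray(rgb.GetArray()).save("tmp_img.tiff")  # placeholder path
grab.Release()
camera.StopGrabbing()
camera.Close()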

Getting a thread issue while running the Python program

I have written a program to extract frames from the camera using capture.GrabOne and also by continuously grabbing frames from a thread.
My problem is that many times my program terminates with an error that it is unable to execute a thread while another thread is running.
I am not posting the code (I can post it if required), but I just wanted to get an example of a thread-safe program in Python.
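
Since no code was posted, here is only a generic, hedged sketch of one thread-safe pattern: a single grab thread owns the camera and hands frames to other threads through a queue, so no two threads ever call into the same InstantCamera object at once.

import queue
import threading

from pypylon import pylon

frames = queue.Queue(maxsize=10)
stop = threading.Event()

def grab_loop():
    # All camera access stays inside this one thread.
    camera = pylon.InstantCamera(pylon.TlFactory.GetInstance().CreateFirstDevice())
    camera.Open()
    camera.StartGrabbing(pylon.GrabStrategy_LatestImageOnly)
    while not stop.is_set():
        result = camera.RetrieveResult(5000, pylon.TimeoutHandling_ThrowException)
        if result.GrabSucceeded():
            # put() blocks when the queue is full; drop frames instead if needed.
            frames.put(result.GetArray().copy())
        result.Release()
    camera.StopGrabbing()
    camera.Close()

worker = threading.Thread(target=grab_loop, daemon=True)
worker.start()

# Consumer side: any thread may safely read from the queue.
frame = frames.get()
print("received frame with shape", frame.shape)
stop.set()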
