fsphil / fswebcam
A neat and simple webcam app
Home Page: http://www.sanslogic.co.uk/fswebcam/
License: GNU General Public License v2.0
fswebcam - Small and simple webcam software for *nix.
Created by Philip Heron <[email protected]>
http://www.sanslogic.co.uk/fswebcam/

This is the program used to generate images for a webcam. It captures a number of frames from any V4L or V4L2 compatible device, averages them to reduce noise and draws the details onto the image using the GD Graphics Library, which also handles compressing the image to PNG, JPEG or WEBP.

INSTALLING

Run the following commands in the source folder to build and install fswebcam:

./configure --prefix=/usr
make
make install

Its only requirement is that the GD library be installed with JPEG, PNG and FreeType support.
I need to run multiple cameras simultaneously on a Raspberry Pi.
Error messages:
Insufficient buffer memory.
Unable to use mmap. Using read instead.
Unable to use read.
I've been playing around with a few methods to capture images from a USB OV2710-based camera. I liked the idea of fswebcam (instead of using mplayer or ffmpeg), but I've been running into some issues:
Regularly, most frequently when recently switched on, the image progressively gets darker and darker, even though the command line parameters don't change at all. After maybe 10-15 exposures the image stored is completely black. Suddenly, after some time, things start working, and continue to work for hours.
--set brightness=xx doesn't seem to have any effect, even though Brightness does appear in the controls list.
setting exposure to manual, then setting certain exposure values, doesn't seem to have any effect either.
The camera comes with a 3m cable. I have some reservations about that, with the USB specs stating 1.8m as a maximum. Still, no signs of protocol errors. No errors in the downloaded images either.
I'd appreciate suggestions for further testing!
I would like fswebcam to have a 2-second initialization delay, then take 6 images with different names and a delay of 0.5 seconds between each image, and then quit. From reading the man page, I think I can only start an infinite loop in which each image gets overwritten by the next one.
Right now I am using http://linux.die.net/man/1/streamer with streamer -c /dev/video0 -q -s 1920x1080 -t 10 -r 2 -j 100 -o ~/00.ppm
to create 00.ppm to 09.ppm, delete the first 4 (too dark or broken) and convert the last 6 to JPG. If I use streamer to directly create JPGs, there are strange artefacts in red-coloured image parts, which doesn't happen when using PPM. Maybe it's related to MJPG vs. YUYV!? So streamer doesn't support an initial delay, nor produce usable JPGs.
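In the meantime, a short wrapper can provide the warm-up delay and distinct filenames that fswebcam's loop mode lacks. A sketch in Python with the capture runner injected so the pacing logic is testable; the frame-NN.jpg names and the 640x480 resolution are illustrative assumptions, not anything fswebcam requires:

```python
import subprocess
import time

def capture_sequence(runner, count=6, init_delay=2.0, gap=0.5):
    """Take `count` stills with distinct names after a warm-up delay.

    `runner` is called with the argument list for one capture; in real
    use it would be subprocess.run. The frame-NN.jpg names and the
    640x480 resolution are illustrative assumptions.
    """
    time.sleep(init_delay)
    names = []
    for i in range(count):
        name = "frame-%02d.jpg" % i
        runner(["fswebcam", "-r", "640x480", "--no-banner", name])
        names.append(name)
        if i < count - 1:
            time.sleep(gap)
    return names

# Real usage: capture_sequence(subprocess.run)
```

This sidesteps the overwriting problem entirely, since each capture is a fresh fswebcam invocation with its own filename.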
This feature would help us get rid of hot pixels and dust motes.
I hope the title says it all. I think this could be a great enhancement.
Thanks for all your hard work!
I'm trying to configure fswebcam on a Raspberry Pi. It gives me:
configure: error: GD graphics library not found
What do I have to install? I tried with libgd2-xpm.
Please tag 20140113 release
Add support to fswebcam for libv4l. This library supports more frame types and formats than fswebcam does natively.
The help page says that PNG compression factor values are in the range 0-10, but they should be 0-9.
It is correct in the man page.
Hi Phil,
Do you think you can make a new release of the software on this website?
http://www.sanslogic.co.uk/fswebcam/
The current one is pretty old.
Thanks !
Hello,
I'm testing fswebcam with a Logitech Webcam C930e on ubuntu 16.04 LTS.
Running:
fswebcam -r 1920x1080 --no-banner --no-title --jpeg 95 -d /dev/video1 /tmp/output.jpg -v
produces the following verbose log:
--- Opening /dev/video1...
Trying source module v4l2...
/dev/video1 opened.
src_v4l2_get_capability,87: /dev/video1 information:
src_v4l2_get_capability,88: cap.driver: "uvcvideo"
src_v4l2_get_capability,89: cap.card: "Logitech Webcam C930e"
src_v4l2_get_capability,90: cap.bus_info: "usb-0000:00:14.0-2"
src_v4l2_get_capability,91: cap.capabilities=0x84200001
src_v4l2_get_capability,92: - VIDEO_CAPTURE
src_v4l2_get_capability,103: - STREAMING
No input was specified, using the first.
src_v4l2_set_input,181: /dev/video1: Input 0 information:
src_v4l2_set_input,182: name = "Camera 1"
src_v4l2_set_input,183: type = 00000002
src_v4l2_set_input,185: - CAMERA
src_v4l2_set_input,186: audioset = 00000000
src_v4l2_set_input,187: tuner = 00000000
src_v4l2_set_input,188: status = 00000000
src_v4l2_set_pix_format,520: Device offers the following V4L2 pixel formats:
src_v4l2_set_pix_format,533: 0: [0x56595559] 'YUYV' (YUYV 4:2:2)
src_v4l2_set_pix_format,533: 1: [0x47504A4D] 'MJPG' (Motion-JPEG)
Using palette MJPEG
src_v4l2_set_mmap,672: mmap information:
src_v4l2_set_mmap,673: frames=4
src_v4l2_set_mmap,722: 0 length=4147200
src_v4l2_set_mmap,722: 1 length=4147200
src_v4l2_set_mmap,722: 2 length=4147200
src_v4l2_set_mmap,722: 3 length=4147200
--- Capturing frame...
Captured frame in 0.00 seconds.
--- Processing captured image...
Disabling banner.
Clearing title.
Setting output format to JPEG, quality 95
Writing JPEG image to '/tmp/output.jpg'.
Up to here everything looks fine, but the output's pixels appear shifted:
Does anyone have any idea what's happening?
In the same environment, the webcam works fine with FFmpeg and with web apps that use the device through the browser.
In fswebcam.c, the help message will tell you to use:
-o, --output
To write the log to a file.
But from the manpage (and reality), we should use --log instead
Bug opened against fswebcam on Ubuntu
https://bugs.launchpad.net/ubuntu/+source/fswebcam/+bug/1475179
First off, thanks for such a great little utility.
Is there a way to emit the output image to stdout? I don't see a way with the options - this would be super useful for piping into another program.
Hello,
I am trying to take photos from a web cam (Logitech c920) with manually selected exact camera settings. I am loading configuration file with --config flag.
device /dev/video0
delay 1
skip 20
jpeg 80
resolution 1600x896
no-banner
set "White Balance Temperature, Auto"=False
set "White Balance Temperature"=6500
set "Focus, Auto"=False
set "Focus (absolute)"=72
set "Exposure, Auto"=False
set "Exposure (absolute)"=215
set "Gain"=1
frames 1
I think these settings are not being applied correctly, since I get different results on every try.
Is there a missing configuration parameter? Or is it a hardware problem?
Thank you
Edit:
Taking a few pictures before adjusting the camera settings solved my problem.
Additionally, setting the -S (skip frames) argument to a large value (100 in my case) gives absolutely stable results.
Hi.
I have this project:
https://github.com/morphex/surveil
Where I just added some code to re-run fswebcam if an error is detected in the output. However, I think this should be an option in fswebcam itself: re-capture the image if an error is detected.
The error in question is:
GD Error: gd-jpeg: JPEG library reports unrecoverable error: Unsupported marker type 0x28
Captured frame in 0.00 seconds.
Regards,
Morten
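Until such a retry option exists in fswebcam, the re-capture can live in a wrapper. A sketch, assuming the GD error text appears on the command's stderr (as the log above suggests); the try count is arbitrary, and the runner is injectable so the retry logic can be exercised without a camera:

```python
import subprocess

ERROR_MARK = "GD Error"  # the gd-jpeg failure text seen in the log above

def capture_with_retry(argv, run=subprocess.run, max_tries=5):
    """Re-run a capture command while its output mentions a GD error.

    `run` must return an object with a `stderr` string; subprocess.run
    with capture_output=True, text=True does. Returns True on a clean
    capture, False if every attempt produced the error text.
    """
    for _ in range(max_tries):
        result = run(argv, capture_output=True, text=True)
        if ERROR_MARK not in (result.stderr or ""):
            return True
    return False

# Real usage (hypothetical arguments):
# capture_with_retry(["fswebcam", "-r", "640x480", "capture.jpg"])
```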
Is there a way to get all frames captured, or at least enough frames to make the output look real-time? The current --loop flag doesn't allow fractions, so that route doesn't work.
You are using integer math when summing frames, to increase processing speed. Integer math can retain the precision of floating point values even though you use integers. But then (e.g. when adding YUYV images) you artificially reduce the precision of your result to 8 bits before you sum (average) the frames. You would get a much better result if you first summed them and only clipped afterwards. I have produced a version of the code which uses 16-bit integer precision combined with 32-bit buffers to achieve better image quality when adding images that are internally available in 16-bit precision anyway.
Is it possible to capture images in loop mode with a delay of less than one second? I'd like to capture images at say 15 fps.
Hi, I am using fswebcam to capture some images from USB cameras via the command line, but the image quality is a little different from images I captured using manual software; the result is not good, and I don't know how to adjust it. I need some help, thanks.
It would be a nice feature to be able to run a command after each frame is captured in --loop mode, so we can do external processing on it without having to jump through hoops to determine whether the image is still being written.
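One interim workaround is to have the external processor wait until the output file has stopped growing before touching it. A polling sketch; the settle window, poll interval and timeout are arbitrary choices, and the clock and sleep functions are injectable only to keep the logic testable:

```python
import os
import time

def wait_until_stable(path, settle=0.5, poll=0.1, timeout=10.0,
                      clock=time.monotonic, sleep=time.sleep):
    """Return True once `path` has kept the same size for `settle` seconds.

    Returns False if the file never stabilises (or never appears)
    within `timeout` seconds.
    """
    deadline = clock() + timeout
    last_size, last_change = -1, clock()
    while clock() < deadline:
        size = os.path.getsize(path) if os.path.exists(path) else -1
        now = clock()
        if size != last_size:
            last_size, last_change = size, now
        elif size >= 0 and now - last_change >= settle:
            return True
        sleep(poll)
    return False
```

An inotify CLOSE_WRITE watch would be cleaner on Linux, but the polling version has no dependencies beyond the standard library.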
I'm working with a Logitech webcam, and the controls have complex names:
griscom@dell:~$ fswebcam --list-controls
--- Opening /dev/video0...
Trying source module v4l2...
/dev/video0 opened.
No input was specified, using the first.
Available Controls Current Value Range
------------------ ------------- -----
Brightness 128 (50%) 0 - 255
Contrast 32 (12%) 0 - 255
Saturation 32 (12%) 0 - 255
White Balance Temperature, Auto True True | False
Gain 45 (17%) 0 - 255
Power Line Frequency 60 Hz Disabled | 50 Hz | 60 Hz
White Balance Temperature 2800 (0%) 2800 - 6500
Sharpness 22 (8%) 0 - 255
Backlight Compensation 1 0 - 1
Exposure, Auto Aperture Priority Mode Manual Mode | Aperture Priority Mode
Exposure (Absolute) 83 (3%) 3 - 2047
Exposure, Auto Priority True True | False
Pan (Absolute) 0 (50%) -36000 - 36000
Tilt (Absolute) 0 (50%) -36000 - 36000
Focus (absolute) 0 (0%) 0 - 255
Focus, Auto False True | False
Zoom, Absolute 1 1 - 5
Adjusting resolution from 384x288 to 352x288.
--- Capturing frame...
Captured frame in 0.00 seconds.
--- Processing captured image...
There are unsaved changes to the image.
griscom@dell:~$
A big barrier to figuring this out is that the --set option for fswebcam fails silently if a control isn't known or supported by the camera. Since I didn't know that fswebcam would report each properly set control, and the control effects weren't clear (Logitech's fault, but it made it worse), I was just fishing around with differently styled names.
Suggestion: have fswebcam either output a warning, or fail completely, if the user tries to set a control that doesn't exist.
Thanks,
Dan
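As a stopgap, a caller can validate control names against the --list-controls output before passing them to --set. A sketch that assumes the column layout shown in the listing above (the control name separated from the current value by two or more spaces); the header-row name is also an assumption based on that listing:

```python
import re

def known_controls(listing):
    """Extract control names from `fswebcam --list-controls` output.

    Assumes the layout shown above: the name column is separated from
    the current value by runs of two or more spaces.
    """
    names = []
    for line in listing.splitlines():
        m = re.match(r"([A-Za-z][\w ,()]*?)\s{2,}\S", line)
        if not m:
            continue
        name = m.group(1).strip()
        if name != "Available Controls":  # skip the header row
            names.append(name)
    return names
```

A script could then warn (or abort) itself before ever invoking --set with a name the camera does not expose.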
Just that: an option to take the photo and put it in the clipboard. No file.
There seem to be cases in v4l2 devices where the menu options are not contiguous. For example (Logitech C920 v4l2-ctl -L output):
exposure_auto 0x009a0901 (menu) : min=0 max=3 default=3 value=1
1: Manual Mode
3: Aperture Priority Mode
When trying to change this setting, the code cycles through each menu item to compare it with the input text. Since indexes 0 and 2 don't have text, the error message is printed out.
I'd suggest not printing an error there, since it's not really a problem; if the value doesn't match, it will be caught below. A warning or a message would be more appropriate, if something must be printed at all.
File: src_v4l1.c
Bug function: src_v4l_open
Version: Git-master
At lines 788-792:
if(src->use_read && src_v4l_set_read(src))
{
    src_v4l_close(src);
    return(-1);
}
In src_v4l_set_read, if the allocation at line 667 (s->buffer = malloc(s->buffer_length);) fails, it releases src at line 671 and returns -1. So back at line 788, if src->use_read is set, line 790 will then be executed on the already-freed pointer, leading to a use-after-free.
I kept getting blank images, but guvcview worked fine. I couldn't figure it out until I read about --skip 10 somewhere in the issues. That solved it, but I wasted 2-3 hours trying to work out why I wasn't getting images. You really ought to suggest --skip 10 near the beginning of the description instead of burying it down in the list of options.
Hey guys, this is what I get. I am trying to use my Logitech camera and I cannot seem to get an image.
pi@raspberrypi /dev $ fswebcam -f image.mjpg -v
--- Opening /dev/video0...
Trying source module v4l2...
/dev/video0 opened.
src_v4l2_get_capability,87: /dev/video0 information:
src_v4l2_get_capability,88: cap.driver: "uvcvideo"
src_v4l2_get_capability,89: cap.card: "HD Pro Webcam C920"
src_v4l2_get_capability,90: cap.bus_info: "usb-bcm2708_usb-1.3"
src_v4l2_get_capability,91: cap.capabilities=0x04000001
src_v4l2_get_capability,92: - VIDEO_CAPTURE
src_v4l2_get_capability,103: - STREAMING
No input was specified, using the first.
src_v4l2_set_input,181: /dev/video0: Input 0 information:
src_v4l2_set_input,182: name = "Camera 1"
src_v4l2_set_input,183: type = 00000002
src_v4l2_set_input,185: - CAMERA
src_v4l2_set_input,186: audioset = 00000000
src_v4l2_set_input,187: tuner = 00000000
src_v4l2_set_input,188: status = 00000000
src_v4l2_set_pix_format,541: Device offers the following V4L2 pixel formats:
src_v4l2_set_pix_format,554: 0: [0x56595559] 'YUYV' (YUV 4:2:2 (YUYV))
src_v4l2_set_pix_format,554: 1: [0x34363248] 'H264' (H.264)
src_v4l2_set_pix_format,554: 2: [0x47504A4D] 'MJPG' (MJPEG)
Unable to find a compatible palette format.
Any suggestions?
I've used "aptitude install fswebcam" on my raspberrypi (debian)
and "fswebcam -r 640x480 -d /dev/video0 -v /tmp/test.jpg", but test.jpg looks pretty messy (green pixel areas, some areas in swapped positions, so that the picture looks more like a puzzle). The picture looks just the same at different resolutions.
When I plug the webcam into my Windows PC, the image is just fine, so it should be a software/driver problem.
Here is a short log from dmesg:
[ 2.515488] usb 1-1.2: new high-speed USB device number 4 using dwc_otg
[ 2.727173] usb 1-1.2: New USB device found, idVendor=093a, idProduct=2700
[ 2.733221] usb 1-1.2: New USB device strings: Mfr=16, Product=96, SerialNumber=0
[ 2.740215] usb 1-1.2: Product: USB2.0_Camera
[ 2.746176] usb 1-1.2: Manufacturer: PixArt Imaging Inc.
[ 3.356516] udevd[142]: starting version 175
[ 4.386705] Linux video capture interface: v2.00
[ 4.500902] uvcvideo: Found UVC 1.00 device USB2.0_Camera (093a:2700)
[ 4.590241] input: USB2.0_Camera as /devices/platform/bcm2708_usb/usb1/1-1/1-1.2/1-1.2:1.0/input/input0
[ 4.712367] usbcore: registered new interface driver uvcvideo
[ 4.905205] USB Video Class driver (1.1.1)
I'm using an Alcor OMEA allsky camera, which seems to be based on a DMK 51AU02.AS camera by The Imaging Source. This camera (and others in its family) has the unfortunate feature that it lies about its pixel format: it reports BA81 (a BGGR pattern) while in fact the correct format would be SGBRG8. This leads to the debayering being done with the wrong pattern, and messed-up colours.
Since there's no way to force the correct pattern when the camera itself is wrong, I had to make a fork with an ugly hack that basically just skips the check against the value given by the camera. That works fine for me, so it's not a big deal now. This is more of an FYI.
The same issue is dealt with in the oaCapture software by a similar hack: https://github.com/openastroproject/openastro/blob/master/liboacam/v4l2/V4L2getState.c#L117
Hello,
I am reading the library code and I want to get the raw data before it goes to the encoder (JPEG, PNG). Is the variable src.img the raw data of the image?
The busybox implementation of gzip doesn't support the alias option --best. It does support -9, which is equivalent. The Makefile would be more portable (e.g. it would work on Alpine Linux) if the more universal flag -9 was used instead.
fswebcam asks for sudo permission even if I run it with sudo. This makes it useless in any script.
Needing sudo permission is fine, but asking with a pop-up window even when it has been run with sudo is weird.
The last issues and PRs seem not to have been taken into account... is there any problem?
Thanks.
I see there's a delay option, but it's not working for me to allow the camera to autofocus. It seems like it delays the program, then opens the webcam channel and takes a picture.
I think I need the channel open (camera on) for a delay of X seconds, before it captures the image. This allows autofocus to complete.
My camera is the Microsoft LifeCam Cinema
Is there a way to do this?
Preferably via something like --eval, which would work like --exec but treat the output as a config file. It should be evaluated on each loop iteration (so that you can add non-static titles, like temperature, or other dynamic information).
(I thought that --exec/--config would work, but it seems that --config is only loaded once at startup, and config files can't import other config files.)
Reading the code, it seems that --config/--exec would sort of work if the scripts sent a SIGHUP, but the updates would be one capture behind :-(
eval sounds cooler :-)
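The SIGHUP route described above could be scripted like this. It assumes the behaviour inferred from the code (fswebcam re-reads its config on SIGHUP), that the looping fswebcam's PID is known, and an illustrative title directive; none of this is a confirmed fswebcam API:

```python
import os
import signal

def update_title(config_path, pid, title):
    """Rewrite a one-line config with a new title, then nudge fswebcam.

    Relies on the behaviour described above: fswebcam re-reads its
    config on SIGHUP, so the new title would apply from the next
    capture (one behind). The `title` directive is illustrative.
    """
    with open(config_path, "w") as f:
        f.write('title "%s"\n' % title)
    os.kill(pid, signal.SIGHUP)
```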
Hey guys,
fswebcam worked pretty well until a few days ago.
My program code:

import os
import time

i = 0
range = 1000
while i <= range:
    os.system('sudo fswebcam -v /home/pi/pictures/%Y-%m-%d_%H:%M:%S.jpg')
    i = i + 1
    time.sleep(300)
Pretty simple, and it worked fine until a few days ago. After taking around 10 pictures, the following error occurs:
--- Opening /dev/video0...
stat: No such file or directory
Turning my Raspberry Pi off and on again worked, but it's now the third time this error has occurred, and I didn't get the pictures through the night.
Anybody have any suggestions?
It's running on a Raspberry Pi 3 B+ with Raspberry Pi OS.
Thanks in advance,
bugy186
Hi,
I'm trying to set a non-English (Hungarian) locale for the timestamp, so that the timestamp displays the weekday in Hungarian.
Even when the Linux locale is set to Hungarian, so that the "date" command shows weekdays in Hungarian, the timestamp in fswebcam keeps using the English locale.
Any idea how to force a locale, or get fswebcam to use the system default?
Thanks
Hi guys, thanks for this great tool.
I'm a newbie, trying fswebcam in the Raspbian Linux smart mirror project. Although a USB webcam works without any issue, the Raspberry Pi camera module gives an error. Can you please help?
Best regards,
fswebcam -r 1280x1024 --no-banner image1.jpeg+
--- Opening /dev/video0...
Trying source module v4l2...
/dev/video0 opened.
No input was specified, using the first.
Error starting stream.
VIDIOC_STREAMON: Invalid argument
Unable to use mmap. Using read instead.
--- Capturing frame...
VIDIOC_DQBUF: Invalid argument
Segmentation fault
ozgur@raspberrypi:~$ v4l2-ctl --list-devices
bcm2835-codec-decode (platform:bcm2835-codec):
/dev/video10
/dev/video11
/dev/video12
/dev/video18
/dev/video31
/dev/media4
bcm2835-isp (platform:bcm2835-isp):
/dev/video13
/dev/video14
/dev/video15
/dev/video16
/dev/video20
/dev/video21
/dev/video22
/dev/video23
/dev/media1
/dev/media3
unicam (platform:fe801000.csi):
/dev/video0
/dev/video1
/dev/media0
rpivid (platform:rpivid):
/dev/video19
/dev/media2
djh@djh-dell-p5510:~/fswebcam$ fswebcam -q -d /dev/video10
stat: No such file or directory
djh@djh-dell-p5510:~/fswebcam$ echo $?
0
Use case is a cron:
fswebcam -q -d /dev/video1 -r 1920x1080 /home/djh/Dropbox/webcam/\%Y\%m\%d/\%H\%M.png || fswebcam -q -d /dev/video0 -r 1920x1080 /home/djh/Dropbox/webcam/\%Y\%m\%d/\%H\%M.png
The idea is that if the external webcam isn't hooked up, the portable computer's internal webcam is used instead. But when fswebcam fails to open video1, it simply returns 0.
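Until the exit status reflects the failure, a wrapper can drive the fallback from whether the output file actually appeared. A sketch with an injectable runner; the device list and 1920x1080 resolution are taken from the cron line above, while the fixed output name is a simplification:

```python
import os
import subprocess

def capture_with_fallback(devices, outfile, run=subprocess.run):
    """Try each device in turn; succeed once `outfile` exists and is non-empty.

    Needed because fswebcam may return 0 even when it could not open
    the device, so the exit code alone cannot drive the fallback.
    Returns the device that produced the file, or None.
    """
    for dev in devices:
        run(["fswebcam", "-q", "-d", dev, "-r", "1920x1080", outfile])
        if os.path.exists(outfile) and os.path.getsize(outfile) > 0:
            return dev
    return None

# Real usage: capture_with_fallback(["/dev/video1", "/dev/video0"], "shot.jpg")
```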
src_v4l1.c:101]: (style) Boolean result is used in bitwise operation. Clarify expression with parentheses.
The source code is:
if(!vd->type & VID_TYPE_CAPTURE)
Better:
if(!(vd->type & VID_TYPE_CAPTURE))
Also:
src_v4l2.c:106]: (style) Boolean result is used in bitwise operation. Clarify expression with parentheses.
Duplicate.
If a resolution is specified using just a width and no height, fswebcam will freeze and the host system becomes unresponsive.
$ fswebcam -d test -r 640 output.jpeg
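For reference, a defensive parse that rejects a width-only spec might look like this, sketched in Python rather than the actual C option parser; the fallback of 384x288 is the default resolution seen elsewhere in these logs:

```python
def parse_resolution(spec, default=(384, 288)):
    """Parse 'WIDTHxHEIGHT'; fall back unless both parts are positive ints.

    A width-only spec like '640' (the hanging case reported above)
    comes back as the default instead of leaving the height undefined.
    """
    parts = spec.lower().split("x")
    if len(parts) != 2:
        return default
    try:
        w, h = int(parts[0]), int(parts[1])
    except ValueError:
        return default
    if w <= 0 or h <= 0:
        return default
    return (w, h)
```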
I loved fswebcam as soon as I saw its man page full of options and features. Unfortunately, problems started as soon as I tested it. Initially I had a palette/format issue:
$ fswebcam -f /var/www/image.jpg -v
--- Opening /dev/video0...
Trying source module v4l2...
/dev/video0 opened.
src_v4l2_get_capability,87: /dev/video0 information:
src_v4l2_get_capability,88: cap.driver: "pac7302"
src_v4l2_get_capability,89: cap.card: "USB Camera (093a:262c)"
src_v4l2_get_capability,90: cap.bus_info: "usb-bcm2708_usb-1.2"
src_v4l2_get_capability,91: cap.capabilities=0x05000001
src_v4l2_get_capability,92: - VIDEO_CAPTURE
src_v4l2_get_capability,101: - READWRITE
src_v4l2_get_capability,103: - STREAMING
No input was specified, using the first.
src_v4l2_set_input,181: /dev/video0: Input 0 information:
src_v4l2_set_input,182: name = "pac7302"
src_v4l2_set_input,183: type = 00000002
src_v4l2_set_input,185: - CAMERA
src_v4l2_set_input,186: audioset = 00000000
src_v4l2_set_input,187: tuner = 00000000
src_v4l2_set_input,188: status = 00000000
src_v4l2_set_pix_format,541: Device offers the following V4L2 pixel formats:
src_v4l2_set_pix_format,554: 0: [0x47504A50] 'PJPG' (PJPG)
Unable to find a compatible palette format.
It seems my webcam supports only the PJPG format, which is not supported by your application. I had to run it using:
$ LD_PRELOAD=/usr/lib/arm-linux-gnueabihf/libv4l/v4l2convert.so fswebcam -f /var/www/image.jpg -v
This way things start to work. But fswebcam only manages to capture a frame on roughly 1 attempt out of 10. Most of the time I get VIDIOC_DQBUF: Resource temporarily unavailable:
$ LD_PRELOAD=/usr/lib/arm-linux-gnueabihf/libv4l/v4l2convert.so fswebcam -v -f /var/www/image.jpg
--- Opening /dev/video0...
Trying source module v4l2...
/dev/video0 opened.
src_v4l2_get_capability,87: /dev/video0 information:
src_v4l2_get_capability,88: cap.driver: "pac7302"
src_v4l2_get_capability,89: cap.card: "USB Camera (093a:262c)"
src_v4l2_get_capability,90: cap.bus_info: "usb-bcm2708_usb-1.2"
src_v4l2_get_capability,91: cap.capabilities=0x05000001
src_v4l2_get_capability,92: - VIDEO_CAPTURE
src_v4l2_get_capability,101: - READWRITE
src_v4l2_get_capability,103: - STREAMING
No input was specified, using the first.
src_v4l2_set_input,181: /dev/video0: Input 0 information:
src_v4l2_set_input,182: name = "pac7302"
src_v4l2_set_input,183: type = 00000002
src_v4l2_set_input,185: - CAMERA
src_v4l2_set_input,186: audioset = 00000000
src_v4l2_set_input,187: tuner = 00000000
src_v4l2_set_input,188: status = 00000000
src_v4l2_set_pix_format,541: Device offers the following V4L2 pixel formats:
src_v4l2_set_pix_format,554: 0: [0x33424752] 'RGB3' (RGB3)
src_v4l2_set_pix_format,554: 1: [0x33524742] 'BGR3' (BGR3)
src_v4l2_set_pix_format,554: 2: [0x32315559] 'YU12' (YU12)
src_v4l2_set_pix_format,554: 3: [0x32315659] 'YV12' (YV12)
Using palette RGB24
Adjusting resolution from 384x288 to 640x480.
src_v4l2_set_mmap,693: mmap information:
src_v4l2_set_mmap,694: frames=4
src_v4l2_set_mmap,741: 0 length=16777216
src_v4l2_set_mmap,741: 1 length=16777216
src_v4l2_set_mmap,741: 2 length=16777216
src_v4l2_set_mmap,741: 3 length=16777216
--- Capturing frame...
VIDIOC_DQBUF: Resource temporarily unavailable
No frames captured.
I tried everything (a high delay with the -D option, high frame skipping with -S, a high number of frames with -F, and -R too), but none of these improve anything. Capture still only works about 10% of the times I run it. Any idea?
I am capturing images with a webcam that supports up to 2048x1536 resolution in Linux. (The camera is capable of 2560x1920).
When I capture images at 2048x1536, it appears to capture a portion (starting from the bottom left) of the 2560x1920 frame. See the image below: the eagle on the left is the same size as on the right, showing that the image was cropped.
This only happens at resolutions over 1280x960.
I have two cameras, one that works and one that crops above 1280. Both cameras are identical models.
I ran lsusb -v on the device, and the only difference I saw between the two cameras was this:
Working camera:
bmVideoStandards 0xff
None
NTSC - 525/60
PAL - 625/50
SECAM - 625/50
NTSC - 625/50
PAL - 525/60
Camera that crops:
bmVideoStandards 0x7f
None
NTSC - 525/60
PAL - 625/50
SECAM - 625/50
NTSC - 625/50
PAL - 525/60
Any Ideas?
Thanks!
It's in the manpage, but not in the usage text.
I have the same problem as another issue that was closed. The image gets corrupted, with blocks that are out of place. Sometimes the blocks have different brightness, suggesting that they come from different frames. A corrupted image can be seen at: http://www.i2cchip.com/fswebcam/lastImage.jpg
Initially I thought the voltage was dipping while taking an image, but I used a powered hub and an oscilloscope to confirm that the voltage does not dip, and it still happens. I am using fswebcam on a TP-Link WR703N running OpenWrt.
Hi,
I'm unable to capture an image from my web cam with fswebcam. It always says "Unable to find a compatible palette format.". I've tried the package from Ubuntu 10.04 (lucid) and I've compiled the latest source code myself.
This is what I get when I try to run fswebcam:
user@host:~/webcam$ sudo fswebcam -v
--- Opening /dev/video0...
Trying source module v4l2...
/dev/video0 opened.
src_v4l2_get_capability,82: /dev/video0 information:
src_v4l2_get_capability,83: cap.driver: "STV06xx"
src_v4l2_get_capability,84: cap.card: "Camera"
src_v4l2_get_capability,85: cap.bus_info: "usb-0000:00:01.2-1"
src_v4l2_get_capability,86: cap.capabilities=0x05000001
src_v4l2_get_capability,87: - VIDEO_CAPTURE
src_v4l2_get_capability,96: - READWRITE
src_v4l2_get_capability,98: - STREAMING
No input was specified, using the first.
src_v4l2_set_input,176: /dev/video0: Input 0 information:
src_v4l2_set_input,177: name = "STV06xx"
src_v4l2_set_input,178: type = 00000002
src_v4l2_set_input,180: - CAMERA
src_v4l2_set_input,181: audioset = 00000000
src_v4l2_set_input,182: tuner = 00000000
src_v4l2_set_input,183: status = 00000000
src_v4l2_set_pix_format,536: Device offers the following V4L2 pixel formats:
src_v4l2_set_pix_format,549: 0: [0x47425247] 'GRBG' (GRBG)
Unable to find a compatible palette format.
I've tried every available palette format, but nothing works.
My web cam is an older Logitech QuickCam.
Do you have an idea what I can do to get the cam working?
I see a request for dark frame removal in the list of issues. Is it possible to tweak your averaging function to add a multiplying feature between frames?
I'm not an expert here, but I'm guessing it'd need a customisable threshold value, so that dark areas become darker (negative multiplication) and light areas become lighter.
A rudimentary imaging "stacking" feature, if you will.
Love your software, I use it all the time!!!
I have realized that writing PNG files with the highest level of compression is pretty slow (almost 5 times slower than compressing a raw image with, for example, ImageMagick). I believe the reason is unnecessary (and maybe even unwanted) conversions in libgd. I see the beauty of libgd in needing only one library, but since libgd already links with libpng anyway, it will be available. I would volunteer to implement the option to write images directly with libpng, as I believe it will be significantly faster.