
A growing collection of bare-bones GUI examples for python (PyQt4, Python 3)

Home Page: http://www.SWHarden.com

License: MIT License


python-gui-examples's Introduction

Python-GUI-examples

This repository is a collection of minimal-case examples for performing common GUI-related tasks in Python. Emphasis is placed on tasks involving linear data plotting, image processing, and audio/frequency analysis.

Examples (screenshots omitted):

  • scrolling live data with PlotWidget: extremely high-speed graphing designed for realtime updates in GUI applications
  • scrolling live data with MatplotlibWidget (PyQt4): pretty graphs with the Matplotlib library (which many people already know how to use), but likely too slow for realtime / interactive graphing
  • live PCM and FFT plotting with pyqtgraph (based on PlotWidget)
  • a quick and simple pyqtgraph example that launches an interactive chart from a set of X/Y data


python-gui-examples's People

Contributors

jduchniewicz, shen-cs, swharden


python-gui-examples's Issues

SWHear: input_device_index not used in stream_start

In SWHear.py it is possible to set the device by id in the constructor (__init__). This id is verified (by the valid_input_devices function); in fact, about half of the class deals with selecting the right audio source. However, in stream_start, the following parameter to self.p.open() is missing:

input_device_index=self.device

...leading to opening the default (?) audio device, not the one specified.

So I suggest the following change in SWHear.py, around line 146: replace

    self.stream=self.p.open(format=pyaudio.paInt16,channels=1,
        rate=self.rate,input=True,frames_per_buffer=self.chunk)

with

    self.stream=self.p.open(format=pyaudio.paInt16,channels=1,
        rate=self.rate,input=True,frames_per_buffer=self.chunk,
        input_device_index=self.device)

By the way, thanks for providing the audio/FFT/graph sample. It is a nice starting point for visualizing the data captured by my "analog accelerometer connected to USB sound card" setup.

[Errno -9981] Input overflowed

Hello,

I am using your Python program in the folder "2016-07-37_qt_audio_monitor", and after it has been running for a while I get the following error: "[Errno -9981] Input overflowed".

How can I fix it? The program is running on a Raspberry Pi 3 B.

Thank you.

Improve usage of the sounddevice module

Sorry for spamming here, I wanted to comment on https://www.swharden.com/wp/2016-07-19-realtime-audio-visualization-in-python/ but I couldn't manage to comment over there ...

You are showing this code snippet:

import sounddevice #pip install sounddevice

for i in range(30): #30 updates in 1 second
    rec = sounddevice.rec(44100/30)
    sounddevice.wait()
    print(rec.shape)

The function sounddevice.rec() is really not meant to be used repeatedly in quick succession.
It's really just a high-level wrapper for the case where you want to record one piece of audio and you know the duration beforehand. It's no wonder that it creates glitches if you use it like in your example, because for each call a new "stream" object is created, audio processing is started, then the (very short) recording is done, audio processing is stopped and the "stream" object is destroyed. Just to create a new "stream" object an instant later ...

Since you are using stream.read() from PyAudio, you could as well use stream.read() from the sounddevice module: https://python-sounddevice.readthedocs.io/en/latest/api.html#sounddevice.Stream.read.
This uses the exact same underlying PortAudio function call, it just does the conversion to a NumPy array internally without you having to worry about it.

This should take about the same amount of resources as PyAudio (only a bit more for the very little overhead of CFFI calls in Python code, probably not really measurable).
And it should definitely not produce any glitches (given that the same block size and latency settings are used as in PyAudio).

As a (not necessarily better) alternative to using stream.read() you can also implement your own callback function (as you could also do in PyAudio, but again the NumPy array conversion is done for you). I've provided an example program for recordings where the duration is not known in advance: https://github.com/spatialaudio/python-sounddevice/blob/master/examples/rec_unlimited.py.

BTW, I'm the author of the sounddevice module, in case you didn't know.

ImportError: No module named 'matplotlibwidget'

When running the Matplotlib examples I got the following error:

ImportError: No module named 'matplotlibwidget'

The problem is that the MatplotlibWidget class is not included with MatPlotLib but is part of WinPython. This means that most users cannot run your examples. See for instance this question on Stack Overflow.

I've found the code of the MatplotlibWidget class here. Please make it clear to readers that they need to download it, or somehow include it in your repository.

Register audio

Hi, your work is very interesting! I would like to record the audio to a .wav file after stopping the real-time stream. Hence, I added a frames list in SWHear.py and I update it in this way:

def stream_readchunk(self):
    """reads some audio and re-launches itself"""
    try:
        self.data = np.fromstring(self.stream.read(self.chunk, exception_on_overflow=False), dtype=np.int16)
        self.fftx, self.fft = getFFT(self.data, self.rate)
        for i in range(0, int(self.rate / self.chunk * 5)):
            frames.append(self.stream.read(self.chunk, exception_on_overflow=False))

Then, I pass frames to go.py and save the audio in this way:

    waveFile = wave.open("test_file.wav", 'wb')
    waveFile.setnchannels(1)
    waveFile.setsampwidth(pyaudio.PyAudio().get_sample_size(pyaudio.paInt16))
    waveFile.setframerate(44100)
    waveFile.writeframes(b''.join(SWHear.frames))
    waveFile.close()

Now, my problem is that the for loop I added in SWHear.py creates a large delay in the graphical interface. If I remove it, the saved test_file.wav is sampled incorrectly. Do you have any idea how to solve this problem, or a simpler way to implement it?
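One way around the delay is to avoid the extra blocking stream.read() calls entirely: stream_readchunk already reads one chunk per call, so appending that same chunk's bytes to frames costs almost nothing. The WAV-writing part can then stay as it is. Below is a self-contained sketch of the saving step, with a synthetic 440 Hz tone standing in for real recorded chunks (placeholder data, not output of the repo's code):

```python
import wave
import numpy as np

RATE = 44100
frames = []  # in SWHear.py, append self.data.tobytes() here inside stream_readchunk

# synthetic stand-in for recorded chunks: one second of a 440 Hz tone
t = np.arange(RATE) / RATE
tone = (np.sin(2 * np.pi * 440 * t) * 32000).astype(np.int16)
for chunk in np.split(tone, 10):
    frames.append(chunk.tobytes())  # same bytes stream.read() would return

with wave.open("test_file.wav", "wb") as wf:
    wf.setnchannels(1)
    wf.setsampwidth(2)  # 2 bytes per sample, matching pyaudio.paInt16
    wf.setframerate(RATE)
    wf.writeframes(b"".join(frames))

with wave.open("test_file.wav", "rb") as wf:
    print(wf.getnframes(), wf.getframerate())  # 44100 44100
```

Because the chunks are appended from data the GUI loop already has in hand, no second read of the stream (and no added latency) is needed.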

Moreover, sometimes i observe the following error:

ALSA lib pcm_dsnoop.c:638:(snd_pcm_dsnoop_open) unable to open slave
ALSA lib pcm_dsnoop.c:638:(snd_pcm_dsnoop_open) unable to open slave
Expression 'ret' failed in 'src/hostapi/alsa/pa_linux_alsa.c', line: 1736

When using the microphone of my PC, the program sometimes fails to recognize it. Why?
Thanks in advance,
Giorgio

Improve the automatically-detected audio sample rate

Hi,
first of all thanks for publishing. It's a great starting point and saved me some time in my own project.

One thing I noticed:
in valid_low_rate() we are looping through [44100] to find a testrate

for testrate in [44100]:
    if self.valid_test(device,testrate):
         return testrate

Then, in valid_test(), we want to open a stream to see if the given testrate works with the given device. However, we are not opening the stream with the given testrate, but with the defaultSampleRate of the device.

stream=self.p.open(format=pyaudio.paInt16,channels=1,
                                input_device_index=device,frames_per_buffer=self.chunk,
                                rate=int(self.info["defaultSampleRate"]),input=True)

I suggest not to look for the lowest possible rate, but only test the user-specified rate. If it is not working or the user didn't specify it, I recommend testing the defaultSampleRate of the device (which should probably work) and then setting self.rate to this value.

[Question] - Python Gui Examples - SWHear - getFFT()

@swharden
Hi Scott !

I'm trying to understand your code, more specifically the math behind it, as I am trying to build an audio visualizer for fun (generating various patterns / colors / shapes on a screen based on real-time analysis of PCM data from a playing song).

I have come to the realization that I absolutely need to learn how to use the FFT, and I'm still figuring it out. I have basic knowledge of math / physics in that regard, so I'm not too good, but I understand some high-level concepts. My question is related to the Hamming window you seem to be applying to the data chunks before applying the FFT, in the getFFT function of SWHear.py.

I have included my questions in the comments of the code, but please let me add more detail.
My understanding of the Hamming window is that it helps reduce the noise around the frequency peaks generated by the FFT. What I don't understand is why you apply it "out of the blue" instead of analyzing the data first to discover where the actual peaks were before applying the Hamming window to them. This question probably arises because of my lack of knowledge in this field, so please don't hesitate to point out where I'm wrong.

Also, I don't get why you do both the FFT and fftfreq. I obviously need to do more research on the matter, but if you can explain it in a few sentences it would really help :)!

    # Why calculate the hamming on the LENGTH and not the data itself ?
    data=data*np.hamming(len(data))
    fft=np.fft.fft(data)
    fft=np.abs(fft)
    #fft=10*np.log10(fft)
    # Why fftfreq AND fft ?
    freq=np.fft.fftfreq(len(fft),1.0/rate)

Thanks for making this, it's helping me a lot!
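A short NumPy sketch (not the repo's code) may answer both questions. np.hamming(len(data)) takes only a length because that is all it needs to build the window array; the multiplication data*np.hamming(len(data)) is where the window actually touches the data, tapering the chunk's edges toward zero to reduce spectral leakage, so no peak-finding is required beforehand. And np.fft.fft() alone only gives an amplitude per bin index; np.fft.fftfreq() maps each index to a frequency in Hz, so both are needed to plot amplitude versus frequency:

```python
import numpy as np

rate = 44100
t = np.arange(4096) / rate
data = np.sin(2 * np.pi * 1000 * t)  # a pure 1 kHz test tone

# the window is an array of 4096 samples; multiplying element-wise
# tapers the start and end of the chunk toward zero
windowed = data * np.hamming(len(data))

fft = np.abs(np.fft.fft(windowed))           # amplitude of each bin
freq = np.fft.fftfreq(len(fft), 1.0 / rate)  # frequency (Hz) of each bin

# fft[k] is "how much", freq[k] is "at what frequency"; only the first
# half is needed because the second half mirrors it for real input
peak = freq[np.argmax(fft[:len(fft) // 2])]
print(peak)  # within one bin (~10.8 Hz) of 1000
```

The window is applied "blindly" because leakage is caused by the abrupt start and end of every finite chunk, regardless of where the spectral peaks happen to be.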
