Hi Rob! Interesting problem. To get the discussion flowing, and because I'm thinking out loud right now, I might as well write things down: I think these are the osmocom gr-iqbal blocks, right? Let's look at what "IQ Bal Optimize" does!

It's my understanding [1] that what they do is take an FFT of the signal, look at every positive-frequency bin j, conjugate-multiply it with the opposite negative-frequency bin -j, and sum up over all positive j. So, assume that the *signal* in the FFT's bin j and bin -j (or fft_size-j, but that's semantics) is uncorrelated. Let's say we observe the signal with an ideal signal chain. Then, the conjugate complex product FFT[j]·FFT'[-j] is 0 in expectation, by definition of "uncorrelated"¹. (Note: this means that one *must not* have a signal that's conjugate-symmetric about f=0, or it would get detected as IQ imbalance!)

If, however, there's IQ imbalance, then there's some component in FFT[-j] that contains a (weighted) mirror image of FFT[j], i.e., FFT[-j] = FFT[j]·b + actual_signal_fft[-j], with b being some complex factor. Thus, let us define a function with p as parameter, and the FFT-ed signal as input; here, the expectation operator E{…} is meant to be understood pretty empirically, "across all j":

f_p(FFT) := E{FFT[j]·p·FFT'[-j]}
          = E{FFT[j]·FFT'[j]·p·b + p·FFT[j]·actual_signal_fft'[-j]}
          = E{|FFT[j]|²·p·b} + p·E{FFT[j]·actual_signal_fft'[-j]}
          = p·b·E{|FFT[j]|²} + 0

What then happens is that a bit of convex optimization is thrown at this f_p(FFT) = p·E{FFT[j]·FFT'[-j]}. Since the signal energy |…|² is constant, the p that is conj(b) maximizes the function. Neat; don't overdo it on iterations of that optimization, because this is all noisy and you can't know how uncorrelated the signal at opposite frequencies really is in this instant! Then, that optimization is repeated when there's new input that can be transformed, using the previous p as initialization.
Thus, assuming the signal fluctuates and the noise is uncorrelated, this converges toward a good estimate of the actual conj(b).

Now, your problem is that there's not a single b for all bins, but a different b[j] for each bin j >= 0. Huh! That suddenly looks a lot like a frequency-domain equalizer problem, doesn't it? Given enough time (and the absence of intended, f=0-mirrored signals), there's, as far as I can tell, nothing wrong with using numerical optimization to optimize a whole parameter vector p[0…fft_len/2] instead of just a single complex p. You'd only need to understand the "E{…}" in the definition of f_p above as "across time" instead of "across bins". Does that give you a start on how to implement this?

Now, you'll notice a peculiarity: what you do with that p is use its inverse to calculate the image of the positive frequencies mirrored onto the negative frequencies (didn't check, tbh), and subtract that. Huh! That's literally an equalizer. So LMS equalizer methods, trained on the vector of FFT[j]·FFT'[-j], j = 0…fft_len/2, should work, too, potentially even better. I bet *someone* has implemented that before, but on short notice I didn't find anyone in the realm of GNU Radio who did.

Best regards,
Marcus

----------------------------------------------------------------------

¹ If you're not happy with me saying "by definition of": think about white noise. At any two frequency bins, you should be getting independent random complex numbers with zero mean; if the complex number is not zero-mean (attention: not magnitude-zero-mean!), then there'd be a "tone" at that frequency, not white noise. Now, multiply a complex number with the conjugate of another one: the result should be zero-mean, because from probability theory we know that E{X·Y} = E{X}·E{Y} if X, Y are independent (and uncorrelatedness is a weaker condition than independence, but already gives E{X·Y} = E{X}·E{Y}). Our two variables have E{X} = E{Y} = 0.
[1] https://gitea.osmocom.org/sdr/libosmo-dsp/src/branch/master/src/iqbal.c#L168

On 2026-04-13 4:02 AM, Rob Frohne wrote:
> Hi All,
>
> I want to help my students fix an annual problem we have because our QSD SDR hardware
> designs can never get good enough image rejection. I want to do that by adding a GnuRadio
> flowgraph that uses some kind of LMS or other automatic algorithm (with more than two
> parameters) to automatically balance the I and Q signals. My students are designing QSD
> SDR radios that downconvert HF signals to audio for a sound card to digitize, often using a
> Tayloe mixer. I want something like this:
>
> My problem with the above flowgraph is that it only optimizes the phase and amplitude at
> one frequency. I want it to optimize over the whole IF, so there are no glaring images in
> our spectrograms. I have seen a number of papers on doing this, but I'm looking for a
> quick GnuRadio solution I can put between the soundcard and Quisk. Does anyone have
> something like this we could try?
>
> Thanks,
>
> Rob
>
> --
> Rob Frohne, PhD PE
> School of Engineering
> Walla Walla University
> 100 SW 4th St
> College Place, WA 99324
> (509) 527-2075
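To make the "understand E{…} as across time, per bin" idea concrete, here is a minimal numpy sketch. It is not the libosmo-dsp algorithm; it assumes a simple illustrative imbalance model per FFT block, Y[+j] = S[+j] + b[j]·conj(S[-j]) and vice versa, and all names are made up. The exact conjugation bookkeeping differs between writeups, so with this model the ±j cross-product appears as a plain product:

```python
import numpy as np

rng = np.random.default_rng(1)
fft_len, n_blocks = 256, 4000
h = fft_len // 2

# A "proper" (circularly symmetric) white signal: bins +j and -j are uncorrelated.
x = (rng.standard_normal((n_blocks, fft_len)) +
     1j * rng.standard_normal((n_blocks, fft_len))) / np.sqrt(2)
S = np.fft.fft(x, axis=1)
Sp, Sn = S[:, 1:h], S[:, :-h:-1]     # bins +1..+(h-1) and -1..-(h-1), pairwise aligned

# Frequency-dependent imbalance: a different complex b[j] for every bin pair.
b = 0.05 * np.exp(2j * np.pi * rng.random(h - 1))
Yp = Sp + b * np.conj(Sn)            # a weighted mirror of bin -j leaks into bin +j ...
Yn = Sn + b * np.conj(Sp)            # ... and vice versa

# Per-bin estimate: average the +j/-j cross-product across *time* (blocks), not across
# bins. For a proper signal E{S[+j]·S[-j]} = 0, so what survives is approximately b[j].
b_hat = (np.sum(Yp * Yn, axis=0) /
         np.sum(np.abs(Yp) ** 2 + np.abs(Yn) ** 2, axis=0))

# "Literally an equalizer": subtract the estimated mirror image, per bin.
Sp_hat = (Yp - b_hat * np.conj(Yn)) / (1 - np.abs(b_hat) ** 2)
```

Averaging numerator and denominator over bins as well collapses this to a single complex b, i.e. the one-parameter case; an LMS-style update per bin would replace the batch average with a running one.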
GNU Radio, One Step at a Time
Read the GNU Radio mailing list right here! The information here is regarding the GNU Radio project, commonly used with USRP radios.
Wednesday, April 15, 2026
Sunday, April 12, 2026
Does anyone have an automatic Audio IQ Balancer that works for multiple frequencies for GnuRadio?
Hi All,
I want to help my students fix an annual problem we have because our QSD SDR hardware designs can never get good enough image rejection. I want to do that by adding a GnuRadio flowgraph that uses some kind of LMS or other automatic algorithm (with more than two parameters) to automatically balance the I and Q signals. My students are designing QSD SDR radios that downconvert HF signals to audio for a sound card to digitize, often using a Tayloe mixer. I want something like this:
My problem with the above flowgraph is that it only optimizes the phase and amplitude at one frequency. I want it to optimize over the whole IF, so there are no glaring images in our spectrograms. I have seen a number of papers on doing this, but I'm looking for a quick GnuRadio solution I can put between the soundcard and Quisk. Does anyone have something like this we could try?
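One way to picture the limitation: a single gain/phase pair can null the image perfectly, but only if the hardware imbalance is itself frequency-flat. A minimal numpy sketch, with hypothetical values g and phi standing in for whatever the two-parameter flowgraph would estimate:

```python
import numpy as np

fs, f0, n = 48_000, 6_000, 4_096
t = np.arange(n) / fs

# Ideal quadrature pair for a tone at +f0.
i_ideal = np.cos(2 * np.pi * f0 * t)
q_ideal = np.sin(2 * np.pi * f0 * t)

# Frequency-flat hardware imbalance: the Q path has gain error g and phase error phi.
g, phi = 1.08, np.deg2rad(4.0)
i_rx = i_ideal
q_rx = g * (q_ideal * np.cos(phi) + i_ideal * np.sin(phi))

# Two-parameter correction: undo the gain, then the phase skew.
q_corr = (q_rx / g - i_rx * np.sin(phi)) / np.cos(phi)
x_corr = i_rx + 1j * q_corr

# DFT probe at the image frequency -f0: large before correction, ~0 after.
probe = np.exp(2j * np.pi * f0 * t)
img_before = abs(np.mean((i_rx + 1j * q_rx) * probe))
img_after = abs(np.mean(x_corr * probe))
```

With real hardware, g and phi vary across the passband, so one (g, phi) pair that nulls the image at one frequency leaves residual images everywhere else, which is the complaint above.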
Thanks,
Rob
Wednesday, April 8, 2026
GR4 Dev Call / Getting Started with GR4 Development - April 9, 12PM EDT
We have an Architecture Working Group monthly call tomorrow, April 9th at 12PM EDT. It would be great to have you join and see how to get started / get involved in GR4 development, especially with RC1 out and pushing closer to an official GR4 release.
For this meeting I plan to go through:
- my GR4 development workflow
- getting set up in an isolated prefix for development
- some of the tools I've been working towards to help understand and iron out any necessary API changes/additions
- including a POC UI for GR4.
We’ll also discuss what’s still needed to get to an official 4.0 release, and how we can most effectively accelerate progress.
Time: Thursday, April 9
- 12:00 PM EDT
- 11:00 AM Central
Video call link: https://meet.google.com/owd-bstv-xda
Or dial: (US) +1 402-724-0159 PIN: 852 787 210#
More phone numbers: https://tel.meet/owd-bstv-xda?pin=1404845118840
Look forward to seeing you there.
Josh
Friday, April 3, 2026
NEWSDR 2026 on June 4 & 5 at WPI
The 16th annual New England Workshop on Software Defined Radio (NEWSDR) event will be held at Worcester Polytechnic Institute (WPI), in Worcester, Massachusetts, USA. The event will take place on Friday June 5, and there will be evening tutorials on Thursday June 4.
Please see the event webpage for details.
https://newsdr.org/workshops/newsdr-2026/
Advance registration is required (so that we can get a head-count), but it is completely free.
https://forms.gle/VvUVnhZtBPZRsxsT6
We are looking for poster presentations, and we encourage anyone interested to submit a poster for the event:
https://forms.gle/hobTwXv5cKN8gxhK7
If you are interested in exhibiting or sponsoring the workshop, please reach out to us at "gr-newsdr-info@wpi.edu".
We look forward to seeing you at the event!!
Thursday, April 2, 2026
Software for SDRplay device on a Mac
Hello,
I have an SDRplay RSPdx. SDRplay has released software (SDRconnect) for use on a Mac and have recently enabled rig control. However, they have no SIMPLE software for enabling use of the RSPdx as a receiver, while transmitting on a conventional ham radio transceiver.
Does GNU have any SIMPLE solution for this that does not require extensive software expertise?
Thanks,
John
WA1EAZ
Wednesday, April 1, 2026
Re: GSoC 2026 Proposal - Hardware in the loop CI
Hi Marcus,
Thanks for the reality check! You make a great point about the GitHub runner essentially acting as its own queue; treating it as a 1:1 runner-to-node-pair mapping cuts out a massive amount of unnecessary orchestration logic.
I will definitely take your advice to avoid Labgrid and keep the architecture as lean and native as possible.
Thanks again for the sanity check and the practical insights!
Best,
Joseph
On Wed, 1 Apr, 2026, 18:59 Marcus Müller, <mmueller@gnuradio.org> wrote:
Hey Joseph,
I'm relieved if you've already been discussing things with Cyrille. Don't get rabbithole'd
early on :)
I'm not sure what Labgrid brings to the table in terms of making sure that concurrent PR
jobs don't collide; the usual paradigm here, and how for example the GitHub runner
interface (but also other CI runners) handles this, is that there's a daemon that accepts
a job, and only when it's done starts the next. The CI runner you need to have running to
accept the job coming from the forge (GitHub) already does that!
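That "accept one job, only then take the next" behavior can be illustrated with a toy Python queue worker (purely a sketch, nothing GitHub-specific; names and timings are invented):

```python
import queue
import threading
import time

jobs = queue.Queue()
log = []

def runner():
    """Toy CI runner daemon: take one job, run it to completion (or until a
    timeout), and only then accept the next, so concurrent PRs never overlap
    on the hardware node pair."""
    while True:
        job = jobs.get()
        if job is None:          # sentinel: shut the daemon down
            break
        name, fn, timeout = job
        worker = threading.Thread(target=fn, daemon=True)
        worker.start()
        worker.join(timeout)     # a hung test yields the node after `timeout`
        log.append(("done", name))
        jobs.task_done()

def make_job(name):
    def fn():
        log.append(("start", name))
        time.sleep(0.01)         # stand-in for flashing + running on hardware
    return name, fn, 5.0

t = threading.Thread(target=runner)
t.start()
jobs.put(make_job("pr-101"))
jobs.put(make_job("pr-102"))     # queued; starts only after pr-101 finishes
jobs.put(None)
t.join()
```

The single daemon loop is the whole serialization mechanism; no extra reservation layer is needed for the one-runner-per-node-pair case.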
> Crash-safe orphan detection
Not a bad thing to have, but since any test needs to fail after a timeout has happened,
I'd assume the timing-out and yielding the node would be included in what you run on the
nodes themselves.
Other than that, I'd feel fairly confident that in case the controller crashes, you have a
complete system failure, at which point you can just automatically stop all jobs you're
running, and start fresh.
> Hardware-agnostic YAML environments: Keeps test scripts decoupled from CorteXlab's
> specific node identifiers.
Not a bad motive! Yes, but it locks you into the specific format of yet another piece of
software, and it's just a few YAML files; so while I think this is a good argument, it's
not one that is super urgent!
> I noted in the proposal that the plan is to assess this during Community Bonding
Very fair!
Best regards,
Marcus
On 2026-04-01 1:12 PM, Joseph George wrote:
> Hi Marcus,
> Thanks for taking a look and for the feedback!
> I completely agree that CorteXlab's native Minus API is fantastic for handling the core
> node management and scheduling.
> After discussing the architecture with Cyrille on the list earlier this week, I actually
> updated the final proposal to clarify this exact relationship.
>
> My main reason for exploring Labgrid was actually to fill a few specific CI orchestration
> gaps that I wasn't sure CorteXlab's Minus API handled natively for unattended runners.
> Specifically:
>
> * Blocking reservation queue: Ensures concurrent PR jobs don't collide.
> * Crash-safe orphan detection: Uses a heartbeat so a killed runner doesn't hold a node
> locked indefinitely.
> * Hardware-agnostic YAML environments: Keeps test scripts decoupled from CorteXlab's
> specific node identifiers.
>
>
> *I noted in the proposal that the plan is to assess this during Community Bonding. If
> introducing Labgrid is too heavy or the wrong fit for the GNU Radio CI ecosystem, I am
> 100% on board with dropping it and just building a lightweight custom shim around the
> Minus API to handle the queueing and heartbeats.*
> Really appreciate you taking the time to review the concept! I'd love to hear your
> thoughts on this layered approach.
> Best,
> Joseph
>
> On Wed, 1 Apr, 2026, 16:20 Marcus Müller, <mmueller@gnuradio.org> wrote:
>
> Don't think labgrid is the kind of thing that helps here, much.
>
> On 2026-03-31 2:09 PM, Philip Balister wrote:
> > On 3/30/26 4:16 AM, Cyrille Morin wrote:
> >> Hello Joseph,
> >>
> >> I read through your document.
> >> Overall, it looks good; it appears to have everything required of the proposal
> >> document.
> >>
> >> A couple of thoughts:
> >>
> >> The proposed integrated tests look good and feel like what we would like to head
> >> towards, but being integration tests, they involve a lot of moving parts, so they
> >> might require a lot of tweaking and debugging time to work reliably, which might
> >> push back the integration into the CI pipeline.
> >>
> >> I've never used Labgrid so I don't know much about what it can or cannot help
> >> with. But in your proposal it does sound like it performs many tasks already done
> >> by the platform's systems (booking, health check, ...). You might want to detail
> >> where specifically Labgrid would offer new and required capabilities.
> >
> > Labgrid would offer a general API to the hardware so the work could extend beyond
> > CorteXlab. It is certainly worth a look to see if it is straightforward to abstract
> > the interface to the underlying hardware.
> >
> > Philip
> >
> >
> >>
> >> Best
> >>
> >> *Cyrille MORIN*
> >> /SED Engineer/
> >> /MARACAS Team/
> >>
> >> Centre Inria de Lyon
> >>
> >> Laboratoire CITI
> >> Campus La Doua - Villeurbanne
> >> 6 avenue des Arts
> >> F-69621 Villeurbanne
> >>
> >> https://team.inria.fr/maracas/
> >> On 28/03/2026 at 14:49, Joseph George wrote:
> >>>
> >>> Hi Cyrille,
> >>>
> >>> I have completed the first draft of my GSoC 2026 proposal for the "Hardware in
> the Loop
> >>> CI" project.
> >>>
> >>> Draft: Hardware in the Loop CI <https://drive.google.com/file/d/1ATLOxq_bvPpG7fizTQtZK-8w_BwadVeF/view?usp=drive_link>
> >>>
> >>> A huge thank you to Larry and Philip for the insights. I have explicitly integrated
> >>> the LBNL Node Health Check paradigm to isolate hardware failures from software
> >>> regressions, and I've adopted Labgrid as the core hardware orchestration layer to
> >>> manage the CorteXlab USRPs.
> >>>
> >>> I would greatly appreciate any feedback from the community,
> >>>
> >>> Thanks for your time and guidance!
> >>>
> >>> Best, Joseph George
> >>>
> >>>
> >>> On Thu, 26 Mar 2026 at 22:23, Cyrille Morin <cyrille.morin@inria.fr> wrote:
> >>>
> >>> Hi Joseph,
> >>>
> >>> Welcome!
> >>>
> >>> Feel free to share your draft here on the mailing list for
> >>> feedback by members of the community; that's the right place.
> >>>
> >>> I don't have a specific format for the test scenarios; choose
> >>> what you think is best / most readable / most relevant.
> >>> But do look at the GSoC Student info on the wiki if you haven't
> >>> already: https://wiki.gnuradio.org/index.php?title=GSoCStudentInfo
> >>>
> >>> *Cyrille MORIN*
> >>> On 26/03/2026 at 15:56, Joseph George wrote:
> >>>> Hi Cyrille,
> >>>> I'm Joseph, an ECE student and the Chair of the IEEE Signal
> >>>> Processing Society at my college. I'm putting together a GSoC
> >>>> proposal for the "Hardware in the loop CI" project and wanted to
> >>>> quickly say hello.
> >>>>
> >>>> I have a strong background in bridging DSP theory with physical
> >>>> hardware. I recently placed 7th globally in the ICASSP 2026 ALS
> >>>> challenge by building domain-driven acoustic biomarker pipelines,
> >>>> and I regularly build hardware projects (like ESP32 navigation
> >>>> systems using Kalman filtering for sensor fusion). I'd love to
> >>>> help bring GNU Radio's CI tests out of software only simulation
> >>>> and onto the physical CorteXlab hardware.
> >>>>
> >>>> I am drafting my 12-week timeline right now. Is there a specific
> >>>> format you prefer for the test scenarios, or a good place to drop
> >>>> a link to my draft for a quick sanity check before Tuesday's
> >>>> deadline?
> >>>
> >
> >
>
>