Initial, Doggo walking.

This commit is contained in:
Christoph Sterz 2023-07-20 11:39:45 +02:00
commit 0ed602f61b
8 changed files with 1037 additions and 0 deletions

1
.gitignore vendored Normal file

@@ -0,0 +1 @@
.pio

39
include/README Normal file

@@ -0,0 +1,39 @@
This directory is intended for project header files.
A header file is a file containing C declarations and macro definitions
to be shared between several project source files. You request the use of a
header file in your project source file (C, C++, etc.) located in the `src` folder
by including it with the C preprocessing directive `#include'.
```src/main.c
#include "header.h"
int main (void)
{
...
}
```
Including a header file produces the same results as copying the header file
into each source file that needs it. Such copying would be time-consuming
and error-prone. With a header file, the related declarations appear
in only one place. If they need to be changed, they can be changed in one
place, and programs that include the header file will automatically use the
new version when next recompiled. The header file eliminates the labor of
finding and changing all the copies as well as the risk that a failure to
find one copy will result in inconsistencies within a program.
In C, the usual convention is to give header files names that end with `.h'.
It is most portable to use only letters, digits, dashes, and underscores in
header file names, and at most one dot.
Read more about using header files in the official GCC documentation:
* Include Syntax
* Include Operation
* Once-Only Headers
* Computed Includes
https://gcc.gnu.org/onlinedocs/cpp/Header-Files.html
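For example, a header that may be included more than once is usually wrapped in an
include guard ("Once-Only Headers" above); a minimal sketch, with placeholder file and macro names:
```include/header.h
#ifndef HEADER_H
#define HEADER_H

/* declarations shared between several source files go here */
void do_something(void);

#endif /* HEADER_H */
```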

46
lib/README Normal file

@@ -0,0 +1,46 @@
This directory is intended for project-specific (private) libraries.
PlatformIO will compile them to static libraries and link them into the executable file.
The source code of each library should be placed in its own separate directory
("lib/your_library_name/[here are source files]").
For example, see the structure of the following two libraries, `Foo` and `Bar`:
|--lib
| |
| |--Bar
| | |--docs
| | |--examples
| | |--src
| | |- Bar.c
| | |- Bar.h
| | |- library.json (optional, custom build options, etc) https://docs.platformio.org/page/librarymanager/config.html
| |
| |--Foo
| | |- Foo.c
| | |- Foo.h
| |
| |- README --> THIS FILE
|
|- platformio.ini
|--src
|- main.c
and the contents of `src/main.c`:
```
#include <Foo.h>
#include <Bar.h>
int main (void)
{
...
}
```
The PlatformIO Library Dependency Finder will automatically find dependent
libraries by scanning project source files.
More information about the PlatformIO Library Dependency Finder:
- https://docs.platformio.org/page/librarymanager/ldf.html

16
platformio.ini Normal file

@@ -0,0 +1,16 @@
; PlatformIO Project Configuration File
;
; Build options: build flags, source filter
; Upload options: custom upload port, speed and extra flags
; Library options: dependencies, extra library storages
; Advanced options: extra scripting
;
; Please visit documentation for the other options and examples
; https://docs.platformio.org/page/projectconf.html
[env:wemos_d1_mini32]
board = wemos_d1_mini32
platform = espressif32
framework = arduino
monitor_speed = 115200
lib_deps = ledc

669
src/AudioAnalysis.h Normal file

@@ -0,0 +1,669 @@
#ifndef AudioAnalysis_H
#define AudioAnalysis_H
#include "Arduino.h"
// arduinoFFT V2
// See the develop branch on GitHub for the latest info and speedups.
// https://github.com/kosme/arduinoFFT/tree/develop
// If you are going for speed over precision, uncomment the lines below.
//#define FFT_SPEED_OVER_PRECISION
//#define FFT_SQRT_APPROXIMATION
#include <arduinoFFT.h>
#ifndef SAMPLE_SIZE
#define SAMPLE_SIZE 1024
#endif
#ifndef BAND_SIZE
#define BAND_SIZE 8
#endif
class AudioAnalysis
{
public:
enum falloff_type
{
NO_FALLOFF,
LINEAR_FALLOFF,
ACCELERATE_FALLOFF,
EXPONENTIAL_FALLOFF,
};
AudioAnalysis();
/* FFT Functions */
void computeFFT(int32_t *samples, int sampleSize, int sampleRate); // calculates FFT on sample data
float *getReal(); // gets the Real values after FFT calculation
float *getImaginary(); // gets the imaginary values after FFT calculation
/* Band Frequency Functions */
void setNoiseFloor(float noiseFloor); // threshold before sounds are registered
void computeFrequencies(uint8_t bandSize = BAND_SIZE); // converts FFT data into frequency bands
void normalize(bool normalize = true, float min = 0, float max = 1); // normalize all values and constrain to min/max.
void autoLevel(falloff_type falloffType = ACCELERATE_FALLOFF, float falloffRate = 0.01, float min = 10, float max = -1); // auto balance normalized values to ambient noise levels.
// min and max are based on pre-normalized values.
void setEqualizerLevels(float low = 1, float mid = 1, float high = 1 ); // adjust the frequency levels for a given range - low, medium and high.
// 0.5 = 50%, 1.0 = 100%, 1.5 = 150% the raw value etc.
void setEqualizerLevels(float *bandEq); // full control over each bands eq value.
float *getEqualizerLevels(); // gets the last bandEq levels
bool isNormalize(); // is normalize enabled
bool isAutoLevel(); // is auto level enabled
bool isClipping(); // are values exceeding max
void bandPeakFalloff(falloff_type falloffType = ACCELERATE_FALLOFF, float falloffRate = 0.05); // set the falloff type and rate for band peaks.
void vuPeakFalloff(falloff_type falloffType = ACCELERATE_FALLOFF, float falloffRate = 0.05); // set the falloff type and rate for volume unit peak.
float *getBands(); // gets the last bands calculated from computeFrequencies()
float *getPeaks(); // gets the last peaks calculated from computeFrequencies()
float getBand(uint8_t index); // gets the value at bands index
float getBandAvg(); // average value across all bands
float getBandMax(); // max value across all bands
int getBandMaxIndex(); // index of the highest value band
int getBandMinIndex(); // index of the lowest value band
float getPeak(uint8_t index); // gets the value at peaks index
float getPeakAvg(); // average value across all peaks
float getPeakMax(); // max value across all peaks
int getPeakMaxIndex(); // index of the highest value peak
int getPeakMinIndex(); // index of the lowest value peak
/* Volume Unit Functions */
float getVolumeUnit(); // gets the last volume unit calculated from computeFrequencies()
float getVolumeUnitPeak(); // gets the last volume unit peak calculated from computeFrequencies()
float getVolumeUnitMax(); // value of the highest value volume unit
float getVolumeUnitPeakMax(); // value of the highest value volume unit
protected:
/* Library Settings */
bool _isAutoLevel = false;
bool _isClipping = false;
float _autoMin = 10; // lowest raw value the autoLevel will fall to before stopping. -1 = will auto level down to 0.
float _autoMax = -1; // highest raw value the autoLevel will rise to before clipping. -1 = will not have any clipping.
bool _isNormalize = false;
float _normalMin = 0;
float _normalMax = 1;
falloff_type _bandPeakFalloffType = ACCELERATE_FALLOFF;
float _bandPeakFalloffRate = 0.05;
falloff_type _vuPeakFalloffType = ACCELERATE_FALLOFF;
float _vuPeakFalloffRate = 0.05;
falloff_type _autoLevelFalloffType = ACCELERATE_FALLOFF;
float _autoLevelFalloffRate = 0.01;
float calculateFalloff(falloff_type falloffType, float falloffRate, float currentRate);
template <class X>
X mapAndClip(X x, X in_min, X in_max, X out_min, X out_max);
/* FFT Variables */
int32_t *_samples;
int _sampleSize;
int _sampleRate;
float _real[SAMPLE_SIZE];
float _imag[SAMPLE_SIZE];
float _weighingFactors[SAMPLE_SIZE];
/* Band Frequency Variables */
float _noiseFloor = 0;
int _bandSize = BAND_SIZE;
float _bands[BAND_SIZE];
float _peaks[BAND_SIZE];
float _peakFallRate[BAND_SIZE];
float _peaksNorms[BAND_SIZE];
float _bandsNorms[BAND_SIZE];
float _bandEq[BAND_SIZE];
float _bandAvg;
float _peakAvg;
int8_t _bandMinIndex;
int8_t _bandMaxIndex;
int8_t _peakMinIndex;
int8_t _peakMaxIndex;
float _bandMin;
float _bandMax; // used for normalization calculation
float _peakMin;
float _autoLevelPeakMax; // used for normalization calculation
// float _peakMinFalloffRate;
float _autoLevelPeakMaxFalloffRate; // used for auto level calculation
/* Volume Unit Variables */
float _vu;
float _vuPeak;
float _vuPeakFallRate;
float _vuMin;
float _vuMax; // used for normalization calculation
float _vuPeakMin;
float _autoLevelVuPeakMax; // used for normalization calculation
// float _vuPeakMinFalloffRate;
float _autoLevelMaxFalloffRate; // used for auto level calculation
ArduinoFFT<float> *_FFT = nullptr;
};
AudioAnalysis::AudioAnalysis()
{
// set default eq levels;
for (int i = 0; i < _bandSize; i++)
{
_bandEq[i] = 1.0;
}
}
void AudioAnalysis::computeFFT(int32_t *samples, int sampleSize, int sampleRate)
{
_samples = samples;
if (_FFT == nullptr || _sampleSize != sampleSize || _sampleRate != sampleRate)
{
_sampleSize = sampleSize;
_sampleRate = sampleRate;
_FFT = new ArduinoFFT<float>(_real, _imag, _sampleSize, _sampleRate, _weighingFactors);
}
// prep samples for analysis
for (int i = 0; i < _sampleSize; i++)
{
_real[i] = samples[i];
_imag[i] = 0;
}
_FFT->dcRemoval();
_FFT->windowing(FFTWindow::Hamming, FFTDirection::Forward, false); /* Weigh data (compensated) */
_FFT->compute(FFTDirection::Forward); /* Compute FFT */
_FFT->complexToMagnitude(); /* Compute magnitudes */
}
float *AudioAnalysis::getReal()
{
return _real;
}
float *AudioAnalysis::getImaginary()
{
return _imag;
}
void AudioAnalysis::setNoiseFloor(float noiseFloor)
{
_noiseFloor = noiseFloor;
}
// linear interpolation between n1 and n2 at the given percent (helper for shaping the eq curves below)
float getPoint(float n1, float n2, float percent)
{
float diff = n2 - n1;
return n1 + (diff * percent);
}
void AudioAnalysis::setEqualizerLevels(float low, float mid, float high)
{
float xa, ya, xb, yb, x, y;
// low curve
float x1 = 0;
float lowSize = _bandSize / 4;
float y1 = low;
float x2 = lowSize / 2;
float y2 = low;
float x3 = lowSize;
float y3 = (low + mid)/2.0;
for (int i = x1; i < lowSize; i++)
{
float p = (float)i / (float)lowSize;
//xa = getPoint(x1, x2, p);
ya = getPoint(y1, y2, p);
//xb = getPoint(x2, x3, p);
yb = getPoint(y2, y3, p);
//x = getPoint(xa, xb, p);
y = getPoint(ya, yb, p);
_bandEq[i] = y;
}
// mid curve
x1 = lowSize;
float midSize = (_bandSize-lowSize) / 2;
y1 = y3;
x2 = x1 + midSize / 2;
y2 = mid;
x3 = x1 + midSize;
y3 = (mid + high) / 2.0;
for (int i = x1; i < x1+midSize; i++)
{
float p = (float)(i - x1) / (float)midSize;
// xa = getPoint(x1, x2, p);
ya = getPoint(y1, y2, p);
// xb = getPoint(x2, x3, p);
yb = getPoint(y2, y3, p);
// x = getPoint(xa, xb, p);
y = getPoint(ya, yb, p);
_bandEq[i] = y;
}
// high curve
x1 = lowSize + midSize;
float highSize = midSize;
y1 = y3;
x2 = x1 + highSize / 2;
y2 = high;
x3 = x1 + highSize;
y3 = high;
for (int i = x1; i < x1+highSize; i++)
{
float p = (float)(i - x1) / (float)highSize;
// xa = getPoint(x1, x2, p);
ya = getPoint(y1, y2, p);
// xb = getPoint(x2, x3, p);
yb = getPoint(y2, y3, p);
// x = getPoint(xa, xb, p);
y = getPoint(ya, yb, p);
_bandEq[i] = y;
}
}
void AudioAnalysis::setEqualizerLevels(float *bandEq)
{
// blind copy of eq percentages
for(int i = 0; i < _bandSize; i++) {
_bandEq[i] = bandEq[i];
}
}
float *AudioAnalysis::getEqualizerLevels()
{
return _bandEq;
}
void AudioAnalysis::computeFrequencies(uint8_t bandSize)
{
// TODO: use maths to calculate these offset values. Inputs being _sampleSize and _bandSize output being similar exponential curve below.
// band offsets helpers based on 1024 samples
const static uint16_t _2frequencyOffsets[2] = {24, 359};
const static uint16_t _4frequencyOffsets[4] = {6, 18, 72, 287};
const static uint16_t _8frequencyOffsets[8] = {2, 4, 6, 12, 25, 47, 92, 195};
const static uint16_t _16frequencyOffsets[16] = {1, 1, 2, 2, 2, 4, 5, 7, 11, 14, 19, 28, 38, 54, 75, 120}; // initial
// 32 and 64 frequency offsets are low end biased because of int math... the first 4 and 8 buckets should be 0.5f but we can't do that here.
const static uint16_t _32frequencyOffsets[32] = {1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 2, 2, 2, 2, 3, 3, 5, 5, 7, 7, 8, 8, 14, 14, 19, 19, 27, 27, 37, 37, 60, 60};
const static uint16_t _64frequencyOffsets[64] = {1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 2, 2, 2, 2, 3, 3, 3, 3, 4, 4, 4, 4, 4, 4, 4, 4, 7, 7, 7, 7, 8, 8, 8, 8, 13, 13, 13, 13, 18, 18, 18, 18, 30, 30, 30, 30}; // low end biased because of int
const uint16_t *_frequencyOffsets;
try_frequency_offsets_again:
switch (bandSize)
{
case 2:
_frequencyOffsets = _2frequencyOffsets;
break;
case 4:
_frequencyOffsets = _4frequencyOffsets;
break;
case 8:
_frequencyOffsets = _8frequencyOffsets;
break;
case 16:
_frequencyOffsets = _16frequencyOffsets;
break;
case 32:
_frequencyOffsets = _32frequencyOffsets;
break;
case 64:
_frequencyOffsets = _64frequencyOffsets;
break;
default:
bandSize = BAND_SIZE;
goto try_frequency_offsets_again;
}
_bandSize = bandSize;
_isClipping = false;
// for normalize falloff rates
if (_isAutoLevel)
{
if (_autoLevelPeakMax > _autoMin)
{
_autoLevelPeakMaxFalloffRate = calculateFalloff(_autoLevelFalloffType, _autoLevelFalloffRate, _autoLevelPeakMaxFalloffRate);
_autoLevelPeakMax -= _autoLevelPeakMaxFalloffRate;
}
if (_autoLevelVuPeakMax > _autoMin * 1.5)
{
_autoLevelMaxFalloffRate = calculateFalloff(_autoLevelFalloffType, _autoLevelFalloffRate, _autoLevelMaxFalloffRate);
_autoLevelVuPeakMax -= _autoLevelMaxFalloffRate;
}
}
_vu = 0;
_bandMax = 0;
_bandAvg = 0;
_peakAvg = 0;
_bandMaxIndex = -1;
_bandMinIndex = -1;
_peakMaxIndex = -1;
_peakMinIndex = -1;
int offset = 2; // first two values are noise
for (int i = 0; i < _bandSize; i++)
{
_bands[i] = 0;
// handle band peaks fall off
_peakFallRate[i] = calculateFalloff(_bandPeakFalloffType, _bandPeakFalloffRate, _peakFallRate[i]);
if (_peaks[i] - _peakFallRate[i] <= _bands[i])
{
_peaks[i] = _bands[i];
}
else
{
_peaks[i] -= _peakFallRate[i]; // fall off rate
}
for (int j = 0; j < _frequencyOffsets[i]; j++)
{
// scale down factor to prevent overflow
int rv = (_real[offset + j] / (0xFFFF * 0xFF));
int iv = (_imag[offset + j] / (0xFFFF * 0xFF));
// some smoothing with imaginary numbers.
rv = sqrt(rv * rv + iv * iv);
// add eq offsets
rv = rv * _bandEq[i];
// combine band amplitudes for current band segment
_bands[i] += rv;
_vu += rv;
}
offset += _frequencyOffsets[i];
// remove noise
if (_bands[i] < _noiseFloor)
{
_bands[i] = 0;
}
if (_bands[i] > _peaks[i])
{
_peakFallRate[i] = 0;
_peaks[i] = _bands[i];
}
// handle min/max band
if (_bands[i] > _bandMax && _bands[i] > _noiseFloor)
{
_bandMax = _bands[i];
_bandMaxIndex = i;
}
if (_bands[i] < _bandMin)
{
_bandMin = _bands[i];
_bandMinIndex = i;
}
// handle min/max peak
if (_peaks[i] > _autoLevelPeakMax)
{
_autoLevelPeakMax = _peaks[i];
if (_isAutoLevel && _autoMax != -1 && _peaks[i] > _autoMax)
{
_isClipping = true;
_autoLevelPeakMax = _autoMax;
}
_peakMaxIndex = i;
_autoLevelPeakMaxFalloffRate = 0;
}
if (_peaks[i] < _peakMin && _peaks[i] > _noiseFloor)
{
_peakMin = _peaks[i];
_peakMinIndex = i;
}
// handle band average
_bandAvg += _bands[i];
_peakAvg += _peaks[i];
} // end bands
// handle band average
_bandAvg = _bandAvg / _bandSize;
_peakAvg = _peakAvg / _bandSize;
// handle vu peak fall off
_vu = _vu / 8.0; // get it closer to the band peak values
_vuPeakFallRate = calculateFalloff(_vuPeakFalloffType, _vuPeakFalloffRate, _vuPeakFallRate);
_vuPeak -= _vuPeakFallRate;
if (_vu > _vuPeak)
{
_vuPeakFallRate = 0;
_vuPeak = _vu;
}
if (_vu > _vuMax)
{
_vuMax = _vu;
}
if (_vu < _vuMin)
{
_vuMin = _vu;
}
if (_vuPeak > _autoLevelVuPeakMax)
{
_autoLevelVuPeakMax = _vuPeak;
if (_isAutoLevel && _autoMax != -1 && _vuPeak > _autoMax)
{
_isClipping = true;
_autoLevelVuPeakMax = _autoMax;
}
_autoLevelMaxFalloffRate = 0;
}
if (_vuPeak < _vuPeakMin)
{
_vuPeakMin = _vuPeak;
}
}
template <class X>
X AudioAnalysis::mapAndClip(X x, X in_min, X in_max, X out_min, X out_max)
{
if (_isAutoLevel && _autoMax != -1 && x > _autoMax)
{
// clip the value to max
x = _autoMax;
}
else if (x > in_max)
{
// value is clipping
x = in_max;
}
return (x - in_min) * (out_max - out_min) / (in_max - in_min) + out_min;
}
void AudioAnalysis::normalize(bool normalize, float min, float max)
{
_isNormalize = normalize;
_normalMin = min;
_normalMax = max;
}
void AudioAnalysis::bandPeakFalloff(falloff_type falloffType, float falloffRate)
{
_bandPeakFalloffType = falloffType;
_bandPeakFalloffRate = falloffRate;
}
void AudioAnalysis::vuPeakFalloff(falloff_type falloffType, float falloffRate)
{
_vuPeakFalloffType = falloffType;
_vuPeakFalloffRate = falloffRate;
}
float AudioAnalysis::calculateFalloff(falloff_type falloffType, float falloffRate, float currentRate)
{
switch (falloffType)
{
case LINEAR_FALLOFF:
return falloffRate;
case ACCELERATE_FALLOFF:
return currentRate + falloffRate;
case EXPONENTIAL_FALLOFF:
if (currentRate == 0)
{
currentRate = falloffRate;
}
return currentRate + currentRate;
case NO_FALLOFF:
default:
return 0;
}
}
void AudioAnalysis::autoLevel(falloff_type falloffType, float falloffRate, float min, float max)
{
_isAutoLevel = falloffType != NO_FALLOFF;
_autoLevelFalloffType = falloffType;
_autoLevelFalloffRate = falloffRate;
_autoMin = min;
_autoMax = max;
}
bool AudioAnalysis::isNormalize()
{
return _isNormalize;
}
bool AudioAnalysis::isAutoLevel()
{
return _isAutoLevel;
}
bool AudioAnalysis::isClipping()
{
return _isClipping;
}
float *AudioAnalysis::getBands()
{
if (_isNormalize)
{
for (int i = 0; i < _bandSize; i++)
{
_bandsNorms[i] = mapAndClip(_bands[i], 0.0f, _autoLevelPeakMax, _normalMin, _normalMax);
}
return _bandsNorms;
}
return _bands;
}
float AudioAnalysis::getBand(uint8_t index)
{
if (index >= _bandSize || index < 0)
{
return 0;
}
if (_isNormalize)
{
return mapAndClip(_bands[index], 0.0f, _autoLevelPeakMax, _normalMin, _normalMax);
}
return _bands[index];
}
float AudioAnalysis::getBandAvg()
{
if (_isNormalize)
{
return mapAndClip(_bandAvg, 0.0f, _autoLevelPeakMax, _normalMin, _normalMax);
}
return _bandAvg;
}
float AudioAnalysis::getBandMax()
{
return getBand(getBandMaxIndex());
}
int AudioAnalysis::getBandMaxIndex()
{
return _bandMaxIndex;
}
int AudioAnalysis::getBandMinIndex()
{
return _bandMinIndex;
}
float *AudioAnalysis::getPeaks()
{
if (_isNormalize)
{
for (int i = 0; i < _bandSize; i++)
{
_peaksNorms[i] = mapAndClip(_peaks[i], 0.0f, _autoLevelPeakMax, _normalMin, _normalMax);
}
return _peaksNorms;
}
return _peaks;
}
float AudioAnalysis::getPeak(uint8_t index)
{
if (index >= _bandSize || index < 0)
{
return 0;
}
if (_isNormalize)
{
return mapAndClip(_peaks[index], 0.0f, _autoLevelPeakMax, _normalMin, _normalMax);
}
return _peaks[index];
}
float AudioAnalysis::getPeakAvg()
{
if (_isNormalize)
{
return mapAndClip(_peakAvg, 0.0f, _autoLevelPeakMax, _normalMin, _normalMax);
}
return _peakAvg;
}
float AudioAnalysis::getPeakMax()
{
return getPeak(getPeakMaxIndex());
}
int AudioAnalysis::getPeakMaxIndex()
{
return _peakMaxIndex;
}
int AudioAnalysis::getPeakMinIndex()
{
return _peakMinIndex;
}
float AudioAnalysis::getVolumeUnit()
{
if (_isNormalize)
{
return mapAndClip(_vu, 0.0f, _autoLevelVuPeakMax, _normalMin, _normalMax);
}
return _vu;
}
float AudioAnalysis::getVolumeUnitPeak()
{
if (_isNormalize)
{
return mapAndClip(_vuPeak, 0.0f, _autoLevelVuPeakMax, _normalMin, _normalMax);
}
return _vuPeak;
}
float AudioAnalysis::getVolumeUnitMax()
{
if (_isNormalize)
{
return mapAndClip(_vuMax, 0.0f, _autoLevelVuPeakMax, _normalMin, _normalMax);
}
return _vuMax;
}
float AudioAnalysis::getVolumeUnitPeakMax()
{
if (_isNormalize)
{
return _normalMax;
}
return _autoLevelVuPeakMax;
}
#endif // AudioAnalysis_H
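A minimal sketch of how this class might be driven (hypothetical helper name and sample source; assumes the samples buffer is filled by an I2S reader such as the AudioInI2S class in the next file):
```
#include <Arduino.h>
#include "AudioAnalysis.h"

AudioAnalysis audio;
int32_t samples[SAMPLE_SIZE]; // filled elsewhere, e.g. by an I2S driver

void processAudio() // hypothetical helper, called once per captured buffer
{
  audio.computeFFT(samples, SAMPLE_SIZE, 44100); // FFT over the raw samples
  audio.computeFrequencies(BAND_SIZE);           // fold FFT bins into frequency bands
  float *bands = audio.getBands();               // per-band levels
  float vu = audio.getVolumeUnit();              // overall level
  Serial.printf("band0=%.1f vu=%.1f\n", bands[0], vu);
}
```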

87
src/AudioInI2S.h Normal file

@@ -0,0 +1,87 @@
#ifndef AudioInI2S_H
#define AudioInI2S_H
#include "Arduino.h"
#include <driver/i2s.h>
class AudioInI2S
{
public:
AudioInI2S(int bck_pin, int ws_pin, int data_pin, int channel_pin = -1, i2s_channel_fmt_t channel_format = I2S_CHANNEL_FMT_ONLY_RIGHT);
void read(int32_t _samples[]);
void begin(int sample_size, int sample_rate = 44100, i2s_port_t i2s_port_number = I2S_NUM_0);
private:
int _bck_pin;
int _ws_pin;
int _data_pin;
int _channel_pin;
i2s_channel_fmt_t _channel_format;
int _sample_size;
int _sample_rate;
i2s_port_t _i2s_port_number;
i2s_config_t _i2s_config = {
.mode = (i2s_mode_t)(I2S_MODE_MASTER | I2S_MODE_RX),
.sample_rate = 0, // set in begin()
.bits_per_sample = I2S_BITS_PER_SAMPLE_32BIT,
.channel_format = I2S_CHANNEL_FMT_ONLY_RIGHT,
.communication_format = I2S_COMM_FORMAT_I2S,
.intr_alloc_flags = ESP_INTR_FLAG_LEVEL1,
.dma_buf_count = 4,
.dma_buf_len = 0, // set in begin()
.use_apll = false,
.tx_desc_auto_clear = false,
.fixed_mclk = 0};
i2s_pin_config_t _i2s_mic_pins = {
.bck_io_num = I2S_PIN_NO_CHANGE, // set in begin()
.ws_io_num = I2S_PIN_NO_CHANGE, // set in begin()
.data_out_num = I2S_PIN_NO_CHANGE,
.data_in_num = I2S_PIN_NO_CHANGE // set in begin()
};
};
AudioInI2S::AudioInI2S(int bck_pin, int ws_pin, int data_pin, int channel_pin, i2s_channel_fmt_t channel_format)
{
_bck_pin = bck_pin;
_ws_pin = ws_pin;
_data_pin = data_pin;
_channel_pin = channel_pin;
_channel_format = channel_format;
}
void AudioInI2S::begin(int sample_size, int sample_rate, i2s_port_t i2s_port_number)
{
if (_channel_pin >= 0)
{
pinMode(_channel_pin, OUTPUT);
digitalWrite(_channel_pin, _channel_format == I2S_CHANNEL_FMT_ONLY_RIGHT ? LOW : HIGH);
}
_sample_rate = sample_rate;
_sample_size = sample_size;
_i2s_port_number = i2s_port_number;
_i2s_mic_pins.bck_io_num = _bck_pin;
_i2s_mic_pins.ws_io_num = _ws_pin;
_i2s_mic_pins.data_in_num = _data_pin;
_i2s_config.sample_rate = _sample_rate;
_i2s_config.dma_buf_len = _sample_size;
_i2s_config.channel_format = _channel_format;
// start up the I2S peripheral
i2s_driver_install(_i2s_port_number, &_i2s_config, 0, NULL);
i2s_set_pin(_i2s_port_number, &_i2s_mic_pins);
}
void AudioInI2S::read(int32_t _samples[])
{
// read I2S stream data into the samples buffer
size_t bytes_read = 0;
i2s_read(_i2s_port_number, _samples, sizeof(int32_t) * _sample_size, &bytes_read, portMAX_DELAY);
int samples_read = bytes_read / sizeof(int32_t);
(void)samples_read; // not used yet; with portMAX_DELAY the full buffer is filled
}
#endif // AudioInI2S_H
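A minimal usage sketch (the pin numbers match the INMP441 wiring used in src/main.cpp below and are otherwise placeholders):
```
#include <Arduino.h>
#include "AudioInI2S.h"

#define SAMPLE_SIZE 1024

AudioInI2S mic(27 /* BCK */, 23 /* WS */, 33 /* DATA */);
int32_t samples[SAMPLE_SIZE];

void setup()
{
  mic.begin(SAMPLE_SIZE, 44100); // installs the I2S driver on I2S_NUM_0
}

void loop()
{
  mic.read(samples); // blocks until SAMPLE_SIZE samples have been captured
}
```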

168
src/main.cpp Normal file

@@ -0,0 +1,168 @@
/*
ESP32 I2S Microphone Sample
esp32-i2s-mic-sample.ino
Sample sound from I2S microphone, display on Serial Plotter
Requires INMP441 I2S microphone
DroneBot Workshop 2022
https://dronebotworkshop.com
*/
// Include the Arduino core and the I2S driver
#include <Arduino.h>
#include <driver/i2s.h>
#include <HardwareSerial.h>
#include <algorithm> // std::max
#include <cmath>     // std::abs
// Connections to INMP441 I2S microphone
#define I2S_WS 23
#define I2S_SD 33
#define I2S_SCK 27
// Use I2S Processor 0
#define I2S_PORT I2S_NUM_0
// Define input buffer length
#define bufferLen 64
int16_t sBuffer[bufferLen];
// Settings for LEDC PWM
#include "driver/ledc.h"
#include "esp_err.h"
#define LEDC_TIMER LEDC_TIMER_0
#define LEDC_MODE LEDC_LOW_SPEED_MODE
#define LEDC_OUTPUT_IO (0) // Define the output GPIO
#define LEDC_CHANNEL LEDC_CHANNEL_0
#define LEDC_DUTY_RES LEDC_TIMER_10_BIT // Duty resolution: 10 bits (max duty value 1023)
#define LEDC_DUTY (4095) // 50% of a 13-bit range; exceeds the 10-bit maximum of 1023 and is only referenced in commented-out code below
#define LEDC_FREQUENCY (20000) // PWM frequency in Hertz (20 kHz)
void i2s_install() {
// Set up I2S Processor configuration
const i2s_config_t i2s_config = {
.mode = i2s_mode_t(I2S_MODE_MASTER | I2S_MODE_RX),
.sample_rate = 44100,
.bits_per_sample = i2s_bits_per_sample_t(16),
.channel_format = I2S_CHANNEL_FMT_ONLY_LEFT,
.communication_format = i2s_comm_format_t(I2S_COMM_FORMAT_STAND_I2S),
.intr_alloc_flags = 0,
.dma_buf_count = 8,
.dma_buf_len = bufferLen,
.use_apll = false
};
i2s_driver_install(I2S_PORT, &i2s_config, 0, NULL);
}
void i2s_setpin() {
// Set I2S pin configuration
const i2s_pin_config_t pin_config = {
.bck_io_num = I2S_SCK,
.ws_io_num = I2S_WS,
.data_out_num = -1,
.data_in_num = I2S_SD
};
i2s_set_pin(I2S_PORT, &pin_config);
}
static void example_ledc_init(void)
{
// Prepare and then apply the LEDC PWM timer configuration
ledc_timer_config_t ledc_timer = {
.speed_mode = LEDC_MODE,
.duty_resolution = LEDC_DUTY_RES,
.timer_num = LEDC_TIMER,
.freq_hz = LEDC_FREQUENCY, // Set output frequency (20 kHz)
.clk_cfg = LEDC_AUTO_CLK
};
ESP_ERROR_CHECK(ledc_timer_config(&ledc_timer));
// Prepare and then apply the LEDC PWM channel configuration
ledc_channel_config_t ledc_channel = {
.gpio_num = LEDC_OUTPUT_IO,
.speed_mode = LEDC_MODE,
.channel = LEDC_CHANNEL,
.intr_type = LEDC_INTR_DISABLE,
.timer_sel = LEDC_TIMER,
.duty = 0, // Set duty to 0%
.hpoint = 0
};
ESP_ERROR_CHECK(ledc_channel_config(&ledc_channel));
}
//static void save_power(void)
//{
//
//
// ESP_ERROR_CHECK(esp_pm_configure(&power_configuration));
//}
void setup() {
// Set up Serial Monitor
Serial.begin(115200);
Serial.println(" ");
delay(500);
// Set up I2S
i2s_install();
i2s_setpin();
i2s_start(I2S_PORT);
delay(500);
// Set the LEDC peripheral configuration
example_ledc_init();
}
// Adaptive wake-up threshold: starts at thresholdMax and decays toward thresholdMin while the input stays quiet
float thresholdMax = 1000;
float thresholdMin = 30;
float threshold = thresholdMax;
void loop() {
// Get I2S data and place in data buffer
size_t bytesIn = 0;
esp_err_t result = i2s_read(I2S_PORT, &sBuffer, bufferLen * sizeof(int16_t), &bytesIn, portMAX_DELAY); // size argument is in bytes
if (result == ESP_OK)
{
// Read I2S data buffer
int16_t samples_read = bytesIn / sizeof(int16_t); // each sample is 2 bytes
if (samples_read > 0) {
float mean = 0;
for (int16_t i = 0; i < samples_read; ++i) {
mean += (sBuffer[i]);
}
// Average the data reading
mean /= samples_read;
// Print to serial plotter
Serial.print(-threshold);
Serial.print(" ");
Serial.print(threshold);
Serial.print(" ");
Serial.println(mean);
// while the input stays below the threshold, let the threshold decay slowly toward its floor
if (mean < threshold) threshold = std::max(thresholdMin, threshold * 0.9999f);
// derive the PWM duty from how far the threshold still sits above its floor
auto dogOnDuty = std::max(0.0f, threshold - thresholdMin);
if (dogOnDuty < 500) dogOnDuty = 1;
ESP_ERROR_CHECK(ledc_set_duty(LEDC_MODE, LEDC_CHANNEL, dogOnDuty));
// Update duty to apply the new value
ESP_ERROR_CHECK(ledc_update_duty(LEDC_MODE, LEDC_CHANNEL));
if ( std::abs(mean) > threshold) {
// DOG WAKEUP
threshold = thresholdMax;
// Set the LEDC peripheral configuration
// Set duty to 50%
//ESP_ERROR_CHECK(ledc_set_duty(LEDC_MODE, LEDC_CHANNEL, LEDC_DUTY));
// Update duty to apply the new value
//ESP_ERROR_CHECK(ledc_update_duty(LEDC_MODE, LEDC_CHANNEL));
}
}
}
}

11
test/README Normal file

@@ -0,0 +1,11 @@
This directory is intended for PlatformIO Test Runner and project tests.
Unit testing is a software testing method in which individual units of
source code (sets of one or more MCU program modules together with associated
control data, usage procedures, and operating procedures) are tested to
determine whether they are fit for use. Unit testing finds problems early
in the development cycle.
More information about PlatformIO Unit Testing:
- https://docs.platformio.org/en/latest/advanced/unit-testing/index.html
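As a starting point, a minimal Unity-based test for this project could look like the following (the file name, the tested macro, and the startup delay are placeholders):
```
#include <Arduino.h>
#include <unity.h>

#define SAMPLE_SIZE 1024 // placeholder value under test

// the FFT sample buffer size should be a power of two
void test_sample_size_is_power_of_two(void)
{
  TEST_ASSERT_TRUE((SAMPLE_SIZE & (SAMPLE_SIZE - 1)) == 0);
}

void setup()
{
  delay(2000); // give the board a moment before reporting results
  UNITY_BEGIN();
  RUN_TEST(test_sample_size_is_power_of_two);
  UNITY_END();
}

void loop() {}
```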