Android 11 balance process and principle

Time：2021-6-26

A Qualcomm document mentions that Android 10 audio introduced a balance function (for confidentiality reasons, the specific document name is not posted). The document's content is brief, covering only the settings interface and how to view the value via dumpsys.

Let's study how Android implements this feature and how it works.

<!-- more -->

[Platform:Android 11]
http://aosp.opersys.com/xref/…

Balance is used to set the left-right audio balance. Stereo speakers are now common on mobile phones, and the intuitive effect of this setting is adjusting the relative volume of the left and right speakers.

In addition, volume balance is also required in cars: combined with fade, it can shape the sound field.
For this purpose, Google introduced the AudioControl HAL, whose setBalanceTowardRight() and setFadeTowardFront() interfaces set the left-right balance and front-rear fade, achieving a sound-field adjustment.
However, these two interfaces must be implemented by the chip vendor in the HAL layer. In other words, a vendor may or may not have implemented them; for example, they are not implemented in the HAL layer of Qualcomm's 8155.

1. Setting interface

<center>Figure 1. Left-right balance setting interface</center>

In the interface above, dragging the bar all the way to the left shifts the sound entirely to the left; similarly, dragging it all the way to the right shifts the sound entirely to the right.
The drag bar's progress range is [0, 200], which is then mapped to [-1.0f, 1.0f] and saved to the settings database.
The code is also a bit considerate: any position within ±6 of the center snaps back to the center value.

Drag bar key code:

packages/apps/Settings/src/com/android/settings/accessibility/BalanceSeekBar.java
public void onProgressChanged(SeekBar seekBar, int progress, boolean fromUser) {
    if (fromUser) {
        // Snap to centre when within the specified threshold
        // mSnapThreshold is currently 6, i.e. snap to center when within ±6 of it
        if (progress != mCenter
                && progress > mCenter - mSnapThreshold
                && progress < mCenter + mSnapThreshold) {
            progress = mCenter;
            seekBar.setProgress(progress); // direct update (fromUser becomes false)
        }
        // Map 0~200 to -1.0f ~ 1.0f
        final float balance = (progress - mCenter) * 0.01f;
        // Finally, store it in the settings database
        Settings.System.putFloatForUser(mContext.getContentResolver(),
                Settings.System.MASTER_BALANCE, balance, UserHandle.USER_CURRENT);
    }
}
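As a quick sanity check of the mapping, the snap-and-map logic above can be mirrored in a standalone sketch (progressToBalance is a hypothetical helper, not AOSP code; the center of 100 and threshold of 6 come from the code above):

```cpp
#include <cassert>
#include <cmath>

// Hypothetical standalone mirror of BalanceSeekBar's logic:
// progress in [0, 200], center 100, snap threshold 6.
float progressToBalance(int progress) {
    const int center = 100;
    const int snapThreshold = 6;
    // snap to center when within the threshold
    if (progress != center
            && progress > center - snapThreshold
            && progress < center + snapThreshold) {
        progress = center;
    }
    // map 0..200 to -1.0f..1.0f
    return (progress - center) * 0.01f;
}
```

Dragging to 103, for instance, snaps back to dead center and yields a balance of 0.0f.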

We can also adjust the value directly from the command line:

# MASTER_BALANCE definition
# frameworks/base/core/java/android/provider/Settings.java
public static final String MASTER_BALANCE = "master_balance";

# Set master balance from the command line
adb shell settings put system master_balance <value>
# Get master balance from the command line
adb shell settings get system master_balance

So who receives this value?

2. setMasterBalance()

Searching for master_balance reveals that the AudioService constructor creates a SettingsObserver object, a class dedicated to letting AudioService listen to the settings database. When the master_balance value changes, updateMasterBalance() -> AudioSystem.setMasterBalance() is called to apply the update.
In other words, AudioService passes the value further down through AudioSystem.

frameworks/base/services/core/java/com/android/server/audio/AudioService.java
...
// AudioService creates the SettingsObserver object
mSettingsObserver = new SettingsObserver();

private class SettingsObserver extends ContentObserver {
    SettingsObserver() {
        ...
        // register the MASTER_BALANCE observer in the SettingsObserver constructor
        mContentResolver.registerContentObserver(Settings.System.getUriFor(
                Settings.System.MASTER_BALANCE), false, this);
        ...
    }

    @Override
    public void onChange(boolean selfChange) {
        ...
        // called when the observed data changes, to update master balance
        // note: this function is also called at boot and when audioserver dies,
        // to push the balance value down to AudioFlinger
        updateMasterBalance(mContentResolver);
        ...
    }
}

private void updateMasterBalance(ContentResolver cr) {
    // read the value
    final float masterBalance = System.getFloatForUser(
            cr, System.MASTER_BALANCE, 0.f /* default */, UserHandle.USER_CURRENT);
    ...
    // set it through AudioSystem
    if (AudioSystem.setMasterBalance(masterBalance) != 0) {
        Log.e(TAG, String.format("setMasterBalance failed for %f", masterBalance));
    }
}

AudioSystem eventually sets the value in AudioFlinger. The path in between is relatively simple, just a few binder calls; if you're not familiar with it, refer to the process articles in my column.

frameworks/base/media/java/android/media/AudioSystem.java
setMasterBalance()
+ --> JNI
  + android_media_AudioSystem_setMasterBalance() / android_media_AudioSystem.cpp
    + AudioSystem::setMasterBalance(balance)
      + setMasterBalance() / AudioSystem.cpp
        + const sp<IAudioFlinger>& af = AudioSystem::get_audio_flinger();
        + af->setMasterBalance(balance) // call AudioFlinger's setMasterBalance
          + setMasterBalance() / AudioFlinger.cpp
            + mMasterBalance.store(balance);

In AudioFlinger, the call first checks permissions, validates the parameter, and checks whether the value is the same as before; finally it sets the value on each playback thread in a for loop.
Note that duplicating threads are skipped, which means master balance does not take effect in duplicating playback mode.

Tips:
A duplicating thread is used to play the same stream, e.g. a ringtone, to Bluetooth and the loudspeaker simultaneously.
frameworks/av/services/audioflinger/AudioFlinger.cpp
status_t AudioFlinger::setMasterBalance(float balance)
{
    // check calling permissions
    if (!settingsAllowed()) {
        return PERMISSION_DENIED;
    }
    // check range (parameter validity)
    if (isnan(balance) || fabs(balance) > 1.f) {
        return BAD_VALUE;
    }
    ...
    // short cut: same as the previous value
    if (mMasterBalance == balance) return NO_ERROR;

    mMasterBalance = balance;

    for (size_t i = 0; i < mPlaybackThreads.size(); i++) {
        // duplicating threads are not processed
        if (mPlaybackThreads.valueAt(i)->isDuplicating()) {
            continue;
        }
        mPlaybackThreads.valueAt(i)->setMasterBalance(balance);
    }

    return NO_ERROR;
}

As anyone familiar with Android audio knows, playback threads come in several types (fast, mixer, direct, offload, and so on), so each playback thread's setMasterBalance() and subsequent balance processing may differ. Here we take the typical mixer thread as the example for analysis; if you need the other variants, read the corresponding code yourself.

PlaybackThread simply stores the value, and that's it:

frameworks/av/services/audioflinger/Threads.cpp
void AudioFlinger::PlaybackThread::setMasterBalance(float balance)
{
    mMasterBalance.store(balance);
}

In Threads.h, mMasterBalance is defined as an atomic type:
std::atomic<float>              mMasterBalance{};

Since mMasterBalance is atomic, its write/read methods are store()/load(). setMasterBalance() ultimately stores the balance value with store(); to follow the balance flow further, we have to find where this value is consumed.
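The store()/load() pair matters because setMasterBalance() arrives on a binder thread while threadLoop() reads the value on the playback thread; the atomic makes that handoff lock-free. A minimal illustration (the names here are hypothetical stand-ins, not AOSP code):

```cpp
#include <atomic>
#include <cassert>

// gMasterBalance stands in for PlaybackThread::mMasterBalance.
std::atomic<float> gMasterBalance{0.f};

// binder-thread side: mirrors PlaybackThread::setMasterBalance()
void setMasterBalance(float balance) {
    gMasterBalance.store(balance);
}

// playback-thread side: read once per threadLoop() iteration
float currentMasterBalance() {
    return gMasterBalance.load();
}
```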

3. Balance principle

mMasterBalance is used in several places. We again use PlaybackThread for the analysis; if you need the direct/offload path, read it yourself.

PlaybackThread's threadLoop() is the main audio-processing function, and it is very long. Its main work is event processing, preparing tracks, mixing, effect chain processing, the left-right balance processing we care about here, and finally writing the data to the HAL. The other steps can be studied if you are interested; this article focuses on the balance processing.

bool AudioFlinger::PlaybackThread::threadLoop()
{
    ... // loop until the thread needs to exit
    for (int64_t loopCount = 0; !exitPending(); ++loopCount)
    {
        ... // event handling
        processConfigEvents_l();
        ... // prepare the tracks
        mMixerStatus = prepareTracks_l(&tracksToRemove);
        ... // mixing
        ... // effect chain processing
        effectChains[i]->process_l();
        ... // left-right balance processing
        if (!hasFastMixer()) {
            // Balance must take effect after mono conversion.
            // We do it here if there is no FastMixer.
            // mBalance detects zero balance within the class for speed (not needed here).
            // read the balance value and hand it to audio_utils::Balance via setBalance()
            mBalance.setBalance(mMasterBalance.load());
            // balance the buffer
            mBalance.process((float *)mEffectBuffer, mNormalFrameCount);
        }
        ... // write the processed data to the HAL
        ...
    }
    ...
}

mBalance definition:
audio_utils::Balance            mBalance;

As the code above shows, if the thread has a fast mixer, balance is not applied there. A new class, audio_utils::Balance, is introduced specifically for balance processing; the relevant methods are setBalance() and process(). Intuitively, the principle can be understood by reading process(), so let's look at that function first.

system/media/audio_utils/Balance.cpp
void Balance::process(float *buffer, size_t frames)
{
    // centered balance and mono are not processed
    if (mBalance == 0.f || mChannelCount < 2) {
        return;
    }

    if (mRamp) {
        ... // ramp handling
        // ramped balance
        for (size_t i = 0; i < frames; ++i) {
            const float findex = i;
            for (size_t j = 0; j < mChannelCount; ++j) { // better precision: delta * i
                // the first process() call after a balance change ramps the volume
                *buffer++ *= mRampVolumes[j] + mDeltas[j] * findex;
            }
        }
        ...
    }
    // non-ramped balance
    for (size_t i = 0; i < frames; ++i) {
        for (size_t j = 0; j < mChannelCount; ++j) {
            // multiply each channel of the incoming buffer by its coefficient
            *buffer++ *= mVolumes[j];
        }
    }
}

process() does nothing for a centered balance or a mono channel; otherwise it has a ramped and a non-ramped mode. Both modes multiply each channel of the incoming buffer by a coefficient. We mainly care about the non-ramped mode: *buffer++ *= mVolumes[j];. So what are mVolumes[j], the left and right channel coefficients?
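The non-ramped path boils down to a per-channel multiply over an interleaved buffer; a self-contained sketch (applyVolumes is a hypothetical name for illustration):

```cpp
#include <cassert>
#include <cstddef>

// Multiply each channel of an interleaved float buffer by its coefficient,
// as the non-ramped branch of Balance::process() does.
void applyVolumes(float *buffer, size_t frames,
                  const float *volumes, size_t channelCount) {
    for (size_t i = 0; i < frames; ++i) {
        for (size_t j = 0; j < channelCount; ++j) {
            *buffer++ *= volumes[j];
        }
    }
}
```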

To find out the values in mVolumes, we need to look at the setBalance() method:

system/media/audio_utils/Balance.cpp
void Balance::setBalance(float balance)
{
    ... // validity checks, code skipped
    // mono is not processed
    if (mChannelCount < 2) { // if channel count is 1, mVolumes[0] is already set to 1.f
        return;              // and if channel count < 2, we don't do anything in process().
    }

    // common stereo handling
    // Handle the common cases:
    // stereo and channel index masks only affect the first two channels as left and right.
    if (mChannelMask == AUDIO_CHANNEL_OUT_STEREO
            || audio_channel_mask_get_representation(mChannelMask)
                    == AUDIO_CHANNEL_REPRESENTATION_INDEX) {
        // compute the left/right balance coefficients
        computeStereoBalance(balance, &mVolumes[0], &mVolumes[1]);
        return;
    }

    // handling for more than 2 channels
    // For position masks with more than 2 channels, we consider which side the
    // speaker position is on to figure the volume used.
    float balanceVolumes[3]; // left, right, center
    // compute the left/right balance coefficients
    computeStereoBalance(balance, &balanceVolumes[0], &balanceVolumes[1]);
    // center is fixed
    balanceVolumes[2] = 1.f; // center  TODO: consider center scaling.

    for (size_t i = 0; i < mVolumes.size(); ++i) {
        mVolumes[i] = balanceVolumes[mSides[i]];
    }
}

setBalance() handles mono, stereo, and multi-channel. For mono the coefficient stays fixed at 1.f; both stereo and multi-channel call computeStereoBalance() to compute the left and right coefficients; multi-channel support seems unfinished for now, with the center channel fixed at 1.f.

Finally we arrive at the key function: the channel coefficient calculation.

void Balance::computeStereoBalance(float balance, float *left, float *right) const
{
    if (balance > 0.f) {
        // balance toward the right
        *left = mCurve(1.f - balance);
        *right = 1.f;
    } else if (balance < 0.f) {
        // balance toward the left
        *left = 1.f;
        *right = mCurve(1.f + balance);
    } else {
        // balance centered
        *left = 1.f;
        *right = 1.f;
    }

    // Functionally:
    // *left = balance > 0.f ? mCurve(1.f - balance) : 1.f;
    // *right = balance < 0.f ? mCurve(1.f + balance) : 1.f;
}

When computing the coefficients:
balance toward the right: the right channel is fixed at 1.f and the left channel is mCurve(1.f - balance);
balance toward the left: the left channel is fixed at 1.f and the right channel is mCurve(1.f + balance).
In other words,
whichever side the balance leans toward keeps its volume fixed at 1.f, while the other side is multiplied by the coefficient mCurve(1.f - |balance|), with balance ∈ [-1.0, 1.0].

Now let's look at the mCurve curve.

system/media/audio_utils/include/audio_utils/Balance.h
class Balance {
public:
/**
* \brief Balance processing of left-right volume on audio data.
*
* Allows processing of audio data with a single balance parameter from [-1, 1].
* For efficiency, the class caches balance and channel mask data between calls;
* hence, use by multiple threads will require caller locking.
*
* \param ramp whether to ramp volume or not.
* \param curve a monotonic increasing function f: [0, 1] -> [a, b]
*        which represents the volume steps from an input domain of [0, 1] to
*        an output range [a, b] (ostensibly also from 0 to 1).
*        If [a, b] is not [0, 1], it is normalized to [0, 1].
*        Curve is typically a convex function, some possible examples:
*        [](float x) { return expf(2.f * x); }
*        or
*        [](float x) { return x * (x + 0.2f); }
*/
explicit Balance(
        bool ramp = true,
        std::function<float(float)> curve = [](float x) { return x * (x + 0.2f); }) // curve function
    : mRamp(ramp)
    , mCurve(normalize(std::move(curve))) {} // mCurve is the normalized curve

// mCurve definition
const std::function<float(float)> mCurve; // monotone volume transfer func [0, 1] -> [0, 1]

The function comment above is already very clear, which is why I included it. mCurve is a function that has been normalized so that both its domain and range fall on [0, 1]. It is a monotonically increasing function; the current default is x * (x + 0.2f), but other functions could of course be used.

normalize() is a template function, and its comments are also very clear:

/**
* \brief Normalizes f: [0, 1] -> [a, b] to g: [0, 1] -> [0, 1].
*
* A helper function to normalize a float volume function.
* g(0) is exactly zero, but g(1) may not necessarily be 1 since we
* use reciprocal multiplication instead of division to scale.
*
* \param f a function from [0, 1] -> [a, b]
* \return g a function from [0, 1] -> [0, 1] as a linear function of f.
*/
template<typename T>
static std::function<T(T)> normalize(std::function<T(T)> f) {
    const T f0 = f(0);
    const T r = T(1) / (f(1) - f0); // reciprocal multiplication

    if (f0 != T(0) ||  // must be exactly 0 at 0, since we promise g(0) == 0
            fabs(r - T(1)) > std::numeric_limits<T>::epsilon() * 3) { // some fudge allowed on r.
        // our curve x * (x + 0.2f) makes fabs(r - T(1)) > ... true, so we take this branch
        return [f, f0, r](T x) { return r * (f(x) - f0); };
    }
    // no translation required.
    return f;
}

The function we use satisfies the condition fabs(r - T(1)) > std::numeric_limits<T>::epsilon() * 3, so normalization is applied, i.e. r * (f(x) - f0). Putting it together, mCurve is mathematically:

$$f(x) = x^2 + 0.2 \times x; \\ mCurve(x) = {\frac{1.0}{f(1)-f(0)}} \times {(f(x)-f(0))} = {\frac{1.0}{1.2} \times f(x)}$$

That is to say

$$\mathbf{mCurve(x) = {\frac{(x^2 + 0.2x)}{1.2}}, x\in[0.0, 1.0], y\in[0.0, 1.0]}$$

1.2 is the normalization coefficient
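Putting the curve and computeStereoBalance() together, we can verify numerically that a balance of 0.5 attenuates the left channel to about 0.2917. A standalone sketch (the function names mirror the AOSP ones, but this is a re-implementation for illustration):

```cpp
#include <cassert>
#include <cmath>

// the normalized default curve: x * (x + 0.2) / 1.2
float curve(float x) {
    return x * (x + 0.2f) / 1.2f;
}

// same logic as Balance::computeStereoBalance()
void stereoBalance(float balance, float *left, float *right) {
    *left  = balance > 0.f ? curve(1.f - balance) : 1.f;
    *right = balance < 0.f ? curve(1.f + balance) : 1.f;
}
```

Note that 0.291667 is exactly the left-channel value that shows up in the dumpsys output for a balance of 0.5.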

$mCurve(1.f - |balance|)$, with balance ∈ [-1.0, 1.0], can be plotted as follows:

<center>Figure 2. Balance curve</center>

If the figure does not display properly, you can also view it with Octave Online: open the following website and enter the code below.

https://octave-online.net/

x = [-1 : 0.1 : 1];
z = 1 - abs(x);
y = (z.^2 + 0.2 * z) / 1.2;

plot(x, y, 'r')
xlabel('balance')
ylabel('Y')
title('Balance Curve')

So far, the principle of regulating left-right balance is clear.

4. Debugging

Besides changing the value from the command line with adb shell settings put system master_balance as mentioned above, we can also dump the state to check whether it took effect:

$ adb shell dumpsys media.audio_flinger

// a thread of type MIXER
Output thread 0x7c19757740, name AudioOut_D, tid 1718, type 0 (MIXER):
...
Thread throttle time (msecs): 6646
AudioMixer tracks:
Master mono: off
// balance value
Master balance: 0.500000 (balance 0.5 channelCount 2 volumes: 0.291667 1)

// a thread of type OFFLOAD (direct)
Output thread 0x7c184b3000, name AudioOut_20D, tid 10903, type 4 (OFFLOAD):
...
Suspended frames: 0
Hal stream dump:
// balance value
Master balance: 0.500000 Left: 0.291667 Right: 1.000000

5. Summary

1. The UI settings interface is just a data-storage step: the value is converted to [-1.0, 1.0] and stored in the settings database. After the Java-layer AudioService observes the change, the value is finally stored into AudioFlinger's non-duplicating playback threads through the setMasterBalance() interface;
2. Playback threads without a fast mixer apply the balance in threadLoop();
3. The balance principle is also simple: the channel on the side the balance leans toward stays unchanged, and the other channel is multiplied by an attenuation coefficient mCurve(1 - |balance|). In non-ramp mode the coefficient curve is a quadratic monotonic function normalized to [0, 1]; currently $mCurve(x) = x(x + 0.2)/1.2$.
