Android 12 (S) Multimedia Learning (IX): MediaCodec

Time:2022-5-9

In this section we study how MediaCodec works. The relevant code path:

http://aospxref.com/android-12.0.0_r3/xref/frameworks/av/media/libstagefright/MediaCodec.cpp

 

1. Creating a MediaCodec object

MediaCodec provides two static methods for creating a MediaCodec object: CreateByType and CreateByComponentName. Let's look at each in turn.

CreateByType takes a mimeType plus a flag saying whether an encoder is wanted. It asks MediaCodecList whether a suitable codec exists, creates a MediaCodec object, and finally initializes that object with the component name that was found.

// static
sp<MediaCodec> MediaCodec::CreateByType(
        const sp<ALooper> &looper, const AString &mime, bool encoder, status_t *err, pid_t pid,
        uid_t uid) {
    sp<AMessage> format;
    return CreateByType(looper, mime, encoder, err, pid, uid, format);
}

sp<MediaCodec> MediaCodec::CreateByType(
        const sp<ALooper> &looper, const AString &mime, bool encoder, status_t *err, pid_t pid,
        uid_t uid, sp<AMessage> format) {
    Vector<AString> matchingCodecs;

    MediaCodecList::findMatchingCodecs(
            mime.c_str(),
            encoder,
            0,
            format,
            &matchingCodecs);

    if (err != NULL) {
        *err = NAME_NOT_FOUND;
    }
    for (size_t i = 0; i < matchingCodecs.size(); ++i) {
        sp<MediaCodec> codec = new MediaCodec(looper, pid, uid);
        AString componentName = matchingCodecs[i];
        status_t ret = codec->init(componentName);
        if (err != NULL) {
            *err = ret;
        }
        if (ret == OK) {
            return codec;
        }
        ALOGD("Allocating component '%s' failed (%d), try next one.",
                componentName.c_str(), ret);
    }
    return NULL;
}

CreateByComponentName is much the same; since the caller passes the component name directly, the MediaCodecList lookup step is skipped.

// static
sp<MediaCodec> MediaCodec::CreateByComponentName(
        const sp<ALooper> &looper, const AString &name, status_t *err, pid_t pid, uid_t uid) {
    sp<MediaCodec> codec = new MediaCodec(looper, pid, uid);

    const status_t ret = codec->init(name);
    if (err != NULL) {
        *err = ret;
    }
    return ret == OK ? codec : NULL; // NULL deallocates codec.
}

init

status_t MediaCodec::init(const AString &name) {
    //Save componentname
    mInitName = name;

    mCodecInfo.clear();

    bool secureCodec = false;
    const char *owner = "";
    //Get codecinfo corresponding to componentname from mediacodeclist
    if (!name.startsWith("android.filter.")) {
        status_t err = mGetCodecInfo(name, &mCodecInfo);
        if (err != OK) {
            mCodec = NULL;  // remove the codec.
            return err;
        }
        if (mCodecInfo == nullptr) {
            ALOGE("Getting codec info with name '%s' failed", name.c_str());
            return NAME_NOT_FOUND;
        }
        secureCodec = name.endsWith(".secure");
        Vector<AString> mediaTypes;
        mCodecInfo->getSupportedMediaTypes(&mediaTypes);
        for (size_t i = 0; i < mediaTypes.size(); ++i) {
            if (mediaTypes[i].startsWith("video/")) {
                mIsVideo = true;
                break;
            }
        }
        //Get the owner name
        owner = mCodecInfo->getOwnerName();
    }
    //Create a codecbase object based on the owner
    mCodec = mGetCodecBase(name, owner);
    if (mCodec == NULL) {
        ALOGE("Getting codec base with name '%s' (owner='%s') failed", name.c_str(), owner);
        return NAME_NOT_FOUND;
    }
    //Judge whether it is video according to codecinfo. If it is video, create a looper for it
    if (mIsVideo) {
        // video codec needs dedicated looper
        if (mCodecLooper == NULL) {
            mCodecLooper = new ALooper;
            mCodecLooper->setName("CodecLooper");
            mCodecLooper->start(false, false, ANDROID_PRIORITY_AUDIO);
        }

        mCodecLooper->registerHandler(mCodec);
    } else {
        mLooper->registerHandler(mCodec);
    }

    mLooper->registerHandler(this);
    //Register a callback for the CodecBase
    mCodec->setCallback(
            std::unique_ptr<CodecBase::CodecCallback>(
                    new CodecCallback(new AMessage(kWhatCodecNotify, this))));
    //Get the BufferChannel of the CodecBase
    mBufferChannel = mCodec->getBufferChannel();
    //Register a callback for the BufferChannel
    mBufferChannel->setCallback(
            std::unique_ptr<CodecBase::BufferCallback>(
                    new BufferCallback(new AMessage(kWhatCodecNotify, this))));

    sp<AMessage> msg = new AMessage(kWhatInit, this);
    if (mCodecInfo) {
        msg->setObject("codecInfo", mCodecInfo);
        // name may be different from mCodecInfo->getCodecName() if we stripped
        // ".secure"
    }
    msg->setString("name", name);

    // ......
    sp<AMessage> response;
    status_t err = PostAndAwaitResponse(msg, &response);

    return err;
}

The init method does the following:

1. Get the MediaCodecInfo corresponding to the component name from MediaCodecList (mGetCodecInfo is a function pointer assigned in the constructor), check the supported media types to decide whether this MediaCodec instance handles video or audio, and read the component's owner. Why the owner? Android currently has two codec frameworks, OMX and Codec 2.0, and the component owner marks which of the two a component belongs to.

//static
sp<CodecBase> MediaCodec::GetCodecBase(const AString &name, const char *owner) {
    if (owner) {
        if (strcmp(owner, "default") == 0) {
            return new ACodec;
        } else if (strncmp(owner, "codec2", 6) == 0) {
            return CreateCCodec();
        }
    }

    if (name.startsWithIgnoreCase("c2.")) {
        return CreateCCodec();
    } else if (name.startsWithIgnoreCase("omx.")) {
        // at this time only ACodec specifies a mime type.
        return new ACodec;
    } else if (name.startsWithIgnoreCase("android.filter.")) {
        return new MediaFilter;
    } else {
        return NULL;
    }
}

The code that creates the CodecBase is short, so it is posted in full above. As you can see, there are two selection mechanisms: the decision can be made from the owner string or from the prefix of the component name.
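The two-tier selection can be mirrored in a small self-contained sketch that returns a label instead of an actual CodecBase object. `pickCodecBase` and the helper are illustrative names, not framework code:

```cpp
#include <cctype>
#include <cstring>
#include <string>

// Case-insensitive prefix check, mimicking AString::startsWithIgnoreCase.
static bool startsWithIgnoreCase(const std::string &s, const char *prefix) {
    size_t n = std::strlen(prefix);
    if (s.size() < n) return false;
    for (size_t i = 0; i < n; ++i) {
        if (std::tolower((unsigned char)s[i]) != std::tolower((unsigned char)prefix[i]))
            return false;
    }
    return true;
}

// Owner string wins; otherwise the component-name prefix decides.
std::string pickCodecBase(const std::string &name, const char *owner) {
    if (owner) {
        if (std::strcmp(owner, "default") == 0) return "ACodec";
        if (std::strncmp(owner, "codec2", 6) == 0) return "CCodec";
    }
    if (startsWithIgnoreCase(name, "c2.")) return "CCodec";
    if (startsWithIgnoreCase(name, "omx.")) return "ACodec";
    if (startsWithIgnoreCase(name, "android.filter.")) return "MediaFilter";
    return "";  // the real code returns NULL here
}
```

Note the precedence: a valid owner short-circuits the name check, so the prefix rules only matter when the owner is missing or unrecognized.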

2. Set a looper for the CodecBase. For video a dedicated looper is created; for audio the looper handed down from the upper layer is reused. MediaCodec itself always uses the upper layer's looper.

3. Register a callback on the CodecBase, namely a CodecCallback object. The AMessage target stored inside the CodecCallback is the MediaCodec object, so when the CodecBase sends a callback message it is relayed through the CodecCallback to MediaCodec for processing.

4. Get the BufferChannel of the CodecBase.

5. Register a callback on the BufferChannel, namely a BufferCallback object; the mechanism is the same as for the CodecBase callback.

6. Send a kWhatInit message to onMessageReceived, which repackages the MediaCodecInfo and component name and initializes the CodecBase object:

            setState(INITIALIZING);

            sp<RefBase> codecInfo;
            (void)msg->findObject("codecInfo", &codecInfo);
            AString name;
            CHECK(msg->findString("name", &name));

            sp<AMessage> format = new AMessage;
            if (codecInfo) {
                format->setObject("codecInfo", codecInfo);
            }
            format->setString("componentName", name);

            mCodec->initiateAllocateComponent(format);

With that, the creation of MediaCodec is complete. How the CodecBase itself is created and initialized will be studied later.
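init, and most of the calls that follow, end in PostAndAwaitResponse: the API thread posts a message to the looper thread and blocks on a reply token until onMessageReceived answers. A minimal sketch of that round-trip pattern, with stand-in types (`Reply` is not a framework class):

```cpp
#include <condition_variable>
#include <mutex>
#include <thread>

// Stand-in for the AReplyToken round trip.
struct Reply {
    std::mutex m;
    std::condition_variable cv;
    bool done = false;
    int status = 0;

    // called on the looper thread once the message has been handled
    void post(int s) {
        std::lock_guard<std::mutex> lk(m);
        status = s;
        done = true;
        cv.notify_one();
    }
    // called on the API thread; blocks until the handler replies
    int await() {
        std::unique_lock<std::mutex> lk(m);
        cv.wait(lk, [this] { return done; });
        return status;
    }
};

// One simulated round trip: the "looper" thread replies with OK (0)
// while the caller blocks in await().
int postAndAwait() {
    Reply reply;
    std::thread looper([&] { reply.post(0); });
    int err = reply.await();
    looper.join();
    return err;
}
```

This is why the MediaCodec API looks synchronous to the caller even though all real work happens on the looper thread.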

2. Configure

The configure code is fairly long but very simple, so only a fragment is shown here:

    sp<AMessage> msg = new AMessage(kWhatConfigure, this);
    msg->setMessage("format", format);
    msg->setInt32("flags", flags);
    msg->setObject("surface", surface);

    if (crypto != NULL || descrambler != NULL) {
        if (crypto != NULL) {
            msg->setPointer("crypto", crypto.get());
        } else {
            msg->setPointer("descrambler", descrambler.get());
        }
        if (mMetricsHandle != 0) {
            mediametrics_setInt32(mMetricsHandle, kCodecCrypto, 1);
        }
    } else if (mFlags & kFlagIsSecure) {
        ALOGW("Crypto or descrambler should be given for secure codec");
    }
    sp<AMessage> response;
    err = PostAndAwaitResponse(msg, &response);

This method does two things:

1. Parse the parameters in the incoming format and save them in MediaCodec.

2. Repackage the format, surface, crypto and other information and hand it to onMessageReceived for processing:

case kWhatConfigure:
        {
            sp<RefBase> obj;
            CHECK(msg->findObject("surface", &obj));

            sp<AMessage> format;
            CHECK(msg->findMessage("format", &format));
            // setSurface
            if (obj != NULL) {
                if (!format->findInt32(KEY_ALLOW_FRAME_DROP, &mAllowFrameDroppingBySurface)) {
                    // allow frame dropping by surface by default
                    mAllowFrameDroppingBySurface = true;
                }

                format->setObject("native-window", obj);
                status_t err = handleSetSurface(static_cast<Surface *>(obj.get()));
                if (err != OK) {
                    PostReplyWithError(replyID, err);
                    break;
                }
            } else {
                // we are not using surface so this variable is not used, but initialize sensibly anyway
                mAllowFrameDroppingBySurface = false;

                handleSetSurface(NULL);
            }

            uint32_t flags;
            CHECK(msg->findInt32("flags", (int32_t *)&flags));
            if (flags & CONFIGURE_FLAG_USE_BLOCK_MODEL) {
                if (!(mFlags & kFlagIsAsync)) {
                    PostReplyWithError(replyID, INVALID_OPERATION);
                    break;
                }
                mFlags |= kFlagUseBlockModel;
            }
            mReplyID = replyID;
            setState(CONFIGURING);
            //Get crypto
            void *crypto;
            if (!msg->findPointer("crypto", &crypto)) {
                crypto = NULL;
            }
            //Set crypto on the BufferChannel
            mCrypto = static_cast<ICrypto *>(crypto);
            mBufferChannel->setCrypto(mCrypto);
            //Obtain descrambling information
            void *descrambler;
            if (!msg->findPointer("descrambler", &descrambler)) {
                descrambler = NULL;
            }
            //Set the descrambler on the BufferChannel
            mDescrambler = static_cast<IDescrambler *>(descrambler);
            mBufferChannel->setDescrambler(mDescrambler);

            //Judge whether it is an encoder from the flags
            format->setInt32("flags", flags);
            if (flags & CONFIGURE_FLAG_ENCODE) {
                format->setInt32("encoder", true);
                mFlags |= kFlagIsEncoder;
            }

            //Get CSD buffer
            extractCSD(format);

            //Determine whether tunnel mode is required
            int32_t tunneled;
            if (format->findInt32("feature-tunneled-playback", &tunneled) && tunneled != 0) {
                ALOGI("Configuring TUNNELED video playback.");
                mTunneled = true;
            } else {
                mTunneled = false;
            }

            int32_t background = 0;
            if (format->findInt32("android._background-mode", &background) && background) {
                androidSetThreadPriority(gettid(), ANDROID_PRIORITY_BACKGROUND);
            }
            //Call the configure method of codecbase
            mCodec->initiateConfigureComponent(format);
            break;
        }

The notable parts of configure are security and tunneling: whether the playback needs crypto/descrambling support, and whether tunneled playback is required.

3. Start

After MediaCodec is configured, the state is set to CONFIGURED, and playback can then start.

      setState(STARTING);
      mCodec->initiateStart();

The start method is simple: set the state to STARTING and call the CodecBase's start method. It is a fair guess that once the CodecBase has started successfully, a callback sets the state to STARTED.

4. setCallback

setCallback really belongs before start, because the upper layer can only use MediaCodec normally (in asynchronous mode) once the callback is set. The callback forwards events coming up from the lower layers, such as CB_INPUT_AVAILABLE, to the upper layer for processing.

The method is simple:

      sp<AMessage> callback;
      CHECK(msg->findMessage("callback", &callback));
      mCallback = callback;

5. Upper layer getBuffer

Two pairs of methods are involved: getInputBuffers / getOutputBuffers and getInputBuffer / getOutputBuffer.

getInputBuffers / getOutputBuffers fetch the decoder's entire input or output buffer array in one call. The buffers created in the CodecBase are managed by the BufferChannel, so the BufferChannel's getInputBufferArray / getOutputBufferArray methods are called:

status_t MediaCodec::getInputBuffers(Vector<sp<MediaCodecBuffer> > *buffers) const {
    sp<AMessage> msg = new AMessage(kWhatGetBuffers, this);
    msg->setInt32("portIndex", kPortIndexInput);
    msg->setPointer("buffers", buffers);

    sp<AMessage> response;
    return PostAndAwaitResponse(msg, &response);
}

    case kWhatGetBuffers:
    {
        sp<AReplyToken> replyID;
        CHECK(msg->senderAwaitsResponse(&replyID));
        if (!isExecuting() || (mFlags & kFlagIsAsync)) {
            PostReplyWithError(replyID, INVALID_OPERATION);
            break;
        } else if (mFlags & kFlagStickyError) {
            PostReplyWithError(replyID, getStickyError());
            break;
        }

        int32_t portIndex;
        CHECK(msg->findInt32("portIndex", &portIndex));

        Vector<sp<MediaCodecBuffer> > *dstBuffers;
        CHECK(msg->findPointer("buffers", (void **)&dstBuffers));

        dstBuffers->clear();
        if (portIndex != kPortIndexInput || !mHaveInputSurface) {
            if (portIndex == kPortIndexInput) {
                mBufferChannel->getInputBufferArray(dstBuffers);
            } else {
                mBufferChannel->getOutputBufferArray(dstBuffers);
            }
        }

        (new AMessage)->postReply(replyID);
        break;
    }

getInputBuffer / getOutputBuffer look up a single buffer in MediaCodec's buffer queues by index; the elements in those queues are added by the CodecBase through its callback methods:

status_t MediaCodec::getOutputBuffer(size_t index, sp<MediaCodecBuffer> *buffer) {
    sp<AMessage> format;
    return getBufferAndFormat(kPortIndexOutput, index, buffer, &format);
}

status_t MediaCodec::getBufferAndFormat(
        size_t portIndex, size_t index,
        sp<MediaCodecBuffer> *buffer, sp<AMessage> *format) {

    if (buffer == NULL) {
        ALOGE("getBufferAndFormat - null MediaCodecBuffer");
        return INVALID_OPERATION;
    }

    if (format == NULL) {
        ALOGE("getBufferAndFormat - null AMessage");
        return INVALID_OPERATION;
    }

    buffer->clear();
    format->clear();

    if (!isExecuting()) {
        ALOGE("getBufferAndFormat - not executing");
        return INVALID_OPERATION;
    }

    Mutex::Autolock al(mBufferLock);

    std::vector<BufferInfo> &buffers = mPortBuffers[portIndex];
    if (index >= buffers.size()) {
        return INVALID_OPERATION;
    }

    const BufferInfo &info = buffers[index];
    if (!info.mOwnedByClient) {
        return INVALID_OPERATION;
    }

    *buffer = info.mData;
    *format = info.mData->format();

    return OK;
}

6. Processing of buffers

Next, let's look at how the input and output buffers are processed.

kPortIndexInput

The BufferChannel calls BufferCallback's onInputBufferAvailable method to add an input buffer to the queue:

void BufferCallback::onInputBufferAvailable(
        size_t index, const sp<MediaCodecBuffer> &buffer) {
    sp<AMessage> notify(mNotify->dup());
    notify->setInt32("what", kWhatFillThisBuffer);
    notify->setSize("index", index);
    notify->setObject("buffer", buffer);
    notify->post();
}

The handling in onMessageReceived is not too long; it does five things:

case kWhatFillThisBuffer:
    {
        //Add the buffer to mPortBuffers and its index to mAvailPortBuffers
        /* size_t index = */updateBuffers(kPortIndexInput, msg);
        
        //If the state is FLUSHING, STOPPING or RELEASING, clear the indices in mAvailPortBuffers and discard the buffer contents
        if (mState == FLUSHING
                || mState == STOPPING
                || mState == RELEASING) {
            returnBuffersToCodecOnPort(kPortIndexInput);
            break;
        }
        //If CSD buffers are pending, write one to the decoder first and then remove it; CSD may be set again after the next seek/flush
        if (!mCSD.empty()) {
            ssize_t index = dequeuePortBuffer(kPortIndexInput);
            CHECK_GE(index, 0);

            status_t err = queueCSDInputBuffer(index);

            if (err != OK) {
                ALOGE("queueCSDInputBuffer failed w/ error %d",
                      err);

                setStickyError(err);
                postActivityNotificationIfPossible();

                cancelPendingDequeueOperations();
            }
            break;
        }
        //Drain the buffers in mLeftover first (not used for now)
        if (!mLeftover.empty()) {
            ssize_t index = dequeuePortBuffer(kPortIndexInput);
            CHECK_GE(index, 0);

            status_t err = handleLeftover(index);
            if (err != OK) {
                setStickyError(err);
                postActivityNotificationIfPossible();
                cancelPendingDequeueOperations();
            }
            break;
        }
        //In asynchronous mode (callback set), onInputBufferAvailable notifies the upper layer; otherwise wait for a synchronous dequeue call
        if (mFlags & kFlagIsAsync) {
            if (!mHaveInputSurface) {
                if (mState == FLUSHED) {
                    mHavePendingInputBuffers = true;
                } else {
                    onInputBufferAvailable();
                }
            }
        } else if (mFlags & kFlagDequeueInputPending) {
            CHECK(handleDequeueInputBuffer(mDequeueInputReplyID));

            ++mDequeueInputTimeoutGeneration;
            mFlags &= ~kFlagDequeueInputPending;
            mDequeueInputReplyID = 0;
        } else {
            postActivityNotificationIfPossible();
        }
        break;
    }

1. Call updateBuffers to save the delivered input buffer into mPortBuffers[kPortIndexInput] and its index into mAvailPortBuffers

2. Check whether the current state requires discarding all buffers

3. If there are CSD buffers, write them to the decoder first

4. Drain the buffers in mLeftover first (not used for now)

5. If a callback is set, this is an asynchronous call, so onInputBufferAvailable notifies the upper layer; otherwise wait for a synchronous call

void MediaCodec::onInputBufferAvailable() {
    int32_t index;
    //Loop until no index is left in mAvailPortBuffers
    while ((index = dequeuePortBuffer(kPortIndexInput)) >= 0) {
        sp<AMessage> msg = mCallback->dup();
        msg->setInt32("callbackID", CB_INPUT_AVAILABLE);
        msg->setInt32("index", index);
        //Notify the upper layer for processing
        msg->post();
    }
}

ssize_t MediaCodec::dequeuePortBuffer(int32_t portIndex) {
    CHECK(portIndex == kPortIndexInput || portIndex == kPortIndexOutput);

    //Get the first available index from mAvailPortBuffers, then the buffer at that position in mPortBuffers
    BufferInfo *info = peekNextPortBuffer(portIndex);
    if (!info) {
        return -EAGAIN;
    }

    List<size_t> *availBuffers = &mAvailPortBuffers[portIndex];
    size_t index = *availBuffers->begin();
    CHECK_EQ(info, &mPortBuffers[portIndex][index]);
    //Erase first index
    availBuffers->erase(availBuffers->begin());
    //mOwnedByClient needs further study on the CodecBase side
    CHECK(!info->mOwnedByClient);
    {
        Mutex::Autolock al(mBufferLock);
        info->mOwnedByClient = true;

        // set image-data
        if (info->mData->format() != NULL) {
            sp<ABuffer> imageData;
            if (info->mData->format()->findBuffer("image-data", &imageData)) {
                info->mData->meta()->setBuffer("image-data", imageData);
            }
            int32_t left, top, right, bottom;
            if (info->mData->format()->findRect("crop", &left, &top, &right, &bottom)) {
                info->mData->meta()->setRect("crop-rect", left, top, right, bottom);
            }
        }
    }
    //Return index
    return index;
}

onInputBufferAvailable reports all queued input buffer indexes to the upper layer at once. With an index in hand, the upper layer can call getInputBuffer to obtain the buffer, fill it with data, and finally call queueInputBuffer to submit it to the decoder. Next, let's see how the write happens.

status_t MediaCodec::queueInputBuffer(
        size_t index,
        size_t offset,
        size_t size,
        int64_t presentationTimeUs,
        uint32_t flags,
        AString *errorDetailMsg) {
    if (errorDetailMsg != NULL) {
        errorDetailMsg->clear();
    }

    sp<AMessage> msg = new AMessage(kWhatQueueInputBuffer, this);
    msg->setSize("index", index);
    msg->setSize("offset", offset);
    msg->setSize("size", size);
    msg->setInt64("timeUs", presentationTimeUs);
    msg->setInt32("flags", flags);
    msg->setPointer("errorDetailMsg", errorDetailMsg);

    sp<AMessage> response;
    return PostAndAwaitResponse(msg, &response);
}

queueInputBuffer packages the index, PTS, flags, size and other information and sends them to onMessageReceived for the actual processing:

case kWhatQueueInputBuffer:
        {
            sp<AReplyToken> replyID;
            CHECK(msg->senderAwaitsResponse(&replyID));

            if (!isExecuting()) {
                PostReplyWithError(replyID, INVALID_OPERATION);
                break;
            } else if (mFlags & kFlagStickyError) {
                PostReplyWithError(replyID, getStickyError());
                break;
            }

            status_t err = UNKNOWN_ERROR;
            //Check whether mLeftover is empty; if not, append this message to mLeftover first
            if (!mLeftover.empty()) {
                mLeftover.push_back(msg);
                size_t index;
                msg->findSize("index", &index);
                err = handleLeftover(index);
            } else {
                //Otherwise call onQueueInputBuffer directly
                err = onQueueInputBuffer(msg);
            }

            PostReplyWithError(replyID, err);
            break;
        }

There are two paths: either the message joins the mLeftover queue and is processed by handleLeftover, or onQueueInputBuffer is called directly. Since we haven't touched mLeftover yet, let's first see how onQueueInputBuffer handles it.

status_t MediaCodec::onQueueInputBuffer(const sp<AMessage> &msg) {
    size_t index;
    size_t offset;
    size_t size;
    int64_t timeUs;
    uint32_t flags;
    CHECK(msg->findSize("index", &index));
    CHECK(msg->findInt64("timeUs", &timeUs));
    CHECK(msg->findInt32("flags", (int32_t *)&flags));
    std::shared_ptr<C2Buffer> c2Buffer;
    sp<hardware::HidlMemory> memory;
    sp<RefBase> obj;
    //c2buffer / memory are used by queueCSDInputBuffer / queueEncryptedBuffer
    if (msg->findObject("c2buffer", &obj)) {
        CHECK(obj);
        c2Buffer = static_cast<WrapperObject<std::shared_ptr<C2Buffer>> *>(obj.get())->value;
    } else if (msg->findObject("memory", &obj)) {
        CHECK(obj);
        memory = static_cast<WrapperObject<sp<hardware::HidlMemory>> *>(obj.get())->value;
        CHECK(msg->findSize("offset", &offset));
    } else {
        CHECK(msg->findSize("offset", &offset));
    }
    const CryptoPlugin::SubSample *subSamples;
    size_t numSubSamples;
    const uint8_t *key;
    const uint8_t *iv;
    CryptoPlugin::Mode mode = CryptoPlugin::kMode_Unencrypted;

    CryptoPlugin::SubSample ss;
    CryptoPlugin::Pattern pattern;

    if (msg->findSize("size", &size)) {
        if (hasCryptoOrDescrambler()) {
            ss.mNumBytesOfClearData = size;
            ss.mNumBytesOfEncryptedData = 0;

            subSamples = &ss;
            numSubSamples = 1;
            key = NULL;
            iv = NULL;
            pattern.mEncryptBlocks = 0;
            pattern.mSkipBlocks = 0;
        }
    } else if (!c2Buffer) {
        if (!hasCryptoOrDescrambler()) {
            return -EINVAL;
        }

        CHECK(msg->findPointer("subSamples", (void **)&subSamples));
        CHECK(msg->findSize("numSubSamples", &numSubSamples));
        CHECK(msg->findPointer("key", (void **)&key));
        CHECK(msg->findPointer("iv", (void **)&iv));
        CHECK(msg->findInt32("encryptBlocks", (int32_t *)&pattern.mEncryptBlocks));
        CHECK(msg->findInt32("skipBlocks", (int32_t *)&pattern.mSkipBlocks));

        int32_t tmp;
        CHECK(msg->findInt32("mode", &tmp));

        mode = (CryptoPlugin::Mode)tmp;

        size = 0;
        for (size_t i = 0; i < numSubSamples; ++i) {
            size += subSamples[i].mNumBytesOfClearData;
            size += subSamples[i].mNumBytesOfEncryptedData;
        }
    }

    if (index >= mPortBuffers[kPortIndexInput].size()) {
        return -ERANGE;
    }
    //Get the buffer at the corresponding index in mPortBuffers[kPortIndexInput]
    BufferInfo *info = &mPortBuffers[kPortIndexInput][index];
    sp<MediaCodecBuffer> buffer = info->mData;

    if (c2Buffer || memory) {
        sp<AMessage> tunings;
        CHECK(msg->findMessage("tunings", &tunings));
        onSetParameters(tunings);

        status_t err = OK;
        if (c2Buffer) {
            err = mBufferChannel->attachBuffer(c2Buffer, buffer);
        } else if (memory) {
            err = mBufferChannel->attachEncryptedBuffer(
                    memory, (mFlags & kFlagIsSecure), key, iv, mode, pattern,
                    offset, subSamples, numSubSamples, buffer);
        } else {
            err = UNKNOWN_ERROR;
        }

        if (err == OK && !buffer->asC2Buffer()
                && c2Buffer && c2Buffer->data().type() == C2BufferData::LINEAR) {
            C2ConstLinearBlock block{c2Buffer->data().linearBlocks().front()};
            if (block.size() > buffer->size()) {
                C2ConstLinearBlock leftover = block.subBlock(
                        block.offset() + buffer->size(), block.size() - buffer->size());
                sp<WrapperObject<std::shared_ptr<C2Buffer>>> obj{
                    new WrapperObject<std::shared_ptr<C2Buffer>>{
                        C2Buffer::CreateLinearBuffer(leftover)}};
                msg->setObject("c2buffer", obj);
                mLeftover.push_front(msg);
                // Not sending EOS if we have leftovers
                flags &= ~BUFFER_FLAG_EOS;
            }
        }

        offset = buffer->offset();
        size = buffer->size();
        if (err != OK) {
            return err;
        }
    }

    if (buffer == nullptr || !info->mOwnedByClient) {
        return -EACCES;
    }

    if (offset + size > buffer->capacity()) {
        return -EINVAL;
    }
    //Pack the incoming offset and PTS into the buffer
    buffer->setRange(offset, size);
    buffer->meta()->setInt64("timeUs", timeUs);
    if (flags & BUFFER_FLAG_EOS) {
       //If it is EOS, set the flag in the buffer
        buffer->meta()->setInt32("eos", true);
    }
    //If this is a CSD buffer being written, set the corresponding flag to notify the codec
    if (flags & BUFFER_FLAG_CODECCONFIG) {
        buffer->meta()->setInt32("csd", true);
    }
    //I'm not sure what the flag here is for
    if (mTunneled) {
        TunnelPeekState previousState = mTunnelPeekState;
        switch(mTunnelPeekState){
            case TunnelPeekState::kEnabledNoBuffer:
                buffer->meta()->setInt32("tunnel-first-frame", 1);
                mTunnelPeekState = TunnelPeekState::kEnabledQueued;
                break;
            case TunnelPeekState::kDisabledNoBuffer:
                buffer->meta()->setInt32("tunnel-first-frame", 1);
                mTunnelPeekState = TunnelPeekState::kDisabledQueued;
                break;
            default:
                break;
        }
    }

    status_t err = OK;
    if (hasCryptoOrDescrambler() && !c2Buffer && !memory) {
        AString *errorDetailMsg;
        CHECK(msg->findPointer("errorDetailMsg", (void **)&errorDetailMsg));
        // Notify mCrypto of video resolution changes
        if (mTunneled && mCrypto != NULL) {
            int32_t width, height;
            if (mInputFormat->findInt32("width", &width) &&
                mInputFormat->findInt32("height", &height) && width > 0 && height > 0) {
                if (width != mTunneledInputWidth || height != mTunneledInputHeight) {
                    mTunneledInputWidth = width;
                    mTunneledInputHeight = height;
                    mCrypto->notifyResolution(width, height);
                }
            }
        }
        //Write encryption buffer
        err = mBufferChannel->queueSecureInputBuffer(
                buffer,
                (mFlags & kFlagIsSecure),
                key,
                iv,
                mode,
                pattern,
                subSamples,
                numSubSamples,
                errorDetailMsg);
        if (err != OK) {
            mediametrics_setInt32(mMetricsHandle, kCodecQueueSecureInputBufferError, err);
            ALOGW("Log queueSecureInputBuffer error: %d", err);
        }
    } else {
        //Write to normal buffer
        err = mBufferChannel->queueInputBuffer(buffer);
        if (err != OK) {
            mediametrics_setInt32(mMetricsHandle, kCodecQueueInputBufferError, err);
            ALOGW("Log queueInputBuffer error: %d", err);
        }
    }

    if (err == OK) {
        // synchronization boundary for getBufferAndFormat
        Mutex::Autolock al(mBufferLock);
        //Change the owner of bufferinfo
        info->mOwnedByClient = false;
        info->mData.clear();
        //Record the PTS written to the buffer and the corresponding write time
        statsBufferSent(timeUs, buffer);
    }

    return err;
}

The onQueueInputBuffer method is long mainly because many different callers reach it, such as queueInputBuffer, queueCSDInputBuffer and queueSecureInputBuffer, so many checks are made; in the end it calls the BufferChannel's queueInputBuffer or queueSecureInputBuffer.

With that, one complete input buffer pass is finished.
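The mLeftover handling seen in onQueueInputBuffer can be sketched on its own: when a linear block is larger than the codec's input buffer, only what fits is queued and the remainder is pushed to the front of a leftover queue, so it is consumed before any new input. `InputPort` and its members here are illustrative stand-ins, not framework code:

```cpp
#include <deque>
#include <string>

struct InputPort {
    size_t capacity;                  // size of one codec input buffer
    std::deque<std::string> leftover; // stands in for mLeftover
    std::string consumed;             // everything handed to the codec so far

    // fill one input buffer; split off the remainder if data does not fit
    void submit(std::string data) {
        if (data.size() > capacity) {
            leftover.push_front(data.substr(capacity));  // remainder goes first in line
            data.resize(capacity);
        }
        consumed += data;
    }

    // called when the codec hands back another empty input buffer
    bool drainLeftover() {
        if (leftover.empty()) return false;
        std::string next = leftover.front();
        leftover.pop_front();
        submit(next);  // may split again and re-queue a smaller remainder
        return true;
    }
};
```

Pushing the remainder to the front (rather than the back) is what keeps the stream in order: leftover bytes always reach the codec before any newly queued input, which is also why the real code clears the EOS flag while leftovers remain.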

 

kPortIndexOutput

The BufferChannel calls the callback method onOutputBufferAvailable:

void BufferCallback::onOutputBufferAvailable(
        size_t index, const sp<MediaCodecBuffer> &buffer) {
    sp<AMessage> notify(mNotify->dup());
    notify->setInt32("what", kWhatDrainThisBuffer);
    notify->setSize("index", index);
    notify->setObject("buffer", buffer);
    notify->post();
}

Next, it is processed in onMessageReceived:

case kWhatDrainThisBuffer:
    {
        //Add the output buffer to the queue
        /* size_t index = */updateBuffers(kPortIndexOutput, msg);

        if (mState == FLUSHING
                || mState == STOPPING
                || mState == RELEASING) {
            returnBuffersToCodecOnPort(kPortIndexOutput);
            break;
        }

        if (mFlags & kFlagIsAsync) {
            sp<RefBase> obj;
            CHECK(msg->findObject("buffer", &obj));
            sp<MediaCodecBuffer> buffer = static_cast<MediaCodecBuffer *>(obj.get());

            // In asynchronous mode, output format change is processed immediately.
            //If the output format changed, update it
            handleOutputFormatChangeIfNeeded(buffer);
            //Asynchronously notify the upper layer to process the output buffer
            onOutputBufferAvailable();
        } else if (mFlags & kFlagDequeueOutputPending) {
            CHECK(handleDequeueOutputBuffer(mDequeueOutputReplyID));

            ++mDequeueOutputTimeoutGeneration;
            mFlags &= ~kFlagDequeueOutputPending;
            mDequeueOutputReplyID = 0;
        } else {
            postActivityNotificationIfPossible();
        }

        break;
    }

The now-familiar process:

1. Add the output buffer and its index to the queue

2. Update the output format if it has changed

3. Call onOutputBufferAvailable to asynchronously notify the upper layer to process the output buffer

void MediaCodec::onOutputBufferAvailable() {
    int32_t index;
    while ((index = dequeuePortBuffer(kPortIndexOutput)) >= 0) {
        const sp<MediaCodecBuffer> &buffer =
            mPortBuffers[kPortIndexOutput][index].mData;
        sp<AMessage> msg = mCallback->dup();
        msg->setInt32("callbackID", CB_OUTPUT_AVAILABLE);
        msg->setInt32("index", index);
        msg->setSize("offset", buffer->offset());
        msg->setSize("size", buffer->size());

        int64_t timeUs;
        CHECK(buffer->meta()->findInt64("timeUs", &timeUs));

        msg->setInt64("timeUs", timeUs);

        int32_t flags;
        CHECK(buffer->meta()->findInt32("flags", &flags));

        msg->setInt32("flags", flags);

        //Record the time when the outputbuffer is sent to the upper layer and the corresponding PTS
        statsBufferReceived(timeUs, buffer);

        msg->post();
    }
}

After the upper layer gets the output buffer, AV sync determines whether to render or discard it, calling renderOutputBufferAndRelease or releaseOutputBuffer accordingly.

status_t MediaCodec::renderOutputBufferAndRelease(size_t index, int64_t timestampNs) {
    sp<AMessage> msg = new AMessage(kWhatReleaseOutputBuffer, this);
    msg->setSize("index", index);
    msg->setInt32("render", true);
    msg->setInt64("timestampNs", timestampNs);

    sp<AMessage> response;
    return PostAndAwaitResponse(msg, &response);
}
case kWhatReleaseOutputBuffer:
{
    sp<AReplyToken> replyID;
    CHECK(msg->senderAwaitsResponse(&replyID));

    if (!isExecuting()) {
        PostReplyWithError(replyID, INVALID_OPERATION);
        break;
    } else if (mFlags & kFlagStickyError) {
        PostReplyWithError(replyID, getStickyError());
        break;
    }

    status_t err = onReleaseOutputBuffer(msg);

    PostReplyWithError(replyID, err);
    break;
}
status_t MediaCodec::onReleaseOutputBuffer(const sp<AMessage> &msg) {
    size_t index;
    CHECK(msg->findSize("index", &index));

    int32_t render;
    if (!msg->findInt32("render", &render)) {
        render = 0;
    }

    if (!isExecuting()) {
        return -EINVAL;
    }

    if (index >= mPortBuffers[kPortIndexOutput].size()) {
        return -ERANGE;
    }

    BufferInfo *info = &mPortBuffers[kPortIndexOutput][index];

    if (info->mData == nullptr || !info->mOwnedByClient) {
        return -EACCES;
    }

    // synchronization boundary for getBufferAndFormat
    sp<MediaCodecBuffer> buffer;
    {
        Mutex::Autolock al(mBufferLock);
        info->mOwnedByClient = false;
        buffer = info->mData;
        info->mData.clear();
    }

    if (render && buffer->size() != 0) {
        int64_t mediaTimeUs = -1;
        buffer->meta()->findInt64("timeUs", &mediaTimeUs);

        int64_t renderTimeNs = 0;
        if (!msg->findInt64("timestampNs", &renderTimeNs)) {
            // use media timestamp if client did not request a specific render timestamp
            ALOGV("using buffer PTS of %lld", (long long)mediaTimeUs);
            renderTimeNs = mediaTimeUs * 1000;
        }

        if (mSoftRenderer != NULL) {
            std::list<FrameRenderTracker::Info> doneFrames = mSoftRenderer->render(
                    buffer->data(), buffer->size(), mediaTimeUs, renderTimeNs,
                    mPortBuffers[kPortIndexOutput].size(), buffer->format());

            // if we are running, notify rendered frames
            if (!doneFrames.empty() && mState == STARTED && mOnFrameRenderedNotification != NULL) {
                sp<AMessage> notify = mOnFrameRenderedNotification->dup();
                sp<AMessage> data = new AMessage;
                if (CreateFramesRenderedMessage(doneFrames, data)) {
                    notify->setMessage("data", data);
                    notify->post();
                }
            }
        }
        status_t err = mBufferChannel->renderOutputBuffer(buffer, renderTimeNs);

        if (err == NO_INIT) {
            ALOGE("rendering to non-initilized(obsolete) surface");
            return err;
        }
        if (err != OK) {
            ALOGI("rendring output error %d", err);
        }
    } else {
        mBufferChannel->discardBuffer(buffer);
    }

    return OK;
}

As you can see, rendering ultimately goes through renderOutputBuffer of the BufferChannel.

Here, the processing of an output buffer is completed.

7. Flush

case kWhatFlush:
{
    if (!isExecuting()) {
        PostReplyWithError(msg, INVALID_OPERATION);
        break;
    } else if (mFlags & kFlagStickyError) {
        PostReplyWithError(msg, getStickyError());
        break;
    }

    if (mReplyID) {
        mDeferredMessages.push_back(msg);
        break;
    }
    sp<AReplyToken> replyID;
    CHECK(msg->senderAwaitsResponse(&replyID));

    mReplyID = replyID;
    // TODO: skip flushing if already FLUSHED
    setState(FLUSHING);
    //Call signalflush of codecbase
    mCodec->signalFlush();
    //Discard all buffers
    returnBuffersToCodec();
    TunnelPeekState previousState = mTunnelPeekState;
    mTunnelPeekState = TunnelPeekState::kEnabledNoBuffer;
    ALOGV("TunnelPeekState: %s -> %s",
          asString(previousState),
          asString(TunnelPeekState::kEnabledNoBuffer));
    break;
}

The flush method first sets the state to FLUSHING, then calls the signalFlush method of CodecBase (a callback should set the state to FLUSHED once the flush completes), and returns all buffers. Returning the buffers involves two parts:

One is calling the discardBuffer method of BufferChannel to return client-owned buffers to the decoder; the other is clearing the available buffer indexes held by MediaCodec.

 

MediaCodec has no pause or resume methods; pause and resume must be implemented by the player. With that, the basic working principle is reasonably clear, and the remaining methods are left for another time.

 
