Automating QoS and QoE Evaluation of HTTP Adaptive Streaming Systems

  Abstract: Streaming audio and video content currently accounts for the majority of Internet traffic and is typically deployed over the top of the existing infrastructure. We face the challenge of a plethora of media players and adaptation algorithms that show different behavior, yet lack a common framework for both the objective and subjective evaluation of such systems. This paper aims to close this gap by proposing such a framework, describing its architecture, providing an example evaluation, and discussing open issues.
  Keywords: HTTP adaptive streaming; DASH; QoE; performance evaluation
  DOI: 10.12142/ZTECOM.201901004
  http://kns.cnki.net/kcms/detail/34.1294.TN.20190319.1713.004.html, published online March 19, 2019
  Manuscript received: 2018-08-16
  1 Introduction
  Universal access to and provisioning of multimedia content is now a reality. It is easy to generate, distribute, share, and consume any media content, anywhere, anytime, on any device. Interestingly, most of these services adopt a streaming paradigm, are typically deployed over the open, unmanaged Internet, and account for the majority of today’s Internet traffic. Current estimates expect global video traffic to account for about 82 percent of all Internet traffic by 2021 [1]. Additionally, Nielsen’s law of Internet bandwidth states that the users’ bandwidth grows by 50 percent per year, which roughly fits data from 1983 to 2018 [2]. Thus, the users’ bandwidth will reach approximately 1 Gbit/s by 2021.
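  As a rough, back-of-the-envelope illustration (ours, not from the cited sources; the 2018 baseline of roughly 300 Mbit/s for a high-end user is an assumption), compounding at 50 percent per year yields:

```latex
% Illustrative compounding under Nielsen's law; the ~300 Mbit/s 2018 baseline
% is an assumed figure, not taken from this paper or from [2].
B(t) = B_0 \cdot 1.5^{\,t - t_0}, \qquad
B(2021) \approx 300~\text{Mbit/s} \cdot 1.5^{3} \approx 1~\text{Gbit/s}
```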
  Similarly, just as programs and their data expand to fill the available memory in a computer system, network applications will grow to utilize the bandwidth provided. The majority of the available bandwidth is consumed by video applications, and the amount of data is further increasing due to already established and emerging applications, e.g., ultra high-definition content and virtual, augmented, and mixed realities. A major technical breakthrough and enabler was certainly HTTP adaptive streaming (HAS), which provides multimedia assets in multiple versions (e.g., different resolutions and bitrates), referred to as representations, and chops each representation into short-duration chunks (typically 2-10 s). The most prominent representation formats are dynamic adaptive streaming over HTTP (MPEG-DASH or just DASH) [3] and HTTP live streaming (HLS) [4], both of which are compatible with MPEG’s Common Media Application Format (CMAF) [5]. Independent of the representation format, a client first receives a manifest describing the content available on a server and then requests chunks based on its context (e.g., observed available bandwidth, buffer status, and decoding capabilities). Thus, it is able to adapt the media presentation in a dynamic, adaptive way. In DASH, the chunks are referred to as segments and the manifest is called a media presentation description (MPD). In this paper, we use the terminology of DASH; however, this work can also be applied to any other format sharing the same principles.
  In the past, we have witnessed a plethora of research papers in this area (e.g., [6] and [7]); however, we still lack a comprehensive evaluation framework for HAS systems in terms of both objective metrics, i.e., quality of service (QoS), and subjective metrics, i.e., quality of experience (QoE). Initial evaluations were based on simple traffic shaping and network emulation tools [8] or on means to rapidly prototype adaptation algorithms [9]. Recently, we have seen various evaluation frameworks in this domain focusing on adaptation algorithms proposed both in academia and industry [8]-[10]. However, the main focus has been on QoS rather than QoE. The latter typically requires user studies, which are mainly conducted within controlled laboratory environments. Yet, nowadays crowdsourcing is also considered a reliable tool [11] and various platforms have been proposed for this purpose [12].
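  To make the client-driven adaptation principle concrete, the following minimal sketch (our illustration, not an algorithm from the literature or from the framework described below) picks the highest representation whose bitrate fits the throughput measured for the previous chunk; all names are hypothetical.

```python
# Minimal throughput-based adaptation sketch (illustrative only): choose the
# highest representation whose bitrate fits the last measured throughput.
import time
import urllib.request

def measure_throughput(url: str) -> float:
    """Download one chunk and return the measured throughput in bit/s."""
    start = time.monotonic()
    data = urllib.request.urlopen(url).read()
    return len(data) * 8 / (time.monotonic() - start)

def pick_representation(ladder_bps: list[int], throughput_bps: float,
                        safety: float = 0.8) -> int:
    """Return the highest bitrate not exceeding a safety-scaled throughput."""
    fitting = [b for b in sorted(ladder_bps) if b <= throughput_bps * safety]
    return fitting[-1] if fitting else min(ladder_bps)

# Example: previous chunk measured at ~2.1 Mbit/s.
ladder = [400_000, 800_000, 1_200_000, 2_400_000, 4_500_000]
print(pick_representation(ladder, 2_100_000))  # -> 1200000
```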
  In this paper, we propose a flexible and comprehensive framework to conduct objective and subjective evaluations of HAS systems in a fully automated and scalable way. It provides the following features:
  ·End-to-end HAS evaluation of players deployed in industry and algorithms proposed in academia under various conditions and use cases (e.g., codecs/representations, network configurations, end user devices, and player competition).
  ·Collection and analysis of objective streaming performance metrics (e.g., startup time, stalls, quality switches, and average bitrate).
  ·Subjective quality assessment utilizing crowdsourcing for the QoE evaluation of HAS systems and for QoE model testing/verification (i.e., testing or verifying a proposed QoE model using subjective user studies).
  The remainder of this paper is organized as follows. Section 2 provides a detailed description of the architecture of the proposed framework. Section 3 presents example evaluation results to demonstrate the capabilities of the framework. A discussion and open research issues are provided in Section 4, and Section 5 concludes the paper.
  2 System Architecture
  2.1 Overview
  Our framework (Fig. 1) supports both objective and subjective evaluation of HAS systems and is composed of the Adaptive Video Streaming Evaluation (AdViSE) framework [13] and the Web-based Subjective Evaluation Platform (WESP) [14] plus extensions. AdViSE is an adaptive video streaming evaluation framework for the automated testing of web-based media players and adaptation algorithms. It has been designed in an extensible way to support (1) different adaptive media content formats (e.g., DASH, HLS, and CMAF), (2) commercially deployed media players as well as implementations of adaptation algorithms proposed in the research literature, and (3) various networking parameters (e.g., bandwidth and delay) through network emulation. The output of AdViSE comprises a set of QoS and (objective) QoE metrics gathered and calculated during the adaptive streaming evaluation as well as a log of segment requests, which is used to generate the impaired media sequences used for the subjective evaluation.
  The subjective evaluation is based on WESP [14], a web-based subjective evaluation platform that uses existing crowdsourcing platforms for subject recruitment and implements best practices according to [15]. WESP takes the impaired media sequences as input and allows for a flexible configuration of various QoE evaluation parameters, such as (1) typical questionnaire assets (e.g., drop-down menus, radio buttons, and free text fields), (2) the subjective quality assessment methodology based on ITU recommendations (e.g., absolute category rating), and (3) different crowdsourcing platforms (e.g., Microworkers and Mechanical Turk). The output of WESP comprises the subjective results, including mean opinion scores (MOS) and any other data gathered during the subjective quality assessment, which are stored in a MySQL database. Together with the output of AdViSE, it is used to generate fully automated reports and data exports, which are eventually used for further analysis.
  Fig. 2 shows screenshots of the AdViSE and WESP configuration interfaces to demonstrate the ease of setting up HAS evaluations.
  In the following, we provide a detailed description of AdViSE and WESP, focusing on how they connect with each other to enable a fully automated objective and subjective evaluation of HAS systems. Further details about the individual building blocks can be found in [10], [11], [13], and [14].
  2.2 AdViSE: Adaptive Video Streaming Evaluation
  AdViSE includes the following components (Fig. 3):
  ·Web server with standard HTTP hosting the media content and a MySQL database
  ·Network emulation server with a customized Mininet environment for, e.g., bandwidth shaping
  ·Selenium servers for running adaptive media players/algorithms on various platforms (see the sketch after this list). Note that there might be multiple physical servers, each of which hosts a limited set of players/algorithms.
  ·Web management interface for conducting the experiments and running the adaptive media players.
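  As an illustration of how a Selenium server can drive a web-based player, the following sketch (our assumption of such a setup, not AdViSE's actual code; the node URL and test page are hypothetical) loads a player page on a remote Selenium node and polls basic HTML5 video statistics:

```python
# Illustrative sketch: drive a web player via a remote Selenium node and poll
# HTML5 <video> statistics (node URL and player page are hypothetical).
from selenium import webdriver

options = webdriver.ChromeOptions()
driver = webdriver.Remote(
    command_executor="http://selenium-node:4444/wd/hub",  # assumed node
    options=options,
)
try:
    driver.get("http://webserver/player.html?mpd=bbb.mpd")  # assumed page
    stats = driver.execute_script(
        "var v = document.querySelector('video');"
        "return {time: v.currentTime,"
        "        buffered: v.buffered.length ? v.buffered.end(0) : 0};"
    )
    print(stats)  # e.g., {'time': 12.3, 'buffered': 20.0}
finally:
    driver.quit()
```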
  AdViSE defines a flexible system that allows adding new adaptive media players/algorithms relatively quickly. The web management interface serves two purposes: (1) configuring and conducting the experiments and (2) embedding the actual player/algorithm to provide real-time information about the currently running experiment. Thus, the framework proposed in this paper provides the means for a comprehensive end-to-end evaluation of adaptive streaming services over HTTP, including the possibility of subjective quality testing. The interface allows defining the following items and parameters:
  ·Configuration of network emulation profiles, including the bandwidth trajectory, packet loss, and packet delay (a replay sketch follows this list)
  ·Specification of the number of runs of an experiment
  ·Selection of one or more adaptive HTML5 players (or adaptation algorithms) and of the adaptive streaming format used (e.g., DASH, HLS, and CMAF).
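  A bandwidth trajectory of this kind can be expressed as a list of (bandwidth, duration) steps and replayed with Linux traffic control; the sketch below is our illustration of the idea (not AdViSE's implementation; Mininet applies similar tc-based shaping underneath), and the interface name is an assumption:

```python
# Illustrative replay of a bandwidth trajectory with Linux tc (netem + tbf),
# mimicking what a Mininet-based emulation server does; this is not AdViSE's
# actual code, and the interface name "eth0" is an assumption.
import subprocess
import time

TRAJECTORY = [(750, 65), (350, 90), (2500, 120)]  # (kbit/s, seconds)
IFACE = "eth0"
DELAY_MS = 70

def tc(*args: str) -> None:
    subprocess.run(["tc", *args], check=True)

# Fixed delay at the root (netem), shaped rate as a child qdisc (tbf).
tc("qdisc", "add", "dev", IFACE, "root", "handle", "1:",
   "netem", "delay", f"{DELAY_MS}ms")
tc("qdisc", "add", "dev", IFACE, "parent", "1:1", "handle", "10:",
   "tbf", "rate", "750kbit", "burst", "32kbit", "latency", "400ms")
for rate_kbit, duration_s in TRAJECTORY:
    tc("qdisc", "change", "dev", IFACE, "parent", "1:1", "handle", "10:",
       "tbf", "rate", f"{rate_kbit}kbit", "burst", "32kbit", "latency", "400ms")
    time.sleep(duration_s)
tc("qdisc", "del", "dev", IFACE, "root")
```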
  The result page provides a list of conducted experiments, and the analytics section contains various metrics of the conducted experiments. It is possible to generate graphs of the results using Highcharts and to export the raw values for further offline analysis. The following quality parameters and metrics are currently available: (1) startup time; (2) stalls (or buffer underruns); (3) number of quality switches; (4) download bitrate; (5) buffer length; (6) average bitrate; (7) instability and inefficiency; and (8) simple QoE models specifically designed for HAS. Further metrics can easily be added, based on what the application programming interfaces (APIs) of the players actually offer, as new metrics or QoE models become available.
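  Derived metrics of this kind can be computed directly from the per-segment download log. As an example, the following sketch (ours; the log format is hypothetical) computes the number of quality switches and the average bitrate from a list of per-segment bitrates:

```python
# Illustrative derived metrics from a per-segment download log
# (the log format is hypothetical, not AdViSE's schema).
def quality_switches(bitrates: list[int]) -> int:
    """Count how often the selected representation changes between segments."""
    return sum(1 for a, b in zip(bitrates, bitrates[1:]) if a != b)

def average_bitrate(bitrates: list[int]) -> float:
    """Mean media bitrate over all downloaded segments."""
    return sum(bitrates) / len(bitrates)

log = [800, 1200, 1200, 2400, 1800, 1800]  # kbit/s per downloaded segment
print(quality_switches(log))           # -> 3
print(round(average_bitrate(log), 1))  # -> 1533.3
```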
  Finally, AdViSE provides the log of segment requests, which is used, together with metrics such as startup time and stalls, to generate a media sequence as received by the player and, consequently, as perceived by the user. The request log is used to concatenate the segments according to the request schedule of the player, thus reflecting the media bitrate and quality switches. Other impairments such as startup delay or stalls are automatically inserted based on the corresponding metrics gathered during the evaluation and by using predefined templates (e.g., stalls displayed as a spinning wheel). This impaired media sequence is used in the subsequent step for the subjective QoE evaluation using WESP, which may also include the unimpaired media presentation, depending on the employed evaluation method.
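  Conceptually, the impaired sequence can be produced by concatenating the requested segments and splicing in a stall template at the recorded stall positions. The sketch below illustrates this idea with ffmpeg's concat demuxer (our illustration, not the framework's generator; all file names are hypothetical):

```python
# Illustrative "as-received" sequence generation: concatenate segments from
# the request log and splice in a stall clip (file names hypothetical).
# Note: stream copy requires all clips to share codec parameters; otherwise
# re-encoding is needed instead of "-c copy".
import subprocess

requested = ["seg1_800k.mp4", "seg2_1200k.mp4"]  # from the request log
stall_template = "spinner_3s.mp4"                # stall shown as a spinner
playlist = requested[:1] + [stall_template] + requested[1:]

with open("concat.txt", "w") as f:
    for name in playlist:
        f.write(f"file '{name}'\n")

subprocess.run(["ffmpeg", "-f", "concat", "-safe", "0", "-i", "concat.txt",
                "-c", "copy", "impaired.mp4"], check=True)
```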
  In summary, AdViSE provides scalable, end-to-end HAS evaluation through emulation, with plenty of configuration possibilities regarding content, players/algorithms (including player competition), and network parameters. With AdViSE, it is possible to use actual content and network settings with actual dynamic, adaptive streaming, including rendering. We collect various metrics from the players based on their APIs (i.e., when access to the source code is restricted) or from the algorithms/HTML5 directly. Additionally, we implemented so-called derived metrics and utilize QoE models proposed in the literature. Finally, the segment request log is used to generate the impaired media sequence as perceived by end users for subjective quality testing.
  2.3 WESP: Web-Based Subjective Evaluation Platform
  Subjective quality assessments (SQAs) are a vital tool for evaluating QoE. SQAs provide reliable results but are considered cost-intensive, and they are typically conducted within controlled laboratory environments. Crowdsourcing has been proposed as an alternative to reduce the cost; however, various aspects need to be considered in order to obtain similarly reliable results [15]. In the past, several frameworks have been proposed that leverage crowdsourcing platforms to conduct SQAs, each providing different features [16]. However, a common shortcoming of these frameworks is that they require tedious configuration and setup for each SQA, which makes them difficult to use. Therefore, we propose a web-based management platform, which shall (1) enable easy and simple configuration of SQAs, including the possible integration of third-party tools for online surveys, (2) provide the means to conduct SQAs using existing crowdsourcing platforms, considering best practices as discussed in [15], and (3) allow for result analysis.
  The goal of WESP is not only to provide a framework that fulfills the ITU recommendations for the subjective evaluation of multimedia applications (e.g., BT.500, P.910, and P.911), but also to provide the possibility to select and configure the preferred evaluation method via a web interface. The conceptual WESP architecture (Fig. 4) is implemented using HTML/PHP with a MySQL database.
  The introduction and questionnaires can be configured separately from the test methodology and may include control questions during the main evaluation. The voting possibility can be configured independently of the test methodology, providing more flexibility in selecting the appropriate voting mechanism and rating scale. The predefined voting mechanisms include the common HTML interface elements and some custom controls, such as a slider in different variations. The platform consists of a management layer and a presentation layer. The management layer allows for maintaining the user study, such as adding new questions or multimedia content and setting up the test method to be used (including single stimulus, double stimulus, pair comparison, continuous quality evaluation, etc.). The presentation layer is responsible for presenting the content to the participants. This allows providing different views on the user study, and thus one can define groups to which the participants are assigned randomly or in a predefined way. After a participant finishes the user study, the gathered data is stored in a MySQL database. Furthermore, the platform offers methods for tracking the participant’s behavior during an SQA (e.g., focus of the web browser’s window/tab, time spent consuming each stimulus presentation, and time taken for the voting phase) as well as data provided by the web player API.
  The stimulus presentation can be configured independently of the test method and may be combined with the voting possibility to support continuous quality evaluations. The media content can be fully downloaded and cached on the evaluation device prior to starting the actual media presentation to avoid glitches during the evaluation, e.g., due to network issues. However, the platform also supports streaming evaluations in real-world environments, where various metrics (e.g., startup time and stalls) are collected and stored for analysis.
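  After an SQA run, the per-participant ratings stored in the database reduce to a mean opinion score per stimulus. A minimal sketch (ours), assuming absolute category rating on a 1-5 scale and a normal approximation for the 95% confidence interval:

```python
# Illustrative MOS aggregation for one stimulus from ACR ratings (1-5 scale),
# with a 95% confidence interval under a normal approximation.
from math import sqrt
from statistics import mean, stdev

def mos_with_ci(ratings: list[int]) -> tuple[float, float]:
    m = mean(ratings)
    ci = 1.96 * stdev(ratings) / sqrt(len(ratings))
    return m, ci

ratings = [4, 5, 3, 4, 4, 5, 3, 4]  # hypothetical ratings for one stimulus
m, ci = mos_with_ci(ratings)
print(f"MOS = {m:.2f} +/- {ci:.2f}")  # -> MOS = 4.00 +/- 0.52
```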
  In summary, WESP provides an extensible, web-based QoE evaluation platform utilizing crowdsourcing. It supports plenty of evaluation methodologies and configuration possibilities. Although it has been specifically designed to implement SQAs for HAS systems using crowdsourcing (including support for real-world environments), it can also be used for SQAs within laboratory environments.
  3 Example Evaluation Results
  In this section, we provide example evaluation results for selected industry players and adaptation algorithms proposed in the research literature: Bitmovin v7.0, dash.js v2.4.0, Flowplayer v6.0.5, FESTIVE [17], Instant [18], and Thang [19]. Note that we show only a small selection, and the results presented here should be seen merely as an example of what the framework provides rather than as a full-fledged player comparison. Further results obtained with the tools described in this paper can be found in [10], [11], and [20].
  For the evaluation, we used the Big Buck Bunny sequence and encoded it according to the Amazon Prime video service, which offers 15 different representations as follows: 400×224 (100 kbit/s), 400×224 (150 kbit/s), 512×288 (200 kbit/s), 512×288 (300 kbit/s), 512×288 (500 kbit/s), 640×360 (800 kbit/s), 704×396 (1 200 kbit/s), 704×396 (1 800 kbit/s), 720×404 (2 400 kbit/s), 720×404 (2 500 kbit/s), 960×540 (2 995 kbit/s), 1 280×720 (3 000 kbit/s), 1 280×720 (4 500 kbit/s), 1 920×1 080 (8 000 kbit/s), and 1 920×1 080 (15 000 kbit/s). The segment length was 4 s, and one audio representation at 128 kbit/s was used. We adopted the bandwidth trajectory from [8], providing both step-wise and abrupt changes in the available bandwidth, i.e., 750 kbit/s (65 s), 350 kbit/s (90 s), 2 500 kbit/s (120 s), 500 kbit/s (90 s), 700 kbit/s (30 s), 1 500 kbit/s (30 s), 2 500 kbit/s (30 s), 3 500 kbit/s (30 s), 2 000 kbit/s (30 s), 1 000 kbit/s (30 s), and 500 kbit/s (85 s). The network delay was set to 70 ms.
  Fig. 5 shows the download bitrate for the players and algorithms in question, and Table 1 provides an overview of all metrics. Metrics a.-e. are directly retrieved from the player/HTML5 API and the algorithm implementations, respectively. Metrics f.-g. utilize simple QoE models [21], [22] to calculate MOS values ranging from one to five based on a subset of the other metrics. Interestingly, industry players and research algorithms show different performance behavior under the same conditions but can be directly compared with each other.
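  To illustrate what such a simple QoE model looks like, the sketch below follows our reading of the linear model in [22], which maps quantized levels (0/1/2) of initial buffering time, rebuffering frequency, and mean rebuffering duration to a MOS value; the coefficients should be verified against [22] before use:

```python
# Sketch of a simple QoS-to-MOS mapping in the spirit of [22]; inputs are
# quantized levels (0 = low, 1 = medium, 2 = high) of initial buffering time,
# rebuffering frequency, and mean rebuffering duration. Coefficients follow
# our reading of [22] and should be verified against the original paper.
def qoe_mos(l_init: int, l_freq: int, l_dur: int) -> float:
    return 4.23 - 0.0672 * l_init - 0.742 * l_freq - 0.106 * l_dur

# Example: fast startup (0), occasional rebuffering (1), short stalls (1).
print(round(qoe_mos(0, 1, 1), 2))  # -> 3.38
```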
  4 Discussion and Challenges
  In this section, we discuss our framework for the automated objective and subjective evaluation of HAS systems. It allows for an easy setup of various configurations and for running multiple evaluations in parallel. New players and algorithms can be added easily as they appear on the market and in the research literature. Over time, it is possible to build up a repository of players and algorithms for comprehensive performance evaluation. As it is possible to run multiple Selenium servers in parallel, our framework is capable of evaluating scenarios in which players/algorithms compete for bandwidth in various configurations (e.g., n instances of player A vs. m instances of player B).
  The framework is quite flexible and thus comes with a high number of degrees of freedom. Hence, it is important to design the evaluation carefully. Here we provide a brief list of the aspects to consider (a hypothetical configuration sketch follows the list):
  (1) Content assets: content type, codec/coding parameters (including High Dynamic Range and Wide Color Gamut), representations (bitrate/resolution pairs, also referred to as the bitrate ladder), segment length (including GOP size), representation format (i.e., DASH, HLS, and CMAF), etc.
  (2) Network parameters: bandwidth trajectory (e.g., predefined profiles or real network traces), delay, loss, and other networking aspects (see below for further details)
  (3) End user device environment: device type, operating system, browser, etc.
  (4) Streaming performance metrics: average bitrate, startup time, stalls (frequency, duration), quality switches (frequency, amplitude), etc.
  (5) Quantitative QoE models based on audio-video quality and/or streaming performance metrics
  (6) General HAS evaluation setup: live vs. on-demand content, single player vs. multiple players competing for bandwidth, etc.
  (7) Templates for generating the impaired media sequence (i.e., how to realize startup delay and stalls)
  (8) Questionnaire for the SQA, including control questions for crowdsourcing
  (9) SQA method (e.g., single stimulus, double stimulus, pair-wise comparison) and its parametrization
  (10) Collection of all results and further (offline) analysis.
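  To make these degrees of freedom concrete, an evaluation could be captured in a single declarative configuration; the sketch below is purely illustrative (all keys are hypothetical, not AdViSE's actual schema) and covers items (1)-(6):

```python
# Hypothetical declarative experiment configuration covering items (1)-(6)
# above (keys are illustrative, not AdViSE's schema).
experiment = {
    "content": {"codec": "h264", "segment_length_s": 4, "format": "DASH",
                "ladder_kbit": [100, 300, 800, 2400, 4500]},
    "network": {"trajectory_kbit_s": [(750, 65), (350, 90), (2500, 120)],
                "delay_ms": 70, "loss_pct": 0.0},
    "device": {"os": "linux", "browser": "chrome"},
    "metrics": ["startup_time", "stalls", "quality_switches", "avg_bitrate"],
    "qoe_models": ["simple_linear"],
    "setup": {"mode": "on-demand", "competing_players": 1},
    "runs": 10,
}
```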
  All these aspects are important to consider, as each is a potential source of risk when conducting such experiments.
  Based on our experience of conducting multiple evaluations and performance comparisons, we identified the following research challenges, possibly subject to future work:
  (1) The reliability of results, specifically those from SQAs, requires cross-validation, which typically calls for complementary SQAs in controlled laboratory environments.
  (2) The network is a key aspect of HAS systems but is often neglected. Network emulation is a vital tool, but it has limitations. For HAS systems, we also need to consider content distribution networks (CDNs), software-defined networking (SDN), information-centric networking (ICN), and next-generation (mobile) networks (e.g., 5G). Detailed analyses and evaluations of these aspects in the context of HAS are currently missing. However, recent standardization and research contributions have shown benefits for HAS systems when combined with SDN [23].
  (3) Reproducibility of such a framework can be achieved by providing containerized versions of the modules, as done in [12]. This is considered critical for industry players, which often require licenses. Additionally, it could be interesting to connect to large-scale research networks (such as PlanetLab, the Virtual Internet Routing Lab, and GENI).
  5 Conclusions
  This paper describes how AdViSE and WESP can be combined to perform objective and subjective evaluations of HAS systems in a fully automated and scalable way. For example, the framework can be used to test and compare new players/algorithms under various context conditions or to research new QoE models with practically instant verification through subjective tests. The main finding of this work is that a comprehensive objective and subjective evaluation of HAS systems is feasible for both industry players and adaptation algorithms proposed in the research literature. Hence, we recommend adopting it when proposing new features in this area and when evaluating the state of the art of such features.
  References
  [1] Cisco Systems, Inc. Cisco Visual Networking Index: Forecast and Methodology, 2016-2021 (White Paper) [R/OL]. (2017-09-15)[2018-07-28]. http://bit.ly/2wmdZJb
  [2] NIELSEN J. Nielsen’s Law of Internet Bandwidth (Updated 2018) [EB/OL]. (1998-04)[2018-03-03]. https://www.nngroup.com/articles/law-of-bandwidth
  [3] SODAGAR I. The MPEG-DASH Standard for Multimedia Streaming Over the Internet [J]. IEEE Multimedia, 2011, 18(4): 62-67. DOI: 10.1109/MMUL.2011.71
  [4] PANTOS R, MAY W. HTTP Live Streaming [EB/OL]. (2017)[2018-07-28]. https://www.ietf.org/rfc/rfc8216.txt
  [5] ISO/IEC. Information Technology—Multimedia Application Format (MPEG-A)—Part 19: Common Media Application Format (CMAF) for Segmented Media: ISO/IEC 23000-19 [S]. 2017.
  [6] SEUFERT M, EGGER S, SLANINA M, et al. A Survey on Quality of Experience of HTTP Adaptive Streaming [J]. IEEE Communications Surveys & Tutorials, 2015, 17(1): 469-492. DOI: 10.1109/comst.2014.2360940
  [7] BENTALEB A, TAANI B, BEGEN A C, et al. A Survey on Bitrate Adaptation Schemes for Streaming Media over HTTP [J]. IEEE Communications Surveys Tutorials, 2019, 21(1): 562-585. DOI: 10.1109/COMST.2018.2862938
  [8] MÜLLER C, LEDERER S, TIMMERER C. An Evaluation of Dynamic Adaptive Streaming over HTTP in Vehicular Environments [C]//Proceedings of the 4th Workshop on Mobile Video, ser. MoVid’12. New York, USA: ACM, 2012: 37-42. DOI: 10.1145/2151677.2151686
  [9] DE CICCO L, CALDARALO V, PALMISANO V, et al. TAPAS: A Tool for rApid Prototyping of Adaptive Streaming Algorithms [C]//Proceedings of the 2014 Workshop on Design, Quality and Deployment of Adaptive Video Streaming, ser. VideoNext’14. New York, USA: ACM, 2014: 1-6. DOI: 10.1145/2676652.2676654
  [10] ZABROVSKIY A, PETROV E, KUZMIN E, et al. Evaluation of the Performance of Adaptive HTTP Streaming Systems [EB/OL]. arXiv:1710.02459 (2017). http://arxiv.org/abs/1710.02459
  [11] TIMMERER C, ZABROVSKIY A, KUZMIN E, et al. Quality of Experience of Commercially Deployed Adaptive Media Players [C]//21st Conference of Open Innovations Association (FRUCT), Helsinki, Finland, 2017: 330-335
  [12] STOHR D, FRÖMMGEN A, RIZK A, et al. Where are the Sweet Spots? A Systematic Approach to Reproducible DASH Player Comparisons [C]//Proceedings of the 2017 ACM on Multimedia Conference, ser. MM’17. New York, USA: ACM, 2017: 1113-1121. DOI: 10.1145/3123266.3123426
  [13] ZABROVSKIY A, KUZMIN E, PETROV E, et al. AdViSE: Adaptive Video Streaming Evaluation Framework for the Automated Testing of Media Players [C]//Proceedings of the 8th ACM on Multimedia Systems Conference, ser. MMSys’17. New York, USA: ACM, 2017: 217-220. DOI: 10.1145/3083187.3083221
  [14] RAINER B, WALTL M, TIMMERER C. A Web Based Subjective Evaluation Platform [C]//Fifth International Workshop on Quality of Multimedia Experience (QoMEX), Klagenfurt am Wörthersee, Austria, 2013: 24-25. DOI: 10.1109/QoMEX.2013.6603196
  [15] HOSSFELD T, KEIMEL C, HIRTH M, et al. Best Practices for QoE Crowdtesting: QoE Assessment with Crowdsourcing [J]. IEEE Transactions on Multimedia, 2014, 16(2): 541-558. DOI: 10.1109/tmm.2013.2291663
  [16] HOSSFELD T, HIRTH M, KORSHUNOV P, et al. Survey of Web-Based Crowdsourcing Frameworks for Subjective Quality Assessment [C]//IEEE 16th International Workshop on Multimedia Signal Processing (MMSP), Jakarta, Indonesia, 2014: 1-6. DOI: 10.1109/MMSP.2014.6958831
  [17] JIANG J, SEKAR V, ZHANG H. Improving Fairness, Efficiency, and Stability in HTTP-Based Adaptive Video Streaming with FESTIVE [C]//Proceedings of the 8th International Conference on Emerging Networking Experiments and Technologies, ser. CoNEXT’12. New York, USA: ACM, 2012: 97-108. DOI: 10.1145/2413176.2413189
  [18] ROMERO L R. A Dynamic Adaptive HTTP Streaming Video Service for Google Android [D]. Stockholm, Sweden: Royal Institute of Technology (KTH), 2011.
  [19] THANG T, HO Q D, KANG J, et al. Adaptive Streaming of Audiovisual Content Using MPEG DASH [J]. IEEE Transactions on Consumer Electronics, 2012, 58(1): 78-85. DOI: 10.1109/tce.2012.6170058
  [20] TIMMERER C, MAIERO M, RAINER B. Which Adaptation Logic? An Objective and Subjective Performance Evaluation of HTTP-Based Adaptive Media Streaming Systems [EB/OL]. arXiv:1606.00341 (2016)[2018-07-28]. http://arxiv.org/abs/1606.00341
  [21] MÄKI T, VARELA M, AMMAR D. A Layered Model for Quality Estimation of HTTP Video from QoS Measurements [C]//11th International Conference on Signal-Image Technology & Internet-Based Systems (SITIS), Bangkok, Thailand, 2015: 591-598. DOI: 10.1109/SITIS.2015.41
  [22] MOK R K P, CHAN E W W, CHANG R K C. Measuring the Quality of Experience of HTTP Video Streaming [C]//12th IFIP/IEEE International Symposium on Integrated Network Management (IM 2011) and Workshops, Dublin, Ireland, 2011: 485-492. DOI: 10.1109/INM.2011.5990550
  [23] BENTALEB A, BEGEN A C, ZIMMERMANN R, et al. SDNHAS: An SDN?Enabled Architecture to Optimize QoE in HTTP Adaptive Streaming [J]. IEEE Transactions on Multimedia, 2017, 19(10): 2136-2151. DOI: 10.1109/tmm.2017.2733344
  Biographies
  Christian Timmerer (christian.timmerer@itec.aau.at) is an associate professor at Alpen-Adria-Universität Klagenfurt, Austria. He is a co-founder of Bitmovin Inc., San Francisco, USA, as well as its CIO and Head of Research and Standardization. He has co-authored seven patents and over 200 publications in workshops, conferences, journals, and book chapters. He participated in several EC-funded projects, notably DANAE, ENTHRONE, P2P-Next, ALICANTE, SocialSensor, ICoSOLE, and the COST Action IC1003 QUALINET. He also participated in ISO/MPEG work for several years, notably in the areas of MPEG-21, MPEG-M, MPEG-V, and MPEG-DASH. His research interests include immersive multimedia communications, streaming, adaptation, and quality of experience. He was the General Chair of WIAMIS 2008, QoMEX 2013, ACM MMSys 2016, and Packet Video 2018. Further information can be found at http://blog.timmerer.com.
  Anatoliy Zabrovskiy received his B.S. and M.S. degrees in information and computer technology from Petrozavodsk State University, Russia in 2006 and 2008, respectively, and his Ph.D. degree in engineering from the same university in 2013. He has been working in the field of network and multimedia communication technologies for over ten years. He was a Cisco certified academy instructor for CCNA. He was an award winner of two international programs: the Scholarships of the Scholarship Foundation of the Republic of Austria for Postdocs and the Erasmus Mundus External Cooperation Window program for doctorate students. He was a prize winner of the Sun Microsystems contest “Idea2Project”. He is currently a postdoctoral researcher at the Department of Information Technology (ITEC), Alpen-Adria-Universität Klagenfurt, Austria. He is a member of the Technical Program Committee of ACM MMSys 2019. His research interests include video streaming, network technologies, quality of experience, and machine learning.