
Real Time Video Streaming over Heterogeneous Networks

Mohammed A Qadeer, Rehan Ahmad, Mohd Siddique Khan, Tauseef Ahmad Department of Computer Engineering Zakir Husain College of Engineering & Technology Aligarh Muslim University, Aligarh-202002, India {maqadeer, rehanahmad, siddiquekhan, tauseefahmad}@zhcet.ac.in

Abstract: Technological advances allow handheld devices to be equipped with faster processors and wireless interfaces, making their performance comparable to that of laptop computers. In this paper, we describe real-time video streaming over heterogeneous networks, namely Bluetooth, Wi-Fi and GPRS-EDGE. The wireless channels used for communication in these networks vary in bandwidth, supported bit rate and unequal error protection coding techniques. In Bluetooth, we can create a Bluetooth PAN (piconet) in which mobile computers dynamically connect to a master and communicate with the other slaves. In Wi-Fi, we consider real-time video streaming over wireless LANs for both unicast and multicast transmission, where the wireless channel is modeled as a packet-erasure channel at the IP level. The video delivery system over GPRS-EDGE is based on the MPEG-4 video simple profile and on the IETF protocols RTSP, RTP and RTCP for media delivery and synchronization, in compliance with the 3GPP specifications for the Packet Streaming Service (PSS).
Keywords: Bluetooth; Video Streaming; Piconet; Quality of Service; Performance; GPRS-EDGE; IEEE 802.11; wireless LANs; MPEG-4; IETF; RTSP; RTP; RTCP; PSS.

1. Introduction

1.1 Bluetooth

A Bluetooth [4] network has no fixed networking infrastructure. It consists of multiple mobile nodes which maintain network connectivity through wireless communication, and it is completely dynamic, so such networks are easily deployable. The widespread use of Bluetooth and mobile devices has generated the need to provide services which are currently possible only in wired networks, so the services provided over wired networks need to be explored for Bluetooth. Transferring video data from a mobile device to a personal computer is a difficult task. In Java, J2ME is used on the mobile side, which continuously takes camera input for video and microphone input for audio. The audio and video are converted into a byte array, and the byte stream is directed to the output stream of the Bluetooth connection. On the PC side, J2SE is used for the server programming. The server program opens an input stream connected to the client's output stream; whatever data arrives on the server's input stream, in byte form, is redirected to a file, and the file is later saved in whatever video format is desired.
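As a rough illustration of this pipeline, the following J2ME-style sketch captures frames through the MMAPI and pushes them over a Bluetooth serial-port (SPP) link. The SPP URL is a placeholder that would normally come from service discovery, and the snapshot loop stands in for whatever continuous-capture mode the handset actually supports; a real MIDlet would also initialize the viewfinder display mode.

```java
import java.io.OutputStream;
import javax.microedition.io.Connector;
import javax.microedition.io.StreamConnection;
import javax.microedition.media.Manager;
import javax.microedition.media.Player;
import javax.microedition.media.control.VideoControl;

// Mobile-side sketch (J2ME, MMAPI + JSR-82): capture camera frames and
// push them as a length-prefixed byte stream over a Bluetooth SPP link.
public class BtVideoSender {
    public void stream(String sppUrl) throws Exception {
        // Open the camera through MMAPI.
        Player player = Manager.createPlayer("capture://video");
        player.realize();
        VideoControl vc = (VideoControl) player.getControl("VideoControl");
        player.start();

        // Connect to the J2SE server listening on the SPP channel.
        StreamConnection conn = (StreamConnection) Connector.open(sppUrl);
        OutputStream out = conn.openOutputStream();

        while (true) {
            // Grab one encoded frame; many handsets only expose snapshots,
            // so a deployment would negotiate a supported capture mode.
            byte[] frame = vc.getSnapshot(null);
            // Length-prefix each frame so the receiver can re-segment.
            out.write(frame.length >> 24); out.write(frame.length >> 16);
            out.write(frame.length >> 8);  out.write(frame.length);
            out.write(frame);
            out.flush();
        }
    }
}
```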

1.2 GPRS-EDGE

The main improvement in services to be offered by 3rd-generation mobile communication systems over those available using current technologies such as GSM is the provision of true multimedia services. This is understood as a combination of different services such as data, multimedia-rich Web content, media streaming, conversational video and high-quality audio. This has two very significant implications for the design of the end-to-end mobile network architecture. The first is the Quality of Service that must be offered to the client applications running on the mobile terminal. Each service, be it audio, video or Web information, requires a different type of connection and different connection parameters such as throughput, end-to-end latency, error rate and frame-dropping rate. This means that each mobile terminal must have access to a number of bearer channels, each offering a different Quality of Service to the various application-level services being used. The second implication is one of interoperability. The sheer variation in the types of services envisaged means that standardized protocols must be employed both at the application layer and as a service interface to the network functions. This use of accepted standards has already led to the success of the Internet. The Internet Protocol is now by far the most widely used layer-3 protocol and has allowed an extremely diverse range of terminals and devices to communicate with each other. Similarly, accepted application-layer standards such as the Hypertext Transfer Protocol (HTTP) for Web applications, and de-facto standards such as RealVideo for video communications, have allowed multimedia applications to thrive. The work described in this paper therefore makes use of MPEG-4 video encapsulated into IP packets for transmission over GPRS mobile networks and the Internet.
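Since the PSS delivery chain described here is controlled through RTSP, a minimal session-control exchange can be sketched as follows. The server name and clip path are hypothetical; a complete client would parse the SDP returned by DESCRIBE and go on to issue SETUP and PLAY with the negotiated RTP/RTCP transport parameters.

```java
import java.io.*;
import java.net.Socket;

// Minimal sketch of the RTSP session-control side of a PSS-style client.
public class RtspProbe {
    public static void main(String[] args) throws IOException {
        Socket s = new Socket("streaming.example.com", 554);
        try {
            Writer out = new OutputStreamWriter(s.getOutputStream(), "US-ASCII");
            out.write("DESCRIBE rtsp://streaming.example.com/clip.3gp RTSP/1.0\r\n"
                    + "CSeq: 1\r\n"
                    + "Accept: application/sdp\r\n\r\n");
            out.flush();
            BufferedReader in = new BufferedReader(
                    new InputStreamReader(s.getInputStream(), "US-ASCII"));
            // Print the status line and the SDP body describing the MPEG-4 stream.
            for (String line; (line = in.readLine()) != null; )
                System.out.println(line);
        } finally {
            s.close();
        }
    }
}
```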

1.3 WiFi
Audio and video streaming over wired networks, such as the Internet, has been popular for quite some time.


However, with the development of broadband wireless networks, attention has only recently turned to delivering video over wireless networks. In this paper, we focus on wireless Local Area Networks (LANs), which can operate at bit rates high enough to allow transmission of high-quality video. Specifically, we investigate the IEEE 802.11b wireless LAN, though the ideas we present are applicable to other wireless networks as well. The 802.11a, 802.11b and 802.11g standards, known as "physical standards", are amendments to the 802.11 standard and offer different modes of operation, which let them reach different data transfer speeds depending on their range. The 802.11b standard allows a maximum data transfer speed of 11 Mbps at a range of about 100 m indoors and up to 200 m outdoors (or even beyond that with directional antennas), as shown in Table 1.
Table 1: Variations of the 802.11b standard

As is well known, video streaming is very different from data communication due to its inherent delay constraints: late-arriving data is not useful to the video decoder, and it is better to drop such data at the sender than to attempt to send it after the deadline has passed. For this reason, TCP/IP, which is designed to deliver data reliably (but asynchronously) and works very well for data communication, is not always the best solution for real-time video streaming. To this end, several industrial streaming media architectures (e.g., by Microsoft and Real Networks) have been developed. However, the existing solutions were not designed specifically for the harsh conditions that prevail on a wireless channel. With wireless networks gaining prominence and acceptance, especially LANs based on the IEEE 802.11 standards, it is foreseeable that streaming of audio/video will be a critical part of the wireless digital infrastructure. Yet, to the best of our knowledge, video streaming applications have not been studied extensively for IEEE 802.11 based wireless networks. There are two major challenges for video streaming over wireless LANs: 1) fluctuations in channel quality and 2) high bit-error rates compared with wired links. In addition to these channel-related challenges, when there are multiple clients we face heterogeneity among receivers, since each user has different channel conditions, power limitations and processing capabilities, and only limited feedback channel capabilities.
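A minimal sketch of this sender-side policy, with a placeholder VideoPacket type and send() hook rather than any real transport API:

```java
import java.util.ArrayDeque;
import java.util.Queue;

// Sketch of the sender-side policy described above: packets whose decoding
// deadline has already passed are dropped instead of transmitted.
public class DeadlineFilter {
    static final class VideoPacket {
        final byte[] payload;
        final long deadlineMillis;   // time by which the decoder needs it
        VideoPacket(byte[] p, long d) { payload = p; deadlineMillis = d; }
    }

    private final Queue<VideoPacket> queue = new ArrayDeque<VideoPacket>();

    void enqueue(VideoPacket p) { queue.add(p); }

    /** Transmit pending packets, skipping any that can no longer arrive in time. */
    void pump(long estimatedOneWayDelayMillis) {
        long now = System.currentTimeMillis();
        while (!queue.isEmpty()) {
            VideoPacket p = queue.poll();
            if (now + estimatedOneWayDelayMillis > p.deadlineMillis)
                continue;                  // too late: drop at the sender
            send(p.payload);               // hand off to the UDP/RTP layer (stub)
        }
    }

    private void send(byte[] payload) { /* e.g., DatagramSocket.send(...) */ }
}
```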

2. Video Streaming over Bluetooth

Traditional video streaming over wired or wireless networks has bandwidth, delay and loss requirements due to its real-time nature. Moreover, there are many potential reasons, including time-varying channel behaviour, out-of-range devices, and interference from other devices or external sources, that make Bluetooth links challenging for video streaming. Recent research has been conducted to address these challenges. To present the various issues and give a clear picture of the field of video streaming over Bluetooth, we discuss three major areas, namely video compression, Quality of Service (QoS) control and intermediate protocols [3]. Each of these areas is one of the basic components in building a complete architecture for streaming video over Bluetooth. The relations among them are illustrated in Figure 1, which shows the functional components for video streaming over Bluetooth links [3], together with the layer or layers at which each component operates. The aim of video compression is to remove redundant information from a digitized video sequence. Raw data must be compressed before transmission to achieve efficiency; this is critical for wireless video streaming since the bandwidth of wireless links is limited. Upon the client's request, the media server retrieves compressed video, and the QoS control module adapts the media bit-streams or adjusts the transmission parameters of the intermediate layer based on the current link status and the QoS requirements [3]. After this adaptation, the compressed video stream is packetized and segmented into packets of the chosen intermediate layer (e.g., L2CAP, HCI, IP), which are then passed to the Bluetooth module for transmission. On the receiving side, the Bluetooth module receives the media packets from the air, reassembles them in the intermediate protocols, and sends them to the decoder for decompression. As shown in Figure 1, QoS control can be further categorized into congestion control and error control [7]. Congestion control in Bluetooth is employed to prevent packet loss and reduce delay by regulating the transmission rate or reserving bandwidth according to the changing link status and QoS requirements. Error control [7], on the other hand, improves video quality in the presence of packet loss.

Figure 1: Architecture for Streaming over Bluetooth
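As a hedged sketch of the packetization step, the JSR-82 L2CAP interface exposes the negotiated transmit MTU, so a compressed frame can be segmented as follows (the btl2cap address and PSM below are placeholders, not values from the paper):

```java
import javax.bluetooth.L2CAPConnection;
import javax.microedition.io.Connector;

// Sketch of the packetization step above using JSR-82's L2CAP API: a
// compressed video access unit is segmented to fit the link's transmit MTU.
public class L2capSegmenter {
    public static void sendAccessUnit(byte[] encodedFrame) throws Exception {
        L2CAPConnection ch = (L2CAPConnection)
                Connector.open("btl2cap://0010C6123456:1001"); // placeholder address:PSM
        int mtu = ch.getTransmitMTU();   // maximum payload per L2CAP packet
        for (int off = 0; off < encodedFrame.length; off += mtu) {
            int len = Math.min(mtu, encodedFrame.length - off);
            byte[] seg = new byte[len];
            System.arraycopy(encodedFrame, off, seg, 0, len);
            ch.send(seg);                // one segment per L2CAP packet
        }
        ch.close();
    }
}
```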


2.1 USB Programming

For USB port programming we have to use an open-source API called the jUSB API, since no API for USB programming is available in any Java SDK, not even in j2sdk1.5.0_02. The design approach for implementing the usb.windows package of the Java USB API is separated into two parts. One part deals with the enumeration and monitoring of the USB, while the other part looks into the aspects of communicating with USB devices in general. Both parts are implemented using the Java Native Interface (JNI) to access native operations on the Windows operating system. The jUSB dynamic link library (DLL) provides the native functions that realize the JNI interface of the Java usb.windows package. Communication with a USB device is managed by the jUSB driver, whose structure and important aspects are only summarized here; a lot of useful information about driver writing and the internal structures can be found in Walter Oney's book Programming the Microsoft Windows Driver Model [8].



Figure 2: USB driver stack for Windows


This arrangement is shown in Figure 3: the original USB driver stack is as shown in Figure 2, but the other drivers cannot be accessed by the programmer. Once the Java USB API is installed, you are ready to program your own USB ports to detect USB devices as well as to read from and write to these devices. The Java USB API is an open-source project carried out at the Institute for Information Systems, ETH Zürich, by Michael Stahl. For details of how to write code against it, refer to the Java USB API for Windows by Michael Stahl [9]. The basic classes used in this API are listed below.

DeviceImpl class, basic methods:
- OpenHandle
- CloseHandle
- GetFriendlyDeviceName
- GetAttachedDeviceName
- GetNumPorts
- GetDeviceDescription
- GetUniqueDeviceID

JUSB class, basic methods:
- getDevicePath
- JUSBReadControl
- getConfigurationBuffer
- doInterruptTransfer

The architectural design of the developed system and its data flow are shown in Figure 4.

Figure 3: Java USB API layer for Windows
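The paper does not give the jUSB signatures, so the following sketch only mirrors the call sequence implied by the method list above, using a stand-in interface rather than the real classes; the actual interface is documented in [9].

```java
// Hypothetical sketch of device enumeration in the style of the jUSB
// classes listed above. The Device interface below is a stand-in whose
// method names merely mirror the list in the text; the real signatures
// are in Stahl's "Java USB API for Windows" [9].
public class UsbEnumerationSketch {

    /** Stand-in mirroring the DeviceImpl methods named in the text. */
    interface Device {
        void openHandle();
        void closeHandle();
        String getFriendlyDeviceName();
        String getDeviceDescription();
        int getNumPorts();
    }

    static void enumerate(Device device) {
        device.openHandle();                 // "OpenHandle"
        try {
            System.out.println("Ports:  " + device.getNumPorts());
            System.out.println("Device: " + device.getFriendlyDeviceName());
            System.out.println("Descr:  " + device.getDeviceDescription());
        } finally {
            device.closeHandle();            // "CloseHandle"
        }
    }
}
```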

3. General Packet Radio Service (GPRS)

GPRS is an end-to-end mobile packet radio communication system which makes use of the same radio architecture as GSM [1][2]. Although it was initially designed for use in non-delay-critical data applications, two of its features enable it to be used as a suitable medium for video communications. The multi-slotting capability of GPRS effectively allows the throughput of a single terminal to be increased simply by allocating more timeslots (Packet Data Traffic Channels) to it. In addition, its native IP support allows for interworking with Internet multimedia applications. Figure 4 shows a schematic diagram of real-time video recording from handhelds (mobiles) and retrieval of pre-stored media over GPRS.

Figure 4: Schematic diagram of developed system

3.1 GPRS Radio Access Protocols

GPRS provides a network architecture which is transparent to the layer-3 protocols operating in the end-user terminals. As IP is used in this case, IP packets and all relevant transport protocol headers are forwarded to the Sub-network Dependent Convergence Protocol (SNDCP) layer, which formats the network packets for transmission over the GPRS network. As all layers below the network layer are transparent to it, end-to-end functionality is provided only from the network layer upwards. The SNDCP layer also carries out header compression and multiplexing of data coming from different sources.




3.2 Protocol Architecture


In the application scenario envisaged, a Video over IP service is used. In this way, different video frames are segmented and encapsulated into IP packets for transmission over an IP packet network. The services being examined are real-time in nature, meaning that they may involve either streaming of stored video information or two-way conversational communications. As IP offers only a best-effort service and does not retain timing information, use of the Real-Time Transport Protocol is necessary [10]. RTP provides the timestamping and sequence-numbering information which allows the decoding application to correctly reconstruct the received media stream.
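A minimal sketch of the RTP packetization this implies, following the fixed 12-byte header layout of RFC 1889 [10]; the dynamic payload type 96 for MPEG-4 video is an assumption, not a value from the paper.

```java
import java.nio.ByteBuffer;

// Minimal RTP packetizer: each packet carries a sequence number and a media
// timestamp so the receiver can reorder and re-time the stream (RFC 1889).
public class RtpPacketizer {
    private static final int PT_MPEG4 = 96;     // dynamic payload type (assumed)
    private final int ssrc;
    private int seq;                            // 16-bit sequence number

    public RtpPacketizer(int ssrc, int initialSeq) {
        this.ssrc = ssrc;
        this.seq = initialSeq & 0xFFFF;
    }

    /** Build one RTP packet; 'marker' flags the last packet of a frame. */
    public byte[] packetize(byte[] payload, long timestamp90kHz, boolean marker) {
        ByteBuffer b = ByteBuffer.allocate(12 + payload.length);
        b.put((byte) 0x80);                              // V=2, P=0, X=0, CC=0
        b.put((byte) ((marker ? 0x80 : 0) | PT_MPEG4));  // M bit + payload type
        b.putShort((short) seq);                         // sequence number
        seq = (seq + 1) & 0xFFFF;
        b.putInt((int) timestamp90kHz);                  // media timestamp
        b.putInt(ssrc);                                  // stream source id
        b.put(payload);
        return b.array();
    }
}
```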


The Logical Link Control (LLC) layer operates above the RLC/MAC and BSSGP layers in the reference architecture to provide highly reliable logical links between the mobile station and the Serving GPRS Support Node (SGSN), and its main functions are designed towards supporting such a reliable link. As long as the network packet size does not exceed the maximum LLC frame size (1520 octets), each IP packet is mapped onto a single LLC frame. The LLC frames are passed to the RLC/MAC layer, where they are segmented into fixed-length RLC/MAC blocks. The MAC protocol then carries out procedures that allow multiple mobile stations to share a common transmission medium, which may consist of several physical channels. In GPRS, each timeslot may be multiplexed between up to eight users, while a single user may also use up to eight timeslots. This allows for great flexibility in resource allocation.

The RLC blocks are arranged into GSM bursts for transmission across the radio interface, where the Physical Link Layer is responsible for forward error correction and detection. Rectangular interleaving of radio blocks and procedures for detecting physical link congestion are also carried out in this layer. GPRS data is transmitted over the Packet Data Traffic Channel (PDTCH) and is protected by four different channel protection schemes. CS-1, CS-2 and CS-3 use convolutional codes and block check sequences of differing strengths so as to give different rates, while CS-4 provides only error detection and was found in experiments not to be suitable for video transmission. The experiments described in this section therefore concentrate on schemes CS-1 to CS-3.

Figure 5: GPRS-EDGE Data Flow

Figure 6: GPRS-EDGE Channel Protection Scheme
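A small helper illustrating the multi-slotting arithmetic, using the commonly quoted per-timeslot rates for CS-1 to CS-4 (general GPRS figures, not results from this paper's experiments):

```java
// Back-of-the-envelope helper for the multi-slotting discussion above:
// per-timeslot rates for the GPRS coding schemes multiplied by the number
// of allocated timeslots (at most eight per user).
public class GprsThroughput {
    // CS-1 .. CS-4 per-timeslot rates in kbit/s (commonly quoted values)
    private static final double[] CS_RATE_KBPS = {9.05, 13.4, 15.6, 21.4};

    /** Raw throughput in kbit/s for a coding scheme (1..4) and timeslot count. */
    public static double rateKbps(int codingScheme, int timeslots) {
        if (codingScheme < 1 || codingScheme > 4)
            throw new IllegalArgumentException("coding scheme must be 1..4");
        if (timeslots < 1 || timeslots > 8)
            throw new IllegalArgumentException("1..8 timeslots per user");
        return CS_RATE_KBPS[codingScheme - 1] * timeslots;
    }

    public static void main(String[] args) {
        // e.g. CS-2 with four timeslots: 4 x 13.4 = 53.6 kbit/s
        System.out.println(rateKbps(2, 4) + " kbit/s");
    }
}
```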

3.3 Error-Resilient Video Compression


A number of different video compression schemes suitable for use at low bitrates are currently available, including a number of proprietary codecs as well as standardized codecs. The standardized codecs are dominated by two families, the ITU-T's H.263 family of video coding standards and the ISO MPEG-4 audio-visual standard. Both are based on similar technology, namely the use of motion-compensated prediction and the Discrete Cosine Transform to perform lossy video compression. MPEG-4 [5] was originally intended to be a step forward in terms of compression efficiency. However, no significant advances were made in that area, and attention focused on providing a variety of new features. The codec's best-known innovation is probably its ability to code a video scene as a number of different objects. This property can be used in a number of ways: allowing the user to manipulate objects in real time results in interactive video communications. Manipulation of objects is facilitated by the use of BIFS (Binary Format for Scenes), which allows object properties to be altered (e.g. volume, position, colour). BIFS is also used for real-time streaming of video, allowing individual objects to be displayed as they are received, without having to wait for the rest of the scene. MPEG-4 also has its own subset of Java commands, called MPEG-J, which provide interfaces to video objects, network resources and input devices. An MPEG-4 encoder will therefore produce a number of different streams for a single video sequence. Each object produces at least one stream for its video and another for its audio, and there may also be a further stream containing higher-level information such as BIFS. Although the mechanisms by which these streams are packaged are not standardized, there is a FlexMux tool that can be used before transmission to combine streams when the number of streams becomes impractical.




3.4 Error Resilience Tools


For those wishing to implement a video conferencing system over a mobile link, MPEG-4's most significant feature is probably its suite of error resilience tools [6]. Unlike H.263+, MPEG-4 was specifically designed with mobile networks in mind. The most basic error resilience option places the coded video data into regular-sized packets, with resynchronization words between each packet, so that any single packet can be decoded independently of all the other packets within that frame. Insertion of resynchronization words at regular bit intervals eliminates a problem in H.263, where such words are inserted at regular picture-block intervals: there, the variable-length nature of the bitstream leads to irregular numbers of bits between each resynchronization word, making some parts of the picture more sensitive to errors than others. Data partitioning separates motion and header data from texture data within each video packet, with a codeword inserted between the two sets of data. Usually the majority of the packet is made up of texture information, the loss of which causes much less distortion at the decoder than the loss of motion or header data. Thus, data partitioning ensures that much of the packet is not very sensitive to error and places the most sensitive data at the beginning of the packet. Placing all of the texture data together also allows the use of Reversible Variable Length Codes (RVLCs). When an error is found in a packet, it is possible to search for the next packet header and read backwards until another error is found, which often allows the recovery of a great deal more data than if normal VLCs are used. The three modes are designed to provide incremental levels of robustness; as most of the overhead is incurred by the first option, it makes little sense to use one option on its own, and it is therefore recommended that all of the options be used at once.
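The benefit of RVLCs can be illustrated with a toy reversible code in which symbol k is written as a one, k zeros, and a closing one; since every codeword reads the same from either end, the bitstream can be parsed both forwards and backwards, which is what lets a decoder resume from the next resynchronization word and read back toward an error. This sketch shows only the principle, not the actual RVLC tables of the MPEG-4 standard.

```java
import java.util.ArrayList;
import java.util.Collections;
import java.util.List;

// Toy reversible variable-length code: symbol k <-> '1' + k zeros + '1'.
// Every codeword is a palindrome, so the stream decodes in both directions.
public class RvlcDemo {
    static String encode(int[] symbols) {
        StringBuilder sb = new StringBuilder();
        for (int s : symbols) {
            sb.append('1');
            for (int i = 0; i < s; i++) sb.append('0');
            sb.append('1');
        }
        return sb.toString();
    }

    /** Parse left-to-right: '1', run of zeros, closing '1' per symbol. */
    static List<Integer> decodeForward(String bits) {
        List<Integer> out = new ArrayList<Integer>();
        int i = 0;
        while (i < bits.length()) {
            if (bits.charAt(i++) != '1') throw new IllegalStateException("bad start");
            int zeros = 0;
            while (bits.charAt(i) == '0') { zeros++; i++; }
            i++;                       // closing '1'
            out.add(zeros);
        }
        return out;
    }

    /** Parse right-to-left by decoding the reversed string: same code works. */
    static List<Integer> decodeBackward(String bits) {
        List<Integer> out = decodeForward(new StringBuilder(bits).reverse().toString());
        Collections.reverse(out);
        return out;
    }

    public static void main(String[] args) {
        String bits = encode(new int[] {2, 0, 3, 1});
        System.out.println(decodeForward(bits));   // [2, 0, 3, 1]
        System.out.println(decodeBackward(bits));  // [2, 0, 3, 1]
    }
}
```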

4. WiFi

Video streaming over wireless networks is compelling for many applications, and an increasing number of systems are being deployed. A wireless local area network (WLAN) might connect various audiovisual entertainment devices in a home. While video streaming requires a steady flow of information and delivery of packets by a deadline, wireless radio networks have difficulty providing such a service reliably. The problem is challenging due to contention from other network nodes, as well as intermittent interference from external radio sources such as microwave ovens or cordless phones. For mobile nodes, multi-path fading and shadowing may further increase the variability in link capacities and transmission error rates. For such systems to deliver the best end-to-end performance, video coding, reliable transport and wireless resource allocation must be considered jointly, thus moving from the traditional layered system architecture to a cross-layer design. While most of the issues discussed are general, we use high-definition (HD) video streaming over 802.11a home networks as a concrete example when presenting simulation results. Figure 7 shows network topologies for multiple video streams sharing a single-hop wireless network: in (a) all streams originate from the same wireless node, while in (b) the video source nodes are distributed.

Figure 7: Network topologies

4.1 Streaming over a Single Wireless Link

As the wireless link quality varies, the video transmission rate needs to be adapted accordingly. In [11], measurements of packet transmission delays at the MAC layer are used to select the optimal bit rate for video, subsequently enforced by a transcoder. The benefit of cross-layer signalling has also been demonstrated in [12], where adaptive rate control at the MAC layer is applied in conjunction with adaptive rate control during live video encoding. Video rate adaptation can also be achieved by switching between multiple bitstreams encoded at different rates, or by truncating the bitstream of a scalably encoded representation. Packets can also be dropped intelligently, based on their relative importance and urgency, within a rate-distortion optimized framework. We simulate the transmission of a single video stream over an otherwise idle 802.11a wireless link: with a nominal link speed of 54 Mbps, a much slower transmission rate of 6 Mbps for MAC-layer acknowledgments, and video packets of 1500 bytes, the achievable throughput is limited to about 40 Mbps. The HD video sequence Harbor (1280x720p, 60 fps) is encoded using the H.264/AVC reference codec with a GOP length of 30 at various quality levels. Video streaming at one fixed quality level using TCP is compared against streaming over UDP with a video-aware application-layer transport protocol. The application-layer rate controller switches between different versions of the video bitstream according to the estimated link capacity. While TCP acknowledges every received packet, the application-layer transport protocol reduces the ACK frequency to once every ten received packets. As a consequence, a higher video rate and quality can be supported, due to the reduced acknowledgment overhead.
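A minimal sketch of such an application-layer rate controller, assuming a set of pre-encoded bitstream versions and a capacity estimate from the link; the 20% safety margin is an assumed parameter, not one from the simulation.

```java
// Sketch of the rate controller described above: given pre-encoded versions
// of the same video, pick the highest-rate version that fits the estimated
// link capacity, leaving some headroom for acknowledgments and overhead.
public class BitstreamSwitcher {
    private final double[] encodedRatesMbps;   // sorted ascending, e.g. {2, 4, 8, 16}

    public BitstreamSwitcher(double[] encodedRatesMbps) {
        this.encodedRatesMbps = encodedRatesMbps.clone();
    }

    /** Index of the best version for the current capacity estimate, or -1. */
    public int selectVersion(double estimatedCapacityMbps) {
        double budget = 0.8 * estimatedCapacityMbps;  // assumed 20% headroom
        int best = -1;
        for (int i = 0; i < encodedRatesMbps.length; i++)
            if (encodedRatesMbps[i] <= budget) best = i;
        return best;
    }

    public static void main(String[] args) {
        BitstreamSwitcher s = new BitstreamSwitcher(new double[] {2, 4, 8, 16});
        System.out.println(s.selectVersion(11.0));   // -> 2 (the 8 Mbps version)
    }
}
```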

4.2 Streaming Over Mesh Network


Video streaming over wireless mesh networks imposes additional challenges introduced by multi-hop transmission. Cross-layer design and optimization for this problem is a very active area of investigation with many remaining open problems. In the following, a survey of research efforts in joint optimization of multiple protocol layers is presented first, followed by rate allocation among multiple video streams in mesh networks.




4.3 Multi-Layer Resource Allocation


The flexibility offered by cross-layer design has been exploited in a number of research efforts. Joint optimization of power allocation at the physical layer, link scheduling at the MAC layer, network layer flow assignment and transport layer congestion control has been investigated with convex optimization formulations. The cross-layer design framework attempts to maintain a layered architecture while exchanging key parameters between adjacent protocol layers. The framework allows enough flexibility for significant performance gains, while keeping protocol design tractable within the layered structure, as demonstrated by the preliminary results exploring adaptive link-layer techniques, joint capacity and flow assignment, media-aware packet scheduling and congestion-aware video rate allocation.


4.4 Multi-Stream Rate Allocation


When multiple streams share a wireless mesh network, their rates need to be jointly optimized to avoid network congestion while maximizing overall received video quality. For each stream, the optimal allocated rate strikes a balance between minimizing its own video distortion and minimizing its contribution to overall network congestion. This is achieved by a distributed rate allocation protocol, which allows cross-layer information exchange between the video streaming agents at the application layer on the source nodes and the link state monitors at the MAC layer on the relay nodes.
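To make this trade-off concrete, a toy allocation under an assumed distortion model D_i(R_i) = theta_i / R_i and a shared capacity C gives each stream a rate proportional to sqrt(theta_i), so more complex content gets more rate; this only illustrates the balance described above, not the distributed protocol itself.

```java
// Toy illustration of multi-stream rate allocation: minimizing total
// distortion sum(theta_i / R_i) subject to sum(R_i) = C yields rates
// proportional to sqrt(theta_i). The distortion model is an assumption.
public class RateAllocator {
    /** Allocate capacityMbps among streams with content-complexity weights theta. */
    public static double[] allocate(double[] theta, double capacityMbps) {
        double[] rates = new double[theta.length];
        double norm = 0;
        for (double t : theta) norm += Math.sqrt(t);
        for (int i = 0; i < theta.length; i++)
            rates[i] = capacityMbps * Math.sqrt(theta[i]) / norm;
        return rates;
    }

    public static void main(String[] args) {
        // Two easy streams and one complex one sharing a 12 Mbps mesh path.
        double[] r = allocate(new double[] {1, 1, 4}, 12);
        System.out.printf("%.1f / %.1f / %.1f Mbps%n", r[0], r[1], r[2]);  // 3.0 / 3.0 / 6.0
    }
}
```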

5. Conclusion

In this paper, we have described a system for real-time media streaming over different network technologies. For Bluetooth, we have shown compression and streaming of live video, with a J2ME application developed on the mobile side and a J2SE application on the PC side. Three major aspects have to be taken into consideration, namely video compression, Quality of Service (QoS) control and intermediate protocols. Video compression removes redundancy to achieve efficiency on a limited-bandwidth network, while QoS control comprises congestion control and error control, which limit packet loss, reduce delay and improve video quality. On the server side, the USB port has to be programmed for enumeration, monitoring and communication with USB devices. For GPRS, a real-time multimedia testbed for video was developed, including implementations of error-resilience-capable versions of the MPEG-4 encoder and decoder. It was seen that the CS-1 code adequately protects video down to a received Eb/No of 5 dB, whereas the CS-3 code is only useful at Eb/No in excess of 18 dB. For WiFi, we have described a cross-layer design for high-definition (HD) video streaming over an 802.11a home network under different network configurations.

REFERENCES

[1] ETSI/SMG, GSM 03.64, "Overall Description of the GPRS Interface, Stage 2," V.5.2.0, 1998.
[2] G. Brasche and B. Walke, "Concepts, Services and Protocols of the New GSM Phase 2+ General Packet Radio Service," IEEE Communications Magazine, pp. 94-104, 1997.
[3] Wang Xiaohang, "Video Streaming over Bluetooth: A Survey."
[4] Specification of the Bluetooth System, Core, vol. 1, ver. 1.1, www.bluetooth.com
[5] R. Schafer, "MPEG-4: A Multimedia Compression Standard for Interactive Applications and Services," IEE Journal of Electronics & Communication Engineering, Dec. 1998.
[6] R. Talluri, "Error Resilient Video Coding in the ISO MPEG-4 Video Standard," IEEE Communications Magazine, pp. 112-119, 1998.
[7] N. Feamster and H. Balakrishnan, "Packet Loss Recovery for Streaming Video," http://nms.lcs.mit.edu/projects/videocm/
[8] W. Oney, Programming the Microsoft Windows Driver Model.
[9] M. Stahl, "Java USB API for Windows," September 18, 2003.
[10] H. Schulzrinne, S. Casner, R. Frederick, and V. Jacobson, "RTP: A Transport Protocol for Real-Time Applications," RFC 1889, 1996.
[11] P. van Beek and M. U. Demircin, "Delay-constrained rate adaptation for robust video transmission over home networks," Proc. IEEE International Conference on Image Processing (ICIP'05), Genova, Italy, vol. 2, pp. 173-176, Sept. 2005.
[12] L. Haratcherev, J. Taal, K. Langendoen, R. Lagendijk, and H. Sips, "Optimized video streaming over 802.11 by cross-layer signaling," IEEE Communications Magazine, vol. 44, no. 1, pp. 115-121, Jan. 2006.
[13] R. J. Punnoose, R. S. Tseng, and D. D. Stancil, "Experimental Results for Interference between Bluetooth and IEEE 802.11b DSSS Systems," Proc. IEEE Vehicular Technology Conference, October 2001.
[14] E. Setton, T. Yoo, X. Zhu, A. Goldsmith, and B. Girod, "Cross-layer design of ad hoc networks for real-time video streaming," IEEE Wireless Communications Magazine, vol. 12, no. 4, pp. 50-65, Aug. 2005.
[15] D. De Couto, D. Aguayo, B. Chambers, and R. Morris, "Performance of multihop wireless networks: Shortest path is not enough," Proc. ACM First Workshop on Hot Topics in Networks (HotNets-I), Princeton, New Jersey, USA, pp. 83-88, Oct. 2002.
[16] R. J. Punnoose, R. S. Tseng, and D. D. Stancil, "Experimental Results for Interference between Bluetooth and IEEE 802.11b DSSS Systems," Proc. IEEE Vehicular Technology Conference, October 2001.
[17] M. Fainberg and D. Goodman, "Analysis of the interference between IEEE 802.11b and Bluetooth systems," Proc. VTC 2001 Fall, vol. 2, pp. 967-971.
[18] S. N. Fabri, S. Worrall, A. Sadka, and A. Kondoz, "Real-Time Video Communications over GPRS," Proc. 3G Mobile Communication Technologies, IEE Conference Publication No. 471, 2000.
[19] Y. Wu, P. A. Chou, Q. Zhang, K. Jain, W. Zhu, and S.-Y. Kung, "Network planning in wireless ad hoc networks: A cross-layer approach," IEEE Journal on Selected Areas in Communications, vol. 23, no. 1, pp. 136-150, Jan. 2005.
[20] E. Setton, X. Zhu, and B. Girod, "Congestion-optimized multipath streaming of video over ad hoc wireless networks," Proc. IEEE International Conference on Multimedia and Expo (ICME'04), Taipei, Taiwan, vol. 3, pp. 1619-1622, July 2004.