Computation offloading contributes to the move toward a Mobile Cloud Computing paradigm. In this work, a two-level resource allocation and admission control mechanism for a cluster of edge servers offers mobile users an alternative choice for executing their tasks. At the lower level, the behavior of the edge servers is modeled by a set of linear systems, and linear controllers are designed to meet the system's constraints and QoS metrics, while at the upper level, an optimizer tackles the problems of load balancing and application placement to maximize the number of offloaded requests.
There is a growing number of business process management systems under development both in academia and in practice. At the same time, the advent of big data analytics has changed the scope of such systems. However, reference architectures for business process management systems date back 20 years and, consequently, are not up to date with modern developments. Therefore, this paper proposes an up-to-date reference architecture, called BPMS-RA, for modern business process management systems, based on recent literature and on existing commercial implementations.
Scientific workflows enable scientists to conduct analyses of large datasets and perform complex scientific simulations. These workflows are often mapped onto distributed computational infrastructures to speed up their execution. Prior to execution, a workflow's structure may undergo transformations to accommodate the computing infrastructure. However, these transformations may cause workflow imbalance, arising from runtime or data imbalance. To mitigate these imbalances, in this paper we propose an autonomic data-throttling approach that computes how data transmission must be throttled across workflow jobs. Our approach relies on structural analysis of Petri nets, obtained by model transformation of data-intensive workflows, and on Linear Programming techniques.
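To give a flavor of the kind of throttling decision involved, the sketch below solves a deliberately tiny version of the problem: parallel workflow branches share one outgoing link, and each branch's transfer rate is set to the smallest value that still meets a common deadline. The function name, the toy model, and the closed-form solution are illustrative assumptions, not the paper's Petri-net/LP formulation (a realistic instance would be handed to an LP solver).

```python
def throttle_rates(data_mb, deadline_s, link_capacity):
    """Minimal throttling sketch (assumed toy model, not the paper's).

    For branches with data volumes data_mb sharing a link of capacity
    link_capacity (MB/s), the least-bandwidth rates meeting a common
    deadline are the lower bounds r_i = d_i / T; the plan is feasible
    only if those rates fit on the shared link.
    """
    rates = [d / deadline_s for d in data_mb]          # slowest rate that still meets T
    return rates if sum(rates) <= link_capacity else None  # infeasible: deadline too tight

# Three branches (100, 50, 250 MB), 10 s deadline, 60 MB/s link:
plan = throttle_rates([100, 50, 250], deadline_s=10, link_capacity=60)
```

In this toy instance the throttled rates are 10, 5, and 25 MB/s, using 40 of the 60 MB/s available; shrinking the link below 40 MB/s makes the deadline infeasible.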
Special Issue on the Economics of Security and Privacy: Guest Editors' Introduction
In this paper, we aim to understand the dynamics of the bundling sale strategy and under what conditions it is more attractive than separate sales. We focus on online service markets that exhibit network effects. We provide mathematical models to capture the interactions between buyers and sellers, analyze the market equilibrium and its stability, and formulate an optimization framework to determine the optimal sale strategy for the service provider. We analyze the impact of key factors such as network effects, operating costs, and the variance and correlation of customers' valuations of these services.
TOIT Reviewers for 2017
Cognitive computing is changing the way the world is seen, and cognitive applications are today mostly deployed in the cloud. However, concerns about security, privacy, network connectivity, and other issues have driven a paradigm shift, moving cognitive computing from the cloud to the network's edge. This paper first introduces a new network architecture for edge cognitive computing (ECC), then describes the ECC evolution process and design issues in detail, and presents an experimental platform for dynamic service migration based on cognition of mobile user behavior. The experimental results show that the proposed ECC architecture achieves ultra-low latency and a high quality of user experience.
Mobile crowdsensing has become a promising technology for emerging Internet of Things (IoT) applications in smart environments. In this paper, we introduce a framework enabling mobile crowdsensing in fog environments with a hierarchical scheduling strategy. We first introduce the crowdsensing framework, which has a hierarchical structure to organize different resources. Since the positions and performance of fog servers influence the quality of service of IoT applications, we formulate a scheduling problem over the hierarchical fog structure and solve it using a deep reinforcement learning based strategy. Extensive simulation results show that our solution outperforms other scheduling solutions for mobile crowdsensing.
The diffusion of personal smart devices capable of capturing and analyzing users' data has encouraged the growth of novel sensing methodologies. In this context, simple Human Activity Recognition (HAR) techniques can be implemented directly on mobile devices; however, when complex activities need to be analysed in a timely manner, users' personal devices can operate as part of a more complex architecture. We propose a multi-device HAR framework that exploits the fog computing paradigm to move heavy computation from the sensing layer (wrist-worn devices) to intermediate devices (smartphones), and then to the cloud. Results show the effectiveness of the framework in a real-world scenario.
Standard security protocols are characterized by a high computational complexity that is unsuitable for networks of low-power devices. The typical solution, based on cloud services that facilitate deployment, intermediate all messages among things, and enable secure communications, has the disadvantage of requiring permanent Internet connectivity even for things connected over a local network. This paradigm is inappropriate in several scenarios; hence, we propose an efficient fog-based system that enables secure communications and preserves the easy management of cloud-assisted IoT. The proposal is based on an original lightweight proxy re-encryption scheme that can be executed even by large networks of low-power devices.
Smart environments (SE) are expected to benefit significantly from the integrated edge-fog-cloud (IEFC) paradigm. High-order Bi-Lanczos has emerged as a powerful tool in SE, yet completing data processing without compromising privacy remains a challenge in SE. In this work, we propose a novel privacy-preserving high-order Bi-Lanczos (PPHOBL) scheme in the IEFC paradigm for SE. We first propose a privacy-preserving big data processing model using the IEFC paradigm. Subsequently, building on the model, we present a PPHOBL scheme. Finally, we analyze the scheme in the context of an intelligent surveillance system. The results demonstrate the superiority of the scheme for SE.
In this paper, we focus on computation offloading in a multi-cloudlet environment. We consider several mobile users with energy- and latency-constrained tasks that can be offloaded to cloudlets. We investigate an offloading policy that decides which tasks should be offloaded and selects the assigned cloudlet according to available network and system resources. The objective is to minimize the execution time and the energy consumption. We propose a distributed relaxation heuristic based on Lagrangian decomposition. Numerical results show that our policy quickly reaches a good offloading solution and achieves better performance on large-scale scenarios than alternative approaches from the literature.
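The decomposition idea behind such relaxation heuristics can be sketched in a few lines: capacity constraints on cloudlets are relaxed into per-cloudlet "prices" (Lagrange multipliers), each task independently picks its cheapest cloudlet given current prices, and a subgradient step raises the price on overloaded cloudlets. The function, the cost model, and the step size below are illustrative assumptions, not the paper's formulation.

```python
def offload(tasks, cloudlets, rounds=50, step=0.1):
    """Price-based (Lagrangian-style) offloading sketch under assumed inputs.

    tasks:     list of dicts {'cloudlet_cost': {j: base cost of running on j}}
    cloudlets: dict {j: capacity, i.e. max number of tasks j can host}
    """
    prices = {j: 0.0 for j in cloudlets}   # Lagrange multipliers, one per cloudlet
    assignment = []
    for _ in range(rounds):
        # Each task solves its own subproblem given the current prices.
        assignment = [min(t['cloudlet_cost'],
                          key=lambda j: t['cloudlet_cost'][j] + prices[j])
                      for t in tasks]
        # Subgradient update: raise the price where demand exceeds capacity.
        load = {j: assignment.count(j) for j in cloudlets}
        prices = {j: max(0.0, prices[j] + step * (load[j] - cloudlets[j]))
                  for j in cloudlets}
    return assignment

# Two unit-capacity cloudlets; both tasks slightly prefer 'a', so pricing
# must push one of them onto 'b'.
tasks = [{'cloudlet_cost': {'a': 1.0, 'b': 1.2}},
         {'cloudlet_cost': {'a': 1.0, 'b': 1.5}}]
plan = offload(tasks, {'a': 1, 'b': 1})
```

In this toy run the price on the contested cloudlet rises until the task with the smaller preference gap moves away, which is the distributed mechanism the decomposition exploits: no task needs global knowledge, only the posted prices.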
The massive increase in IoT devices and the data they collect raises the question of how to analyze all that data. Edge computing may provide a suitable compromise, but the question remains: how much processing should be done locally versus offloaded to other edge devices? Diverse application requirements and limited resources at the edge compound these challenges. In this article we propose Oops, an optimization framework that decides and adapts resource management at runtime and orchestrates the edge devices in a distributed manner. Experimental results show a significantly reduced runtime overhead while even increasing user utility compared to the state of the art.
Mobile, edge, and cloud computing have the potential to form a computing continuum for disruptive applications. The choice of where in the continuum to execute different functionalities is made at run time, based on context and requirements, with the goal of minimizing latency and battery consumption and maximizing availability. We propose A3-E, a unified model that exploits the Function-as-a-Service paradigm to abstract away the heterogeneity of the continuum. Experiments show that A3-E is capable of dynamically routing the application's requests across the continuum, reducing latency by up to 90% when using edge infrastructures and battery consumption by 74% when offloading mobile computation.
Despite the growing body of research focused on understanding Knowledge Intensive Processes (KIPs), the research question of how to measure the performance of KIPs and of the knowledge workers involved is still open. In this paper, we address it with a proposal to enable performance management of KIPs: an ontology that allows us to define process performance indicators in the context of KIPs, and a methodology that builds on the ontology and on the concepts of lead and lag indicators. Both the ontology and the methodology have been applied to a case study of a real ICT outsourcing company in Brazil.
We present a new spatio-temporal incentive-based approach to achieve a geographically balanced coverage of crowdsourced services. The proposed approach is based on a new spatio-temporal incentive model that considers multiple parameters, including location entropy and spatio-temporal density, to encourage the participation of crowdsourced service providers. We present a greedy network flow algorithm that offers incentives to redistribute the crowdsourced service providers and thus improve the balance of crowdsourced coverage. A novel participation probability model is introduced to estimate the expected number of crowdsourced service provider movements based on spatio-temporal features. Experimental results validate the efficiency and effectiveness of the proposed approach.
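The greedy flavor of such incentive-driven rebalancing can be illustrated with a minimal sketch: given regions with current and target provider counts and a per-provider relocation incentive cost, repeatedly fund the cheapest move from a surplus region to a deficit region until the incentive budget runs out. The region names, cost model, and unit-move granularity are illustrative assumptions, not the paper's network flow formulation.

```python
def rebalance(counts, targets, move_cost, budget):
    """Greedy incentive sketch (assumed toy model, not the paper's algorithm).

    counts/targets: dicts {region: current / desired provider count}
    move_cost:      move_cost[s][d] = incentive to move one provider s -> d
    budget:         total incentive budget
    """
    moves = []
    while budget > 0:
        surplus = [r for r in counts if counts[r] > targets[r]]
        deficit = [r for r in counts if counts[r] < targets[r]]
        # Affordable one-provider moves from surplus to deficit regions.
        options = [(move_cost[s][d], s, d) for s in surplus for d in deficit
                   if move_cost[s][d] <= budget]
        if not options:          # balanced, or nothing affordable remains
            break
        cost, s, d = min(options)      # fund the cheapest move first
        counts[s] -= 1
        counts[d] += 1
        budget -= cost
        moves.append((s, d))
    return moves, counts

# Downtown is over-covered, the suburb under-covered; each relocation
# incentive costs 2 and the budget of 5 funds exactly two moves.
moves, counts = rebalance({'downtown': 5, 'suburb': 1},
                          {'downtown': 3, 'suburb': 3},
                          {'downtown': {'suburb': 2.0}},
                          budget=5.0)
```

A production version would weigh moves by expected participation probability and spatio-temporal gain rather than raw cost, but the greedy cheapest-useful-move loop is the core idea.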
We propose in this paper a hybrid approach that combines deterministic and heuristic-based methods to improve the design structure of Web service interfaces and fix antipatterns. The first step is a deterministic method that uses a graph-partitioning technique to split the operations of a large service interface into more cohesive interfaces, each representing a distinct abstraction. The produced interfaces are then checked using a heuristic-based approach built on the non-dominated sorting genetic algorithm (NSGA-II) to correct potential antipatterns while minimizing interface design deviation, so as to avoid taking the service away from its original design.
Detecting concurrency relations between events is a fundamental primitive underpinning a range of process mining techniques. Existing approaches identify concurrency relations at the level of event types under a global interpretation: if two event types are declared to be concurrent, every occurrence of one event type is deemed to be concurrent with an occurrence of the other. In practice, this interpretation is too coarse-grained and leads to over-generalization. This paper proposes a finer-grained approach, whereby two event types may be deemed to be in a concurrency relation relative to one state of the process, but not relative to other states.