
ACM Transactions on Internet Technology (TOIT)

Latest Articles

Special Issue: Computational Ethics and Accountability

Automatic Resolution of Normative Conflicts in Supportive Technology Based on User Values

Social commitments (SCs) provide a flexible, norm-based governance structure for sharing and receiving data. However, users of data sharing... (more)

Preserving Privacy as Social Responsibility in Online Social Networks

Online social networks provide an environment for their users to share content with others, where the user who shares a content item is put in charge,... (more)

Measuring Moral Acceptability in E-deliberation: A Practical Application of Ethics by Participation

Current developments in governance and policy setting are challenging traditional top-down models of decision-making. On the one hand, citizens increasingly demand and are expected to participate directly in governance questions; on the other hand, social networking platforms increasingly provide podia for the spread of... (more)

Enhanced Audit Strategies for Collaborative and Accountable Data Sharing in Social Networks

Data sharing and access control management is one of the issues still hindering the development of decentralized online social networks (DOSNs), which... (more)

easIE: Easy-to-Use Information Extraction for Constructing CSR Databases From the Web

Public awareness of and concerns about companies’ social and environmental impacts have seen a marked increase over recent decades. In parallel, the quantity of relevant information has increased, as states pass laws requiring certain forms of reporting, researchers investigate... (more)

Accountable Protocols in Abductive Logic Programming

Finding the entity responsible for an unpleasant situation is often difficult, especially in artificial agent societies. SCIFF is a formalization of... (more)

On the Assessment of Systematic Risk in Networked Systems

In a networked system, the risk of security compromises depends not only on each node’s security but also on the topological structure formed... (more)

Rotten Apples or Bad Harvest? What We Are Measuring When We Are Measuring Abuse

Internet security and technology policy research regularly uses technical indicators of abuse to identify culprits and to tailor mitigation... (more)

Revisiting the Risks of Bitcoin Currency Exchange Closure

Bitcoin has enjoyed wider adoption than any previous cryptocurrency; yet its success has also attracted the attention of fraudsters who have taken... (more)

NEWS

Call for Editor-In-Chief Nominations

 

Nominations, including self-nominations, are invited for a three-year term as TOIT EiC, beginning on November 15, 2018.

The deadline for nomination submissions is August 31, 2018.


About TOIT

The ACM Transactions on Internet Technology (TOIT) brings together many computing disciplines, including software engineering, programming languages, middleware, database management, security, knowledge discovery and data mining, networking and distributed systems, communications, and performance and scalability. TOIT covers the results and roles of the individual disciplines and the relationships among them.

Forthcoming Articles
Adaptive Resource Allocation for Computation Offloading: A Control-theoretic Approach

Computation offloading is a key enabler of the Mobile Cloud Computing paradigm. In this work, a two-level resource allocation and admission control mechanism for a cluster of edge servers offers mobile users an alternative choice for executing their tasks. At the lower level, the behavior of the edge servers is modeled by a set of linear systems, and linear controllers are designed to meet the system's constraints and QoS metrics, while at the upper level, an optimizer tackles the problems of load balancing and application placement towards maximizing the number of offloaded requests.
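
To make the control-theoretic idea concrete, here is a minimal sketch (not the paper's design) of a discrete-time proportional-integral controller that adjusts the admission rate of offloaded tasks so an edge server tracks a target utilization. The gains, capacity, and plant model are invented assumptions, not the linear systems designed in the article.

    # Illustrative sketch only: a PI controller that throttles admitted offloaded
    # tasks toward a target server utilization. All constants are assumptions.
    KP, KI = 0.5, 0.1          # assumed controller gains
    TARGET_UTIL = 0.8          # desired server utilization
    SERVICE_CAPACITY = 100.0   # assumed tasks/s the server can process

    def simulate(steps=20, incoming_rate=120.0):
        admitted = incoming_rate   # tasks/s currently admitted
        integral = 0.0
        for t in range(steps):
            utilization = min(admitted / SERVICE_CAPACITY, 1.5)  # crude plant model
            error = TARGET_UTIL - utilization
            integral += error
            # PI law: raise or lower the admission rate based on the error signal.
            admitted = max(0.0, admitted + KP * error * SERVICE_CAPACITY
                                         + KI * integral * SERVICE_CAPACITY)
            admitted = min(admitted, incoming_rate)  # cannot admit more than arrives
            print(f"t={t:2d} util={utilization:.2f} admitted={admitted:.1f}")

    if __name__ == "__main__":
        simulate()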

BPMS-RA: A Novel Reference Architecture for Business Process Management Systems

A growing number of business process management systems are under development, both in academia and in practice. At the same time, the advent of big data analytics has changed the scope of such systems. However, reference architectures for business process management systems date back 20 years and, consequently, are not up to date with modern developments. Therefore, this paper proposes an up-to-date reference architecture, called BPMS-RA, for modern business process management systems, which is based on recent literature and on existing commercial implementations.

A Dynamic Data-Throttling Approach to Minimize Workflow Imbalance

Scientific workflows enable analysis of large datasets and complex scientific simulations. These workflows are often mapped onto distributed computational infrastructures to speed up their execution. Prior to execution, a workflow's structure may undergo transformations to accommodate the computing infrastructures. However, these transformations may cause workflow imbalance because of runtime or data imbalance. To mitigate these imbalances, in this paper we propose an autonomic data-throttling approach to compute how data transmission must be throttled among workflow jobs. Our approach relies on structural analysis of Petri nets, obtained by model transformation of data-intensive workflows, and on linear programming techniques.
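
As a rough illustration of the linear-programming step only (the Petri-net analysis is not reproduced), the following sketch uses SciPy's linprog to choose per-edge transmission rates under a shared bandwidth cap and a balance constraint between two branches. All variable names, sizes, and bounds are invented assumptions.

    # Toy LP sketch: pick rates for three workflow data dependencies so that two
    # branches finish together, subject to a shared bandwidth cap. Numbers invented.
    from scipy.optimize import linprog

    # Variables: x = [rate_a, rate_b, rate_c] in MB/s for three producer->consumer edges.
    # Objective: maximize total throughput == minimize -(rate_a + rate_b + rate_c).
    c = [-1.0, -1.0, -1.0]

    # Shared outgoing link of the producer can carry at most 50 MB/s.
    A_ub = [[1.0, 1.0, 1.0]]
    b_ub = [50.0]

    # Balance: branch a sends 100 MB and branch b 200 MB; equal transfer times means
    # 100/rate_a = 200/rate_b, i.e. 2*rate_a - rate_b = 0.
    A_eq = [[2.0, -1.0, 0.0]]
    b_eq = [0.0]

    bounds = [(1.0, 40.0)] * 3  # per-edge minimum and maximum rates

    res = linprog(c, A_ub=A_ub, b_ub=b_ub, A_eq=A_eq, b_eq=b_eq, bounds=bounds)
    print("throttled rates (MB/s):", res.x)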

Special Issue on the Economics of Security and Privacy: Guest Editors' Introduction

On the Profitability of Bundling Sale Strategy for Online Service Markets with Network Effects

In this paper, we aim to understand the dynamics of the bundling sale strategy and under what circumstances it is more attractive than separate sales. We focus on online service markets that exhibit network effects. We provide mathematical models to capture the interactions between buyers and sellers, analyze the market equilibrium and its stability, and formulate an optimization framework to determine the optimal sale strategy for the service provider. We analyze the impact of key factors such as network effects, operating costs, and the variance and correlation of customers' valuations of these services.
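
To give intuition for why bundling can beat separate sales, here is a toy Monte Carlo comparison that ignores the paper's network effects and operating costs: with negatively correlated valuations, a single bundle price captures more revenue than two separate posted prices. The valuation distribution and price grid are assumptions made for illustration.

    # Toy comparison (not the paper's model): monopoly revenue under separate sales
    # vs. pure bundling for two services with negatively correlated buyer valuations.
    import random

    random.seed(0)
    N = 10_000
    buyers = []
    for _ in range(N):
        v1 = random.random()
        v2 = max(0.0, min(1.0, 1.0 - v1 + random.uniform(-0.1, 0.1)))  # anti-correlated
        buyers.append((v1, v2))

    def best_revenue(values):
        """Best single posted price for one item (or the bundle), by grid search."""
        best = 0.0
        for p in [i / 100 for i in range(1, 201)]:
            revenue = p * sum(1 for v in values if v >= p)
            best = max(best, revenue)
        return best

    separate = best_revenue([v1 for v1, _ in buyers]) + best_revenue([v2 for _, v2 in buyers])
    bundle = best_revenue([v1 + v2 for v1, v2 in buyers])
    print(f"separate sales revenue: {separate:.0f}")
    print(f"pure bundling revenue:  {bundle:.0f}")
    # Bundling wins here because the bundle valuation v1 + v2 is concentrated near 1,
    # so one price captures almost every buyer.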

TOIT Reviewers for 2017

A Dynamic Service-Migration Mechanism in Edge Cognitive Computing

Cognitive computing is changing the way the world is seen, and today cognitive applications are mostly deployed in the cloud. However, concerns about security, privacy, network connectivity, and other issues have driven a paradigm shift, moving cognitive computing from the cloud to the network's edge. This paper first introduces a new network architecture for edge cognitive computing (ECC), then describes the ECC evolution process and design issues in detail, and builds an experimental platform for dynamic service migration based on mobile users' behavioral cognition. The experimental results show that the proposed ECC architecture achieves ultra-low latency and a high quality of user experience.

Deep Reinforcement Scheduling for Mobile Crowdsensing in Fog Computing

Mobile crowdsensing is becoming a promising technology for emerging Internet of Things (IoT) applications in smart environments. In this paper, we introduce a framework enabling mobile crowdsensing in fog environments with a hierarchical scheduling strategy. We first introduce the crowdsensing framework, which has a hierarchical structure to organize different resources. Since the positions and performance of fog servers influence the quality of service of IoT applications, we formulate a scheduling problem in the hierarchical fog structure and solve it using a deep reinforcement learning-based strategy. Extensive simulation results show that our solution outperforms alternative scheduling approaches for mobile crowdsensing.
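
As a simplified illustration of reinforcement-learning-based scheduling (tabular Q-learning standing in for the deep method in the paper), the sketch below learns whether to place a sensing task on a fog node or in the cloud as the fog load varies. The state space, reward function, and load dynamics are invented assumptions.

    # Simplified sketch: tabular Q-learning that decides, per discrete fog load
    # level, whether to schedule a sensing task on the fog node or in the cloud.
    import random

    random.seed(1)
    ACTIONS = ["fog", "cloud"]
    ALPHA, GAMMA, EPSILON = 0.1, 0.9, 0.1
    Q = {}  # (fog_load_level, action) -> estimated value

    def reward(fog_load, action):
        # Fog is fast when lightly loaded; the cloud has a constant, higher latency.
        return -(1 + 3 * fog_load) if action == "fog" else -3

    def choose(state):
        if random.random() < EPSILON:
            return random.choice(ACTIONS)                           # explore
        return max(ACTIONS, key=lambda a: Q.get((state, a), 0.0))   # exploit

    fog_load = 0  # discrete load level 0..3
    for _ in range(5000):
        action = choose(fog_load)
        r = reward(fog_load, action)
        next_load = min(3, fog_load + 1) if action == "fog" else max(0, fog_load - 1)
        best_next = max(Q.get((next_load, a), 0.0) for a in ACTIONS)
        old = Q.get((fog_load, action), 0.0)
        Q[(fog_load, action)] = old + ALPHA * (r + GAMMA * best_next - old)
        fog_load = next_load

    for load in range(4):
        best = max(ACTIONS, key=lambda a: Q.get((load, a), 0.0))
        print(f"fog load {load}: schedule on {best}")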

A Fog-Based Application for Human Activity Recognition Using Personal Smart Devices

The diffusion of personal smart devices capable of capturing and analyzing users' data has encouraged the growth of novel sensing methodologies. In this context, simple Human Activity Recognition (HAR) techniques can be implemented directly on mobile devices; however, when complex activities need to be analyzed in a timely manner, users' personal devices can operate as part of a more complex architecture. We propose a multi-device HAR framework that exploits the fog computing paradigm to move heavy computation from the sensing layer (wrist-worn devices) to intermediate devices (smartphones) and then to the cloud. Results show the effectiveness of the framework in a real-world scenario.

Fog-Based Secure Communications for Low-Power IoT Devices

Standard security protocols have a computational complexity that is unsuitable for networks of low-power devices. The typical solution relies on cloud services that facilitate deployment, mediate all messages among things, and enable secure communications, but it has the disadvantage of requiring permanent Internet connectivity even for things connected over a local network. This paradigm is inappropriate in several scenarios; hence, we propose an efficient fog-based system that enables secure communications and preserves the easy management of cloud-assisted IoT. The proposal is based on an original lightweight proxy re-encryption scheme that can be executed even by large networks of low-power devices.

Privacy-Preserving High-Order Bi-Lanczos in Integrated Edge-Fog-Cloud Architecture for CPSS

Smart environments (SE) are expected to benefit significantly from the integrated edge-fog-cloud (IEFC) paradigm. High-order Bi-Lanczos has emerged as a powerful tool in SE, but completing data processing without compromising privacy remains a challenge. In this work, we propose a novel privacy-preserving high-order Bi-Lanczos (PPHOBL) scheme in the IEFC paradigm for SE. We first propose a privacy-preserving big data processing model using the IEFC paradigm. Subsequently, making use of this model, we present the PPHOBL scheme. Finally, we analyze the scheme based on an intelligent surveillance system. The results demonstrate the superiority of the scheme for SE.

An Efficient Computation Offloading for Multi-cloudlet Mobile-Edge Computing

In this paper, we focus on computation offloading in a multi-cloudlet environment. We consider several mobile users with different energy- and latency-constrained tasks that can be offloaded to cloudlets. We investigate an offloading policy that decides which tasks should be offloaded and selects the assigned cloudlet in accordance with network and system resources. The objective is to minimize the execution time and the energy consumption. We propose a distributed relaxation heuristic based on Lagrangian decomposition. Numerical results show that our policy quickly achieves a good offloading solution and performs better in large-scale scenarios than alternative approaches from the literature.
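
The sketch below is not the paper's Lagrangian-decomposition heuristic; it is a minimal greedy baseline that assigns each task to the local device or to one of two cloudlets by minimizing a weighted sum of execution time and energy under an assumed per-cloudlet capacity. Task sizes, speeds, and energy figures are invented.

    # Minimal greedy offloading baseline (stand-in for the paper's method): assign
    # each task to the cheapest option, respecting an assumed cloudlet capacity.
    TASKS = [("t1", 8.0), ("t2", 3.0), ("t3", 12.0), ("t4", 6.0)]   # (name, Mcycles)
    CLOUDLETS = {"c1": {"speed": 10.0, "capacity": 2, "used": 0},
                 "c2": {"speed": 6.0,  "capacity": 2, "used": 0}}
    LOCAL_SPEED, LOCAL_ENERGY_PER_MC = 2.0, 0.5   # slow local CPU, high energy cost
    TX_TIME, TX_ENERGY = 0.3, 0.2                 # fixed offloading overhead per task
    W_TIME, W_ENERGY = 1.0, 1.0                   # weights of the two objectives

    def cost_local(mc):
        return W_TIME * (mc / LOCAL_SPEED) + W_ENERGY * (mc * LOCAL_ENERGY_PER_MC)

    def cost_cloudlet(mc, c):
        return W_TIME * (TX_TIME + mc / c["speed"]) + W_ENERGY * TX_ENERGY

    for name, mc in sorted(TASKS, key=lambda t: -t[1]):   # largest tasks first
        options = [("local", cost_local(mc))]
        for cid, c in CLOUDLETS.items():
            if c["used"] < c["capacity"]:
                options.append((cid, cost_cloudlet(mc, c)))
        choice, cost = min(options, key=lambda o: o[1])
        if choice != "local":
            CLOUDLETS[choice]["used"] += 1
        print(f"{name}: run on {choice} (cost {cost:.2f})")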

Oops: Optimizing Operation-mode Selection for IoT Edge Devices

The massive increase in IoT devices and their collected data raises the question of how to analyze all that data. Edge computing may provide a suitable compromise, but the question remains: how much processing should be done locally versus offloaded to other edge devices? Diverse application requirements and limited resources at the edge compound these challenges. In this article, we propose Oops, an optimization framework that decides and adapts the resource management at runtime and orchestrates the edge devices in a distributed manner. Experimental results show a significantly reduced runtime overhead while increasing user utility compared to the state of the art.

Unified Model for the Mobile-Edge-Cloud Continuum

Mobile, edge, and cloud computing have the potential to form a computing continuum for disruptive applications. The choice of where in the continuum to execute different functionalities is made at run time, based on context and requirements, with the goal of minimizing latency and battery consumption and maximizing availability. We propose A3-E, a unified model that exploits the Function-as-a-Service paradigm to abstract away the heterogeneity of the continuum. Experiments show that A3-E is capable of dynamically routing the application's requests across the continuum, reducing latency by up to 90% when using edge infrastructures and battery consumption by 74% when offloading mobile computation.

Measuring Performance in Knowledge Intensive Processes

Despite the growing body of research focused on understanding Knowledge Intensive Processes (KIPs), the question of how to measure the performance of KIPs and of the knowledge workers involved is still open. In this paper, we address it with a proposal to enable performance management of KIPs: an ontology that allows us to define process performance indicators in the context of KIPs, and a methodology that builds on the ontology and on the concepts of lead and lag indicators. Both the ontology and the methodology have been applied to a case study of a real ICT outsourcing company in Brazil.

Incentive-Based Crowdsourcing of Hotspot Services

We present a new spatio-temporal incentive-based approach to achieve geographically balanced coverage of crowdsourced services. The proposed approach is based on a new spatio-temporal incentive model that considers multiple parameters, including location entropy and spatio-temporal density, to encourage the participation of crowdsourced service providers. We present a greedy network flow algorithm that offers incentives to redistribute crowdsourced service providers and improve the balance of crowdsourced coverage. A novel participation probability model is introduced to estimate the expected number of crowdsourced service provider movements based on spatio-temporal features. Experimental results validate the efficiency and effectiveness of the proposed approach.
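
As a toy illustration of incentive-driven redistribution (a plain greedy loop, not the paper's network-flow algorithm or participation probability model), the sketch below offers the cheapest incentives first to move providers from over-covered regions to under-covered ones within a budget. Region names, counts, and costs are assumptions.

    # Toy redistribution sketch: spend a budget on the cheapest incentives that
    # move providers from surplus regions to regions with uncovered demand.
    supply = {"A": 3, "B": 0}            # providers above the target in each region
    demand = {"C": 2, "D": 1}            # providers missing in each region
    incentive_cost = {("A", "C"): 2.0, ("A", "D"): 5.0,
                      ("B", "C"): 4.0, ("B", "D"): 1.0}
    budget = 10.0

    moves = sorted(incentive_cost.items(), key=lambda kv: kv[1])  # cheapest first
    for (src, dst), cost in moves:
        while supply.get(src, 0) > 0 and demand.get(dst, 0) > 0 and budget >= cost:
            supply[src] -= 1
            demand[dst] -= 1
            budget -= cost
            print(f"offer {cost:.1f} to move one provider {src} -> {dst}")
    print("remaining uncovered demand:", demand, "unused budget:", budget)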

A Hybrid Approach for Improving the Design Quality of Web Service Interfaces

In this paper, we propose a hybrid approach that improves the design structure of Web service interfaces and fixes antipatterns by combining deterministic and heuristic-based methods. The first step is a deterministic method that uses a graph-partitioning technique to split the operations of a large service interface into more cohesive interfaces, each representing a distinct abstraction. The produced interfaces are then checked using a heuristic approach based on the non-dominated sorting genetic algorithm (NSGA-II) to correct potential antipatterns while minimizing interface design deviation, so as to avoid taking the service away from its original design.
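
To illustrate the flavor of the first, deterministic step (and not the paper's actual partitioning or the NSGA-II refinement), the sketch below groups a service interface's operations by the message types they share and takes connected components of that similarity graph as candidate smaller interfaces. Operation names and types are invented.

    # Illustrative sketch only: split a large interface by clustering operations
    # that share parameter/return types, via connected components of a similarity graph.
    operations = {
        "createOrder":   {"Order", "LineItem"},
        "cancelOrder":   {"Order"},
        "getInvoice":    {"Invoice", "Order"},
        "createAccount": {"Customer", "Account"},
        "closeAccount":  {"Account"},
    }

    def neighbors(op):
        # Two operations are linked if they share at least one type.
        return [o for o in operations if o != op and operations[o] & operations[op]]

    seen, partitions = set(), []
    for op in operations:
        if op in seen:
            continue
        component, stack = [], [op]
        while stack:
            cur = stack.pop()
            if cur in seen:
                continue
            seen.add(cur)
            component.append(cur)
            stack.extend(neighbors(cur))
        partitions.append(component)

    print(partitions)  # e.g. the order-related and account-related operations split apart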

Local Concurrency Detection in Business Process Event Logs

Detecting concurrency relations between events is a fundamental primitive underpinning a range of process mining techniques. Existing approaches identify concurrency relations at the level of event types under a global interpretation. If two event types are declared to be concurrent, every occurrence of one event type is deemed to be concurrent with one occurrence of the other. In practice, this interpretation is too coarse-grained and leads to over-generalization. This paper proposes a finer-grained approach, whereby two event types may be deemed to be in a concurrency relation relative to one state of the process, but not relative to other states.
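
For context, the sketch below computes the coarse-grained, global concurrency relation that the paper refines: two event types are deemed concurrent if each directly follows the other somewhere in the log, as in the classic alpha algorithm. The finer, state-relative relation proposed in the paper is not reproduced, and the event log is invented.

    # Global (alpha-style) concurrency check over a toy event log: a and b are
    # concurrent if both (a, b) and (b, a) appear in the directly-follows relation.
    log = [
        ["register", "check_credit", "check_stock", "ship"],
        ["register", "check_stock", "check_credit", "ship"],
        ["register", "check_credit", "reject"],
    ]

    directly_follows = set()
    for trace in log:
        for a, b in zip(trace, trace[1:]):
            directly_follows.add((a, b))

    concurrent = {(a, b) for (a, b) in directly_follows
                  if (b, a) in directly_follows and a < b}
    print("globally concurrent pairs:", concurrent)
    # Here the two checks appear in both orders, so they are flagged as concurrent
    # everywhere, which is exactly the over-generalization the paper addresses.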

