Abstracts Track 2023

Area 1 - Technologies and Foundations

Nr: 8

Proposal for a Granular Access Control Method Based on Similarity of File Accesses Behavior Among Users


Yuki Kodaka, Hirokazu Hasegawa and Hiroki Takakura

Abstract: It is difficult for current technology to completely prevent cyber-attacks. Assuming that damage is inevitable, it is important to realize a method that minimizes the risk of information leakage and system destruction. In conventional studies, file access privileges are granted broadly according to each user's role. However, the importance of data and the roles of users are constantly changing, and the risk of damage also varies with the stage of a cyber-attack. For a flexible response to these changes, a novel access control is required. As a first step, we propose a method to minimize file access privileges based on access history.  The method refers to an event log that records who accessed a file, with its timestamp. If there is a record within a certain period of time, it means that someone has been allowed access to the file. Otherwise, the file is treated as if no one has access permission. To simplify our discussion, we assume one month as the period.  When a user tries to access a file, our method judges whether the access is allowed or not. In usual business activities, we expect that personnel in the same department show similar file access behavior; that is, the order of their file accesses is assumed to be almost the same. Our method refers to the event logs of a user and the other members of his/her department. If the order seems to be similar, the access is allowed.  Our method uses graph theory to judge the similarity. The first step is to extract, from the event log, the order of accesses by users in the same department over the past month. If file A and then file B is accessed, a link from node A to node B is created. The weight of the link is calculated according to the number of accesses. By dividing the weight of each link by the total of all link weights, normalized weights are calculated. In this abstract, two conditions are defined. Condition 1 represents the threshold at which a link is considered major.
Condition 2 represents the threshold for the acceptable ratio of minor links in a series of file accesses. Here, 0.1 and 0.25 are used as examples.  Let us consider the following scenario. A user has accessed files A, B, C, and D in order, and is now trying to access E. In the access patterns of the department, A→B and C→D are observed many times; for example, these links are assigned weights of 100 and 97, respectively. B→C and D→E are seldom accessed, so they are weighted, e.g., 2 and 1. Because the total of the link weights is 200, the normalized weight of the link D→E is 0.005, below the 0.1 threshold of Condition 1, so it is a minor link. The ratio of minor links is 2/5 = 0.4, which exceeds the 0.25 threshold of Condition 2. In such cases, the access is rejected.  If we tried to judge by the access order of all files, the number of links would be enormous; as a result, the normalized weights of all links would be too small. Therefore, it is necessary to judge from only the most recent n accesses preceding the attempted access from D to E. Although this paper omits discussion of the settings of Condition 1 and Condition 2 due to the page limitation, these conditions should be varied depending on the situation, such as the level of risk of cyber-attacks and irresponsible behavior of users.  Although a necessary access may sometimes be rejected and manual operation then be required, the advantage of this method is that it minimizes access privileges as the roles of users and the access to data change.
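The judgment described in this abstract can be sketched as follows. This is an illustrative reading, not the authors' implementation: the function names are ours, and we assume the minor-link ratio is taken over the number of files in the series, which reproduces the abstract's 2/5 = 0.4 example.

```python
from collections import defaultdict

MAJOR_THRESHOLD = 0.1    # Condition 1: a link with normalized weight >= 0.1 is major
MINOR_RATIO_MAX = 0.25   # Condition 2: acceptable ratio of minor links in a series

def build_normalized_weights(access_sequences):
    """Build normalized link weights from the department's per-user
    file access sequences over the past month."""
    weights = defaultdict(int)
    for seq in access_sequences:
        for a, b in zip(seq, seq[1:]):   # consecutive accesses A then B
            weights[(a, b)] += 1
    total = sum(weights.values())
    return {link: w / total for link, w in weights.items()}

def is_access_allowed(recent_files, requested_file, norm_weights):
    """Judge a new access from the user's n most recent file accesses."""
    path = recent_files + [requested_file]
    links = zip(path, path[1:])
    minor = [l for l in links if norm_weights.get(l, 0.0) < MAJOR_THRESHOLD]
    # Assumption: ratio over the number of files in the series (2/5 = 0.4).
    return len(minor) / len(path) <= MINOR_RATIO_MAX

# The abstract's scenario: A->B (100), B->C (2), C->D (97), D->E (1), total 200.
norm = {("A", "B"): 0.5, ("B", "C"): 0.01, ("C", "D"): 0.485, ("D", "E"): 0.005}
print(is_access_allowed(["A", "B", "C", "D"], "E", norm))  # False: 0.4 > 0.25
```

With these example weights, B→C and D→E are minor, so the attempted access to E is rejected.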

Nr: 9

Feasibility Verification on the Impact of Frequent Access Control Updates Based on User Reliability


Atsushi Shinoda, Hirokazu Hasegawa, Yukiko Yamaguchi, Hajime Shimada and Hiroki Takakura

Abstract: Nowadays, telecommuting, in which users connect to a corporate network from remote locations such as their homes, is increasing as a measure to prevent the spread of COVID-19. However, telecommuting exposes companies to information security risks by allowing users to connect terminals from homes that are outside the company's control. If such a terminal is infected with malware, it may become a bridgehead that allows lateral movement in the corporate network. Further security enhancements are required to ensure secure telecommuting, but they easily cause trade-off issues between security and business efficiency that the administrator has to resolve. As a solution to this problem, we have proposed an access control system that minimizes the loss of business efficiency while enhancing security. The system calculates the reliability of each connected user and implements network access control that allows connection to many resources if the user's reliability is high, and minimizes the number of resources available for connection if the user's reliability is low. The system frequently recalculates reliability and updates access control dynamically. This secures the network by minimizing a user's accessible range when the user's reliability decreases for any reason, and restores the accessible range to recover business efficiency when the user's reliability returns to normal. Since it is important that access control adapts to conditions that change from moment to moment, the higher the frequency of access control updates, the better. However, frequent updating of access control can place a heavy load on network equipment. In this research, we verified the impact of the dynamic access control function on the corporate network when the proposed system is deployed. The proposed system was implemented in a pseudo-corporate network using SDN.
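The dynamic control cycle described above can be sketched roughly as follows. This is a hypothetical illustration only: the function names, the reliability model, and the rule that maps reliability to an accessible range are our assumptions, not the authors' system.

```python
import time

UPDATE_INTERVAL = 30  # seconds; the experiments vary this from 30 s down to 5 s

def accessible_resources(reliability, resources):
    """Illustrative rule: higher reliability (in [0, 1]) unlocks a larger
    share of the resource list; low reliability minimizes it."""
    n = int(len(resources) * reliability)
    return set(resources[:n])

def control_loop(users, resources, recalc_reliability, apply_flow_rules):
    """Periodically recalculate each user's reliability and push the
    corresponding access control rules to the SDN switch."""
    while True:
        for user in users:
            r = recalc_reliability(user)        # e.g. derived from terminal state
            allowed = accessible_resources(r, resources)
            apply_flow_rules(user, allowed)     # update network access control
        time.sleep(UPDATE_INTERVAL)
```

The point of the sketch is the shape of the trade-off: each pass through the loop is extra work for the switch, so shorter `UPDATE_INTERVAL` values tighten security responsiveness at the cost of network-equipment load.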
While resource servers and clients in the corporate network were communicating, access control was updated by recalculating reliability at different frequencies, and we confirmed how the communication was affected. We also verified environmental differences by using computers with different CPUs as the SDN switch that performs network access control. Experiments were conducted with access control update frequencies of (I) every 30 s (seconds), (II) every 20 s, (III) every 10 s, and (IV) every 5 s. Six pairs of clients and servers in the corporate network communicated via SMB, and we calculated the average transfer time for a file of about 660 MB. The experimental results showed that the file transfer time deteriorates slightly compared with baselines that do not deploy the dynamic update. For a software switch on high-performance hardware, the average time increased from 41 s to (I) 45 s, (II) 43 s, (III) 53 s, and (IV) 57 s. On low-performance hardware, the time changed from 59 s to (I) 58 s, (II) 53 s, (III) 60 s, and (IV) 72 s. In conclusion, it was confirmed that although the time did not increase strictly monotonically, a high frequency of updates caused additional latency in communication. It was also confirmed that there were differences depending on the performance of the SDN switch equipment. (IV) is an experiment that assumes an unrealistically high update frequency, which clearly affected the results. However, the system is still considered practical enough, since the additional delay lasts at most 10 s in realistic settings.
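For reference, the added latency implied by the reported averages can be tallied directly (the times are taken from the abstract; the dictionary names are ours):

```python
# Reported average SMB transfer times (seconds) for the ~660 MB file.
baseline = {"high": 41, "low": 59}   # no dynamic access control update
measured = {
    "high": {"I": 45, "II": 43, "III": 53, "IV": 57},
    "low":  {"I": 58, "II": 53, "III": 60, "IV": 72},
}

# Added latency relative to the corresponding baseline.
overhead = {hw: {freq: t - baseline[hw] for freq, t in times.items()}
            for hw, times in measured.items()}
print(overhead)
```

Note that on the low-performance switch, cases (I) and (II) come out slightly below the baseline (-1 s and -6 s), which is the non-monotonic behavior the abstract mentions; only the 5 s case shows a large penalty on both configurations.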