= IPC for Dummies =

Understanding HelenOS IPC is essential for the development of HelenOS user space servers and services and,
to a much lesser extent, for the development of any HelenOS user space code. This document attempts to concisely explain how
to use the HelenOS IPC. It doesn't aspire to be exhaustive nor to cover the implementation details of the IPC
subsystem itself. The original design motivations are explained in Chapter 8 of the [http://www.helenos.eu/doc/design.pdf HelenOS design documentation].

 * [#IpcIntroRT Introduction to the runtime environment]
…
 * [#IpcSkeletonSrv Writing a skeleton server]

== Introduction to the runtime environment == #IpcIntroRT

…
each task breaks down to one or more independently scheduled ''threads''.

In user space, each thread executes by means of lightweight execution entities called ''fibrils''.
The distinction between threads and fibrils is that the kernel schedules threads and it is completely unaware of fibrils.

The standard library cooperatively schedules fibrils and lets them run on behalf of the underlying thread. Due to this
…
 * The underlying thread is preempted by the kernel

Fibrils were introduced especially to facilitate more straightforward IPC communication.

== Basics of IPC communication == #IpcIntroIPC

Because tasks are isolated from each other, they need to use the kernel's syscall interface for communication with the rest of
the world. In the previous generation of microkernels, the emphasis was put on synchronous IPC communication. In HelenOS, both
synchronous and asynchronous communication is possible, but the HelenOS IPC is primarily asynchronous.

The concept of and the terminology used in HelenOS IPC is based on the natural abstraction of a telephone dialogue between a man on one
…
Because of that, the call cannot be immediately answered, but needs to be first picked up from the answerbox by the second party.

In HelenOS, the IPC communication goes as in the following example. A user space fibril uses one of its ''phones'', which is connected to the
callee task's ''answerbox'', and makes a short ''call''. The caller fibril can either make another call or wait for the answer. The callee task
now has a missed call stored in its answerbox.
Sooner or later, one of the callee task's fibrils will pick the call up, process it and either answer
…
available fibril to pick it up, but then we could not talk about a connection and if we tried to preserve the concept of a connection, the code handling
incoming calls would most likely become full of state automata and callbacks. In HelenOS, there is a specialized piece of software called the asynchronous
framework, which forms a layer above the low-level IPC mechanism. The asynchronous framework does all the state automata and callback dirty work
itself and hides the implementation details from the programmer.

…
With the asynchronous framework in place, there are two kinds of fibrils:

 * manager fibrils, and
 * worker fibrils.

…
The benefit of using the asynchronous framework and fibrils is that the programmer can do without callbacks and state automata and still use asynchronous communication.

=== Features of HelenOS IPC ===

The features of HelenOS IPC can be summarized in the following list:

 * short calls, consisting of one argument for the method number and five arguments of payload,
…
 * sharing memory from another task,
 * sharing memory to another task,
 * interrupt notifications for user space device drivers.

The first two items can be considered basic building blocks.

…

{{{
#include <async.h>
...
/*
 * Use the naming service session that abstracts
 * the phone to the naming service.
 */
async_exch_t *exch = async_exchange_begin(ns_session);
if (exch == NULL) {
    /* Handle error creating an exchange */
}

async_sess_t *session =
    async_connect_me_to_iface(exch,
        INTERFACE_VFS, SERVICE_VFS, 0);
async_exchange_end(exch);

if (session == NULL) {
    /* Handle error connecting to the VFS */
}
}}}

''async_connect_me_to_iface()'' is a wrapper for sending the ''IPC_M_CONNECT_ME_TO'' low-level IPC message to the naming service.
The naming service simply forwards the ''IPC_M_CONNECT_ME_TO'' call to the destination service, provided that such a service exists.
Note that the service to which you intend to connect will create a new fibril for handling the connection from your task.
The newly created fibril in the destination task will receive the ''IPC_M_CONNECT_ME_TO'' call and will be given a chance either
to accept or reject the connection. In the snippet above, the client doesn't make use of the server-defined connection argument.
If the connection is accepted, a new non-negative phone number will be returned to the client task and the asynchronous framework
will create a new session for it. From that time on, the task can use that session for making calls to the service.
The connection exists until either side closes it.

The client uses the ''async_hangup(async_sess_t *session)'' interface to close the connection.
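
For illustration, once the client no longer needs the VFS service, it simply hangs up the session obtained above. This is only a minimal sketch; it assumes that ''async_hangup()'' reports errors through its return value, like the other interfaces shown in this document:

{{{
/* Close the connection; the session must not be used afterwards. */
int rc = async_hangup(session);
if (rc != EOK) {
    /* Handle error while closing the connection */
}
}}}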

== Passing short IPC messages == #IpcShortMsg

…
protocol-defined methods, the payload arguments will be defined by the protocol in question.

Even though a user space task can use the low-level IPC mechanisms directly, it is strongly discouraged (unless you know what you are doing) in favor of
using the asynchronous framework. Making an asynchronous request via the asynchronous framework is fairly easy, as can be seen in the following example:

{{{
#include <async.h>
...
async_exch_t *exch = async_exchange_begin(session);
if (exch == NULL) {
    /* Handle error creating an exchange */
}

ipc_call_t answer;
aid_t req = async_send_3(exch, VFS_IN_OPEN, lflags, oflags, 0, &answer);
async_exchange_end(exch);
...
int rc;
async_wait_for(req, &rc);

if (rc != EOK) {
    /* Handle error from the server */
}
}}}

In the example above, the standard library is making an asynchronous call to the VFS server.
The method number is ''VFS_IN_OPEN'', and ''lflags'', ''oflags'' and 0 are the three payload arguments defined
by the VFS protocol. Note that the number of arguments figures in the numeric suffix of the ''async_send_3()''
function name. There are analogous interfaces which take from zero to five payload arguments.
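
If the answer carries payload arguments, they can be read from the ''answer'' structure once ''async_wait_for()'' has returned. The following sketch assumes that the server stores the resulting file handle in the first payload argument of the answer (which is what the ''async_answer_1()'' example further below does); the accessor is spelled ''IPC_GET_ARG1'' here, although its exact name has varied between HelenOS releases:

{{{
int rc;
async_wait_for(req, &rc);

if (rc == EOK) {
    /* Read the first payload argument of the answer: the new file handle. */
    int fd = IPC_GET_ARG1(answer);
    ...
}
}}}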

…

{{{
#include <async.h>
...
async_exch_t *exch = async_exchange_begin(session);
if (exch == NULL) {
    /* Handle error creating an exchange */
}

int rc = async_req_1_0(exch, VFS_IN_CLOSE, fildes);
async_exchange_end(exch);

if (rc != EOK) {
    /* Handle error from the server */
}
}}}

The example above illustrates how the standard library synchronously calls the VFS server and asks it to close a file descriptor passed
in the ''fildes'' argument, which is the only payload argument defined for the ''VFS_IN_CLOSE'' method. The interface encodes the number of input and return arguments in the function name, so there are variants that take or return a different number of arguments. Note that contrary to the asynchronous example above, the return arguments are stored directly to pointers passed to the function.

The interface for answering calls is ''async_answer_n()'', where ''n'' is the number of return arguments. This is how the VFS server answers the ''VFS_IN_OPEN'' call:

{{{
async_answer_1(rid, EOK, fd);
}}}

In this example, ''rid'' is the capability of the received call, ''EOK'' is the return value and ''fd'' is the only return argument.
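
To put ''async_answer_n()'' in context, a worker fibril on the server side typically receives calls in a loop, dispatches on the method number and answers each call. The following is only a rough sketch: the type of the call handle (spelled ''ipc_callid_t'' here) and the exact helper names have changed between HelenOS versions, so consult the current ''async.h'' and the skeleton server section below for the authoritative interface:

{{{
while (true) {
    ipc_call_t call;
    ipc_callid_t callid = async_get_call(&call);

    if (!IPC_GET_IMETHOD(call)) {
        /* The client hung up the connection. */
        async_answer_0(callid, EOK);
        break;
    }

    switch (IPC_GET_IMETHOD(call)) {
    case VFS_IN_OPEN:
        /* Process the request, then answer with a return value and the file handle. */
        async_answer_1(callid, EOK, fd);
        break;
    default:
        /* Unknown method. */
        async_answer_0(callid, ENOTSUP);
    }
}
}}}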

== Passing large data via IPC == #IpcDataCopy

Passing five words of payload in a request and five words of payload in an answer is not very suitable for larger data transfers.
Instead, the application can use these building blocks to negotiate the transfer of a much larger block (currently there is a hard limit
of 64 KiB). The negotiation has three phases:

 * the initial phase in which the client announces its intention to copy memory to or from the recipient,
…
 * the final phase in which the server either accepts or rejects the bid.

We use the terms client and server instead of the terms sender and recipient, because a client can be both the sender and the recipient and
a server can be both the recipient and the sender, depending on the direction of the data transfer. In the following text, we'll cover both.

In theory, the programmer can use the low-level short IPC messages to implement all three phases himself or herself. However, this can be
tedious and error-prone and therefore the standard library offers convenience wrappers for each phase instead.

=== Sending data ===