// SPDX-License-Identifier: GPL-2.0-only
/*
 * VMware vSockets Driver
 *
 * Copyright (C) 2007-2013 VMware, Inc. All rights reserved.
 */

/* Implementation notes:
 *
 * - There are two kinds of sockets: those created by user action (such as
 * calling socket(2)) and those created by incoming connection request packets.
 *
 * - There are two "global" tables, one for bound sockets (sockets that have
 * specified an address that they are responsible for) and one for connected
 * sockets (sockets that have established a connection with another socket).
 * These tables are "global" in that all sockets on the system are placed
 * within them. Note, though, that the bound table contains an extra entry
 * for a list of unbound sockets, and SOCK_DGRAM sockets will always remain in
 * that list. The bound table is used solely for lookup of sockets when packets
 * are received, and that's not necessary for SOCK_DGRAM sockets since we create
 * a datagram handle for each and need not perform a lookup. Keeping SOCK_DGRAM
 * sockets out of the bound hash buckets reduces the chance of collisions
 * when looking for SOCK_STREAM sockets and prevents us from having to check the
 * socket type in the hash table lookups.
 *
 * - Sockets created by user action will either be "client" sockets that
 * initiate a connection or "server" sockets that listen for connections; we do
 * not support simultaneous connects (two "client" sockets connecting).
 *
 * - "Server" sockets are referred to as listener sockets throughout this
 * implementation because they are in the TCP_LISTEN state. When a
 * connection request is received (the second kind of socket mentioned above),
 * we create a new socket and refer to it as a pending socket. These pending
 * sockets are placed on the pending connection list of the listener socket.
 * When future packets are received for the address the listener socket is
 * bound to, we check whether the source of the packet has an existing pending
 * connection. If it does, we process the packet for the pending socket. When
 * that socket reaches the connected state, it is removed from the listener
 * socket's pending list and enqueued in the listener socket's accept queue.
 * Callers of accept(2) will accept connected sockets from the listener
 * socket's accept queue. If the socket cannot be accepted for some reason
 * then it is marked rejected. Once the connection is accepted, it is owned by
 * the user process and the responsibility for cleanup falls to that user
 * process.
 *
 * - It is possible that these pending sockets will never reach the connected
 * state; in fact, we may never receive another packet after the connection
 * request. Because of this, we must schedule a cleanup function to run in the
 * future, after enough time has passed for a connection to have been
 * established. This function ensures that the socket is off all lists so it
 * cannot be retrieved, then drops all references to the socket so it is cleaned
 * up (sock_put() -> sk_free() -> our sk_destruct implementation). Note that
 * this function will also clean up rejected sockets, those that reach the
 * connected state but leave it before they have been accepted.
 *
 * - Lock ordering for pending or accept queue sockets is:
 *
 *     lock_sock(listener);
 *     lock_sock_nested(pending, SINGLE_DEPTH_NESTING);
 *
 * Using explicit nested locking keeps lockdep happy since normally only one
 * lock of a given class may be taken at a time.
 *
 * - Sockets created by user action will be cleaned up when the user process
 * calls close(2), causing our release implementation to be called. Our release
 * implementation will perform some cleanup then drop the last reference so our
 * sk_destruct implementation is invoked. Our sk_destruct implementation will
 * perform additional cleanup that's common for both types of sockets.
 *
 * - A socket's reference count is what ensures that the structure won't be
 * freed. Each entry in a list (such as the "global" bound and connected tables
 * and the listener socket's pending list and connected queue) ensures a
 * reference. When we defer work until process context and pass a socket as our
 * argument, we must ensure the reference count is increased to ensure the
 * socket isn't freed before the function is run; the deferred function will
 * then drop the reference.
 *
 * - sk->sk_state uses the TCP state constants because they are widely used by
 * other address families and exposed to userspace tools like ss(8):
 *
 *   TCP_CLOSE - unconnected
 *   TCP_SYN_SENT - connecting
 *   TCP_ESTABLISHED - connected
 *   TCP_CLOSING - disconnecting
 *   TCP_LISTEN - listening
 */

#include <linux/compat.h>
#include <linux/types.h>
#include <linux/bitops.h>
#include <linux/cred.h>
#include <linux/init.h>
#include <linux/io.h>
#include <linux/kernel.h>
#include <linux/sched/signal.h>
#include <linux/kmod.h>
#include <linux/list.h>
#include <linux/miscdevice.h>
#include <linux/module.h>
#include <linux/mutex.h>
#include <linux/net.h>
#include <linux/poll.h>
#include <linux/random.h>
#include <linux/skbuff.h>
#include <linux/smp.h>
#include <linux/socket.h>
#include <linux/stddef.h>
#include <linux/unistd.h>
#include <linux/wait.h>
#include <linux/workqueue.h>
#include <net/sock.h>
#include <net/af_vsock.h>

static int __vsock_bind(struct sock *sk, struct sockaddr_vm *addr);
static void vsock_sk_destruct(struct sock *sk);
static int vsock_queue_rcv_skb(struct sock *sk, struct sk_buff *skb);

/* Protocol family. */
struct proto vsock_proto = {
	.name = "AF_VSOCK",
	.owner = THIS_MODULE,
	.obj_size = sizeof(struct vsock_sock),
#ifdef CONFIG_BPF_SYSCALL
	.psock_update_sk_prot = vsock_bpf_update_proto,
#endif
};

/* The default peer timeout indicates how long we will wait for a peer response
 * to a control message.
 */
#define VSOCK_DEFAULT_CONNECT_TIMEOUT (2 * HZ)

#define VSOCK_DEFAULT_BUFFER_SIZE     (1024 * 256)
#define VSOCK_DEFAULT_BUFFER_MAX_SIZE (1024 * 256)
#define VSOCK_DEFAULT_BUFFER_MIN_SIZE 128

/* Transport used for host->guest communication */
static const struct vsock_transport *transport_h2g;
/* Transport used for guest->host communication */
static const struct vsock_transport *transport_g2h;
/* Transport used for DGRAM communication */
static const struct vsock_transport *transport_dgram;
/* Transport used for local communication */
static const struct vsock_transport *transport_local;
static DEFINE_MUTEX(vsock_register_mutex);

/**** UTILS ****/

/* Each bound VSocket is stored in the bind hash table and each connected
 * VSocket is stored in the connected hash table.
 *
 * Unbound sockets are all put on the same list attached to the end of the hash
 * table (vsock_unbound_sockets). Bound sockets are added to the hash table in
 * the bucket that their local address hashes to (vsock_bound_sockets(addr)
 * represents the list that addr hashes to).
 *
 * Specifically, we initialize the vsock_bind_table array to a size of
 * VSOCK_HASH_SIZE + 1 so that vsock_bind_table[0] through
 * vsock_bind_table[VSOCK_HASH_SIZE - 1] are for bound sockets and
 * vsock_bind_table[VSOCK_HASH_SIZE] is for unbound sockets. The hash function
 * mods with VSOCK_HASH_SIZE to ensure this.
 */
#define MAX_PORT_RETRIES 24

#define VSOCK_HASH(addr) ((addr)->svm_port % VSOCK_HASH_SIZE)
#define vsock_bound_sockets(addr) (&vsock_bind_table[VSOCK_HASH(addr)])
#define vsock_unbound_sockets (&vsock_bind_table[VSOCK_HASH_SIZE])

/* XXX This can probably be implemented in a better way. */
#define VSOCK_CONN_HASH(src, dst) \
	(((src)->svm_cid ^ (dst)->svm_port) % VSOCK_HASH_SIZE)
#define vsock_connected_sockets(src, dst) \
	(&vsock_connected_table[VSOCK_CONN_HASH(src, dst)])
#define vsock_connected_sockets_vsk(vsk) \
	vsock_connected_sockets(&(vsk)->remote_addr, &(vsk)->local_addr)

struct list_head vsock_bind_table[VSOCK_HASH_SIZE + 1];
EXPORT_SYMBOL_GPL(vsock_bind_table);
struct list_head vsock_connected_table[VSOCK_HASH_SIZE];
EXPORT_SYMBOL_GPL(vsock_connected_table);
DEFINE_SPINLOCK(vsock_table_lock);
EXPORT_SYMBOL_GPL(vsock_table_lock);

/* Autobind this socket to the local address if necessary.
 */
static int vsock_auto_bind(struct vsock_sock *vsk)
{
	struct sock *sk = sk_vsock(vsk);
	struct sockaddr_vm local_addr;

	if (vsock_addr_bound(&vsk->local_addr))
		return 0;
	vsock_addr_init(&local_addr, VMADDR_CID_ANY, VMADDR_PORT_ANY);
	return __vsock_bind(sk, &local_addr);
}

static void vsock_init_tables(void)
{
	int i;

	for (i = 0; i < ARRAY_SIZE(vsock_bind_table); i++)
		INIT_LIST_HEAD(&vsock_bind_table[i]);

	for (i = 0; i < ARRAY_SIZE(vsock_connected_table); i++)
		INIT_LIST_HEAD(&vsock_connected_table[i]);
}

static void __vsock_insert_bound(struct list_head *list,
				 struct vsock_sock *vsk)
{
	sock_hold(&vsk->sk);
	list_add(&vsk->bound_table, list);
}

static void __vsock_insert_connected(struct list_head *list,
				     struct vsock_sock *vsk)
{
	sock_hold(&vsk->sk);
	list_add(&vsk->connected_table, list);
}

static void __vsock_remove_bound(struct vsock_sock *vsk)
{
	list_del_init(&vsk->bound_table);
	sock_put(&vsk->sk);
}

static void __vsock_remove_connected(struct vsock_sock *vsk)
{
	list_del_init(&vsk->connected_table);
	sock_put(&vsk->sk);
}

static struct sock *__vsock_find_bound_socket(struct sockaddr_vm *addr)
{
	struct vsock_sock *vsk;

	list_for_each_entry(vsk, vsock_bound_sockets(addr), bound_table) {
		if (vsock_addr_equals_addr(addr, &vsk->local_addr))
			return sk_vsock(vsk);

		if (addr->svm_port == vsk->local_addr.svm_port &&
		    (vsk->local_addr.svm_cid == VMADDR_CID_ANY ||
		     addr->svm_cid == VMADDR_CID_ANY))
			return sk_vsock(vsk);
	}

	return NULL;
}

static struct sock *__vsock_find_connected_socket(struct sockaddr_vm *src,
						  struct sockaddr_vm *dst)
{
	struct vsock_sock *vsk;

	list_for_each_entry(vsk, vsock_connected_sockets(src, dst),
			    connected_table) {
		if (vsock_addr_equals_addr(src, &vsk->remote_addr) &&
		    dst->svm_port == vsk->local_addr.svm_port) {
			return sk_vsock(vsk);
		}
	}

	return NULL;
}

static void vsock_insert_unbound(struct vsock_sock *vsk)
{
	spin_lock_bh(&vsock_table_lock);
	__vsock_insert_bound(vsock_unbound_sockets, vsk);
	spin_unlock_bh(&vsock_table_lock);
}
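The bucket arithmetic behind VSOCK_HASH()/vsock_bound_sockets() can be exercised in isolation. A minimal standalone sketch, assuming VSOCK_HASH_SIZE is 251 as defined in <net/af_vsock.h> (only the indexing logic is reproduced here, not the kernel lists or locking):

```c
/* Sketch of the bind-table indexing: bound sockets hash by local port
 * into buckets 0..VSOCK_HASH_SIZE-1, while index VSOCK_HASH_SIZE is
 * the extra entry holding the shared unbound-socket list.
 */
#define VSOCK_HASH_SIZE 251	/* assumed value from <net/af_vsock.h> */

/* Mirror of the VSOCK_HASH() macro above. */
static unsigned int vsock_hash(unsigned int svm_port)
{
	return svm_port % VSOCK_HASH_SIZE;
}

/* Mirror of vsock_unbound_sockets: one slot past the last hash bucket. */
static unsigned int vsock_unbound_index(void)
{
	return VSOCK_HASH_SIZE;
}
```

Because the hash simply mods the port, two ports that differ by a multiple of VSOCK_HASH_SIZE always collide in the same bucket; the lookup then disambiguates by comparing full addresses.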

void vsock_insert_connected(struct vsock_sock *vsk)
{
	struct list_head *list = vsock_connected_sockets(
		&vsk->remote_addr, &vsk->local_addr);

	spin_lock_bh(&vsock_table_lock);
	__vsock_insert_connected(list, vsk);
	spin_unlock_bh(&vsock_table_lock);
}
EXPORT_SYMBOL_GPL(vsock_insert_connected);

void vsock_remove_bound(struct vsock_sock *vsk)
{
	spin_lock_bh(&vsock_table_lock);
	if (__vsock_in_bound_table(vsk))
		__vsock_remove_bound(vsk);
	spin_unlock_bh(&vsock_table_lock);
}
EXPORT_SYMBOL_GPL(vsock_remove_bound);

void vsock_remove_connected(struct vsock_sock *vsk)
{
	spin_lock_bh(&vsock_table_lock);
	if (__vsock_in_connected_table(vsk))
		__vsock_remove_connected(vsk);
	spin_unlock_bh(&vsock_table_lock);
}
EXPORT_SYMBOL_GPL(vsock_remove_connected);

struct sock *vsock_find_bound_socket(struct sockaddr_vm *addr)
{
	struct sock *sk;

	spin_lock_bh(&vsock_table_lock);
	sk = __vsock_find_bound_socket(addr);
	if (sk)
		sock_hold(sk);

	spin_unlock_bh(&vsock_table_lock);

	return sk;
}
EXPORT_SYMBOL_GPL(vsock_find_bound_socket);
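The bound-table lookup above is what ultimately routes an incoming connection request to a listening socket. For context, a userspace sketch of the listener ("server") flow described in the implementation notes; the port number 1234 is arbitrary, and the syscalls only succeed at runtime when a vsock transport is actually loaded:

```c
#include <string.h>
#include <sys/socket.h>
#include <linux/vm_sockets.h>

/* Build a vsock listen address. VMADDR_CID_ANY accepts connection
 * requests arriving over any transport.
 */
static struct sockaddr_vm vsock_listen_addr(unsigned int port)
{
	struct sockaddr_vm addr;

	memset(&addr, 0, sizeof(addr));
	addr.svm_family = AF_VSOCK;
	addr.svm_cid = VMADDR_CID_ANY;
	addr.svm_port = port;
	return addr;
}

/* Create a listener: bind() lands the socket in a bound-table bucket,
 * listen() moves it to TCP_LISTEN, and each later accept(2) dequeues a
 * connected child from the listener's accept queue.
 */
static int vsock_listen_fd(void)
{
	struct sockaddr_vm addr = vsock_listen_addr(1234);
	int fd = socket(AF_VSOCK, SOCK_STREAM, 0);

	if (fd < 0)
		return -1;
	if (bind(fd, (struct sockaddr *)&addr, sizeof(addr)) < 0 ||
	    listen(fd, 1) < 0)
		return -1;
	return fd;	/* pass to accept(2) */
}
```

Binding to a port at or below LAST_RESERVED_PORT requires CAP_NET_BIND_SERVICE, as enforced in __vsock_bind_connectible() below.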

struct sock *vsock_find_connected_socket(struct sockaddr_vm *src,
					 struct sockaddr_vm *dst)
{
	struct sock *sk;

	spin_lock_bh(&vsock_table_lock);
	sk = __vsock_find_connected_socket(src, dst);
	if (sk)
		sock_hold(sk);

	spin_unlock_bh(&vsock_table_lock);

	return sk;
}
EXPORT_SYMBOL_GPL(vsock_find_connected_socket);

void vsock_remove_sock(struct vsock_sock *vsk)
{
	vsock_remove_bound(vsk);
	vsock_remove_connected(vsk);
}
EXPORT_SYMBOL_GPL(vsock_remove_sock);

void vsock_for_each_connected_socket(struct vsock_transport *transport,
				     void (*fn)(struct sock *sk))
{
	int i;

	spin_lock_bh(&vsock_table_lock);

	for (i = 0; i < ARRAY_SIZE(vsock_connected_table); i++) {
		struct vsock_sock *vsk;

		list_for_each_entry(vsk, &vsock_connected_table[i],
				    connected_table) {
			if (vsk->transport != transport)
				continue;

			fn(sk_vsock(vsk));
		}
	}

	spin_unlock_bh(&vsock_table_lock);
}
EXPORT_SYMBOL_GPL(vsock_for_each_connected_socket);

void
vsock_add_pending(struct sock *listener, struct sock *pending)
{
	struct vsock_sock *vlistener;
	struct vsock_sock *vpending;

	vlistener = vsock_sk(listener);
	vpending = vsock_sk(pending);

	sock_hold(pending);
	sock_hold(listener);
	list_add_tail(&vpending->pending_links, &vlistener->pending_links);
}
EXPORT_SYMBOL_GPL(vsock_add_pending);

void vsock_remove_pending(struct sock *listener, struct sock *pending)
{
	struct vsock_sock *vpending = vsock_sk(pending);

	list_del_init(&vpending->pending_links);
	sock_put(listener);
	sock_put(pending);
}
EXPORT_SYMBOL_GPL(vsock_remove_pending);

void vsock_enqueue_accept(struct sock *listener, struct sock *connected)
{
	struct vsock_sock *vlistener;
	struct vsock_sock *vconnected;

	vlistener = vsock_sk(listener);
	vconnected = vsock_sk(connected);

	sock_hold(connected);
	sock_hold(listener);
	list_add_tail(&vconnected->accept_queue, &vlistener->accept_queue);
}
EXPORT_SYMBOL_GPL(vsock_enqueue_accept);

static bool vsock_use_local_transport(unsigned int remote_cid)
{
	if (!transport_local)
		return false;

	if (remote_cid == VMADDR_CID_LOCAL)
		return true;

	if (transport_g2h) {
		return remote_cid == transport_g2h->get_local_cid();
	} else {
		return remote_cid == VMADDR_CID_HOST;
	}
}

static void vsock_deassign_transport(struct vsock_sock *vsk)
{
	if (!vsk->transport)
		return;

	vsk->transport->destruct(vsk);
	module_put(vsk->transport->module);
	vsk->transport = NULL;
}

/* Assign a transport to a socket and call the .init transport callback.
 *
 * Note: for connection-oriented sockets this must be called when
 * vsk->remote_addr is set (e.g. during connect() or when a connection request
 * on a listener socket is received).
 * The vsk->remote_addr is used to decide which transport to use:
 *  - remote CID == VMADDR_CID_LOCAL, or == g2h->local_cid, or == VMADDR_CID_HOST
 *    when g2h is not loaded: use the local transport;
 *  - remote CID <= VMADDR_CID_HOST, or h2g is not loaded, or the remote flags
 *    field includes VMADDR_FLAG_TO_HOST: use the guest->host transport;
 *  - remote CID > VMADDR_CID_HOST: use the host->guest transport;
 */
int vsock_assign_transport(struct vsock_sock *vsk, struct vsock_sock *psk)
{
	const struct vsock_transport *new_transport;
	struct sock *sk = sk_vsock(vsk);
	unsigned int remote_cid = vsk->remote_addr.svm_cid;
	__u8 remote_flags;
	int ret;

	/* If the packet is coming with the source and destination CIDs higher
	 * than VMADDR_CID_HOST, then a vsock channel where all the packets are
	 * forwarded to the host should be established. Then the host will
	 * need to forward the packets to the guest.
	 *
	 * The flag is set on the (listen) receive path (psk is not NULL). On
	 * the connect path the flag can be set by the user space application.
	 */
	if (psk && vsk->local_addr.svm_cid > VMADDR_CID_HOST &&
	    vsk->remote_addr.svm_cid > VMADDR_CID_HOST)
		vsk->remote_addr.svm_flags |= VMADDR_FLAG_TO_HOST;

	remote_flags = vsk->remote_addr.svm_flags;

	switch (sk->sk_type) {
	case SOCK_DGRAM:
		new_transport = transport_dgram;
		break;
	case SOCK_STREAM:
	case SOCK_SEQPACKET:
		if (vsock_use_local_transport(remote_cid))
			new_transport = transport_local;
		else if (remote_cid <= VMADDR_CID_HOST || !transport_h2g ||
			 (remote_flags & VMADDR_FLAG_TO_HOST))
			new_transport = transport_g2h;
		else
			new_transport = transport_h2g;
		break;
	default:
		return -ESOCKTNOSUPPORT;
	}

	if (vsk->transport) {
		if (vsk->transport == new_transport)
			return 0;

		/* transport->release() must be called with sock lock acquired.
		 * This path can only be taken during vsock_connect(), where we
		 * have already held the sock lock. In the other cases, this
		 * function is called on a new socket which is not assigned to
		 * any transport.
		 */
		vsk->transport->release(vsk);
		vsock_deassign_transport(vsk);
	}

	/* We increase the module refcnt to prevent the transport unloading
	 * while there are open sockets assigned to it.
	 */
	if (!new_transport || !try_module_get(new_transport->module))
		return -ENODEV;

	if (sk->sk_type == SOCK_SEQPACKET) {
		if (!new_transport->seqpacket_allow ||
		    !new_transport->seqpacket_allow(remote_cid)) {
			module_put(new_transport->module);
			return -ESOCKTNOSUPPORT;
		}
	}

	ret = new_transport->init(vsk, psk);
	if (ret) {
		module_put(new_transport->module);
		return ret;
	}

	vsk->transport = new_transport;

	return 0;
}
EXPORT_SYMBOL_GPL(vsock_assign_transport);

bool vsock_find_cid(unsigned int cid)
{
	if (transport_g2h && cid == transport_g2h->get_local_cid())
		return true;

	if (transport_h2g && cid == VMADDR_CID_HOST)
		return true;

	if (transport_local && cid == VMADDR_CID_LOCAL)
		return true;

	return false;
}
EXPORT_SYMBOL_GPL(vsock_find_cid);

static struct sock *vsock_dequeue_accept(struct sock *listener)
{
	struct vsock_sock *vlistener;
	struct vsock_sock *vconnected;

	vlistener = vsock_sk(listener);

	if (list_empty(&vlistener->accept_queue))
		return NULL;

	vconnected = list_entry(vlistener->accept_queue.next,
				struct vsock_sock, accept_queue);

	list_del_init(&vconnected->accept_queue);
	sock_put(listener);
	/* The caller will need a reference on the connected socket so we let
	 * it call sock_put().
	 */

	return sk_vsock(vconnected);
}

static bool vsock_is_accept_queue_empty(struct sock *sk)
{
	struct vsock_sock *vsk = vsock_sk(sk);

	return list_empty(&vsk->accept_queue);
}

static bool vsock_is_pending(struct sock *sk)
{
	struct vsock_sock *vsk = vsock_sk(sk);

	return !list_empty(&vsk->pending_links);
}

static int vsock_send_shutdown(struct sock *sk, int mode)
{
	struct vsock_sock *vsk = vsock_sk(sk);

	if (!vsk->transport)
		return -ENODEV;

	return vsk->transport->shutdown(vsk, mode);
}

static void vsock_pending_work(struct work_struct *work)
{
	struct sock *sk;
	struct sock *listener;
	struct vsock_sock *vsk;
	bool cleanup;

	vsk = container_of(work, struct vsock_sock, pending_work.work);
	sk = sk_vsock(vsk);
	listener = vsk->listener;
	cleanup = true;

	lock_sock(listener);
	lock_sock_nested(sk, SINGLE_DEPTH_NESTING);

	if (vsock_is_pending(sk)) {
		vsock_remove_pending(listener, sk);

		sk_acceptq_removed(listener);
595d021c344SAndy King } else if (!vsk->rejected) { 596d021c344SAndy King /* We are not on the pending list and accept() did not reject 597d021c344SAndy King * us, so we must have been accepted by our user process. We 598d021c344SAndy King * just need to drop our references to the sockets and be on 599d021c344SAndy King * our way. 600d021c344SAndy King */ 601d021c344SAndy King cleanup = false; 602d021c344SAndy King goto out; 603d021c344SAndy King } 604d021c344SAndy King 605d021c344SAndy King /* We need to remove ourselves from the global connected sockets list so 606d021c344SAndy King * incoming packets can't find this socket, and to reduce the reference 607d021c344SAndy King * count. 608d021c344SAndy King */ 609d021c344SAndy King vsock_remove_connected(vsk); 610d021c344SAndy King 6113b4477d2SStefan Hajnoczi sk->sk_state = TCP_CLOSE; 612d021c344SAndy King 613d021c344SAndy King out: 614d021c344SAndy King release_sock(sk); 615d021c344SAndy King release_sock(listener); 616d021c344SAndy King if (cleanup) 617d021c344SAndy King sock_put(sk); 618d021c344SAndy King 619d021c344SAndy King sock_put(sk); 620d021c344SAndy King sock_put(listener); 621d021c344SAndy King } 622d021c344SAndy King 623d021c344SAndy King /**** SOCKET OPERATIONS ****/ 624d021c344SAndy King 625a9e29e55SArseny Krasnov static int __vsock_bind_connectible(struct vsock_sock *vsk, 626d021c344SAndy King struct sockaddr_vm *addr) 627d021c344SAndy King { 628a22d3251SLepton Wu static u32 port; 629d021c344SAndy King struct sockaddr_vm new_addr; 630d021c344SAndy King 6318236b08cSLepton Wu if (!port) 632d247aabdSJason A.
Donenfeld port = get_random_u32_above(LAST_RESERVED_PORT); 6338236b08cSLepton Wu 634d021c344SAndy King vsock_addr_init(&new_addr, addr->svm_cid, addr->svm_port); 635d021c344SAndy King 636d021c344SAndy King if (addr->svm_port == VMADDR_PORT_ANY) { 637d021c344SAndy King bool found = false; 638d021c344SAndy King unsigned int i; 639d021c344SAndy King 640d021c344SAndy King for (i = 0; i < MAX_PORT_RETRIES; i++) { 641d021c344SAndy King if (port <= LAST_RESERVED_PORT) 642d021c344SAndy King port = LAST_RESERVED_PORT + 1; 643d021c344SAndy King 644d021c344SAndy King new_addr.svm_port = port++; 645d021c344SAndy King 646d021c344SAndy King if (!__vsock_find_bound_socket(&new_addr)) { 647d021c344SAndy King found = true; 648d021c344SAndy King break; 649d021c344SAndy King } 650d021c344SAndy King } 651d021c344SAndy King 652d021c344SAndy King if (!found) 653d021c344SAndy King return -EADDRNOTAVAIL; 654d021c344SAndy King } else { 655d021c344SAndy King /* If port is in reserved range, ensure caller 656d021c344SAndy King * has necessary privileges. 657d021c344SAndy King */ 658d021c344SAndy King if (addr->svm_port <= LAST_RESERVED_PORT && 659d021c344SAndy King !capable(CAP_NET_BIND_SERVICE)) { 660d021c344SAndy King return -EACCES; 661d021c344SAndy King } 662d021c344SAndy King 663d021c344SAndy King if (__vsock_find_bound_socket(&new_addr)) 664d021c344SAndy King return -EADDRINUSE; 665d021c344SAndy King } 666d021c344SAndy King 667d021c344SAndy King vsock_addr_init(&vsk->local_addr, new_addr.svm_cid, new_addr.svm_port); 668d021c344SAndy King 6698cb48554SArseny Krasnov /* Remove connection oriented sockets from the unbound list and add them 6708cb48554SArseny Krasnov * to the hash table for easy lookup by their address. The unbound list 6718cb48554SArseny Krasnov * is simply an extra entry at the end of the hash table, a trick used 6728cb48554SArseny Krasnov * by AF_UNIX.
673d021c344SAndy King */ 674d021c344SAndy King __vsock_remove_bound(vsk); 675d021c344SAndy King __vsock_insert_bound(vsock_bound_sockets(&vsk->local_addr), vsk); 676d021c344SAndy King 677d021c344SAndy King return 0; 678d021c344SAndy King } 679d021c344SAndy King 680d021c344SAndy King static int __vsock_bind_dgram(struct vsock_sock *vsk, 681d021c344SAndy King struct sockaddr_vm *addr) 682d021c344SAndy King { 683fe502c4aSStefano Garzarella return vsk->transport->dgram_bind(vsk, addr); 684d021c344SAndy King } 685d021c344SAndy King 686d021c344SAndy King static int __vsock_bind(struct sock *sk, struct sockaddr_vm *addr) 687d021c344SAndy King { 688d021c344SAndy King struct vsock_sock *vsk = vsock_sk(sk); 689d021c344SAndy King int retval; 690d021c344SAndy King 691d021c344SAndy King /* First ensure this socket isn't already bound. */ 692d021c344SAndy King if (vsock_addr_bound(&vsk->local_addr)) 693d021c344SAndy King return -EINVAL; 694d021c344SAndy King 695d021c344SAndy King /* Now bind to the provided address or select appropriate values if 696d021c344SAndy King * none are provided (VMADDR_CID_ANY and VMADDR_PORT_ANY). Note that 697d021c344SAndy King * like AF_INET prevents binding to a non-local IP address (in most 698c0cfa2d8SStefano Garzarella * cases), we only allow binding to a local CID. 
699d021c344SAndy King */ 700c0cfa2d8SStefano Garzarella if (addr->svm_cid != VMADDR_CID_ANY && !vsock_find_cid(addr->svm_cid)) 701d021c344SAndy King return -EADDRNOTAVAIL; 702d021c344SAndy King 703d021c344SAndy King switch (sk->sk_socket->type) { 704d021c344SAndy King case SOCK_STREAM: 7050798e78bSArseny Krasnov case SOCK_SEQPACKET: 706d021c344SAndy King spin_lock_bh(&vsock_table_lock); 707a9e29e55SArseny Krasnov retval = __vsock_bind_connectible(vsk, addr); 708d021c344SAndy King spin_unlock_bh(&vsock_table_lock); 709d021c344SAndy King break; 710d021c344SAndy King 711d021c344SAndy King case SOCK_DGRAM: 712d021c344SAndy King retval = __vsock_bind_dgram(vsk, addr); 713d021c344SAndy King break; 714d021c344SAndy King 715d021c344SAndy King default: 716d021c344SAndy King retval = -EINVAL; 717d021c344SAndy King break; 718d021c344SAndy King } 719d021c344SAndy King 720d021c344SAndy King return retval; 721d021c344SAndy King } 722d021c344SAndy King 723455f05ecSCong Wang static void vsock_connect_timeout(struct work_struct *work); 724455f05ecSCong Wang 725b9ca2f5fSStefano Garzarella static struct sock *__vsock_create(struct net *net, 726d021c344SAndy King struct socket *sock, 727d021c344SAndy King struct sock *parent, 728d021c344SAndy King gfp_t priority, 72911aa9c28SEric W. Biederman unsigned short type, 73011aa9c28SEric W. Biederman int kern) 731d021c344SAndy King { 732d021c344SAndy King struct sock *sk; 733d021c344SAndy King struct vsock_sock *psk; 734d021c344SAndy King struct vsock_sock *vsk; 735d021c344SAndy King 73611aa9c28SEric W. Biederman sk = sk_alloc(net, AF_VSOCK, priority, &vsock_proto, kern); 737d021c344SAndy King if (!sk) 738d021c344SAndy King return NULL; 739d021c344SAndy King 740d021c344SAndy King sock_init_data(sock, sk); 741d021c344SAndy King 742d021c344SAndy King /* sk->sk_type is normally set in sock_init_data, but only if sock is 743d021c344SAndy King * non-NULL. 
We make sure that our sockets always have a type by 744d021c344SAndy King * setting it here if needed. 745d021c344SAndy King */ 746d021c344SAndy King if (!sock) 747d021c344SAndy King sk->sk_type = type; 748d021c344SAndy King 749d021c344SAndy King vsk = vsock_sk(sk); 750d021c344SAndy King vsock_addr_init(&vsk->local_addr, VMADDR_CID_ANY, VMADDR_PORT_ANY); 751d021c344SAndy King vsock_addr_init(&vsk->remote_addr, VMADDR_CID_ANY, VMADDR_PORT_ANY); 752d021c344SAndy King 753d021c344SAndy King sk->sk_destruct = vsock_sk_destruct; 754d021c344SAndy King sk->sk_backlog_rcv = vsock_queue_rcv_skb; 755d021c344SAndy King sock_reset_flag(sk, SOCK_DONE); 756d021c344SAndy King 757d021c344SAndy King INIT_LIST_HEAD(&vsk->bound_table); 758d021c344SAndy King INIT_LIST_HEAD(&vsk->connected_table); 759d021c344SAndy King vsk->listener = NULL; 760d021c344SAndy King INIT_LIST_HEAD(&vsk->pending_links); 761d021c344SAndy King INIT_LIST_HEAD(&vsk->accept_queue); 762d021c344SAndy King vsk->rejected = false; 763d021c344SAndy King vsk->sent_request = false; 764d021c344SAndy King vsk->ignore_connecting_rst = false; 765d021c344SAndy King vsk->peer_shutdown = 0; 766455f05ecSCong Wang INIT_DELAYED_WORK(&vsk->connect_work, vsock_connect_timeout); 767455f05ecSCong Wang INIT_DELAYED_WORK(&vsk->pending_work, vsock_pending_work); 768d021c344SAndy King 769d021c344SAndy King psk = parent ? 
vsock_sk(parent) : NULL; 770d021c344SAndy King if (parent) { 771d021c344SAndy King vsk->trusted = psk->trusted; 772d021c344SAndy King vsk->owner = get_cred(psk->owner); 773d021c344SAndy King vsk->connect_timeout = psk->connect_timeout; 774b9f2b0ffSStefano Garzarella vsk->buffer_size = psk->buffer_size; 775b9f2b0ffSStefano Garzarella vsk->buffer_min_size = psk->buffer_min_size; 776b9f2b0ffSStefano Garzarella vsk->buffer_max_size = psk->buffer_max_size; 7771f935e8eSDavid Brazdil security_sk_clone(parent, sk); 778d021c344SAndy King } else { 779af545bb5SJeff Vander Stoep vsk->trusted = ns_capable_noaudit(&init_user_ns, CAP_NET_ADMIN); 780d021c344SAndy King vsk->owner = get_current_cred(); 781d021c344SAndy King vsk->connect_timeout = VSOCK_DEFAULT_CONNECT_TIMEOUT; 782b9f2b0ffSStefano Garzarella vsk->buffer_size = VSOCK_DEFAULT_BUFFER_SIZE; 783b9f2b0ffSStefano Garzarella vsk->buffer_min_size = VSOCK_DEFAULT_BUFFER_MIN_SIZE; 784b9f2b0ffSStefano Garzarella vsk->buffer_max_size = VSOCK_DEFAULT_BUFFER_MAX_SIZE; 785d021c344SAndy King } 786d021c344SAndy King 787d021c344SAndy King return sk; 788d021c344SAndy King } 789d021c344SAndy King 790a9e29e55SArseny Krasnov static bool sock_type_connectible(u16 type) 791a9e29e55SArseny Krasnov { 7920798e78bSArseny Krasnov return (type == SOCK_STREAM) || (type == SOCK_SEQPACKET); 793a9e29e55SArseny Krasnov } 794a9e29e55SArseny Krasnov 7950d9138ffSDexuan Cui static void __vsock_release(struct sock *sk, int level) 796d021c344SAndy King { 797d021c344SAndy King if (sk) { 798d021c344SAndy King struct sock *pending; 799d021c344SAndy King struct vsock_sock *vsk; 800d021c344SAndy King 801d021c344SAndy King vsk = vsock_sk(sk); 802d021c344SAndy King pending = NULL; /* Compiler warning. */ 803d021c344SAndy King 8040d9138ffSDexuan Cui /* When "level" is SINGLE_DEPTH_NESTING, use the nested 8050d9138ffSDexuan Cui * version to avoid the warning "possible recursive locking 8060d9138ffSDexuan Cui * detected". 
When "level" is 0, lock_sock_nested(sk, level) 8070d9138ffSDexuan Cui * is the same as lock_sock(sk). 8080d9138ffSDexuan Cui */ 8090d9138ffSDexuan Cui lock_sock_nested(sk, level); 8103f74957fSStefano Garzarella 8113f74957fSStefano Garzarella if (vsk->transport) 8123f74957fSStefano Garzarella vsk->transport->release(vsk); 813a9e29e55SArseny Krasnov else if (sock_type_connectible(sk->sk_type)) 8143f74957fSStefano Garzarella vsock_remove_sock(vsk); 8153f74957fSStefano Garzarella 816d021c344SAndy King sock_orphan(sk); 817d021c344SAndy King sk->sk_shutdown = SHUTDOWN_MASK; 818d021c344SAndy King 8193b7ad08bSChristophe JAILLET skb_queue_purge(&sk->sk_receive_queue); 820d021c344SAndy King 821d021c344SAndy King /* Clean up any sockets that never were accepted. */ 822d021c344SAndy King while ((pending = vsock_dequeue_accept(sk)) != NULL) { 8230d9138ffSDexuan Cui __vsock_release(pending, SINGLE_DEPTH_NESTING); 824d021c344SAndy King sock_put(pending); 825d021c344SAndy King } 826d021c344SAndy King 827d021c344SAndy King release_sock(sk); 828d021c344SAndy King sock_put(sk); 829d021c344SAndy King } 830d021c344SAndy King } 831d021c344SAndy King 832d021c344SAndy King static void vsock_sk_destruct(struct sock *sk) 833d021c344SAndy King { 834d021c344SAndy King struct vsock_sock *vsk = vsock_sk(sk); 835d021c344SAndy King 8366a2c0962SStefano Garzarella vsock_deassign_transport(vsk); 837d021c344SAndy King 838d021c344SAndy King /* When clearing these addresses, there's no need to set the family and 839d021c344SAndy King * possibly register the address family with the kernel. 
840d021c344SAndy King */ 841d021c344SAndy King vsock_addr_init(&vsk->local_addr, VMADDR_CID_ANY, VMADDR_PORT_ANY); 842d021c344SAndy King vsock_addr_init(&vsk->remote_addr, VMADDR_CID_ANY, VMADDR_PORT_ANY); 843d021c344SAndy King 844d021c344SAndy King put_cred(vsk->owner); 845d021c344SAndy King } 846d021c344SAndy King 847d021c344SAndy King static int vsock_queue_rcv_skb(struct sock *sk, struct sk_buff *skb) 848d021c344SAndy King { 849d021c344SAndy King int err; 850d021c344SAndy King 851d021c344SAndy King err = sock_queue_rcv_skb(sk, skb); 852d021c344SAndy King if (err) 853d021c344SAndy King kfree_skb(skb); 854d021c344SAndy King 855d021c344SAndy King return err; 856d021c344SAndy King } 857d021c344SAndy King 858b9ca2f5fSStefano Garzarella struct sock *vsock_create_connected(struct sock *parent) 859b9ca2f5fSStefano Garzarella { 860b9ca2f5fSStefano Garzarella return __vsock_create(sock_net(parent), NULL, parent, GFP_KERNEL, 861b9ca2f5fSStefano Garzarella parent->sk_type, 0); 862b9ca2f5fSStefano Garzarella } 863b9ca2f5fSStefano Garzarella EXPORT_SYMBOL_GPL(vsock_create_connected); 864b9ca2f5fSStefano Garzarella 865d021c344SAndy King s64 vsock_stream_has_data(struct vsock_sock *vsk) 866d021c344SAndy King { 867fe502c4aSStefano Garzarella return vsk->transport->stream_has_data(vsk); 868d021c344SAndy King } 869d021c344SAndy King EXPORT_SYMBOL_GPL(vsock_stream_has_data); 870d021c344SAndy King 871*634f1a71SBobby Eshleman s64 vsock_connectible_has_data(struct vsock_sock *vsk) 8720798e78bSArseny Krasnov { 8730798e78bSArseny Krasnov struct sock *sk = sk_vsock(vsk); 8740798e78bSArseny Krasnov 8750798e78bSArseny Krasnov if (sk->sk_type == SOCK_SEQPACKET) 8760798e78bSArseny Krasnov return vsk->transport->seqpacket_has_data(vsk); 8770798e78bSArseny Krasnov else 8780798e78bSArseny Krasnov return vsock_stream_has_data(vsk); 8790798e78bSArseny Krasnov } 880*634f1a71SBobby Eshleman EXPORT_SYMBOL_GPL(vsock_connectible_has_data); 8810798e78bSArseny Krasnov 882d021c344SAndy King s64 
vsock_stream_has_space(struct vsock_sock *vsk) 883d021c344SAndy King { 884fe502c4aSStefano Garzarella return vsk->transport->stream_has_space(vsk); 885d021c344SAndy King } 886d021c344SAndy King EXPORT_SYMBOL_GPL(vsock_stream_has_space); 887d021c344SAndy King 888f2fdcf67SArseniy Krasnov void vsock_data_ready(struct sock *sk) 889f2fdcf67SArseniy Krasnov { 890f2fdcf67SArseniy Krasnov struct vsock_sock *vsk = vsock_sk(sk); 891f2fdcf67SArseniy Krasnov 892f2fdcf67SArseniy Krasnov if (vsock_stream_has_data(vsk) >= sk->sk_rcvlowat || 893f2fdcf67SArseniy Krasnov sock_flag(sk, SOCK_DONE)) 894f2fdcf67SArseniy Krasnov sk->sk_data_ready(sk); 895f2fdcf67SArseniy Krasnov } 896f2fdcf67SArseniy Krasnov EXPORT_SYMBOL_GPL(vsock_data_ready); 897f2fdcf67SArseniy Krasnov 898d021c344SAndy King static int vsock_release(struct socket *sock) 899d021c344SAndy King { 9000d9138ffSDexuan Cui __vsock_release(sock->sk, 0); 901d021c344SAndy King sock->sk = NULL; 902d021c344SAndy King sock->state = SS_FREE; 903d021c344SAndy King 904d021c344SAndy King return 0; 905d021c344SAndy King } 906d021c344SAndy King 907d021c344SAndy King static int 908d021c344SAndy King vsock_bind(struct socket *sock, struct sockaddr *addr, int addr_len) 909d021c344SAndy King { 910d021c344SAndy King int err; 911d021c344SAndy King struct sock *sk; 912d021c344SAndy King struct sockaddr_vm *vm_addr; 913d021c344SAndy King 914d021c344SAndy King sk = sock->sk; 915d021c344SAndy King 916d021c344SAndy King if (vsock_addr_cast(addr, addr_len, &vm_addr) != 0) 917d021c344SAndy King return -EINVAL; 918d021c344SAndy King 919d021c344SAndy King lock_sock(sk); 920d021c344SAndy King err = __vsock_bind(sk, vm_addr); 921d021c344SAndy King release_sock(sk); 922d021c344SAndy King 923d021c344SAndy King return err; 924d021c344SAndy King } 925d021c344SAndy King 926d021c344SAndy King static int vsock_getname(struct socket *sock, 9279b2c45d4SDenys Vlasenko struct sockaddr *addr, int peer) 928d021c344SAndy King { 929d021c344SAndy King int err; 
930d021c344SAndy King struct sock *sk; 931d021c344SAndy King struct vsock_sock *vsk; 932d021c344SAndy King struct sockaddr_vm *vm_addr; 933d021c344SAndy King 934d021c344SAndy King sk = sock->sk; 935d021c344SAndy King vsk = vsock_sk(sk); 936d021c344SAndy King err = 0; 937d021c344SAndy King 938d021c344SAndy King lock_sock(sk); 939d021c344SAndy King 940d021c344SAndy King if (peer) { 941d021c344SAndy King if (sock->state != SS_CONNECTED) { 942d021c344SAndy King err = -ENOTCONN; 943d021c344SAndy King goto out; 944d021c344SAndy King } 945d021c344SAndy King vm_addr = &vsk->remote_addr; 946d021c344SAndy King } else { 947d021c344SAndy King vm_addr = &vsk->local_addr; 948d021c344SAndy King } 949d021c344SAndy King 950d021c344SAndy King if (!vm_addr) { 951d021c344SAndy King err = -EINVAL; 952d021c344SAndy King goto out; 953d021c344SAndy King } 954d021c344SAndy King 955d021c344SAndy King /* sys_getsockname() and sys_getpeername() pass us a 956d021c344SAndy King * MAX_SOCK_ADDR-sized buffer and don't set addr_len. Unfortunately 957d021c344SAndy King * that macro is defined in socket.c instead of .h, so we hardcode its 958d021c344SAndy King * value here. 959d021c344SAndy King */ 960d021c344SAndy King BUILD_BUG_ON(sizeof(*vm_addr) > 128); 961d021c344SAndy King memcpy(addr, vm_addr, sizeof(*vm_addr)); 9629b2c45d4SDenys Vlasenko err = sizeof(*vm_addr); 963d021c344SAndy King 964d021c344SAndy King out: 965d021c344SAndy King release_sock(sk); 966d021c344SAndy King return err; 967d021c344SAndy King } 968d021c344SAndy King 969d021c344SAndy King static int vsock_shutdown(struct socket *sock, int mode) 970d021c344SAndy King { 971d021c344SAndy King int err; 972d021c344SAndy King struct sock *sk; 973d021c344SAndy King 974d021c344SAndy King /* User level uses SHUT_RD (0) and SHUT_WR (1), but the kernel uses 975d021c344SAndy King * RCV_SHUTDOWN (1) and SEND_SHUTDOWN (2), so we must increment mode 976d021c344SAndy King * here like the other address families do. 
Note also that the 977d021c344SAndy King * increment makes SHUT_RDWR (2) into RCV_SHUTDOWN | SEND_SHUTDOWN (3), 978d021c344SAndy King * which is what we want. 979d021c344SAndy King */ 980d021c344SAndy King mode++; 981d021c344SAndy King 982d021c344SAndy King if ((mode & ~SHUTDOWN_MASK) || !mode) 983d021c344SAndy King return -EINVAL; 984d021c344SAndy King 9858cb48554SArseny Krasnov /* If this is a connection oriented socket and it is not connected then 9868cb48554SArseny Krasnov * bail out immediately. If it is a DGRAM socket then we must first 9878cb48554SArseny Krasnov * kick the socket so that it wakes up from any sleeping calls, for 9888cb48554SArseny Krasnov * example recv(), and then afterwards return the error. 989d021c344SAndy King */ 990d021c344SAndy King 991d021c344SAndy King sk = sock->sk; 9921c5fae9cSStefano Garzarella 9931c5fae9cSStefano Garzarella lock_sock(sk); 994d021c344SAndy King if (sock->state == SS_UNCONNECTED) { 995d021c344SAndy King err = -ENOTCONN; 996a9e29e55SArseny Krasnov if (sock_type_connectible(sk->sk_type)) 9971c5fae9cSStefano Garzarella goto out; 998d021c344SAndy King } else { 999d021c344SAndy King sock->state = SS_DISCONNECTING; 1000d021c344SAndy King err = 0; 1001d021c344SAndy King } 1002d021c344SAndy King 1003d021c344SAndy King /* Receive and send shutdowns are treated alike. 
*/ 1004d021c344SAndy King mode = mode & (RCV_SHUTDOWN | SEND_SHUTDOWN); 1005d021c344SAndy King if (mode) { 1006d021c344SAndy King sk->sk_shutdown |= mode; 1007d021c344SAndy King sk->sk_state_change(sk); 1008d021c344SAndy King 1009a9e29e55SArseny Krasnov if (sock_type_connectible(sk->sk_type)) { 1010d021c344SAndy King sock_reset_flag(sk, SOCK_DONE); 1011d021c344SAndy King vsock_send_shutdown(sk, mode); 1012d021c344SAndy King } 1013d021c344SAndy King } 1014d021c344SAndy King 10151c5fae9cSStefano Garzarella out: 10161c5fae9cSStefano Garzarella release_sock(sk); 1017d021c344SAndy King return err; 1018d021c344SAndy King } 1019d021c344SAndy King 1020a11e1d43SLinus Torvalds static __poll_t vsock_poll(struct file *file, struct socket *sock, 1021a11e1d43SLinus Torvalds poll_table *wait) 1022d021c344SAndy King { 1023a11e1d43SLinus Torvalds struct sock *sk; 1024a11e1d43SLinus Torvalds __poll_t mask; 1025a11e1d43SLinus Torvalds struct vsock_sock *vsk; 1026a11e1d43SLinus Torvalds 1027a11e1d43SLinus Torvalds sk = sock->sk; 1028a11e1d43SLinus Torvalds vsk = vsock_sk(sk); 1029a11e1d43SLinus Torvalds 1030a11e1d43SLinus Torvalds poll_wait(file, sk_sleep(sk), wait); 1031a11e1d43SLinus Torvalds mask = 0; 1032d021c344SAndy King 1033d021c344SAndy King if (sk->sk_err) 1034d021c344SAndy King /* Signify that there has been an error on this socket. */ 1035a9a08845SLinus Torvalds mask |= EPOLLERR; 1036d021c344SAndy King 1037d021c344SAndy King /* INET sockets treat local write shutdown and peer write shutdown as a 1038a9a08845SLinus Torvalds * case of EPOLLHUP set. 
1039d021c344SAndy King */ 1040d021c344SAndy King if ((sk->sk_shutdown == SHUTDOWN_MASK) || 1041d021c344SAndy King ((sk->sk_shutdown & SEND_SHUTDOWN) && 1042d021c344SAndy King (vsk->peer_shutdown & SEND_SHUTDOWN))) { 1043a9a08845SLinus Torvalds mask |= EPOLLHUP; 1044d021c344SAndy King } 1045d021c344SAndy King 1046d021c344SAndy King if (sk->sk_shutdown & RCV_SHUTDOWN || 1047d021c344SAndy King vsk->peer_shutdown & SEND_SHUTDOWN) { 1048a9a08845SLinus Torvalds mask |= EPOLLRDHUP; 1049d021c344SAndy King } 1050d021c344SAndy King 1051d021c344SAndy King if (sock->type == SOCK_DGRAM) { 1052d021c344SAndy King /* For datagram sockets we can read if there is something in 1053d021c344SAndy King * the queue and write as long as the socket isn't shutdown for 1054d021c344SAndy King * sending. 1055d021c344SAndy King */ 10563ef7cf57SEric Dumazet if (!skb_queue_empty_lockless(&sk->sk_receive_queue) || 1057d021c344SAndy King (sk->sk_shutdown & RCV_SHUTDOWN)) { 1058a9a08845SLinus Torvalds mask |= EPOLLIN | EPOLLRDNORM; 1059d021c344SAndy King } 1060d021c344SAndy King 1061d021c344SAndy King if (!(sk->sk_shutdown & SEND_SHUTDOWN)) 1062a9a08845SLinus Torvalds mask |= EPOLLOUT | EPOLLWRNORM | EPOLLWRBAND; 1063d021c344SAndy King 1064a9e29e55SArseny Krasnov } else if (sock_type_connectible(sk->sk_type)) { 1065c518adafSAlexander Popov const struct vsock_transport *transport; 1066c518adafSAlexander Popov 1067d021c344SAndy King lock_sock(sk); 1068d021c344SAndy King 1069c518adafSAlexander Popov transport = vsk->transport; 1070c518adafSAlexander Popov 1071d021c344SAndy King /* Listening sockets that have connections in their accept 1072d021c344SAndy King * queue can be read. 1073d021c344SAndy King */ 10743b4477d2SStefan Hajnoczi if (sk->sk_state == TCP_LISTEN 1075d021c344SAndy King && !vsock_is_accept_queue_empty(sk)) 1076a9a08845SLinus Torvalds mask |= EPOLLIN | EPOLLRDNORM; 1077d021c344SAndy King 1078d021c344SAndy King /* If there is something in the queue then we can read. 
*/ 1079c0cfa2d8SStefano Garzarella if (transport && transport->stream_is_active(vsk) && 1080d021c344SAndy King !(sk->sk_shutdown & RCV_SHUTDOWN)) { 1081d021c344SAndy King bool data_ready_now = false; 1082ee0b3843SArseniy Krasnov int target = sock_rcvlowat(sk, 0, INT_MAX); 1083d021c344SAndy King int ret = transport->notify_poll_in( 1084ee0b3843SArseniy Krasnov vsk, target, &data_ready_now); 1085d021c344SAndy King if (ret < 0) { 1086a9a08845SLinus Torvalds mask |= EPOLLERR; 1087d021c344SAndy King } else { 1088d021c344SAndy King if (data_ready_now) 1089a9a08845SLinus Torvalds mask |= EPOLLIN | EPOLLRDNORM; 1090d021c344SAndy King 1091d021c344SAndy King } 1092d021c344SAndy King } 1093d021c344SAndy King 1094d021c344SAndy King /* Sockets whose connections have been closed, reset, or 1095d021c344SAndy King * terminated should also be considered read, and we check the 1096d021c344SAndy King * shutdown flag for that. 1097d021c344SAndy King */ 1098d021c344SAndy King if (sk->sk_shutdown & RCV_SHUTDOWN || 1099d021c344SAndy King vsk->peer_shutdown & SEND_SHUTDOWN) { 1100a9a08845SLinus Torvalds mask |= EPOLLIN | EPOLLRDNORM; 1101d021c344SAndy King } 1102d021c344SAndy King 1103d021c344SAndy King /* Connected sockets that can produce data can be written. */ 11041980c058SStefano Garzarella if (transport && sk->sk_state == TCP_ESTABLISHED) { 1105d021c344SAndy King if (!(sk->sk_shutdown & SEND_SHUTDOWN)) { 1106d021c344SAndy King bool space_avail_now = false; 1107d021c344SAndy King int ret = transport->notify_poll_out( 1108d021c344SAndy King vsk, 1, &space_avail_now); 1109d021c344SAndy King if (ret < 0) { 1110a9a08845SLinus Torvalds mask |= EPOLLERR; 1111d021c344SAndy King } else { 1112d021c344SAndy King if (space_avail_now) 1113a9a08845SLinus Torvalds /* Remove EPOLLWRBAND since INET 1114d021c344SAndy King * sockets are not setting it. 
1115d021c344SAndy King */ 1116a9a08845SLinus Torvalds mask |= EPOLLOUT | EPOLLWRNORM; 1117d021c344SAndy King 1118d021c344SAndy King } 1119d021c344SAndy King } 1120d021c344SAndy King } 1121d021c344SAndy King 1122d021c344SAndy King /* Simulate INET socket poll behaviors, which sets 1123a9a08845SLinus Torvalds * EPOLLOUT|EPOLLWRNORM when peer is closed and nothing to read, 1124d021c344SAndy King * but local send is not shutdown. 1125d021c344SAndy King */ 1126ba3169fcSStefan Hajnoczi if (sk->sk_state == TCP_CLOSE || sk->sk_state == TCP_CLOSING) { 1127d021c344SAndy King if (!(sk->sk_shutdown & SEND_SHUTDOWN)) 1128a9a08845SLinus Torvalds mask |= EPOLLOUT | EPOLLWRNORM; 1129d021c344SAndy King 1130d021c344SAndy King } 1131d021c344SAndy King 1132d021c344SAndy King release_sock(sk); 1133d021c344SAndy King } 1134d021c344SAndy King 1135d021c344SAndy King return mask; 1136d021c344SAndy King } 1137d021c344SAndy King 1138*634f1a71SBobby Eshleman static int vsock_read_skb(struct sock *sk, skb_read_actor_t read_actor) 1139*634f1a71SBobby Eshleman { 1140*634f1a71SBobby Eshleman struct vsock_sock *vsk = vsock_sk(sk); 1141*634f1a71SBobby Eshleman 1142*634f1a71SBobby Eshleman return vsk->transport->read_skb(vsk, read_actor); 1143*634f1a71SBobby Eshleman } 1144*634f1a71SBobby Eshleman 11451b784140SYing Xue static int vsock_dgram_sendmsg(struct socket *sock, struct msghdr *msg, 11461b784140SYing Xue size_t len) 1147d021c344SAndy King { 1148d021c344SAndy King int err; 1149d021c344SAndy King struct sock *sk; 1150d021c344SAndy King struct vsock_sock *vsk; 1151d021c344SAndy King struct sockaddr_vm *remote_addr; 1152fe502c4aSStefano Garzarella const struct vsock_transport *transport; 1153d021c344SAndy King 1154d021c344SAndy King if (msg->msg_flags & MSG_OOB) 1155d021c344SAndy King return -EOPNOTSUPP; 1156d021c344SAndy King 1157d021c344SAndy King /* For now, MSG_DONTWAIT is always assumed... 
*/ 1158d021c344SAndy King err = 0; 1159d021c344SAndy King sk = sock->sk; 1160d021c344SAndy King vsk = vsock_sk(sk); 1161d021c344SAndy King 1162d021c344SAndy King lock_sock(sk); 1163d021c344SAndy King 1164c518adafSAlexander Popov transport = vsk->transport; 1165c518adafSAlexander Popov 1166b3a6dfe8SAsias He err = vsock_auto_bind(vsk); 1167b3a6dfe8SAsias He if (err) 1168d021c344SAndy King goto out; 1169d021c344SAndy King 1170d021c344SAndy King 1171d021c344SAndy King /* If the provided message contains an address, use that. Otherwise 1172d021c344SAndy King * fall back on the socket's remote handle (if it has been connected). 1173d021c344SAndy King */ 1174d021c344SAndy King if (msg->msg_name && 1175d021c344SAndy King vsock_addr_cast(msg->msg_name, msg->msg_namelen, 1176d021c344SAndy King &remote_addr) == 0) { 1177d021c344SAndy King /* Ensure this address is of the right type and is a valid 1178d021c344SAndy King * destination. 1179d021c344SAndy King */ 1180d021c344SAndy King 1181d021c344SAndy King if (remote_addr->svm_cid == VMADDR_CID_ANY) 1182d021c344SAndy King remote_addr->svm_cid = transport->get_local_cid(); 1183d021c344SAndy King 1184d021c344SAndy King if (!vsock_addr_bound(remote_addr)) { 1185d021c344SAndy King err = -EINVAL; 1186d021c344SAndy King goto out; 1187d021c344SAndy King } 1188d021c344SAndy King } else if (sock->state == SS_CONNECTED) { 1189d021c344SAndy King remote_addr = &vsk->remote_addr; 1190d021c344SAndy King 1191d021c344SAndy King if (remote_addr->svm_cid == VMADDR_CID_ANY) 1192d021c344SAndy King remote_addr->svm_cid = transport->get_local_cid(); 1193d021c344SAndy King 1194d021c344SAndy King /* XXX Should connect() or this function ensure remote_addr is 1195d021c344SAndy King * bound? 
1196d021c344SAndy King */ 1197d021c344SAndy King if (!vsock_addr_bound(&vsk->remote_addr)) { 1198d021c344SAndy King err = -EINVAL; 1199d021c344SAndy King goto out; 1200d021c344SAndy King } 1201d021c344SAndy King } else { 1202d021c344SAndy King err = -EINVAL; 1203d021c344SAndy King goto out; 1204d021c344SAndy King } 1205d021c344SAndy King 1206d021c344SAndy King if (!transport->dgram_allow(remote_addr->svm_cid, 1207d021c344SAndy King remote_addr->svm_port)) { 1208d021c344SAndy King err = -EINVAL; 1209d021c344SAndy King goto out; 1210d021c344SAndy King } 1211d021c344SAndy King 12120f7db23aSAl Viro err = transport->dgram_enqueue(vsk, remote_addr, msg, len); 1213d021c344SAndy King 1214d021c344SAndy King out: 1215d021c344SAndy King release_sock(sk); 1216d021c344SAndy King return err; 1217d021c344SAndy King } 1218d021c344SAndy King 1219d021c344SAndy King static int vsock_dgram_connect(struct socket *sock, 1220d021c344SAndy King struct sockaddr *addr, int addr_len, int flags) 1221d021c344SAndy King { 1222d021c344SAndy King int err; 1223d021c344SAndy King struct sock *sk; 1224d021c344SAndy King struct vsock_sock *vsk; 1225d021c344SAndy King struct sockaddr_vm *remote_addr; 1226d021c344SAndy King 1227d021c344SAndy King sk = sock->sk; 1228d021c344SAndy King vsk = vsock_sk(sk); 1229d021c344SAndy King 1230d021c344SAndy King err = vsock_addr_cast(addr, addr_len, &remote_addr); 1231d021c344SAndy King if (err == -EAFNOSUPPORT && remote_addr->svm_family == AF_UNSPEC) { 1232d021c344SAndy King lock_sock(sk); 1233d021c344SAndy King vsock_addr_init(&vsk->remote_addr, VMADDR_CID_ANY, 1234d021c344SAndy King VMADDR_PORT_ANY); 1235d021c344SAndy King sock->state = SS_UNCONNECTED; 1236d021c344SAndy King release_sock(sk); 1237d021c344SAndy King return 0; 1238d021c344SAndy King } else if (err != 0) 1239d021c344SAndy King return -EINVAL; 1240d021c344SAndy King 1241d021c344SAndy King lock_sock(sk); 1242d021c344SAndy King 1243b3a6dfe8SAsias He err = vsock_auto_bind(vsk); 1244b3a6dfe8SAsias He if 
(err) 1245d021c344SAndy King goto out; 1246d021c344SAndy King 1247fe502c4aSStefano Garzarella if (!vsk->transport->dgram_allow(remote_addr->svm_cid, 1248d021c344SAndy King remote_addr->svm_port)) { 1249d021c344SAndy King err = -EINVAL; 1250d021c344SAndy King goto out; 1251d021c344SAndy King } 1252d021c344SAndy King 1253d021c344SAndy King memcpy(&vsk->remote_addr, remote_addr, sizeof(vsk->remote_addr)); 1254d021c344SAndy King sock->state = SS_CONNECTED; 1255d021c344SAndy King 1256*634f1a71SBobby Eshleman /* sock map disallows redirection of non-TCP sockets with sk_state != 1257*634f1a71SBobby Eshleman * TCP_ESTABLISHED (see sock_map_redirect_allowed()), so we set 1258*634f1a71SBobby Eshleman * TCP_ESTABLISHED here to allow redirection of connected vsock dgrams. 1259*634f1a71SBobby Eshleman * 1260*634f1a71SBobby Eshleman * This doesn't seem to be abnormal state for datagram sockets, as the 1261*634f1a71SBobby Eshleman * same approach can be seen in other datagram socket types as well 1262*634f1a71SBobby Eshleman * (such as unix sockets).
1263*634f1a71SBobby Eshleman */ 1264*634f1a71SBobby Eshleman sk->sk_state = TCP_ESTABLISHED; 1265*634f1a71SBobby Eshleman 1266d021c344SAndy King out: 1267d021c344SAndy King release_sock(sk); 1268d021c344SAndy King return err; 1269d021c344SAndy King } 1270d021c344SAndy King 1271*634f1a71SBobby Eshleman int vsock_dgram_recvmsg(struct socket *sock, struct msghdr *msg, 12721b784140SYing Xue size_t len, int flags) 1273d021c344SAndy King { 1274*634f1a71SBobby Eshleman #ifdef CONFIG_BPF_SYSCALL 1275*634f1a71SBobby Eshleman const struct proto *prot; 1276*634f1a71SBobby Eshleman #endif 1277*634f1a71SBobby Eshleman struct vsock_sock *vsk; 1278*634f1a71SBobby Eshleman struct sock *sk; 1279*634f1a71SBobby Eshleman 1280*634f1a71SBobby Eshleman sk = sock->sk; 1281*634f1a71SBobby Eshleman vsk = vsock_sk(sk); 1282*634f1a71SBobby Eshleman 1283*634f1a71SBobby Eshleman #ifdef CONFIG_BPF_SYSCALL 1284*634f1a71SBobby Eshleman prot = READ_ONCE(sk->sk_prot); 1285*634f1a71SBobby Eshleman if (prot != &vsock_proto) 1286*634f1a71SBobby Eshleman return prot->recvmsg(sk, msg, len, flags, NULL); 1287*634f1a71SBobby Eshleman #endif 1288fe502c4aSStefano Garzarella 1289fe502c4aSStefano Garzarella return vsk->transport->dgram_dequeue(vsk, msg, len, flags); 1290d021c344SAndy King } 1291*634f1a71SBobby Eshleman EXPORT_SYMBOL_GPL(vsock_dgram_recvmsg); 1292d021c344SAndy King 1293d021c344SAndy King static const struct proto_ops vsock_dgram_ops = { 1294d021c344SAndy King .family = PF_VSOCK, 1295d021c344SAndy King .owner = THIS_MODULE, 1296d021c344SAndy King .release = vsock_release, 1297d021c344SAndy King .bind = vsock_bind, 1298d021c344SAndy King .connect = vsock_dgram_connect, 1299d021c344SAndy King .socketpair = sock_no_socketpair, 1300d021c344SAndy King .accept = sock_no_accept, 1301d021c344SAndy King .getname = vsock_getname, 1302a11e1d43SLinus Torvalds .poll = vsock_poll, 1303d021c344SAndy King .ioctl = sock_no_ioctl, 1304d021c344SAndy King .listen = sock_no_listen, 1305d021c344SAndy King .shutdown 
= vsock_shutdown, 1306d021c344SAndy King .sendmsg = vsock_dgram_sendmsg, 1307d021c344SAndy King .recvmsg = vsock_dgram_recvmsg, 1308d021c344SAndy King .mmap = sock_no_mmap, 1309d021c344SAndy King .sendpage = sock_no_sendpage, 1310*634f1a71SBobby Eshleman .read_skb = vsock_read_skb, 1311d021c344SAndy King }; 1312d021c344SAndy King 1313380feae0SPeng Tao static int vsock_transport_cancel_pkt(struct vsock_sock *vsk) 1314380feae0SPeng Tao { 1315fe502c4aSStefano Garzarella const struct vsock_transport *transport = vsk->transport; 1316fe502c4aSStefano Garzarella 13175d1cbcc9SNorbert Slusarek if (!transport || !transport->cancel_pkt) 1318380feae0SPeng Tao return -EOPNOTSUPP; 1319380feae0SPeng Tao 1320380feae0SPeng Tao return transport->cancel_pkt(vsk); 1321380feae0SPeng Tao } 1322380feae0SPeng Tao 1323d021c344SAndy King static void vsock_connect_timeout(struct work_struct *work) 1324d021c344SAndy King { 1325d021c344SAndy King struct sock *sk; 1326d021c344SAndy King struct vsock_sock *vsk; 1327d021c344SAndy King 1328455f05ecSCong Wang vsk = container_of(work, struct vsock_sock, connect_work.work); 1329d021c344SAndy King sk = sk_vsock(vsk); 1330d021c344SAndy King 1331d021c344SAndy King lock_sock(sk); 13323b4477d2SStefan Hajnoczi if (sk->sk_state == TCP_SYN_SENT && 1333d021c344SAndy King (sk->sk_shutdown != SHUTDOWN_MASK)) { 13343b4477d2SStefan Hajnoczi sk->sk_state = TCP_CLOSE; 1335a3e7b29eSPeilin Ye sk->sk_socket->state = SS_UNCONNECTED; 1336d021c344SAndy King sk->sk_err = ETIMEDOUT; 1337e3ae2365SAlexander Aring sk_error_report(sk); 13383d0bc44dSNorbert Slusarek vsock_transport_cancel_pkt(vsk); 1339d021c344SAndy King } 1340d021c344SAndy King release_sock(sk); 1341d021c344SAndy King 1342d021c344SAndy King sock_put(sk); 1343d021c344SAndy King } 1344d021c344SAndy King 1345a9e29e55SArseny Krasnov static int vsock_connect(struct socket *sock, struct sockaddr *addr, 1346d021c344SAndy King int addr_len, int flags) 1347d021c344SAndy King { 1348d021c344SAndy King int err; 
1349d021c344SAndy King struct sock *sk; 1350d021c344SAndy King struct vsock_sock *vsk; 1351fe502c4aSStefano Garzarella const struct vsock_transport *transport; 1352d021c344SAndy King struct sockaddr_vm *remote_addr; 1353d021c344SAndy King long timeout; 1354d021c344SAndy King DEFINE_WAIT(wait); 1355d021c344SAndy King 1356d021c344SAndy King err = 0; 1357d021c344SAndy King sk = sock->sk; 1358d021c344SAndy King vsk = vsock_sk(sk); 1359d021c344SAndy King 1360d021c344SAndy King lock_sock(sk); 1361d021c344SAndy King 1362d021c344SAndy King /* XXX AF_UNSPEC should make us disconnect like AF_INET. */ 1363d021c344SAndy King switch (sock->state) { 1364d021c344SAndy King case SS_CONNECTED: 1365d021c344SAndy King err = -EISCONN; 1366d021c344SAndy King goto out; 1367d021c344SAndy King case SS_DISCONNECTING: 1368d021c344SAndy King err = -EINVAL; 1369d021c344SAndy King goto out; 1370d021c344SAndy King case SS_CONNECTING: 1371d021c344SAndy King /* This continues on so we can move sock into the SS_CONNECTED 1372d021c344SAndy King * state once the connection has completed (at which point err 1373d021c344SAndy King * will be set to zero also). Otherwise, we will either wait 1374d021c344SAndy King * for the connection or return -EALREADY should this be a 1375d021c344SAndy King * non-blocking call. 1376d021c344SAndy King */ 1377d021c344SAndy King err = -EALREADY; 1378c7cd82b9SEiichi Tsukata if (flags & O_NONBLOCK) 1379c7cd82b9SEiichi Tsukata goto out; 1380d021c344SAndy King break; 1381d021c344SAndy King default: 13823b4477d2SStefan Hajnoczi if ((sk->sk_state == TCP_LISTEN) || 1383d021c344SAndy King vsock_addr_cast(addr, addr_len, &remote_addr) != 0) { 1384d021c344SAndy King err = -EINVAL; 1385d021c344SAndy King goto out; 1386d021c344SAndy King } 1387d021c344SAndy King 1388c0cfa2d8SStefano Garzarella /* Set the remote address that we are connecting to. 
*/ 1389c0cfa2d8SStefano Garzarella memcpy(&vsk->remote_addr, remote_addr, 1390c0cfa2d8SStefano Garzarella sizeof(vsk->remote_addr)); 1391c0cfa2d8SStefano Garzarella 1392c0cfa2d8SStefano Garzarella err = vsock_assign_transport(vsk, NULL); 1393c0cfa2d8SStefano Garzarella if (err) 1394c0cfa2d8SStefano Garzarella goto out; 1395c0cfa2d8SStefano Garzarella 1396c0cfa2d8SStefano Garzarella transport = vsk->transport; 1397c0cfa2d8SStefano Garzarella 1398d021c344SAndy King /* The hypervisor and well-known contexts do not have socket 1399d021c344SAndy King * endpoints. 1400d021c344SAndy King */ 1401c0cfa2d8SStefano Garzarella if (!transport || 1402c0cfa2d8SStefano Garzarella !transport->stream_allow(remote_addr->svm_cid, 1403d021c344SAndy King remote_addr->svm_port)) { 1404d021c344SAndy King err = -ENETUNREACH; 1405d021c344SAndy King goto out; 1406d021c344SAndy King } 1407d021c344SAndy King 1408b3a6dfe8SAsias He err = vsock_auto_bind(vsk); 1409b3a6dfe8SAsias He if (err) 1410d021c344SAndy King goto out; 1411d021c344SAndy King 14123b4477d2SStefan Hajnoczi sk->sk_state = TCP_SYN_SENT; 1413d021c344SAndy King 1414d021c344SAndy King err = transport->connect(vsk); 1415d021c344SAndy King if (err < 0) 1416d021c344SAndy King goto out; 1417d021c344SAndy King 1418d021c344SAndy King /* Mark sock as connecting and set the error code to in 1419d021c344SAndy King * progress in case this is a non-blocking connect. 1420d021c344SAndy King */ 1421d021c344SAndy King sock->state = SS_CONNECTING; 1422d021c344SAndy King err = -EINPROGRESS; 1423d021c344SAndy King } 1424d021c344SAndy King 1425d021c344SAndy King /* The receive path will handle all communication until we are able to 1426d021c344SAndy King * enter the connected state. Here we wait for the connection to be 1427d021c344SAndy King * completed or a notification of an error. 
1428d021c344SAndy King */ 1429d021c344SAndy King timeout = vsk->connect_timeout; 1430d021c344SAndy King prepare_to_wait(sk_sleep(sk), &wait, TASK_INTERRUPTIBLE); 1431d021c344SAndy King 14323b4477d2SStefan Hajnoczi while (sk->sk_state != TCP_ESTABLISHED && sk->sk_err == 0) { 1433d021c344SAndy King if (flags & O_NONBLOCK) { 1434d021c344SAndy King /* If we're not going to block, we schedule a timeout 1435d021c344SAndy King * function to generate a timeout on the connection 1436d021c344SAndy King * attempt, in case the peer doesn't respond in a 1437d021c344SAndy King * timely manner. We hold on to the socket until the 1438d021c344SAndy King * timeout fires. 1439d021c344SAndy King */ 1440d021c344SAndy King sock_hold(sk); 14417e97cfedSPeilin Ye 14427e97cfedSPeilin Ye /* If the timeout function is already scheduled, 14437e97cfedSPeilin Ye * reschedule it, then ungrab the socket refcount to 14447e97cfedSPeilin Ye * keep it balanced. 14457e97cfedSPeilin Ye */ 14467e97cfedSPeilin Ye if (mod_delayed_work(system_wq, &vsk->connect_work, 14477e97cfedSPeilin Ye timeout)) 14487e97cfedSPeilin Ye sock_put(sk); 1449d021c344SAndy King 1450d021c344SAndy King /* Skip ahead to preserve error code set above. */ 1451d021c344SAndy King goto out_wait; 1452d021c344SAndy King } 1453d021c344SAndy King 1454d021c344SAndy King release_sock(sk); 1455d021c344SAndy King timeout = schedule_timeout(timeout); 1456d021c344SAndy King lock_sock(sk); 1457d021c344SAndy King 1458d021c344SAndy King if (signal_pending(current)) { 1459d021c344SAndy King err = sock_intr_errno(timeout); 1460c7ff9cffSLongpeng(Mike) sk->sk_state = sk->sk_state == TCP_ESTABLISHED ? 
TCP_CLOSING : TCP_CLOSE; 1461f7f9b5e7SClaudio Imbrenda sock->state = SS_UNCONNECTED; 1462380feae0SPeng Tao vsock_transport_cancel_pkt(vsk); 1463b9208492SSeth Forshee vsock_remove_connected(vsk); 1464f7f9b5e7SClaudio Imbrenda goto out_wait; 1465d021c344SAndy King } else if (timeout == 0) { 1466d021c344SAndy King err = -ETIMEDOUT; 14673b4477d2SStefan Hajnoczi sk->sk_state = TCP_CLOSE; 1468f7f9b5e7SClaudio Imbrenda sock->state = SS_UNCONNECTED; 1469380feae0SPeng Tao vsock_transport_cancel_pkt(vsk); 1470f7f9b5e7SClaudio Imbrenda goto out_wait; 1471d021c344SAndy King } 1472d021c344SAndy King 1473d021c344SAndy King prepare_to_wait(sk_sleep(sk), &wait, TASK_INTERRUPTIBLE); 1474d021c344SAndy King } 1475d021c344SAndy King 1476d021c344SAndy King if (sk->sk_err) { 1477d021c344SAndy King err = -sk->sk_err; 14783b4477d2SStefan Hajnoczi sk->sk_state = TCP_CLOSE; 1479f7f9b5e7SClaudio Imbrenda sock->state = SS_UNCONNECTED; 1480f7f9b5e7SClaudio Imbrenda } else { 1481d021c344SAndy King err = 0; 1482f7f9b5e7SClaudio Imbrenda } 1483d021c344SAndy King 1484d021c344SAndy King out_wait: 1485d021c344SAndy King finish_wait(sk_sleep(sk), &wait); 1486d021c344SAndy King out: 1487d021c344SAndy King release_sock(sk); 1488d021c344SAndy King return err; 1489d021c344SAndy King } 1490d021c344SAndy King 1491cdfbabfbSDavid Howells static int vsock_accept(struct socket *sock, struct socket *newsock, int flags, 1492cdfbabfbSDavid Howells bool kern) 1493d021c344SAndy King { 1494d021c344SAndy King struct sock *listener; 1495d021c344SAndy King int err; 1496d021c344SAndy King struct sock *connected; 1497d021c344SAndy King struct vsock_sock *vconnected; 1498d021c344SAndy King long timeout; 1499d021c344SAndy King DEFINE_WAIT(wait); 1500d021c344SAndy King 1501d021c344SAndy King err = 0; 1502d021c344SAndy King listener = sock->sk; 1503d021c344SAndy King 1504d021c344SAndy King lock_sock(listener); 1505d021c344SAndy King 1506a9e29e55SArseny Krasnov if (!sock_type_connectible(sock->type)) { 1507d021c344SAndy King 
err = -EOPNOTSUPP; 1508d021c344SAndy King goto out; 1509d021c344SAndy King } 1510d021c344SAndy King 15113b4477d2SStefan Hajnoczi if (listener->sk_state != TCP_LISTEN) { 1512d021c344SAndy King err = -EINVAL; 1513d021c344SAndy King goto out; 1514d021c344SAndy King } 1515d021c344SAndy King 1516d021c344SAndy King /* Wait for children sockets to appear; these are the new sockets 1517d021c344SAndy King * created upon connection establishment. 1518d021c344SAndy King */ 15197e0afbdfSStefano Garzarella timeout = sock_rcvtimeo(listener, flags & O_NONBLOCK); 1520d021c344SAndy King prepare_to_wait(sk_sleep(listener), &wait, TASK_INTERRUPTIBLE); 1521d021c344SAndy King 1522d021c344SAndy King while ((connected = vsock_dequeue_accept(listener)) == NULL && 1523d021c344SAndy King listener->sk_err == 0) { 1524d021c344SAndy King release_sock(listener); 1525d021c344SAndy King timeout = schedule_timeout(timeout); 1526f7f9b5e7SClaudio Imbrenda finish_wait(sk_sleep(listener), &wait); 1527d021c344SAndy King lock_sock(listener); 1528d021c344SAndy King 1529d021c344SAndy King if (signal_pending(current)) { 1530d021c344SAndy King err = sock_intr_errno(timeout); 1531f7f9b5e7SClaudio Imbrenda goto out; 1532d021c344SAndy King } else if (timeout == 0) { 1533d021c344SAndy King err = -EAGAIN; 1534f7f9b5e7SClaudio Imbrenda goto out; 1535d021c344SAndy King } 1536d021c344SAndy King 1537d021c344SAndy King prepare_to_wait(sk_sleep(listener), &wait, TASK_INTERRUPTIBLE); 1538d021c344SAndy King } 1539f7f9b5e7SClaudio Imbrenda finish_wait(sk_sleep(listener), &wait); 1540d021c344SAndy King 1541d021c344SAndy King if (listener->sk_err) 1542d021c344SAndy King err = -listener->sk_err; 1543d021c344SAndy King 1544d021c344SAndy King if (connected) { 15457976a11bSEric Dumazet sk_acceptq_removed(listener); 1546d021c344SAndy King 15474192f672SStefan Hajnoczi lock_sock_nested(connected, SINGLE_DEPTH_NESTING); 1548d021c344SAndy King vconnected = vsock_sk(connected); 1549d021c344SAndy King 1550d021c344SAndy King /* If the 
listener socket has received an error, then we should 1551d021c344SAndy King * reject this socket and return. Note that we simply mark the 1552d021c344SAndy King * socket rejected, drop our reference, and let the cleanup 1553d021c344SAndy King * function handle the cleanup; the fact that we found it in 1554d021c344SAndy King * the listener's accept queue guarantees that the cleanup 1555d021c344SAndy King * function hasn't run yet. 1556d021c344SAndy King */ 1557d021c344SAndy King if (err) { 1558d021c344SAndy King vconnected->rejected = true; 1559f7f9b5e7SClaudio Imbrenda } else { 1560d021c344SAndy King newsock->state = SS_CONNECTED; 1561d021c344SAndy King sock_graft(connected, newsock); 1562f7f9b5e7SClaudio Imbrenda } 1563f7f9b5e7SClaudio Imbrenda 1564d021c344SAndy King release_sock(connected); 1565d021c344SAndy King sock_put(connected); 1566d021c344SAndy King } 1567d021c344SAndy King 1568d021c344SAndy King out: 1569d021c344SAndy King release_sock(listener); 1570d021c344SAndy King return err; 1571d021c344SAndy King } 1572d021c344SAndy King 1573d021c344SAndy King static int vsock_listen(struct socket *sock, int backlog) 1574d021c344SAndy King { 1575d021c344SAndy King int err; 1576d021c344SAndy King struct sock *sk; 1577d021c344SAndy King struct vsock_sock *vsk; 1578d021c344SAndy King 1579d021c344SAndy King sk = sock->sk; 1580d021c344SAndy King 1581d021c344SAndy King lock_sock(sk); 1582d021c344SAndy King 1583a9e29e55SArseny Krasnov if (!sock_type_connectible(sk->sk_type)) { 1584d021c344SAndy King err = -EOPNOTSUPP; 1585d021c344SAndy King goto out; 1586d021c344SAndy King } 1587d021c344SAndy King 1588d021c344SAndy King if (sock->state != SS_UNCONNECTED) { 1589d021c344SAndy King err = -EINVAL; 1590d021c344SAndy King goto out; 1591d021c344SAndy King } 1592d021c344SAndy King 1593d021c344SAndy King vsk = vsock_sk(sk); 1594d021c344SAndy King 1595d021c344SAndy King if (!vsock_addr_bound(&vsk->local_addr)) { 1596d021c344SAndy King err = -EINVAL; 1597d021c344SAndy King goto 
out; 1598d021c344SAndy King } 1599d021c344SAndy King 1600d021c344SAndy King sk->sk_max_ack_backlog = backlog; 16013b4477d2SStefan Hajnoczi sk->sk_state = TCP_LISTEN; 1602d021c344SAndy King 1603d021c344SAndy King err = 0; 1604d021c344SAndy King 1605d021c344SAndy King out: 1606d021c344SAndy King release_sock(sk); 1607d021c344SAndy King return err; 1608d021c344SAndy King } 1609d021c344SAndy King 1610b9f2b0ffSStefano Garzarella static void vsock_update_buffer_size(struct vsock_sock *vsk, 1611b9f2b0ffSStefano Garzarella const struct vsock_transport *transport, 1612b9f2b0ffSStefano Garzarella u64 val) 1613b9f2b0ffSStefano Garzarella { 1614b9f2b0ffSStefano Garzarella if (val > vsk->buffer_max_size) 1615b9f2b0ffSStefano Garzarella val = vsk->buffer_max_size; 1616b9f2b0ffSStefano Garzarella 1617b9f2b0ffSStefano Garzarella if (val < vsk->buffer_min_size) 1618b9f2b0ffSStefano Garzarella val = vsk->buffer_min_size; 1619b9f2b0ffSStefano Garzarella 1620b9f2b0ffSStefano Garzarella if (val != vsk->buffer_size && 1621b9f2b0ffSStefano Garzarella transport && transport->notify_buffer_size) 1622b9f2b0ffSStefano Garzarella transport->notify_buffer_size(vsk, &val); 1623b9f2b0ffSStefano Garzarella 1624b9f2b0ffSStefano Garzarella vsk->buffer_size = val; 1625b9f2b0ffSStefano Garzarella } 1626b9f2b0ffSStefano Garzarella 1627a9e29e55SArseny Krasnov static int vsock_connectible_setsockopt(struct socket *sock, 1628d021c344SAndy King int level, 1629d021c344SAndy King int optname, 1630a7b75c5aSChristoph Hellwig sockptr_t optval, 1631d021c344SAndy King unsigned int optlen) 1632d021c344SAndy King { 1633d021c344SAndy King int err; 1634d021c344SAndy King struct sock *sk; 1635d021c344SAndy King struct vsock_sock *vsk; 1636fe502c4aSStefano Garzarella const struct vsock_transport *transport; 1637d021c344SAndy King u64 val; 1638d021c344SAndy King 1639d021c344SAndy King if (level != AF_VSOCK) 1640d021c344SAndy King return -ENOPROTOOPT; 1641d021c344SAndy King 1642d021c344SAndy King #define COPY_IN(_v) \ 
1643d021c344SAndy King do { \ 1644d021c344SAndy King if (optlen < sizeof(_v)) { \ 1645d021c344SAndy King err = -EINVAL; \ 1646d021c344SAndy King goto exit; \ 1647d021c344SAndy King } \ 1648a7b75c5aSChristoph Hellwig if (copy_from_sockptr(&_v, optval, sizeof(_v)) != 0) { \ 1649d021c344SAndy King err = -EFAULT; \ 1650d021c344SAndy King goto exit; \ 1651d021c344SAndy King } \ 1652d021c344SAndy King } while (0) 1653d021c344SAndy King 1654d021c344SAndy King err = 0; 1655d021c344SAndy King sk = sock->sk; 1656d021c344SAndy King vsk = vsock_sk(sk); 1657d021c344SAndy King 1658d021c344SAndy King lock_sock(sk); 1659d021c344SAndy King 1660c518adafSAlexander Popov transport = vsk->transport; 1661c518adafSAlexander Popov 1662d021c344SAndy King switch (optname) { 1663d021c344SAndy King case SO_VM_SOCKETS_BUFFER_SIZE: 1664d021c344SAndy King COPY_IN(val); 1665b9f2b0ffSStefano Garzarella vsock_update_buffer_size(vsk, transport, val); 1666d021c344SAndy King break; 1667d021c344SAndy King 1668d021c344SAndy King case SO_VM_SOCKETS_BUFFER_MAX_SIZE: 1669d021c344SAndy King COPY_IN(val); 1670b9f2b0ffSStefano Garzarella vsk->buffer_max_size = val; 1671b9f2b0ffSStefano Garzarella vsock_update_buffer_size(vsk, transport, vsk->buffer_size); 1672d021c344SAndy King break; 1673d021c344SAndy King 1674d021c344SAndy King case SO_VM_SOCKETS_BUFFER_MIN_SIZE: 1675d021c344SAndy King COPY_IN(val); 1676b9f2b0ffSStefano Garzarella vsk->buffer_min_size = val; 1677b9f2b0ffSStefano Garzarella vsock_update_buffer_size(vsk, transport, vsk->buffer_size); 1678d021c344SAndy King break; 1679d021c344SAndy King 16804c1e34c0SRichard Palethorpe case SO_VM_SOCKETS_CONNECT_TIMEOUT_NEW: 16814c1e34c0SRichard Palethorpe case SO_VM_SOCKETS_CONNECT_TIMEOUT_OLD: { 16824c1e34c0SRichard Palethorpe struct __kernel_sock_timeval tv; 16834c1e34c0SRichard Palethorpe 16844c1e34c0SRichard Palethorpe err = sock_copy_user_timeval(&tv, optval, optlen, 16854c1e34c0SRichard Palethorpe optname == SO_VM_SOCKETS_CONNECT_TIMEOUT_OLD); 
16864c1e34c0SRichard Palethorpe if (err) 16874c1e34c0SRichard Palethorpe break; 1688d021c344SAndy King if (tv.tv_sec >= 0 && tv.tv_usec < USEC_PER_SEC && 1689d021c344SAndy King tv.tv_sec < (MAX_SCHEDULE_TIMEOUT / HZ - 1)) { 1690d021c344SAndy King vsk->connect_timeout = tv.tv_sec * HZ + 16914c1e34c0SRichard Palethorpe DIV_ROUND_UP((unsigned long)tv.tv_usec, (USEC_PER_SEC / HZ)); 1692d021c344SAndy King if (vsk->connect_timeout == 0) 1693d021c344SAndy King vsk->connect_timeout = 1694d021c344SAndy King VSOCK_DEFAULT_CONNECT_TIMEOUT; 1695d021c344SAndy King 1696d021c344SAndy King } else { 1697d021c344SAndy King err = -ERANGE; 1698d021c344SAndy King } 1699d021c344SAndy King break; 1700d021c344SAndy King } 1701d021c344SAndy King 1702d021c344SAndy King default: 1703d021c344SAndy King err = -ENOPROTOOPT; 1704d021c344SAndy King break; 1705d021c344SAndy King } 1706d021c344SAndy King 1707d021c344SAndy King #undef COPY_IN 1708d021c344SAndy King 1709d021c344SAndy King exit: 1710d021c344SAndy King release_sock(sk); 1711d021c344SAndy King return err; 1712d021c344SAndy King } 1713d021c344SAndy King 1714a9e29e55SArseny Krasnov static int vsock_connectible_getsockopt(struct socket *sock, 1715d021c344SAndy King int level, int optname, 1716d021c344SAndy King char __user *optval, 1717d021c344SAndy King int __user *optlen) 1718d021c344SAndy King { 1719685c3f2fSRichard Palethorpe struct sock *sk = sock->sk; 1720685c3f2fSRichard Palethorpe struct vsock_sock *vsk = vsock_sk(sk); 1721685c3f2fSRichard Palethorpe 1722685c3f2fSRichard Palethorpe union { 1723685c3f2fSRichard Palethorpe u64 val64; 17244c1e34c0SRichard Palethorpe struct old_timeval32 tm32; 1725685c3f2fSRichard Palethorpe struct __kernel_old_timeval tm; 17264c1e34c0SRichard Palethorpe struct __kernel_sock_timeval stm; 1727685c3f2fSRichard Palethorpe } v; 1728685c3f2fSRichard Palethorpe 1729685c3f2fSRichard Palethorpe int lv = sizeof(v.val64); 1730d021c344SAndy King int len; 1731d021c344SAndy King 1732d021c344SAndy King if (level != 
AF_VSOCK) 1733d021c344SAndy King return -ENOPROTOOPT; 1734d021c344SAndy King 1735685c3f2fSRichard Palethorpe if (get_user(len, optlen)) 1736685c3f2fSRichard Palethorpe return -EFAULT; 1737d021c344SAndy King 1738685c3f2fSRichard Palethorpe memset(&v, 0, sizeof(v)); 1739d021c344SAndy King 1740d021c344SAndy King switch (optname) { 1741d021c344SAndy King case SO_VM_SOCKETS_BUFFER_SIZE: 1742685c3f2fSRichard Palethorpe v.val64 = vsk->buffer_size; 1743d021c344SAndy King break; 1744d021c344SAndy King 1745d021c344SAndy King case SO_VM_SOCKETS_BUFFER_MAX_SIZE: 1746685c3f2fSRichard Palethorpe v.val64 = vsk->buffer_max_size; 1747d021c344SAndy King break; 1748d021c344SAndy King 1749d021c344SAndy King case SO_VM_SOCKETS_BUFFER_MIN_SIZE: 1750685c3f2fSRichard Palethorpe v.val64 = vsk->buffer_min_size; 1751d021c344SAndy King break; 1752d021c344SAndy King 17534c1e34c0SRichard Palethorpe case SO_VM_SOCKETS_CONNECT_TIMEOUT_NEW: 17544c1e34c0SRichard Palethorpe case SO_VM_SOCKETS_CONNECT_TIMEOUT_OLD: 17554c1e34c0SRichard Palethorpe lv = sock_get_timeout(vsk->connect_timeout, &v, 17564c1e34c0SRichard Palethorpe optname == SO_VM_SOCKETS_CONNECT_TIMEOUT_OLD); 1757d021c344SAndy King break; 1758685c3f2fSRichard Palethorpe 1759d021c344SAndy King default: 1760d021c344SAndy King return -ENOPROTOOPT; 1761d021c344SAndy King } 1762d021c344SAndy King 1763685c3f2fSRichard Palethorpe if (len < lv) 1764685c3f2fSRichard Palethorpe return -EINVAL; 1765685c3f2fSRichard Palethorpe if (len > lv) 1766685c3f2fSRichard Palethorpe len = lv; 1767685c3f2fSRichard Palethorpe if (copy_to_user(optval, &v, len)) 1768d021c344SAndy King return -EFAULT; 1769d021c344SAndy King 1770685c3f2fSRichard Palethorpe if (put_user(len, optlen)) 1771685c3f2fSRichard Palethorpe return -EFAULT; 1772d021c344SAndy King 1773d021c344SAndy King return 0; 1774d021c344SAndy King } 1775d021c344SAndy King 1776a9e29e55SArseny Krasnov static int vsock_connectible_sendmsg(struct socket *sock, struct msghdr *msg, 17771b784140SYing Xue size_t 
len) 1778d021c344SAndy King { 1779d021c344SAndy King struct sock *sk; 1780d021c344SAndy King struct vsock_sock *vsk; 1781fe502c4aSStefano Garzarella const struct vsock_transport *transport; 1782d021c344SAndy King ssize_t total_written; 1783d021c344SAndy King long timeout; 1784d021c344SAndy King int err; 1785d021c344SAndy King struct vsock_transport_send_notify_data send_data; 1786499fde66SWANG Cong DEFINE_WAIT_FUNC(wait, woken_wake_function); 1787d021c344SAndy King 1788d021c344SAndy King sk = sock->sk; 1789d021c344SAndy King vsk = vsock_sk(sk); 1790d021c344SAndy King total_written = 0; 1791d021c344SAndy King err = 0; 1792d021c344SAndy King 1793d021c344SAndy King if (msg->msg_flags & MSG_OOB) 1794d021c344SAndy King return -EOPNOTSUPP; 1795d021c344SAndy King 1796d021c344SAndy King lock_sock(sk); 1797d021c344SAndy King 1798c518adafSAlexander Popov transport = vsk->transport; 1799c518adafSAlexander Popov 18008cb48554SArseny Krasnov /* Callers should not provide a destination with connection oriented 18018cb48554SArseny Krasnov * sockets. 18028cb48554SArseny Krasnov */ 1803d021c344SAndy King if (msg->msg_namelen) { 18043b4477d2SStefan Hajnoczi err = sk->sk_state == TCP_ESTABLISHED ? -EISCONN : -EOPNOTSUPP; 1805d021c344SAndy King goto out; 1806d021c344SAndy King } 1807d021c344SAndy King 1808d021c344SAndy King /* Send data only if both sides are not shutdown in the direction. 
*/ 1809d021c344SAndy King if (sk->sk_shutdown & SEND_SHUTDOWN || 1810d021c344SAndy King vsk->peer_shutdown & RCV_SHUTDOWN) { 1811d021c344SAndy King err = -EPIPE; 1812d021c344SAndy King goto out; 1813d021c344SAndy King } 1814d021c344SAndy King 1815c0cfa2d8SStefano Garzarella if (!transport || sk->sk_state != TCP_ESTABLISHED || 1816d021c344SAndy King !vsock_addr_bound(&vsk->local_addr)) { 1817d021c344SAndy King err = -ENOTCONN; 1818d021c344SAndy King goto out; 1819d021c344SAndy King } 1820d021c344SAndy King 1821d021c344SAndy King if (!vsock_addr_bound(&vsk->remote_addr)) { 1822d021c344SAndy King err = -EDESTADDRREQ; 1823d021c344SAndy King goto out; 1824d021c344SAndy King } 1825d021c344SAndy King 1826d021c344SAndy King /* Wait for room in the produce queue to enqueue our user's data. */ 1827d021c344SAndy King timeout = sock_sndtimeo(sk, msg->msg_flags & MSG_DONTWAIT); 1828d021c344SAndy King 1829d021c344SAndy King err = transport->notify_send_init(vsk, &send_data); 1830d021c344SAndy King if (err < 0) 1831d021c344SAndy King goto out; 1832d021c344SAndy King 1833d021c344SAndy King while (total_written < len) { 1834d021c344SAndy King ssize_t written; 1835d021c344SAndy King 1836499fde66SWANG Cong add_wait_queue(sk_sleep(sk), &wait); 1837d021c344SAndy King while (vsock_stream_has_space(vsk) == 0 && 1838d021c344SAndy King sk->sk_err == 0 && 1839d021c344SAndy King !(sk->sk_shutdown & SEND_SHUTDOWN) && 1840d021c344SAndy King !(vsk->peer_shutdown & RCV_SHUTDOWN)) { 1841d021c344SAndy King 1842d021c344SAndy King /* Don't wait for non-blocking sockets. 
*/ 1843d021c344SAndy King if (timeout == 0) { 1844d021c344SAndy King err = -EAGAIN; 1845499fde66SWANG Cong remove_wait_queue(sk_sleep(sk), &wait); 1846f7f9b5e7SClaudio Imbrenda goto out_err; 1847d021c344SAndy King } 1848d021c344SAndy King 1849d021c344SAndy King err = transport->notify_send_pre_block(vsk, &send_data); 1850f7f9b5e7SClaudio Imbrenda if (err < 0) { 1851499fde66SWANG Cong remove_wait_queue(sk_sleep(sk), &wait); 1852f7f9b5e7SClaudio Imbrenda goto out_err; 1853f7f9b5e7SClaudio Imbrenda } 1854d021c344SAndy King 1855d021c344SAndy King release_sock(sk); 1856499fde66SWANG Cong timeout = wait_woken(&wait, TASK_INTERRUPTIBLE, timeout); 1857d021c344SAndy King lock_sock(sk); 1858d021c344SAndy King if (signal_pending(current)) { 1859d021c344SAndy King err = sock_intr_errno(timeout); 1860499fde66SWANG Cong remove_wait_queue(sk_sleep(sk), &wait); 1861f7f9b5e7SClaudio Imbrenda goto out_err; 1862d021c344SAndy King } else if (timeout == 0) { 1863d021c344SAndy King err = -EAGAIN; 1864499fde66SWANG Cong remove_wait_queue(sk_sleep(sk), &wait); 1865f7f9b5e7SClaudio Imbrenda goto out_err; 1866d021c344SAndy King } 1867d021c344SAndy King } 1868499fde66SWANG Cong remove_wait_queue(sk_sleep(sk), &wait); 1869d021c344SAndy King 1870d021c344SAndy King /* These checks occur both as part of and after the loop 1871d021c344SAndy King * conditional since we need to check before and after 1872d021c344SAndy King * sleeping. 
1873d021c344SAndy King */ 1874d021c344SAndy King if (sk->sk_err) { 1875d021c344SAndy King err = -sk->sk_err; 1876f7f9b5e7SClaudio Imbrenda goto out_err; 1877d021c344SAndy King } else if ((sk->sk_shutdown & SEND_SHUTDOWN) || 1878d021c344SAndy King (vsk->peer_shutdown & RCV_SHUTDOWN)) { 1879d021c344SAndy King err = -EPIPE; 1880f7f9b5e7SClaudio Imbrenda goto out_err; 1881d021c344SAndy King } 1882d021c344SAndy King 1883d021c344SAndy King err = transport->notify_send_pre_enqueue(vsk, &send_data); 1884d021c344SAndy King if (err < 0) 1885f7f9b5e7SClaudio Imbrenda goto out_err; 1886d021c344SAndy King 1887d021c344SAndy King /* Note that enqueue will only write as many bytes as are free 1888d021c344SAndy King * in the produce queue, so we don't need to ensure len is 1889d021c344SAndy King * smaller than the queue size. It is the caller's 1890d021c344SAndy King * responsibility to check how many bytes we were able to send. 1891d021c344SAndy King */ 1892d021c344SAndy King 1893fbe70c48SArseny Krasnov if (sk->sk_type == SOCK_SEQPACKET) { 1894fbe70c48SArseny Krasnov written = transport->seqpacket_enqueue(vsk, 1895fbe70c48SArseny Krasnov msg, len - total_written); 1896fbe70c48SArseny Krasnov } else { 1897fbe70c48SArseny Krasnov written = transport->stream_enqueue(vsk, 1898fbe70c48SArseny Krasnov msg, len - total_written); 1899fbe70c48SArseny Krasnov } 1900c43170b7SBobby Eshleman 1901d021c344SAndy King if (written < 0) { 1902c43170b7SBobby Eshleman err = written; 1903f7f9b5e7SClaudio Imbrenda goto out_err; 1904d021c344SAndy King } 1905d021c344SAndy King 1906d021c344SAndy King total_written += written; 1907d021c344SAndy King 1908d021c344SAndy King err = transport->notify_send_post_enqueue( 1909d021c344SAndy King vsk, written, &send_data); 1910d021c344SAndy King if (err < 0) 1911f7f9b5e7SClaudio Imbrenda goto out_err; 1912d021c344SAndy King 1913d021c344SAndy King } 1914d021c344SAndy King 1915f7f9b5e7SClaudio Imbrenda out_err: 1916fbe70c48SArseny Krasnov if (total_written > 0) { 
1917fbe70c48SArseny Krasnov /* Return number of written bytes only if: 1918fbe70c48SArseny Krasnov * 1) SOCK_STREAM socket. 1919fbe70c48SArseny Krasnov * 2) SOCK_SEQPACKET socket when whole buffer is sent. 1920fbe70c48SArseny Krasnov */ 1921fbe70c48SArseny Krasnov if (sk->sk_type == SOCK_STREAM || total_written == len) 1922d021c344SAndy King err = total_written; 1923fbe70c48SArseny Krasnov } 1924d021c344SAndy King out: 1925d021c344SAndy King release_sock(sk); 1926d021c344SAndy King return err; 1927d021c344SAndy King } 1928d021c344SAndy King 19290de5b2e6SStefano Garzarella static int vsock_connectible_wait_data(struct sock *sk, 19300de5b2e6SStefano Garzarella struct wait_queue_entry *wait, 1931b3f7fd54SArseny Krasnov long timeout, 1932b3f7fd54SArseny Krasnov struct vsock_transport_recv_notify_data *recv_data, 1933b3f7fd54SArseny Krasnov size_t target) 1934b3f7fd54SArseny Krasnov { 1935b3f7fd54SArseny Krasnov const struct vsock_transport *transport; 1936b3f7fd54SArseny Krasnov struct vsock_sock *vsk; 1937b3f7fd54SArseny Krasnov s64 data; 1938b3f7fd54SArseny Krasnov int err; 1939b3f7fd54SArseny Krasnov 1940b3f7fd54SArseny Krasnov vsk = vsock_sk(sk); 1941b3f7fd54SArseny Krasnov err = 0; 1942b3f7fd54SArseny Krasnov transport = vsk->transport; 1943b3f7fd54SArseny Krasnov 1944466a8533SDexuan Cui while (1) { 1945b3f7fd54SArseny Krasnov prepare_to_wait(sk_sleep(sk), wait, TASK_INTERRUPTIBLE); 1946466a8533SDexuan Cui data = vsock_connectible_has_data(vsk); 1947466a8533SDexuan Cui if (data != 0) 1948466a8533SDexuan Cui break; 1949b3f7fd54SArseny Krasnov 1950b3f7fd54SArseny Krasnov if (sk->sk_err != 0 || 1951b3f7fd54SArseny Krasnov (sk->sk_shutdown & RCV_SHUTDOWN) || 1952b3f7fd54SArseny Krasnov (vsk->peer_shutdown & SEND_SHUTDOWN)) { 1953b3f7fd54SArseny Krasnov break; 1954b3f7fd54SArseny Krasnov } 1955b3f7fd54SArseny Krasnov 1956b3f7fd54SArseny Krasnov /* Don't wait for non-blocking sockets. 
*/ 1957b3f7fd54SArseny Krasnov if (timeout == 0) { 1958b3f7fd54SArseny Krasnov err = -EAGAIN; 1959b3f7fd54SArseny Krasnov break; 1960b3f7fd54SArseny Krasnov } 1961b3f7fd54SArseny Krasnov 1962b3f7fd54SArseny Krasnov if (recv_data) { 1963b3f7fd54SArseny Krasnov err = transport->notify_recv_pre_block(vsk, target, recv_data); 1964b3f7fd54SArseny Krasnov if (err < 0) 1965b3f7fd54SArseny Krasnov break; 1966b3f7fd54SArseny Krasnov } 1967b3f7fd54SArseny Krasnov 1968b3f7fd54SArseny Krasnov release_sock(sk); 1969b3f7fd54SArseny Krasnov timeout = schedule_timeout(timeout); 1970b3f7fd54SArseny Krasnov lock_sock(sk); 1971b3f7fd54SArseny Krasnov 1972b3f7fd54SArseny Krasnov if (signal_pending(current)) { 1973b3f7fd54SArseny Krasnov err = sock_intr_errno(timeout); 1974b3f7fd54SArseny Krasnov break; 1975b3f7fd54SArseny Krasnov } else if (timeout == 0) { 1976b3f7fd54SArseny Krasnov err = -EAGAIN; 1977b3f7fd54SArseny Krasnov break; 1978b3f7fd54SArseny Krasnov } 1979b3f7fd54SArseny Krasnov } 1980b3f7fd54SArseny Krasnov 1981b3f7fd54SArseny Krasnov finish_wait(sk_sleep(sk), wait); 1982b3f7fd54SArseny Krasnov 1983b3f7fd54SArseny Krasnov if (err) 1984b3f7fd54SArseny Krasnov return err; 1985b3f7fd54SArseny Krasnov 1986b3f7fd54SArseny Krasnov /* Internal transport error when checking for available 1987b3f7fd54SArseny Krasnov * data. XXX This should be changed to a connection 1988b3f7fd54SArseny Krasnov * reset in a later change. 
1989b3f7fd54SArseny Krasnov */ 1990b3f7fd54SArseny Krasnov if (data < 0) 1991b3f7fd54SArseny Krasnov return -ENOMEM; 1992b3f7fd54SArseny Krasnov 1993b3f7fd54SArseny Krasnov return data; 1994b3f7fd54SArseny Krasnov } 1995b3f7fd54SArseny Krasnov 199619c1b90eSArseny Krasnov static int __vsock_stream_recvmsg(struct sock *sk, struct msghdr *msg, 199719c1b90eSArseny Krasnov size_t len, int flags) 1998d021c344SAndy King { 1999d021c344SAndy King struct vsock_transport_recv_notify_data recv_data; 200019c1b90eSArseny Krasnov const struct vsock_transport *transport; 200119c1b90eSArseny Krasnov struct vsock_sock *vsk; 200219c1b90eSArseny Krasnov ssize_t copied; 200319c1b90eSArseny Krasnov size_t target; 200419c1b90eSArseny Krasnov long timeout; 200519c1b90eSArseny Krasnov int err; 2006d021c344SAndy King 2007d021c344SAndy King DEFINE_WAIT(wait); 2008d021c344SAndy King 2009d021c344SAndy King vsk = vsock_sk(sk); 2010c518adafSAlexander Popov transport = vsk->transport; 2011c518adafSAlexander Popov 2012d021c344SAndy King /* We must not copy less than target bytes into the user's buffer 2013d021c344SAndy King * before returning successfully, so we wait for the consume queue to 2014d021c344SAndy King * have that much data to consume before dequeueing. Note that this 2015d021c344SAndy King * makes it impossible to handle cases where target is greater than the 2016d021c344SAndy King * queue size. 
	 */
	target = sock_rcvlowat(sk, flags & MSG_WAITALL, len);
	if (target >= transport->stream_rcvhiwat(vsk)) {
		err = -ENOMEM;
		goto out;
	}
	timeout = sock_rcvtimeo(sk, flags & MSG_DONTWAIT);
	copied = 0;

	err = transport->notify_recv_init(vsk, target, &recv_data);
	if (err < 0)
		goto out;

	while (1) {
		ssize_t read;

		err = vsock_connectible_wait_data(sk, &wait, timeout,
						  &recv_data, target);
		if (err <= 0)
			break;

		err = transport->notify_recv_pre_dequeue(vsk, target,
							 &recv_data);
		if (err < 0)
			break;

		read = transport->stream_dequeue(vsk, msg, len - copied, flags);
		if (read < 0) {
			err = -ENOMEM;
			break;
		}

		copied += read;

		err = transport->notify_recv_post_dequeue(vsk, target, read,
							  !(flags & MSG_PEEK), &recv_data);
		if (err < 0)
			goto out;

		if (read >= target || flags & MSG_PEEK)
			break;

		target -= read;
	}

	if (sk->sk_err)
		err = -sk->sk_err;
	else if (sk->sk_shutdown & RCV_SHUTDOWN)
		err = 0;

	if (copied > 0)
		err = copied;

out:
	return err;
}

static int __vsock_seqpacket_recvmsg(struct sock *sk, struct msghdr *msg,
				     size_t len, int flags)
{
	const struct vsock_transport *transport;
	struct vsock_sock *vsk;
	ssize_t msg_len;
	long timeout;
	int err = 0;
	DEFINE_WAIT(wait);

	vsk = vsock_sk(sk);
	transport = vsk->transport;

	timeout = sock_rcvtimeo(sk, flags & MSG_DONTWAIT);

	err = vsock_connectible_wait_data(sk, &wait, timeout, NULL, 0);
	if (err <= 0)
		goto out;

	msg_len = transport->seqpacket_dequeue(vsk, msg, flags);
	if (msg_len < 0) {
		err = -ENOMEM;
		goto out;
	}

	if (sk->sk_err) {
		err = -sk->sk_err;
	} else if (sk->sk_shutdown & RCV_SHUTDOWN) {
		err = 0;
	} else {
		/* User sets MSG_TRUNC, so return real length of
		 * packet.
		 */
		if (flags & MSG_TRUNC)
			err = msg_len;
		else
			err = len - msg_data_left(msg);

		/* Always set MSG_TRUNC if real length of packet is
		 * bigger than user's buffer.
		 */
		if (msg_len > len)
			msg->msg_flags |= MSG_TRUNC;
	}

out:
	return err;
}

int
vsock_connectible_recvmsg(struct socket *sock, struct msghdr *msg, size_t len,
			  int flags)
{
	struct sock *sk;
	struct vsock_sock *vsk;
	const struct vsock_transport *transport;
#ifdef CONFIG_BPF_SYSCALL
	const struct proto *prot;
#endif
	int err;

	sk = sock->sk;
	vsk = vsock_sk(sk);
	err = 0;

	lock_sock(sk);

	transport = vsk->transport;

	if (!transport || sk->sk_state != TCP_ESTABLISHED) {
		/* Recvmsg is supposed to return 0 if a peer performs an
		 * orderly shutdown. Differentiate between that case and when a
		 * peer has not connected or a local shutdown occurred with the
		 * SOCK_DONE flag.
		 */
		if (sock_flag(sk, SOCK_DONE))
			err = 0;
		else
			err = -ENOTCONN;

		goto out;
	}

	if (flags & MSG_OOB) {
		err = -EOPNOTSUPP;
		goto out;
	}

	/* We don't check peer_shutdown flag here since peer may actually shut
	 * down, but there can be data in the queue that a local socket can
	 * receive.
	 */
	if (sk->sk_shutdown & RCV_SHUTDOWN) {
		err = 0;
		goto out;
	}

	/* It is valid on Linux to pass in a zero-length receive buffer. This
	 * is not an error. We may as well bail out now.
	 */
	if (!len) {
		err = 0;
		goto out;
	}

#ifdef CONFIG_BPF_SYSCALL
	prot = READ_ONCE(sk->sk_prot);
	if (prot != &vsock_proto) {
		release_sock(sk);
		return prot->recvmsg(sk, msg, len, flags, NULL);
	}
#endif

	if (sk->sk_type == SOCK_STREAM)
		err = __vsock_stream_recvmsg(sk, msg, len, flags);
	else
		err = __vsock_seqpacket_recvmsg(sk, msg, len, flags);

out:
	release_sock(sk);
	return err;
}
EXPORT_SYMBOL_GPL(vsock_connectible_recvmsg);

static int vsock_set_rcvlowat(struct sock *sk, int val)
{
	const struct vsock_transport *transport;
	struct vsock_sock *vsk;

	vsk = vsock_sk(sk);

	if (val > vsk->buffer_size)
		return -EINVAL;

	transport = vsk->transport;

	if (transport && transport->set_rcvlowat)
		return transport->set_rcvlowat(vsk, val);

	WRITE_ONCE(sk->sk_rcvlowat, val ?
				    : 1);
	return 0;
}

static const struct proto_ops vsock_stream_ops = {
	.family = PF_VSOCK,
	.owner = THIS_MODULE,
	.release = vsock_release,
	.bind = vsock_bind,
	.connect = vsock_connect,
	.socketpair = sock_no_socketpair,
	.accept = vsock_accept,
	.getname = vsock_getname,
	.poll = vsock_poll,
	.ioctl = sock_no_ioctl,
	.listen = vsock_listen,
	.shutdown = vsock_shutdown,
	.setsockopt = vsock_connectible_setsockopt,
	.getsockopt = vsock_connectible_getsockopt,
	.sendmsg = vsock_connectible_sendmsg,
	.recvmsg = vsock_connectible_recvmsg,
	.mmap = sock_no_mmap,
	.sendpage = sock_no_sendpage,
	.set_rcvlowat = vsock_set_rcvlowat,
	.read_skb = vsock_read_skb,
};

static const struct proto_ops vsock_seqpacket_ops = {
	.family = PF_VSOCK,
	.owner = THIS_MODULE,
	.release = vsock_release,
	.bind = vsock_bind,
	.connect = vsock_connect,
	.socketpair = sock_no_socketpair,
	.accept = vsock_accept,
	.getname = vsock_getname,
	.poll = vsock_poll,
	.ioctl = sock_no_ioctl,
	.listen = vsock_listen,
	.shutdown =
vsock_shutdown,
	.setsockopt = vsock_connectible_setsockopt,
	.getsockopt = vsock_connectible_getsockopt,
	.sendmsg = vsock_connectible_sendmsg,
	.recvmsg = vsock_connectible_recvmsg,
	.mmap = sock_no_mmap,
	.sendpage = sock_no_sendpage,
	.read_skb = vsock_read_skb,
};

static int vsock_create(struct net *net, struct socket *sock,
			int protocol, int kern)
{
	struct vsock_sock *vsk;
	struct sock *sk;
	int ret;

	if (!sock)
		return -EINVAL;

	if (protocol && protocol != PF_VSOCK)
		return -EPROTONOSUPPORT;

	switch (sock->type) {
	case SOCK_DGRAM:
		sock->ops = &vsock_dgram_ops;
		break;
	case SOCK_STREAM:
		sock->ops = &vsock_stream_ops;
		break;
	case SOCK_SEQPACKET:
		sock->ops = &vsock_seqpacket_ops;
		break;
	default:
		return -ESOCKTNOSUPPORT;
	}

	sock->state = SS_UNCONNECTED;

	sk = __vsock_create(net, sock, NULL, GFP_KERNEL, 0, kern);
	if (!sk)
		return -ENOMEM;

	vsk = vsock_sk(sk);

	if (sock->type == SOCK_DGRAM) {
		ret = vsock_assign_transport(vsk, NULL);
		if (ret < 0) {
			sock_put(sk);
			return ret;
		}
	}

	vsock_insert_unbound(vsk);

	return 0;
}

static const struct net_proto_family vsock_family_ops = {
	.family = AF_VSOCK,
	.create = vsock_create,
	.owner = THIS_MODULE,
};

static long vsock_dev_do_ioctl(struct file *filp,
			       unsigned int cmd, void __user *ptr)
{
	u32 __user *p = ptr;
	u32 cid = VMADDR_CID_ANY;
	int retval = 0;

	switch (cmd) {
	case IOCTL_VM_SOCKETS_GET_LOCAL_CID:
		/* To be compatible with the VMCI behavior, we prioritize the
		 * guest CID instead of the well-known host CID
		 * (VMADDR_CID_HOST).
		 */
		if (transport_g2h)
			cid = transport_g2h->get_local_cid();
		else if (transport_h2g)
			cid = transport_h2g->get_local_cid();

		if (put_user(cid, p) != 0)
			retval = -EFAULT;
		break;

	default:
		retval = -ENOIOCTLCMD;
	}

	return retval;
}

static long vsock_dev_ioctl(struct file *filp,
			    unsigned int cmd, unsigned long arg)
{
	return vsock_dev_do_ioctl(filp, cmd, (void __user *)arg);
}

#ifdef CONFIG_COMPAT
static long vsock_dev_compat_ioctl(struct file *filp,
				   unsigned int cmd, unsigned long arg)
{
	return vsock_dev_do_ioctl(filp, cmd, compat_ptr(arg));
}
#endif

static const struct file_operations vsock_device_ops = {
	.owner = THIS_MODULE,
	.unlocked_ioctl = vsock_dev_ioctl,
#ifdef CONFIG_COMPAT
	.compat_ioctl = vsock_dev_compat_ioctl,
#endif
	.open = nonseekable_open,
};

static struct miscdevice vsock_device = {
	.name = "vsock",
	.fops = &vsock_device_ops,
};

static int
__init vsock_init(void)
{
	int err = 0;

	vsock_init_tables();

	vsock_proto.owner = THIS_MODULE;
	vsock_device.minor = MISC_DYNAMIC_MINOR;
	err = misc_register(&vsock_device);
	if (err) {
		pr_err("Failed to register misc device\n");
		goto err_reset_transport;
	}

	err = proto_register(&vsock_proto, 1);	/* we want our slab */
	if (err) {
		pr_err("Cannot register vsock protocol\n");
		goto err_deregister_misc;
	}

	err = sock_register(&vsock_family_ops);
	if (err) {
		pr_err("could not register af_vsock (%d) address family: %d\n",
		       AF_VSOCK, err);
		goto err_unregister_proto;
	}

	vsock_bpf_build_proto();

	return 0;

err_unregister_proto:
	proto_unregister(&vsock_proto);
err_deregister_misc:
	misc_deregister(&vsock_device);
err_reset_transport:
	return err;
}

static void __exit vsock_exit(void)
{
	misc_deregister(&vsock_device);
	sock_unregister(AF_VSOCK);
	proto_unregister(&vsock_proto);
}

const struct
vsock_transport *vsock_core_get_transport(struct vsock_sock *vsk)
{
	return vsk->transport;
}
EXPORT_SYMBOL_GPL(vsock_core_get_transport);

int vsock_core_register(const struct vsock_transport *t, int features)
{
	const struct vsock_transport *t_h2g, *t_g2h, *t_dgram, *t_local;
	int err = mutex_lock_interruptible(&vsock_register_mutex);

	if (err)
		return err;

	t_h2g = transport_h2g;
	t_g2h = transport_g2h;
	t_dgram = transport_dgram;
	t_local = transport_local;

	if (features & VSOCK_TRANSPORT_F_H2G) {
		if (t_h2g) {
			err = -EBUSY;
			goto err_busy;
		}
		t_h2g = t;
	}

	if (features & VSOCK_TRANSPORT_F_G2H) {
		if (t_g2h) {
			err = -EBUSY;
			goto err_busy;
		}
		t_g2h = t;
	}

	if (features & VSOCK_TRANSPORT_F_DGRAM) {
		if (t_dgram) {
			err = -EBUSY;
			goto err_busy;
		}
		t_dgram = t;
	}

	if (features & VSOCK_TRANSPORT_F_LOCAL) {
		if (t_local) {
			err = -EBUSY;
			goto err_busy;
		}
		t_local = t;
	}

	transport_h2g = t_h2g;
	transport_g2h = t_g2h;
	transport_dgram = t_dgram;
	transport_local = t_local;

err_busy:
	mutex_unlock(&vsock_register_mutex);
	return err;
}
EXPORT_SYMBOL_GPL(vsock_core_register);

void vsock_core_unregister(const struct vsock_transport *t)
{
	mutex_lock(&vsock_register_mutex);

	if (transport_h2g == t)
		transport_h2g = NULL;

	if (transport_g2h == t)
		transport_g2h = NULL;

	if (transport_dgram == t)
		transport_dgram = NULL;

	if (transport_local == t)
		transport_local = NULL;

	mutex_unlock(&vsock_register_mutex);
}
EXPORT_SYMBOL_GPL(vsock_core_unregister);
module_init(vsock_init);
module_exit(vsock_exit);

MODULE_AUTHOR("VMware, Inc.");
MODULE_DESCRIPTION("VMware Virtual Socket Family");
MODULE_VERSION("1.0.2.0-k");
MODULE_LICENSE("GPL v2");