// SPDX-License-Identifier: GPL-2.0-only
/*
 * VMware vSockets Driver
 *
 * Copyright (C) 2007-2013 VMware, Inc. All rights reserved.
 */

/* Implementation notes:
 *
 * - There are two kinds of sockets: those created by user action (such as
 * calling socket(2)) and those created by incoming connection request packets.
 *
 * - There are two "global" tables, one for bound sockets (sockets that have
 * specified an address that they are responsible for) and one for connected
 * sockets (sockets that have established a connection with another socket).
 * These tables are "global" in that all sockets on the system are placed
 * within them. Note, though, that the bound table contains an extra entry
 * for a list of unbound sockets, and SOCK_DGRAM sockets will always remain in
 * that list. The bound table is used solely for lookup of sockets when packets
 * are received, and that's not necessary for SOCK_DGRAM sockets since we
 * create a datagram handle for each and need not perform a lookup. Keeping
 * SOCK_DGRAM sockets out of the bound hash buckets reduces the chance of
 * collisions when looking for SOCK_STREAM sockets and prevents us from having
 * to check the socket type in the hash table lookups.
 *
 * - Sockets created by user action will either be "client" sockets that
 * initiate a connection or "server" sockets that listen for connections; we do
 * not support simultaneous connects (two "client" sockets connecting).
 *
 * - "Server" sockets are referred to as listener sockets throughout this
 * implementation because they are in the TCP_LISTEN state. When a
 * connection request is received (the second kind of socket mentioned above),
 * we create a new socket and refer to it as a pending socket. These pending
 * sockets are placed on the pending connection list of the listener socket.
 * When future packets are received for the address the listener socket is
 * bound to, we check if the source of the packet is one that has an
 * existing pending connection. If it is, we process the packet for the
 * pending socket. When that socket reaches the connected state, it is removed
 * from the listener socket's pending list and enqueued in the listener
 * socket's accept queue. Callers of accept(2) will accept connected sockets
 * from the listener socket's accept queue. If the socket cannot be accepted
 * for some reason then it is marked rejected. Once the connection is
 * accepted, it is owned by the user process and the responsibility for cleanup
 * falls with that user process.
 *
 * - It is possible that these pending sockets will never reach the connected
 * state; in fact, we may never receive another packet after the connection
 * request. Because of this, we must schedule a cleanup function to run in the
 * future, after some amount of time passes where a connection should have been
 * established. This function ensures that the socket is off all lists so it
 * cannot be retrieved, then drops all references to the socket so it is cleaned
 * up (sock_put() -> sk_free() -> our sk_destruct implementation). Note this
 * function will also clean up rejected sockets, those that reach the connected
 * state but leave it before they have been accepted.
 *
 * - Lock ordering for pending or accept queue sockets is:
 *
 *     lock_sock(listener);
 *     lock_sock_nested(pending, SINGLE_DEPTH_NESTING);
 *
 * Using explicit nested locking keeps lockdep happy since normally only one
 * lock of a given class may be taken at a time.
 *
 * - Sockets created by user action will be cleaned up when the user process
 * calls close(2), causing our release implementation to be called. Our release
 * implementation will perform some cleanup then drop the last reference so our
 * sk_destruct implementation is invoked. Our sk_destruct implementation will
 * perform additional cleanup that's common for both types of sockets.
 *
 * - A socket's reference count is what ensures that the structure won't be
 * freed. Each entry in a list (such as the "global" bound and connected tables
 * and the listener socket's pending list and connected queue) ensures a
 * reference. When we defer work until process context and pass a socket as our
 * argument, we must ensure the reference count is increased to ensure the
 * socket isn't freed before the function is run; the deferred function will
 * then drop the reference.
 *
 * - sk->sk_state uses the TCP state constants because they are widely used by
 * other address families and exposed to userspace tools like ss(8):
 *
 *   TCP_CLOSE - unconnected
 *   TCP_SYN_SENT - connecting
 *   TCP_ESTABLISHED - connected
 *   TCP_CLOSING - disconnecting
 *   TCP_LISTEN - listening
 */

#include <linux/compat.h>
#include <linux/types.h>
#include <linux/bitops.h>
#include <linux/cred.h>
#include <linux/errqueue.h>
#include <linux/init.h>
#include <linux/io.h>
#include <linux/kernel.h>
#include <linux/sched/signal.h>
#include <linux/kmod.h>
#include <linux/list.h>
#include <linux/miscdevice.h>
#include <linux/module.h>
#include <linux/mutex.h>
#include <linux/net.h>
#include <linux/poll.h>
#include <linux/random.h>
#include <linux/skbuff.h>
#include <linux/smp.h>
#include <linux/socket.h>
#include <linux/stddef.h>
#include <linux/unistd.h>
#include <linux/wait.h>
#include <linux/workqueue.h>
#include <net/sock.h>
#include <net/af_vsock.h>
#include <uapi/linux/vm_sockets.h>

static int __vsock_bind(struct sock *sk, struct sockaddr_vm *addr);
static void vsock_sk_destruct(struct sock *sk);
static int vsock_queue_rcv_skb(struct sock *sk, struct sk_buff *skb);

/* Protocol family. */
struct proto vsock_proto = {
	.name = "AF_VSOCK",
	.owner = THIS_MODULE,
	.obj_size = sizeof(struct vsock_sock),
#ifdef CONFIG_BPF_SYSCALL
	.psock_update_sk_prot = vsock_bpf_update_proto,
#endif
};

/* The default peer timeout indicates how long we will wait for a peer response
 * to a control message.
 */
#define VSOCK_DEFAULT_CONNECT_TIMEOUT (2 * HZ)

#define VSOCK_DEFAULT_BUFFER_SIZE     (1024 * 256)
#define VSOCK_DEFAULT_BUFFER_MAX_SIZE (1024 * 256)
#define VSOCK_DEFAULT_BUFFER_MIN_SIZE 128

/* Transport used for host->guest communication */
static const struct vsock_transport *transport_h2g;
/* Transport used for guest->host communication */
static const struct vsock_transport *transport_g2h;
/* Transport used for DGRAM communication */
static const struct vsock_transport *transport_dgram;
/* Transport used for local communication */
static const struct vsock_transport *transport_local;

static DEFINE_MUTEX(vsock_register_mutex);

/**** UTILS ****/

/* Each bound VSocket is stored in the bind hash table and each connected
 * VSocket is stored in the connected hash table.
 *
 * Unbound sockets are all put on the same list attached to the end of the hash
 * table (vsock_unbound_sockets). Bound sockets are added to the hash table in
 * the bucket that their local address hashes to (vsock_bound_sockets(addr)
 * represents the list that addr hashes to).
 *
 * Specifically, we initialize the vsock_bind_table array to a size of
 * VSOCK_HASH_SIZE + 1 so that vsock_bind_table[0] through
 * vsock_bind_table[VSOCK_HASH_SIZE - 1] are for bound sockets and
 * vsock_bind_table[VSOCK_HASH_SIZE] is for unbound sockets. The hash function
 * mods with VSOCK_HASH_SIZE to ensure this.
 */
#define MAX_PORT_RETRIES 24

#define VSOCK_HASH(addr) ((addr)->svm_port % VSOCK_HASH_SIZE)
#define vsock_bound_sockets(addr) (&vsock_bind_table[VSOCK_HASH(addr)])
#define vsock_unbound_sockets (&vsock_bind_table[VSOCK_HASH_SIZE])

/* XXX This can probably be implemented in a better way. */
#define VSOCK_CONN_HASH(src, dst) \
	(((src)->svm_cid ^ (dst)->svm_port) % VSOCK_HASH_SIZE)
#define vsock_connected_sockets(src, dst) \
	(&vsock_connected_table[VSOCK_CONN_HASH(src, dst)])
#define vsock_connected_sockets_vsk(vsk) \
	vsock_connected_sockets(&(vsk)->remote_addr, &(vsk)->local_addr)

struct list_head vsock_bind_table[VSOCK_HASH_SIZE + 1];
EXPORT_SYMBOL_GPL(vsock_bind_table);
struct list_head vsock_connected_table[VSOCK_HASH_SIZE];
EXPORT_SYMBOL_GPL(vsock_connected_table);
DEFINE_SPINLOCK(vsock_table_lock);
EXPORT_SYMBOL_GPL(vsock_table_lock);

/* Autobind this socket to the local address if necessary. */
static int vsock_auto_bind(struct vsock_sock *vsk)
{
	struct sock *sk = sk_vsock(vsk);
	struct sockaddr_vm local_addr;

	if (vsock_addr_bound(&vsk->local_addr))
		return 0;
	vsock_addr_init(&local_addr, VMADDR_CID_ANY, VMADDR_PORT_ANY);
	return __vsock_bind(sk, &local_addr);
}

static void vsock_init_tables(void)
{
	int i;

	for (i = 0; i < ARRAY_SIZE(vsock_bind_table); i++)
		INIT_LIST_HEAD(&vsock_bind_table[i]);

	for (i = 0; i < ARRAY_SIZE(vsock_connected_table); i++)
		INIT_LIST_HEAD(&vsock_connected_table[i]);
}

static void __vsock_insert_bound(struct list_head *list,
				 struct vsock_sock *vsk)
{
	sock_hold(&vsk->sk);
	list_add(&vsk->bound_table, list);
}

static void __vsock_insert_connected(struct list_head *list,
				     struct vsock_sock *vsk)
{
	sock_hold(&vsk->sk);
	list_add(&vsk->connected_table, list);
}

static void __vsock_remove_bound(struct vsock_sock *vsk)
{
	list_del_init(&vsk->bound_table);
	sock_put(&vsk->sk);
}

static void __vsock_remove_connected(struct vsock_sock *vsk)
{
	list_del_init(&vsk->connected_table);
	sock_put(&vsk->sk);
}

static struct sock *__vsock_find_bound_socket(struct sockaddr_vm *addr)
{
	struct vsock_sock *vsk;

	list_for_each_entry(vsk, vsock_bound_sockets(addr), bound_table) {
		if (vsock_addr_equals_addr(addr, &vsk->local_addr))
			return sk_vsock(vsk);

		if (addr->svm_port == vsk->local_addr.svm_port &&
		    (vsk->local_addr.svm_cid == VMADDR_CID_ANY ||
		     addr->svm_cid == VMADDR_CID_ANY))
			return sk_vsock(vsk);
	}

	return NULL;
}

static struct sock *__vsock_find_connected_socket(struct sockaddr_vm *src,
						  struct sockaddr_vm *dst)
{
	struct vsock_sock *vsk;

	list_for_each_entry(vsk, vsock_connected_sockets(src, dst),
			    connected_table) {
		if (vsock_addr_equals_addr(src, &vsk->remote_addr) &&
		    dst->svm_port == vsk->local_addr.svm_port) {
			return sk_vsock(vsk);
		}
	}

	return NULL;
}

static void vsock_insert_unbound(struct vsock_sock *vsk)
{
	spin_lock_bh(&vsock_table_lock);
	__vsock_insert_bound(vsock_unbound_sockets, vsk);
	spin_unlock_bh(&vsock_table_lock);
}
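The bucket layout used by these helpers (VSOCK_HASH_SIZE regular buckets keyed by port, plus one trailing bucket that plays the role of vsock_unbound_sockets) can be modeled in a few lines of plain C. This is a hypothetical userspace sketch with invented names and fixed-size arrays in place of list_head; only the hashing trick itself is taken from the code above:

```c
#include <assert.h>
#include <stddef.h>

/* Toy model of the bind table: HASH_SIZE buckets for bound entries plus
 * one extra trailing bucket for unbound ones. Ports and CIDs are plain
 * ints here; the kernel uses struct sockaddr_vm and list_head.
 */
#define HASH_SIZE 16
#define MAX_PER_BUCKET 8

struct model_sock {
	unsigned int cid;
	unsigned int port;
};

static struct model_sock table[HASH_SIZE + 1][MAX_PER_BUCKET];
static size_t count[HASH_SIZE + 1];

static size_t bucket_for(unsigned int port)
{
	return port % HASH_SIZE;	/* mirrors VSOCK_HASH() */
}

static void insert_bound(unsigned int cid, unsigned int port)
{
	size_t b = bucket_for(port);

	table[b][count[b]++] = (struct model_sock){ cid, port };
}

static void insert_unbound(unsigned int cid)
{
	size_t b = HASH_SIZE;		/* the extra trailing bucket */

	table[b][count[b]++] = (struct model_sock){ cid, 0 };
}

/* Lookup only scans the hashed bucket, so unbound entries never collide
 * with or slow down bound-socket lookups.
 */
static struct model_sock *find_bound(unsigned int port)
{
	size_t b = bucket_for(port);

	for (size_t i = 0; i < count[b]; i++)
		if (table[b][i].port == port)
			return &table[b][i];
	return NULL;
}
```

Because unbound (and SOCK_DGRAM) entries live only in the trailing bucket, a bound lookup never has to inspect them or check a socket type, which is exactly the design rationale given in the implementation notes.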

void vsock_insert_connected(struct vsock_sock *vsk)
{
	struct list_head *list = vsock_connected_sockets(
		&vsk->remote_addr, &vsk->local_addr);

	spin_lock_bh(&vsock_table_lock);
	__vsock_insert_connected(list, vsk);
	spin_unlock_bh(&vsock_table_lock);
}
EXPORT_SYMBOL_GPL(vsock_insert_connected);

void vsock_remove_bound(struct vsock_sock *vsk)
{
	spin_lock_bh(&vsock_table_lock);
	if (__vsock_in_bound_table(vsk))
		__vsock_remove_bound(vsk);
	spin_unlock_bh(&vsock_table_lock);
}
EXPORT_SYMBOL_GPL(vsock_remove_bound);

void vsock_remove_connected(struct vsock_sock *vsk)
{
	spin_lock_bh(&vsock_table_lock);
	if (__vsock_in_connected_table(vsk))
		__vsock_remove_connected(vsk);
	spin_unlock_bh(&vsock_table_lock);
}
EXPORT_SYMBOL_GPL(vsock_remove_connected);

struct sock *vsock_find_bound_socket(struct sockaddr_vm *addr)
{
	struct sock *sk;

	spin_lock_bh(&vsock_table_lock);
	sk = __vsock_find_bound_socket(addr);
	if (sk)
		sock_hold(sk);

	spin_unlock_bh(&vsock_table_lock);

	return sk;
}
EXPORT_SYMBOL_GPL(vsock_find_bound_socket);
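vsock_find_bound_socket() follows a common kernel discipline: the reference (sock_hold()) is taken while the table lock is still held, so the socket cannot be freed between the lookup and the caller's use of it. Below is a minimal single-threaded userspace model of that discipline; the names (obj, table_locked, obj_find) are invented stand-ins, not kernel APIs:

```c
#include <assert.h>
#include <stddef.h>

/* Model object: one refcount-protected entry keyed by port. */
struct obj {
	int refcount;
	unsigned int port;
};

static int table_locked;	/* models vsock_table_lock being held */
static struct obj objs[4];
static size_t nr_objs;

static struct obj *obj_find_locked(unsigned int port)
{
	assert(table_locked);	/* lookup is only legal under the lock */
	for (size_t i = 0; i < nr_objs; i++)
		if (objs[i].port == port)
			return &objs[i];
	return NULL;
}

static struct obj *obj_find(unsigned int port)
{
	struct obj *o;

	table_locked = 1;		/* spin_lock_bh(&vsock_table_lock) */
	o = obj_find_locked(port);
	if (o)
		o->refcount++;		/* sock_hold(): ref taken under lock */
	table_locked = 0;		/* spin_unlock_bh(&vsock_table_lock) */

	return o;	/* caller now owns one reference and must drop it */
}
```

If the reference were instead taken after dropping the lock, a concurrent remove-and-free could run in the gap; taking it under the lock closes that window, which is why the kernel functions above are shaped this way.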

struct sock *vsock_find_connected_socket(struct sockaddr_vm *src,
					 struct sockaddr_vm *dst)
{
	struct sock *sk;

	spin_lock_bh(&vsock_table_lock);
	sk = __vsock_find_connected_socket(src, dst);
	if (sk)
		sock_hold(sk);

	spin_unlock_bh(&vsock_table_lock);

	return sk;
}
EXPORT_SYMBOL_GPL(vsock_find_connected_socket);

void vsock_remove_sock(struct vsock_sock *vsk)
{
	vsock_remove_bound(vsk);
	vsock_remove_connected(vsk);
}
EXPORT_SYMBOL_GPL(vsock_remove_sock);

void vsock_for_each_connected_socket(struct vsock_transport *transport,
				     void (*fn)(struct sock *sk))
{
	int i;

	spin_lock_bh(&vsock_table_lock);

	for (i = 0; i < ARRAY_SIZE(vsock_connected_table); i++) {
		struct vsock_sock *vsk;

		list_for_each_entry(vsk, &vsock_connected_table[i],
				    connected_table) {
			if (vsk->transport != transport)
				continue;

			fn(sk_vsock(vsk));
		}
	}

	spin_unlock_bh(&vsock_table_lock);
}
EXPORT_SYMBOL_GPL(vsock_for_each_connected_socket);

void vsock_add_pending(struct sock *listener, struct sock *pending)
{
	struct vsock_sock *vlistener;
	struct vsock_sock *vpending;

	vlistener = vsock_sk(listener);
	vpending = vsock_sk(pending);

	sock_hold(pending);
	sock_hold(listener);
	list_add_tail(&vpending->pending_links, &vlistener->pending_links);
}
EXPORT_SYMBOL_GPL(vsock_add_pending);

void vsock_remove_pending(struct sock *listener, struct sock *pending)
{
	struct vsock_sock *vpending = vsock_sk(pending);

	list_del_init(&vpending->pending_links);
	sock_put(listener);
	sock_put(pending);
}
EXPORT_SYMBOL_GPL(vsock_remove_pending);

void vsock_enqueue_accept(struct sock *listener, struct sock *connected)
{
	struct vsock_sock *vlistener;
	struct vsock_sock *vconnected;

	vlistener = vsock_sk(listener);
	vconnected = vsock_sk(connected);

	sock_hold(connected);
	sock_hold(listener);
	list_add_tail(&vconnected->accept_queue, &vlistener->accept_queue);
}
EXPORT_SYMBOL_GPL(vsock_enqueue_accept);

static bool vsock_use_local_transport(unsigned int remote_cid)
{
	if (!transport_local)
		return false;

	if (remote_cid == VMADDR_CID_LOCAL)
		return true;

	if (transport_g2h)
		return remote_cid == transport_g2h->get_local_cid();
	else
		return remote_cid == VMADDR_CID_HOST;
}

static void vsock_deassign_transport(struct vsock_sock *vsk)
{
	if (!vsk->transport)
		return;

	vsk->transport->destruct(vsk);
	module_put(vsk->transport->module);
	vsk->transport = NULL;
}

/* Assign a transport to a socket and call the .init transport callback.
 *
 * Note: for connection oriented sockets this must be called when
 * vsk->remote_addr is set (e.g. during the connect() or when a connection
 * request on a listener socket is received).
 * The vsk->remote_addr is used to decide which transport to use:
 *  - remote CID == VMADDR_CID_LOCAL or g2h->local_cid or VMADDR_CID_HOST if
 *    g2h is not loaded, will use local transport;
 *  - remote CID <= VMADDR_CID_HOST or h2g is not loaded or remote flags field
 *    includes VMADDR_FLAG_TO_HOST flag value, will use guest->host transport;
 *  - remote CID > VMADDR_CID_HOST will use host->guest transport;
 */
int vsock_assign_transport(struct vsock_sock *vsk, struct vsock_sock *psk)
{
	const struct vsock_transport *new_transport;
	struct sock *sk = sk_vsock(vsk);
	unsigned int remote_cid = vsk->remote_addr.svm_cid;
	__u8 remote_flags;
	int ret;

	/* If the packet is coming with the source and destination CIDs higher
	 * than VMADDR_CID_HOST, then a vsock channel where all the packets are
	 * forwarded to the host should be established. Then the host will
	 * need to forward the packets to the guest.
	 *
	 * The flag is set on the (listen) receive path (psk is not NULL). On
	 * the connect path the flag can be set by the user space application.
	 */
	if (psk && vsk->local_addr.svm_cid > VMADDR_CID_HOST &&
	    vsk->remote_addr.svm_cid > VMADDR_CID_HOST)
		vsk->remote_addr.svm_flags |= VMADDR_FLAG_TO_HOST;

	remote_flags = vsk->remote_addr.svm_flags;

	switch (sk->sk_type) {
	case SOCK_DGRAM:
		new_transport = transport_dgram;
		break;
	case SOCK_STREAM:
	case SOCK_SEQPACKET:
		if (vsock_use_local_transport(remote_cid))
			new_transport = transport_local;
		else if (remote_cid <= VMADDR_CID_HOST || !transport_h2g ||
			 (remote_flags & VMADDR_FLAG_TO_HOST))
			new_transport = transport_g2h;
		else
			new_transport = transport_h2g;
		break;
	default:
		return -ESOCKTNOSUPPORT;
	}

	if (vsk->transport) {
		if (vsk->transport == new_transport)
			return 0;

		/* transport->release() must be called with sock lock acquired.
		 * This path can only be taken during vsock_connect(), where we
		 * have already held the sock lock. In the other cases, this
		 * function is called on a new socket which is not assigned to
		 * any transport.
		 */
		vsk->transport->release(vsk);
		vsock_deassign_transport(vsk);
	}

	/* We increase the module refcnt to prevent the transport unloading
	 * while there are open sockets assigned to it.
	 */
	if (!new_transport || !try_module_get(new_transport->module))
		return -ENODEV;

	if (sk->sk_type == SOCK_SEQPACKET) {
		if (!new_transport->seqpacket_allow ||
		    !new_transport->seqpacket_allow(remote_cid)) {
			module_put(new_transport->module);
			return -ESOCKTNOSUPPORT;
		}
	}

	ret = new_transport->init(vsk, psk);
	if (ret) {
		module_put(new_transport->module);
		return ret;
	}

	vsk->transport = new_transport;

	return 0;
}
EXPORT_SYMBOL_GPL(vsock_assign_transport);

bool vsock_find_cid(unsigned int cid)
{
	if (transport_g2h && cid == transport_g2h->get_local_cid())
		return true;

	if (transport_h2g && cid == VMADDR_CID_HOST)
		return true;

	if (transport_local && cid == VMADDR_CID_LOCAL)
		return true;

	return false;
}
EXPORT_SYMBOL_GPL(vsock_find_cid);

static struct sock *vsock_dequeue_accept(struct sock *listener)
{
	struct vsock_sock *vlistener;
	struct vsock_sock *vconnected;

	vlistener = vsock_sk(listener);

	if (list_empty(&vlistener->accept_queue))
		return NULL;

	vconnected = list_entry(vlistener->accept_queue.next,
				struct vsock_sock, accept_queue);

	list_del_init(&vconnected->accept_queue);
	sock_put(listener);
	/* The caller will need a reference on the connected socket so we let
	 * it call sock_put().
551d021c344SAndy King */ 552d021c344SAndy King 553d021c344SAndy King return sk_vsock(vconnected); 554d021c344SAndy King } 555d021c344SAndy King 556d021c344SAndy King static bool vsock_is_accept_queue_empty(struct sock *sk) 557d021c344SAndy King { 558d021c344SAndy King struct vsock_sock *vsk = vsock_sk(sk); 559d021c344SAndy King return list_empty(&vsk->accept_queue); 560d021c344SAndy King } 561d021c344SAndy King 562d021c344SAndy King static bool vsock_is_pending(struct sock *sk) 563d021c344SAndy King { 564d021c344SAndy King struct vsock_sock *vsk = vsock_sk(sk); 565d021c344SAndy King return !list_empty(&vsk->pending_links); 566d021c344SAndy King } 567d021c344SAndy King 568d021c344SAndy King static int vsock_send_shutdown(struct sock *sk, int mode) 569d021c344SAndy King { 570fe502c4aSStefano Garzarella struct vsock_sock *vsk = vsock_sk(sk); 571fe502c4aSStefano Garzarella 572c0cfa2d8SStefano Garzarella if (!vsk->transport) 573c0cfa2d8SStefano Garzarella return -ENODEV; 574c0cfa2d8SStefano Garzarella 575fe502c4aSStefano Garzarella return vsk->transport->shutdown(vsk, mode); 576d021c344SAndy King } 577d021c344SAndy King 578455f05ecSCong Wang static void vsock_pending_work(struct work_struct *work) 579d021c344SAndy King { 580d021c344SAndy King struct sock *sk; 581d021c344SAndy King struct sock *listener; 582d021c344SAndy King struct vsock_sock *vsk; 583d021c344SAndy King bool cleanup; 584d021c344SAndy King 585455f05ecSCong Wang vsk = container_of(work, struct vsock_sock, pending_work.work); 586d021c344SAndy King sk = sk_vsock(vsk); 587d021c344SAndy King listener = vsk->listener; 588d021c344SAndy King cleanup = true; 589d021c344SAndy King 590d021c344SAndy King lock_sock(listener); 5914192f672SStefan Hajnoczi lock_sock_nested(sk, SINGLE_DEPTH_NESTING); 592d021c344SAndy King 593d021c344SAndy King if (vsock_is_pending(sk)) { 594d021c344SAndy King vsock_remove_pending(listener, sk); 5951190cfdbSJorgen Hansen 5967976a11bSEric Dumazet sk_acceptq_removed(listener); 
597d021c344SAndy King } else if (!vsk->rejected) { 598d021c344SAndy King /* We are not on the pending list and accept() did not reject 599d021c344SAndy King * us, so we must have been accepted by our user process. We 600d021c344SAndy King * just need to drop our references to the sockets and be on 601d021c344SAndy King * our way. 602d021c344SAndy King */ 603d021c344SAndy King cleanup = false; 604d021c344SAndy King goto out; 605d021c344SAndy King } 606d021c344SAndy King 607d021c344SAndy King /* We need to remove ourselves from the global connected sockets list so 608d021c344SAndy King * incoming packets can't find this socket, and to reduce the reference 609d021c344SAndy King * count. 610d021c344SAndy King */ 611d021c344SAndy King vsock_remove_connected(vsk); 612d021c344SAndy King 6133b4477d2SStefan Hajnoczi sk->sk_state = TCP_CLOSE; 614d021c344SAndy King 615d021c344SAndy King out: 616d021c344SAndy King release_sock(sk); 617d021c344SAndy King release_sock(listener); 618d021c344SAndy King if (cleanup) 619d021c344SAndy King sock_put(sk); 620d021c344SAndy King 621d021c344SAndy King sock_put(sk); 622d021c344SAndy King sock_put(listener); 623d021c344SAndy King } 624d021c344SAndy King 625d021c344SAndy King /**** SOCKET OPERATIONS ****/ 626d021c344SAndy King 627a9e29e55SArseny Krasnov static int __vsock_bind_connectible(struct vsock_sock *vsk, 628d021c344SAndy King struct sockaddr_vm *addr) 629d021c344SAndy King { 630a22d3251SLepton Wu static u32 port; 631d021c344SAndy King struct sockaddr_vm new_addr; 632d021c344SAndy King 6338236b08cSLepton Wu if (!port) 634d247aabdSJason A.
Donenfeld port = get_random_u32_above(LAST_RESERVED_PORT); 6358236b08cSLepton Wu 636d021c344SAndy King vsock_addr_init(&new_addr, addr->svm_cid, addr->svm_port); 637d021c344SAndy King 638d021c344SAndy King if (addr->svm_port == VMADDR_PORT_ANY) { 639d021c344SAndy King bool found = false; 640d021c344SAndy King unsigned int i; 641d021c344SAndy King 642d021c344SAndy King for (i = 0; i < MAX_PORT_RETRIES; i++) { 643d021c344SAndy King if (port <= LAST_RESERVED_PORT) 644d021c344SAndy King port = LAST_RESERVED_PORT + 1; 645d021c344SAndy King 646d021c344SAndy King new_addr.svm_port = port++; 647d021c344SAndy King 648d021c344SAndy King if (!__vsock_find_bound_socket(&new_addr)) { 649d021c344SAndy King found = true; 650d021c344SAndy King break; 651d021c344SAndy King } 652d021c344SAndy King } 653d021c344SAndy King 654d021c344SAndy King if (!found) 655d021c344SAndy King return -EADDRNOTAVAIL; 656d021c344SAndy King } else { 657d021c344SAndy King /* If port is in reserved range, ensure caller 658d021c344SAndy King * has necessary privileges. 659d021c344SAndy King */ 660d021c344SAndy King if (addr->svm_port <= LAST_RESERVED_PORT && 661d021c344SAndy King !capable(CAP_NET_BIND_SERVICE)) { 662d021c344SAndy King return -EACCES; 663d021c344SAndy King } 664d021c344SAndy King 665d021c344SAndy King if (__vsock_find_bound_socket(&new_addr)) 666d021c344SAndy King return -EADDRINUSE; 667d021c344SAndy King } 668d021c344SAndy King 669d021c344SAndy King vsock_addr_init(&vsk->local_addr, new_addr.svm_cid, new_addr.svm_port); 670d021c344SAndy King 6718cb48554SArseny Krasnov /* Remove connection oriented sockets from the unbound list and add them 6728cb48554SArseny Krasnov * to the hash table for easy lookup by their address. The unbound list 6738cb48554SArseny Krasnov * is simply an extra entry at the end of the hash table, a trick used 6748cb48554SArseny Krasnov * by AF_UNIX.
675d021c344SAndy King */ 676d021c344SAndy King __vsock_remove_bound(vsk); 677d021c344SAndy King __vsock_insert_bound(vsock_bound_sockets(&vsk->local_addr), vsk); 678d021c344SAndy King 679d021c344SAndy King return 0; 680d021c344SAndy King } 681d021c344SAndy King 682d021c344SAndy King static int __vsock_bind_dgram(struct vsock_sock *vsk, 683d021c344SAndy King struct sockaddr_vm *addr) 684d021c344SAndy King { 685fe502c4aSStefano Garzarella return vsk->transport->dgram_bind(vsk, addr); 686d021c344SAndy King } 687d021c344SAndy King 688d021c344SAndy King static int __vsock_bind(struct sock *sk, struct sockaddr_vm *addr) 689d021c344SAndy King { 690d021c344SAndy King struct vsock_sock *vsk = vsock_sk(sk); 691d021c344SAndy King int retval; 692d021c344SAndy King 693d021c344SAndy King /* First ensure this socket isn't already bound. */ 694d021c344SAndy King if (vsock_addr_bound(&vsk->local_addr)) 695d021c344SAndy King return -EINVAL; 696d021c344SAndy King 697d021c344SAndy King /* Now bind to the provided address or select appropriate values if 698d021c344SAndy King * none are provided (VMADDR_CID_ANY and VMADDR_PORT_ANY). Note that 699d021c344SAndy King * like AF_INET prevents binding to a non-local IP address (in most 700c0cfa2d8SStefano Garzarella * cases), we only allow binding to a local CID. 
701d021c344SAndy King */ 702c0cfa2d8SStefano Garzarella if (addr->svm_cid != VMADDR_CID_ANY && !vsock_find_cid(addr->svm_cid)) 703d021c344SAndy King return -EADDRNOTAVAIL; 704d021c344SAndy King 705d021c344SAndy King switch (sk->sk_socket->type) { 706d021c344SAndy King case SOCK_STREAM: 7070798e78bSArseny Krasnov case SOCK_SEQPACKET: 708d021c344SAndy King spin_lock_bh(&vsock_table_lock); 709a9e29e55SArseny Krasnov retval = __vsock_bind_connectible(vsk, addr); 710d021c344SAndy King spin_unlock_bh(&vsock_table_lock); 711d021c344SAndy King break; 712d021c344SAndy King 713d021c344SAndy King case SOCK_DGRAM: 714d021c344SAndy King retval = __vsock_bind_dgram(vsk, addr); 715d021c344SAndy King break; 716d021c344SAndy King 717d021c344SAndy King default: 718d021c344SAndy King retval = -EINVAL; 719d021c344SAndy King break; 720d021c344SAndy King } 721d021c344SAndy King 722d021c344SAndy King return retval; 723d021c344SAndy King } 724d021c344SAndy King 725455f05ecSCong Wang static void vsock_connect_timeout(struct work_struct *work); 726455f05ecSCong Wang 727b9ca2f5fSStefano Garzarella static struct sock *__vsock_create(struct net *net, 728d021c344SAndy King struct socket *sock, 729d021c344SAndy King struct sock *parent, 730d021c344SAndy King gfp_t priority, 73111aa9c28SEric W. Biederman unsigned short type, 73211aa9c28SEric W. Biederman int kern) 733d021c344SAndy King { 734d021c344SAndy King struct sock *sk; 735d021c344SAndy King struct vsock_sock *psk; 736d021c344SAndy King struct vsock_sock *vsk; 737d021c344SAndy King 73811aa9c28SEric W. Biederman sk = sk_alloc(net, AF_VSOCK, priority, &vsock_proto, kern); 739d021c344SAndy King if (!sk) 740d021c344SAndy King return NULL; 741d021c344SAndy King 742d021c344SAndy King sock_init_data(sock, sk); 743d021c344SAndy King 744d021c344SAndy King /* sk->sk_type is normally set in sock_init_data, but only if sock is 745d021c344SAndy King * non-NULL. 
We make sure that our sockets always have a type by 746d021c344SAndy King * setting it here if needed. 747d021c344SAndy King */ 748d021c344SAndy King if (!sock) 749d021c344SAndy King sk->sk_type = type; 750d021c344SAndy King 751d021c344SAndy King vsk = vsock_sk(sk); 752d021c344SAndy King vsock_addr_init(&vsk->local_addr, VMADDR_CID_ANY, VMADDR_PORT_ANY); 753d021c344SAndy King vsock_addr_init(&vsk->remote_addr, VMADDR_CID_ANY, VMADDR_PORT_ANY); 754d021c344SAndy King 755d021c344SAndy King sk->sk_destruct = vsock_sk_destruct; 756d021c344SAndy King sk->sk_backlog_rcv = vsock_queue_rcv_skb; 757d021c344SAndy King sock_reset_flag(sk, SOCK_DONE); 758d021c344SAndy King 759d021c344SAndy King INIT_LIST_HEAD(&vsk->bound_table); 760d021c344SAndy King INIT_LIST_HEAD(&vsk->connected_table); 761d021c344SAndy King vsk->listener = NULL; 762d021c344SAndy King INIT_LIST_HEAD(&vsk->pending_links); 763d021c344SAndy King INIT_LIST_HEAD(&vsk->accept_queue); 764d021c344SAndy King vsk->rejected = false; 765d021c344SAndy King vsk->sent_request = false; 766d021c344SAndy King vsk->ignore_connecting_rst = false; 767d021c344SAndy King vsk->peer_shutdown = 0; 768455f05ecSCong Wang INIT_DELAYED_WORK(&vsk->connect_work, vsock_connect_timeout); 769455f05ecSCong Wang INIT_DELAYED_WORK(&vsk->pending_work, vsock_pending_work); 770d021c344SAndy King 771d021c344SAndy King psk = parent ? 
vsock_sk(parent) : NULL; 772d021c344SAndy King if (parent) { 773d021c344SAndy King vsk->trusted = psk->trusted; 774d021c344SAndy King vsk->owner = get_cred(psk->owner); 775d021c344SAndy King vsk->connect_timeout = psk->connect_timeout; 776b9f2b0ffSStefano Garzarella vsk->buffer_size = psk->buffer_size; 777b9f2b0ffSStefano Garzarella vsk->buffer_min_size = psk->buffer_min_size; 778b9f2b0ffSStefano Garzarella vsk->buffer_max_size = psk->buffer_max_size; 7791f935e8eSDavid Brazdil security_sk_clone(parent, sk); 780d021c344SAndy King } else { 781af545bb5SJeff Vander Stoep vsk->trusted = ns_capable_noaudit(&init_user_ns, CAP_NET_ADMIN); 782d021c344SAndy King vsk->owner = get_current_cred(); 783d021c344SAndy King vsk->connect_timeout = VSOCK_DEFAULT_CONNECT_TIMEOUT; 784b9f2b0ffSStefano Garzarella vsk->buffer_size = VSOCK_DEFAULT_BUFFER_SIZE; 785b9f2b0ffSStefano Garzarella vsk->buffer_min_size = VSOCK_DEFAULT_BUFFER_MIN_SIZE; 786b9f2b0ffSStefano Garzarella vsk->buffer_max_size = VSOCK_DEFAULT_BUFFER_MAX_SIZE; 787d021c344SAndy King } 788d021c344SAndy King 789d021c344SAndy King return sk; 790d021c344SAndy King } 791d021c344SAndy King 792a9e29e55SArseny Krasnov static bool sock_type_connectible(u16 type) 793a9e29e55SArseny Krasnov { 7940798e78bSArseny Krasnov return (type == SOCK_STREAM) || (type == SOCK_SEQPACKET); 795a9e29e55SArseny Krasnov } 796a9e29e55SArseny Krasnov 7970d9138ffSDexuan Cui static void __vsock_release(struct sock *sk, int level) 798d021c344SAndy King { 799d021c344SAndy King if (sk) { 800d021c344SAndy King struct sock *pending; 801d021c344SAndy King struct vsock_sock *vsk; 802d021c344SAndy King 803d021c344SAndy King vsk = vsock_sk(sk); 804d021c344SAndy King pending = NULL; /* Compiler warning. */ 805d021c344SAndy King 8060d9138ffSDexuan Cui /* When "level" is SINGLE_DEPTH_NESTING, use the nested 8070d9138ffSDexuan Cui * version to avoid the warning "possible recursive locking 8080d9138ffSDexuan Cui * detected". 
When "level" is 0, lock_sock_nested(sk, level) 8090d9138ffSDexuan Cui * is the same as lock_sock(sk). 8100d9138ffSDexuan Cui */ 8110d9138ffSDexuan Cui lock_sock_nested(sk, level); 8123f74957fSStefano Garzarella 8133f74957fSStefano Garzarella if (vsk->transport) 8143f74957fSStefano Garzarella vsk->transport->release(vsk); 815a9e29e55SArseny Krasnov else if (sock_type_connectible(sk->sk_type)) 8163f74957fSStefano Garzarella vsock_remove_sock(vsk); 8173f74957fSStefano Garzarella 818d021c344SAndy King sock_orphan(sk); 819d021c344SAndy King sk->sk_shutdown = SHUTDOWN_MASK; 820d021c344SAndy King 8213b7ad08bSChristophe JAILLET skb_queue_purge(&sk->sk_receive_queue); 822d021c344SAndy King 823d021c344SAndy King /* Clean up any sockets that never were accepted. */ 824d021c344SAndy King while ((pending = vsock_dequeue_accept(sk)) != NULL) { 8250d9138ffSDexuan Cui __vsock_release(pending, SINGLE_DEPTH_NESTING); 826d021c344SAndy King sock_put(pending); 827d021c344SAndy King } 828d021c344SAndy King 829d021c344SAndy King release_sock(sk); 830d021c344SAndy King sock_put(sk); 831d021c344SAndy King } 832d021c344SAndy King } 833d021c344SAndy King 834d021c344SAndy King static void vsock_sk_destruct(struct sock *sk) 835d021c344SAndy King { 836d021c344SAndy King struct vsock_sock *vsk = vsock_sk(sk); 837d021c344SAndy King 8386a2c0962SStefano Garzarella vsock_deassign_transport(vsk); 839d021c344SAndy King 840d021c344SAndy King /* When clearing these addresses, there's no need to set the family and 841d021c344SAndy King * possibly register the address family with the kernel. 
842d021c344SAndy King */ 843d021c344SAndy King vsock_addr_init(&vsk->local_addr, VMADDR_CID_ANY, VMADDR_PORT_ANY); 844d021c344SAndy King vsock_addr_init(&vsk->remote_addr, VMADDR_CID_ANY, VMADDR_PORT_ANY); 845d021c344SAndy King 846d021c344SAndy King put_cred(vsk->owner); 847d021c344SAndy King } 848d021c344SAndy King 849d021c344SAndy King static int vsock_queue_rcv_skb(struct sock *sk, struct sk_buff *skb) 850d021c344SAndy King { 851d021c344SAndy King int err; 852d021c344SAndy King 853d021c344SAndy King err = sock_queue_rcv_skb(sk, skb); 854d021c344SAndy King if (err) 855d021c344SAndy King kfree_skb(skb); 856d021c344SAndy King 857d021c344SAndy King return err; 858d021c344SAndy King } 859d021c344SAndy King 860b9ca2f5fSStefano Garzarella struct sock *vsock_create_connected(struct sock *parent) 861b9ca2f5fSStefano Garzarella { 862b9ca2f5fSStefano Garzarella return __vsock_create(sock_net(parent), NULL, parent, GFP_KERNEL, 863b9ca2f5fSStefano Garzarella parent->sk_type, 0); 864b9ca2f5fSStefano Garzarella } 865b9ca2f5fSStefano Garzarella EXPORT_SYMBOL_GPL(vsock_create_connected); 866b9ca2f5fSStefano Garzarella 867d021c344SAndy King s64 vsock_stream_has_data(struct vsock_sock *vsk) 868d021c344SAndy King { 869fe502c4aSStefano Garzarella return vsk->transport->stream_has_data(vsk); 870d021c344SAndy King } 871d021c344SAndy King EXPORT_SYMBOL_GPL(vsock_stream_has_data); 872d021c344SAndy King 873634f1a71SBobby Eshleman s64 vsock_connectible_has_data(struct vsock_sock *vsk) 8740798e78bSArseny Krasnov { 8750798e78bSArseny Krasnov struct sock *sk = sk_vsock(vsk); 8760798e78bSArseny Krasnov 8770798e78bSArseny Krasnov if (sk->sk_type == SOCK_SEQPACKET) 8780798e78bSArseny Krasnov return vsk->transport->seqpacket_has_data(vsk); 8790798e78bSArseny Krasnov else 8800798e78bSArseny Krasnov return vsock_stream_has_data(vsk); 8810798e78bSArseny Krasnov } 882634f1a71SBobby Eshleman EXPORT_SYMBOL_GPL(vsock_connectible_has_data); 8830798e78bSArseny Krasnov 884d021c344SAndy King s64 
vsock_stream_has_space(struct vsock_sock *vsk) 885d021c344SAndy King { 886fe502c4aSStefano Garzarella return vsk->transport->stream_has_space(vsk); 887d021c344SAndy King } 888d021c344SAndy King EXPORT_SYMBOL_GPL(vsock_stream_has_space); 889d021c344SAndy King 890f2fdcf67SArseniy Krasnov void vsock_data_ready(struct sock *sk) 891f2fdcf67SArseniy Krasnov { 892f2fdcf67SArseniy Krasnov struct vsock_sock *vsk = vsock_sk(sk); 893f2fdcf67SArseniy Krasnov 894f2fdcf67SArseniy Krasnov if (vsock_stream_has_data(vsk) >= sk->sk_rcvlowat || 895f2fdcf67SArseniy Krasnov sock_flag(sk, SOCK_DONE)) 896f2fdcf67SArseniy Krasnov sk->sk_data_ready(sk); 897f2fdcf67SArseniy Krasnov } 898f2fdcf67SArseniy Krasnov EXPORT_SYMBOL_GPL(vsock_data_ready); 899f2fdcf67SArseniy Krasnov 900d021c344SAndy King static int vsock_release(struct socket *sock) 901d021c344SAndy King { 9020d9138ffSDexuan Cui __vsock_release(sock->sk, 0); 903d021c344SAndy King sock->sk = NULL; 904d021c344SAndy King sock->state = SS_FREE; 905d021c344SAndy King 906d021c344SAndy King return 0; 907d021c344SAndy King } 908d021c344SAndy King 909d021c344SAndy King static int 910d021c344SAndy King vsock_bind(struct socket *sock, struct sockaddr *addr, int addr_len) 911d021c344SAndy King { 912d021c344SAndy King int err; 913d021c344SAndy King struct sock *sk; 914d021c344SAndy King struct sockaddr_vm *vm_addr; 915d021c344SAndy King 916d021c344SAndy King sk = sock->sk; 917d021c344SAndy King 918d021c344SAndy King if (vsock_addr_cast(addr, addr_len, &vm_addr) != 0) 919d021c344SAndy King return -EINVAL; 920d021c344SAndy King 921d021c344SAndy King lock_sock(sk); 922d021c344SAndy King err = __vsock_bind(sk, vm_addr); 923d021c344SAndy King release_sock(sk); 924d021c344SAndy King 925d021c344SAndy King return err; 926d021c344SAndy King } 927d021c344SAndy King 928d021c344SAndy King static int vsock_getname(struct socket *sock, 9299b2c45d4SDenys Vlasenko struct sockaddr *addr, int peer) 930d021c344SAndy King { 931d021c344SAndy King int err; 
932d021c344SAndy King struct sock *sk; 933d021c344SAndy King struct vsock_sock *vsk; 934d021c344SAndy King struct sockaddr_vm *vm_addr; 935d021c344SAndy King 936d021c344SAndy King sk = sock->sk; 937d021c344SAndy King vsk = vsock_sk(sk); 938d021c344SAndy King err = 0; 939d021c344SAndy King 940d021c344SAndy King lock_sock(sk); 941d021c344SAndy King 942d021c344SAndy King if (peer) { 943d021c344SAndy King if (sock->state != SS_CONNECTED) { 944d021c344SAndy King err = -ENOTCONN; 945d021c344SAndy King goto out; 946d021c344SAndy King } 947d021c344SAndy King vm_addr = &vsk->remote_addr; 948d021c344SAndy King } else { 949d021c344SAndy King vm_addr = &vsk->local_addr; 950d021c344SAndy King } 951d021c344SAndy King 952d021c344SAndy King if (!vm_addr) { 953d021c344SAndy King err = -EINVAL; 954d021c344SAndy King goto out; 955d021c344SAndy King } 956d021c344SAndy King 957d021c344SAndy King /* sys_getsockname() and sys_getpeername() pass us a 958d021c344SAndy King * MAX_SOCK_ADDR-sized buffer and don't set addr_len. Unfortunately 959d021c344SAndy King * that macro is defined in socket.c instead of .h, so we hardcode its 960d021c344SAndy King * value here. 961d021c344SAndy King */ 962d021c344SAndy King BUILD_BUG_ON(sizeof(*vm_addr) > 128); 963d021c344SAndy King memcpy(addr, vm_addr, sizeof(*vm_addr)); 9649b2c45d4SDenys Vlasenko err = sizeof(*vm_addr); 965d021c344SAndy King 966d021c344SAndy King out: 967d021c344SAndy King release_sock(sk); 968d021c344SAndy King return err; 969d021c344SAndy King } 970d021c344SAndy King 971d021c344SAndy King static int vsock_shutdown(struct socket *sock, int mode) 972d021c344SAndy King { 973d021c344SAndy King int err; 974d021c344SAndy King struct sock *sk; 975d021c344SAndy King 976d021c344SAndy King /* User level uses SHUT_RD (0) and SHUT_WR (1), but the kernel uses 977d021c344SAndy King * RCV_SHUTDOWN (1) and SEND_SHUTDOWN (2), so we must increment mode 978d021c344SAndy King * here like the other address families do. 
Note also that the 979d021c344SAndy King * increment makes SHUT_RDWR (2) into RCV_SHUTDOWN | SEND_SHUTDOWN (3), 980d021c344SAndy King * which is what we want. 981d021c344SAndy King */ 982d021c344SAndy King mode++; 983d021c344SAndy King 984d021c344SAndy King if ((mode & ~SHUTDOWN_MASK) || !mode) 985d021c344SAndy King return -EINVAL; 986d021c344SAndy King 9878cb48554SArseny Krasnov /* If this is a connection oriented socket and it is not connected then 9888cb48554SArseny Krasnov * bail out immediately. If it is a DGRAM socket then we must first 9898cb48554SArseny Krasnov * kick the socket so that it wakes up from any sleeping calls, for 9908cb48554SArseny Krasnov * example recv(), and then afterwards return the error. 991d021c344SAndy King */ 992d021c344SAndy King 993d021c344SAndy King sk = sock->sk; 9941c5fae9cSStefano Garzarella 9951c5fae9cSStefano Garzarella lock_sock(sk); 996d021c344SAndy King if (sock->state == SS_UNCONNECTED) { 997d021c344SAndy King err = -ENOTCONN; 998a9e29e55SArseny Krasnov if (sock_type_connectible(sk->sk_type)) 9991c5fae9cSStefano Garzarella goto out; 1000d021c344SAndy King } else { 1001d021c344SAndy King sock->state = SS_DISCONNECTING; 1002d021c344SAndy King err = 0; 1003d021c344SAndy King } 1004d021c344SAndy King 1005d021c344SAndy King /* Receive and send shutdowns are treated alike. 
*/ 1006d021c344SAndy King mode = mode & (RCV_SHUTDOWN | SEND_SHUTDOWN); 1007d021c344SAndy King if (mode) { 1008d021c344SAndy King sk->sk_shutdown |= mode; 1009d021c344SAndy King sk->sk_state_change(sk); 1010d021c344SAndy King 1011a9e29e55SArseny Krasnov if (sock_type_connectible(sk->sk_type)) { 1012d021c344SAndy King sock_reset_flag(sk, SOCK_DONE); 1013d021c344SAndy King vsock_send_shutdown(sk, mode); 1014d021c344SAndy King } 1015d021c344SAndy King } 1016d021c344SAndy King 10171c5fae9cSStefano Garzarella out: 10181c5fae9cSStefano Garzarella release_sock(sk); 1019d021c344SAndy King return err; 1020d021c344SAndy King } 1021d021c344SAndy King 1022a11e1d43SLinus Torvalds static __poll_t vsock_poll(struct file *file, struct socket *sock, 1023a11e1d43SLinus Torvalds poll_table *wait) 1024d021c344SAndy King { 1025a11e1d43SLinus Torvalds struct sock *sk; 1026a11e1d43SLinus Torvalds __poll_t mask; 1027a11e1d43SLinus Torvalds struct vsock_sock *vsk; 1028a11e1d43SLinus Torvalds 1029a11e1d43SLinus Torvalds sk = sock->sk; 1030a11e1d43SLinus Torvalds vsk = vsock_sk(sk); 1031a11e1d43SLinus Torvalds 1032a11e1d43SLinus Torvalds poll_wait(file, sk_sleep(sk), wait); 1033a11e1d43SLinus Torvalds mask = 0; 1034d021c344SAndy King 1035d021c344SAndy King if (sk->sk_err) 1036d021c344SAndy King /* Signify that there has been an error on this socket. */ 1037a9a08845SLinus Torvalds mask |= EPOLLERR; 1038d021c344SAndy King 1039d021c344SAndy King /* INET sockets treat local write shutdown and peer write shutdown as a 1040a9a08845SLinus Torvalds * case of EPOLLHUP set. 
1041d021c344SAndy King */ 1042d021c344SAndy King if ((sk->sk_shutdown == SHUTDOWN_MASK) || 1043d021c344SAndy King ((sk->sk_shutdown & SEND_SHUTDOWN) && 1044d021c344SAndy King (vsk->peer_shutdown & SEND_SHUTDOWN))) { 1045a9a08845SLinus Torvalds mask |= EPOLLHUP; 1046d021c344SAndy King } 1047d021c344SAndy King 1048d021c344SAndy King if (sk->sk_shutdown & RCV_SHUTDOWN || 1049d021c344SAndy King vsk->peer_shutdown & SEND_SHUTDOWN) { 1050a9a08845SLinus Torvalds mask |= EPOLLRDHUP; 1051d021c344SAndy King } 1052d021c344SAndy King 1053d021c344SAndy King if (sock->type == SOCK_DGRAM) { 1054d021c344SAndy King /* For datagram sockets we can read if there is something in 1055d021c344SAndy King * the queue and write as long as the socket isn't shutdown for 1056d021c344SAndy King * sending. 1057d021c344SAndy King */ 10583ef7cf57SEric Dumazet if (!skb_queue_empty_lockless(&sk->sk_receive_queue) || 1059d021c344SAndy King (sk->sk_shutdown & RCV_SHUTDOWN)) { 1060a9a08845SLinus Torvalds mask |= EPOLLIN | EPOLLRDNORM; 1061d021c344SAndy King } 1062d021c344SAndy King 1063d021c344SAndy King if (!(sk->sk_shutdown & SEND_SHUTDOWN)) 1064a9a08845SLinus Torvalds mask |= EPOLLOUT | EPOLLWRNORM | EPOLLWRBAND; 1065d021c344SAndy King 1066a9e29e55SArseny Krasnov } else if (sock_type_connectible(sk->sk_type)) { 1067c518adafSAlexander Popov const struct vsock_transport *transport; 1068c518adafSAlexander Popov 1069d021c344SAndy King lock_sock(sk); 1070d021c344SAndy King 1071c518adafSAlexander Popov transport = vsk->transport; 1072c518adafSAlexander Popov 1073d021c344SAndy King /* Listening sockets that have connections in their accept 1074d021c344SAndy King * queue can be read. 1075d021c344SAndy King */ 10763b4477d2SStefan Hajnoczi if (sk->sk_state == TCP_LISTEN 1077d021c344SAndy King && !vsock_is_accept_queue_empty(sk)) 1078a9a08845SLinus Torvalds mask |= EPOLLIN | EPOLLRDNORM; 1079d021c344SAndy King 1080d021c344SAndy King /* If there is something in the queue then we can read. 
*/ 1081c0cfa2d8SStefano Garzarella if (transport && transport->stream_is_active(vsk) && 1082d021c344SAndy King !(sk->sk_shutdown & RCV_SHUTDOWN)) { 1083d021c344SAndy King bool data_ready_now = false; 1084ee0b3843SArseniy Krasnov int target = sock_rcvlowat(sk, 0, INT_MAX); 1085d021c344SAndy King int ret = transport->notify_poll_in( 1086ee0b3843SArseniy Krasnov vsk, target, &data_ready_now); 1087d021c344SAndy King if (ret < 0) { 1088a9a08845SLinus Torvalds mask |= EPOLLERR; 1089d021c344SAndy King } else { 1090d021c344SAndy King if (data_ready_now) 1091a9a08845SLinus Torvalds mask |= EPOLLIN | EPOLLRDNORM; 1092d021c344SAndy King 1093d021c344SAndy King } 1094d021c344SAndy King } 1095d021c344SAndy King 1096d021c344SAndy King /* Sockets whose connections have been closed, reset, or 1097d021c344SAndy King * terminated should also be considered read, and we check the 1098d021c344SAndy King * shutdown flag for that. 1099d021c344SAndy King */ 1100d021c344SAndy King if (sk->sk_shutdown & RCV_SHUTDOWN || 1101d021c344SAndy King vsk->peer_shutdown & SEND_SHUTDOWN) { 1102a9a08845SLinus Torvalds mask |= EPOLLIN | EPOLLRDNORM; 1103d021c344SAndy King } 1104d021c344SAndy King 1105d021c344SAndy King /* Connected sockets that can produce data can be written. */ 11061980c058SStefano Garzarella if (transport && sk->sk_state == TCP_ESTABLISHED) { 1107d021c344SAndy King if (!(sk->sk_shutdown & SEND_SHUTDOWN)) { 1108d021c344SAndy King bool space_avail_now = false; 1109d021c344SAndy King int ret = transport->notify_poll_out( 1110d021c344SAndy King vsk, 1, &space_avail_now); 1111d021c344SAndy King if (ret < 0) { 1112a9a08845SLinus Torvalds mask |= EPOLLERR; 1113d021c344SAndy King } else { 1114d021c344SAndy King if (space_avail_now) 1115a9a08845SLinus Torvalds /* Remove EPOLLWRBAND since INET 1116d021c344SAndy King * sockets are not setting it. 
1117d021c344SAndy King */ 1118a9a08845SLinus Torvalds mask |= EPOLLOUT | EPOLLWRNORM; 1119d021c344SAndy King 1120d021c344SAndy King } 1121d021c344SAndy King } 1122d021c344SAndy King } 1123d021c344SAndy King 1124d021c344SAndy King /* Simulate INET socket poll behaviors, which sets 1125a9a08845SLinus Torvalds * EPOLLOUT|EPOLLWRNORM when peer is closed and nothing to read, 1126d021c344SAndy King * but local send is not shutdown. 1127d021c344SAndy King */ 1128ba3169fcSStefan Hajnoczi if (sk->sk_state == TCP_CLOSE || sk->sk_state == TCP_CLOSING) { 1129d021c344SAndy King if (!(sk->sk_shutdown & SEND_SHUTDOWN)) 1130a9a08845SLinus Torvalds mask |= EPOLLOUT | EPOLLWRNORM; 1131d021c344SAndy King 1132d021c344SAndy King } 1133d021c344SAndy King 1134d021c344SAndy King release_sock(sk); 1135d021c344SAndy King } 1136d021c344SAndy King 1137d021c344SAndy King return mask; 1138d021c344SAndy King } 1139d021c344SAndy King 1140634f1a71SBobby Eshleman static int vsock_read_skb(struct sock *sk, skb_read_actor_t read_actor) 1141634f1a71SBobby Eshleman { 1142634f1a71SBobby Eshleman struct vsock_sock *vsk = vsock_sk(sk); 1143634f1a71SBobby Eshleman 1144634f1a71SBobby Eshleman return vsk->transport->read_skb(vsk, read_actor); 1145634f1a71SBobby Eshleman } 1146634f1a71SBobby Eshleman 11471b784140SYing Xue static int vsock_dgram_sendmsg(struct socket *sock, struct msghdr *msg, 11481b784140SYing Xue size_t len) 1149d021c344SAndy King { 1150d021c344SAndy King int err; 1151d021c344SAndy King struct sock *sk; 1152d021c344SAndy King struct vsock_sock *vsk; 1153d021c344SAndy King struct sockaddr_vm *remote_addr; 1154fe502c4aSStefano Garzarella const struct vsock_transport *transport; 1155d021c344SAndy King 1156d021c344SAndy King if (msg->msg_flags & MSG_OOB) 1157d021c344SAndy King return -EOPNOTSUPP; 1158d021c344SAndy King 1159d021c344SAndy King /* For now, MSG_DONTWAIT is always assumed... 
*/ 1160d021c344SAndy King err = 0; 1161d021c344SAndy King sk = sock->sk; 1162d021c344SAndy King vsk = vsock_sk(sk); 1163d021c344SAndy King 1164d021c344SAndy King lock_sock(sk); 1165d021c344SAndy King 1166c518adafSAlexander Popov transport = vsk->transport; 1167c518adafSAlexander Popov 1168b3a6dfe8SAsias He err = vsock_auto_bind(vsk); 1169b3a6dfe8SAsias He if (err) 1170d021c344SAndy King goto out; 1171d021c344SAndy King 1172d021c344SAndy King 1173d021c344SAndy King /* If the provided message contains an address, use that. Otherwise 1174d021c344SAndy King * fall back on the socket's remote handle (if it has been connected). 1175d021c344SAndy King */ 1176d021c344SAndy King if (msg->msg_name && 1177d021c344SAndy King vsock_addr_cast(msg->msg_name, msg->msg_namelen, 1178d021c344SAndy King &remote_addr) == 0) { 1179d021c344SAndy King /* Ensure this address is of the right type and is a valid 1180d021c344SAndy King * destination. 1181d021c344SAndy King */ 1182d021c344SAndy King 1183d021c344SAndy King if (remote_addr->svm_cid == VMADDR_CID_ANY) 1184d021c344SAndy King remote_addr->svm_cid = transport->get_local_cid(); 1185d021c344SAndy King 1186d021c344SAndy King if (!vsock_addr_bound(remote_addr)) { 1187d021c344SAndy King err = -EINVAL; 1188d021c344SAndy King goto out; 1189d021c344SAndy King } 1190d021c344SAndy King } else if (sock->state == SS_CONNECTED) { 1191d021c344SAndy King remote_addr = &vsk->remote_addr; 1192d021c344SAndy King 1193d021c344SAndy King if (remote_addr->svm_cid == VMADDR_CID_ANY) 1194d021c344SAndy King remote_addr->svm_cid = transport->get_local_cid(); 1195d021c344SAndy King 1196d021c344SAndy King /* XXX Should connect() or this function ensure remote_addr is 1197d021c344SAndy King * bound? 
1198d021c344SAndy King */ 1199d021c344SAndy King if (!vsock_addr_bound(&vsk->remote_addr)) { 1200d021c344SAndy King err = -EINVAL; 1201d021c344SAndy King goto out; 1202d021c344SAndy King } 1203d021c344SAndy King } else { 1204d021c344SAndy King err = -EINVAL; 1205d021c344SAndy King goto out; 1206d021c344SAndy King } 1207d021c344SAndy King 1208d021c344SAndy King if (!transport->dgram_allow(remote_addr->svm_cid, 1209d021c344SAndy King remote_addr->svm_port)) { 1210d021c344SAndy King err = -EINVAL; 1211d021c344SAndy King goto out; 1212d021c344SAndy King } 1213d021c344SAndy King 12140f7db23aSAl Viro err = transport->dgram_enqueue(vsk, remote_addr, msg, len); 1215d021c344SAndy King 1216d021c344SAndy King out: 1217d021c344SAndy King release_sock(sk); 1218d021c344SAndy King return err; 1219d021c344SAndy King } 1220d021c344SAndy King 1221d021c344SAndy King static int vsock_dgram_connect(struct socket *sock, 1222d021c344SAndy King struct sockaddr *addr, int addr_len, int flags) 1223d021c344SAndy King { 1224d021c344SAndy King int err; 1225d021c344SAndy King struct sock *sk; 1226d021c344SAndy King struct vsock_sock *vsk; 1227d021c344SAndy King struct sockaddr_vm *remote_addr; 1228d021c344SAndy King 1229d021c344SAndy King sk = sock->sk; 1230d021c344SAndy King vsk = vsock_sk(sk); 1231d021c344SAndy King 1232d021c344SAndy King err = vsock_addr_cast(addr, addr_len, &remote_addr); 1233d021c344SAndy King if (err == -EAFNOSUPPORT && remote_addr->svm_family == AF_UNSPEC) { 1234d021c344SAndy King lock_sock(sk); 1235d021c344SAndy King vsock_addr_init(&vsk->remote_addr, VMADDR_CID_ANY, 1236d021c344SAndy King VMADDR_PORT_ANY); 1237d021c344SAndy King sock->state = SS_UNCONNECTED; 1238d021c344SAndy King release_sock(sk); 1239d021c344SAndy King return 0; 1240d021c344SAndy King } else if (err != 0) 1241d021c344SAndy King return -EINVAL; 1242d021c344SAndy King 1243d021c344SAndy King lock_sock(sk); 1244d021c344SAndy King 1245b3a6dfe8SAsias He err = vsock_auto_bind(vsk); 1246b3a6dfe8SAsias He if 
(err) 1247d021c344SAndy King goto out; 1248d021c344SAndy King 1249fe502c4aSStefano Garzarella if (!vsk->transport->dgram_allow(remote_addr->svm_cid, 1250d021c344SAndy King remote_addr->svm_port)) { 1251d021c344SAndy King err = -EINVAL; 1252d021c344SAndy King goto out; 1253d021c344SAndy King } 1254d021c344SAndy King 1255d021c344SAndy King memcpy(&vsk->remote_addr, remote_addr, sizeof(vsk->remote_addr)); 1256d021c344SAndy King sock->state = SS_CONNECTED; 1257d021c344SAndy King 1258634f1a71SBobby Eshleman /* sock map disallows redirection of non-TCP sockets with sk_state != 1259634f1a71SBobby Eshleman * TCP_ESTABLISHED (see sock_map_redirect_allowed()), so we set 1260634f1a71SBobby Eshleman * TCP_ESTABLISHED here to allow redirection of connected vsock dgrams. 1261634f1a71SBobby Eshleman * 1262634f1a71SBobby Eshleman * This doesn't seem to be an abnormal state for datagram sockets, as the 1263634f1a71SBobby Eshleman * same approach can be seen in other datagram socket types as well 1264634f1a71SBobby Eshleman * (such as unix sockets).
1265634f1a71SBobby Eshleman */ 1266634f1a71SBobby Eshleman sk->sk_state = TCP_ESTABLISHED; 1267634f1a71SBobby Eshleman 1268d021c344SAndy King out: 1269d021c344SAndy King release_sock(sk); 1270d021c344SAndy King return err; 1271d021c344SAndy King } 1272d021c344SAndy King 1273634f1a71SBobby Eshleman int vsock_dgram_recvmsg(struct socket *sock, struct msghdr *msg, 12741b784140SYing Xue size_t len, int flags) 1275d021c344SAndy King { 1276634f1a71SBobby Eshleman #ifdef CONFIG_BPF_SYSCALL 1277634f1a71SBobby Eshleman const struct proto *prot; 1278634f1a71SBobby Eshleman #endif 1279634f1a71SBobby Eshleman struct vsock_sock *vsk; 1280634f1a71SBobby Eshleman struct sock *sk; 1281634f1a71SBobby Eshleman 1282634f1a71SBobby Eshleman sk = sock->sk; 1283634f1a71SBobby Eshleman vsk = vsock_sk(sk); 1284634f1a71SBobby Eshleman 1285634f1a71SBobby Eshleman #ifdef CONFIG_BPF_SYSCALL 1286634f1a71SBobby Eshleman prot = READ_ONCE(sk->sk_prot); 1287634f1a71SBobby Eshleman if (prot != &vsock_proto) 1288634f1a71SBobby Eshleman return prot->recvmsg(sk, msg, len, flags, NULL); 1289634f1a71SBobby Eshleman #endif 1290fe502c4aSStefano Garzarella 1291fe502c4aSStefano Garzarella return vsk->transport->dgram_dequeue(vsk, msg, len, flags); 1292d021c344SAndy King } 1293634f1a71SBobby Eshleman EXPORT_SYMBOL_GPL(vsock_dgram_recvmsg); 1294d021c344SAndy King 1295d021c344SAndy King static const struct proto_ops vsock_dgram_ops = { 1296d021c344SAndy King .family = PF_VSOCK, 1297d021c344SAndy King .owner = THIS_MODULE, 1298d021c344SAndy King .release = vsock_release, 1299d021c344SAndy King .bind = vsock_bind, 1300d021c344SAndy King .connect = vsock_dgram_connect, 1301d021c344SAndy King .socketpair = sock_no_socketpair, 1302d021c344SAndy King .accept = sock_no_accept, 1303d021c344SAndy King .getname = vsock_getname, 1304a11e1d43SLinus Torvalds .poll = vsock_poll, 1305d021c344SAndy King .ioctl = sock_no_ioctl, 1306d021c344SAndy King .listen = sock_no_listen, 1307d021c344SAndy King .shutdown = vsock_shutdown, 
1308d021c344SAndy King 	.sendmsg = vsock_dgram_sendmsg,
1309d021c344SAndy King 	.recvmsg = vsock_dgram_recvmsg,
1310d021c344SAndy King 	.mmap = sock_no_mmap,
1311634f1a71SBobby Eshleman 	.read_skb = vsock_read_skb,
1312d021c344SAndy King };
1313d021c344SAndy King 
1314380feae0SPeng Tao static int vsock_transport_cancel_pkt(struct vsock_sock *vsk)
1315380feae0SPeng Tao {
1316fe502c4aSStefano Garzarella 	const struct vsock_transport *transport = vsk->transport;
1317fe502c4aSStefano Garzarella 
13185d1cbcc9SNorbert Slusarek 	if (!transport || !transport->cancel_pkt)
1319380feae0SPeng Tao 		return -EOPNOTSUPP;
1320380feae0SPeng Tao 
1321380feae0SPeng Tao 	return transport->cancel_pkt(vsk);
1322380feae0SPeng Tao }
1323380feae0SPeng Tao 
1324d021c344SAndy King static void vsock_connect_timeout(struct work_struct *work)
1325d021c344SAndy King {
1326d021c344SAndy King 	struct sock *sk;
1327d021c344SAndy King 	struct vsock_sock *vsk;
1328d021c344SAndy King 
1329455f05ecSCong Wang 	vsk = container_of(work, struct vsock_sock, connect_work.work);
1330d021c344SAndy King 	sk = sk_vsock(vsk);
1331d021c344SAndy King 
1332d021c344SAndy King 	lock_sock(sk);
13333b4477d2SStefan Hajnoczi 	if (sk->sk_state == TCP_SYN_SENT &&
1334d021c344SAndy King 	    (sk->sk_shutdown != SHUTDOWN_MASK)) {
13353b4477d2SStefan Hajnoczi 		sk->sk_state = TCP_CLOSE;
1336a3e7b29eSPeilin Ye 		sk->sk_socket->state = SS_UNCONNECTED;
1337d021c344SAndy King 		sk->sk_err = ETIMEDOUT;
1338e3ae2365SAlexander Aring 		sk_error_report(sk);
13393d0bc44dSNorbert Slusarek 		vsock_transport_cancel_pkt(vsk);
1340d021c344SAndy King 	}
1341d021c344SAndy King 	release_sock(sk);
1342d021c344SAndy King 
1343d021c344SAndy King 	sock_put(sk);
1344d021c344SAndy King }
1345d021c344SAndy King 
1346a9e29e55SArseny Krasnov static int vsock_connect(struct socket *sock, struct sockaddr *addr,
1347d021c344SAndy King 			 int addr_len, int flags)
1348d021c344SAndy King {
1349d021c344SAndy King 	int err;
1350d021c344SAndy King 	struct sock *sk;
1351d021c344SAndy King 	struct vsock_sock *vsk;
1352fe502c4aSStefano Garzarella 	const struct vsock_transport *transport;
1353d021c344SAndy King 	struct sockaddr_vm *remote_addr;
1354d021c344SAndy King 	long timeout;
1355d021c344SAndy King 	DEFINE_WAIT(wait);
1356d021c344SAndy King 
1357d021c344SAndy King 	err = 0;
1358d021c344SAndy King 	sk = sock->sk;
1359d021c344SAndy King 	vsk = vsock_sk(sk);
1360d021c344SAndy King 
1361d021c344SAndy King 	lock_sock(sk);
1362d021c344SAndy King 
1363d021c344SAndy King 	/* XXX AF_UNSPEC should make us disconnect like AF_INET. */
1364d021c344SAndy King 	switch (sock->state) {
1365d021c344SAndy King 	case SS_CONNECTED:
1366d021c344SAndy King 		err = -EISCONN;
1367d021c344SAndy King 		goto out;
1368d021c344SAndy King 	case SS_DISCONNECTING:
1369d021c344SAndy King 		err = -EINVAL;
1370d021c344SAndy King 		goto out;
1371d021c344SAndy King 	case SS_CONNECTING:
1372d021c344SAndy King 		/* This continues on so we can move sock into the SS_CONNECTED
1373d021c344SAndy King 		 * state once the connection has completed (at which point err
1374d021c344SAndy King 		 * will be set to zero also). Otherwise, we will either wait
1375d021c344SAndy King 		 * for the connection or return -EALREADY should this be a
1376d021c344SAndy King 		 * non-blocking call.
1377d021c344SAndy King 		 */
1378d021c344SAndy King 		err = -EALREADY;
1379c7cd82b9SEiichi Tsukata 		if (flags & O_NONBLOCK)
1380c7cd82b9SEiichi Tsukata 			goto out;
1381d021c344SAndy King 		break;
1382d021c344SAndy King 	default:
13833b4477d2SStefan Hajnoczi 		if ((sk->sk_state == TCP_LISTEN) ||
1384d021c344SAndy King 		    vsock_addr_cast(addr, addr_len, &remote_addr) != 0) {
1385d021c344SAndy King 			err = -EINVAL;
1386d021c344SAndy King 			goto out;
1387d021c344SAndy King 		}
1388d021c344SAndy King 
1389c0cfa2d8SStefano Garzarella 		/* Set the remote address that we are connecting to. */
1390c0cfa2d8SStefano Garzarella 		memcpy(&vsk->remote_addr, remote_addr,
1391c0cfa2d8SStefano Garzarella 		       sizeof(vsk->remote_addr));
1392c0cfa2d8SStefano Garzarella 
1393c0cfa2d8SStefano Garzarella 		err = vsock_assign_transport(vsk, NULL);
1394c0cfa2d8SStefano Garzarella 		if (err)
1395c0cfa2d8SStefano Garzarella 			goto out;
1396c0cfa2d8SStefano Garzarella 
1397c0cfa2d8SStefano Garzarella 		transport = vsk->transport;
1398c0cfa2d8SStefano Garzarella 
1399d021c344SAndy King 		/* The hypervisor and well-known contexts do not have socket
1400d021c344SAndy King 		 * endpoints.
1401d021c344SAndy King 		 */
1402c0cfa2d8SStefano Garzarella 		if (!transport ||
1403c0cfa2d8SStefano Garzarella 		    !transport->stream_allow(remote_addr->svm_cid,
1404d021c344SAndy King 					     remote_addr->svm_port)) {
1405d021c344SAndy King 			err = -ENETUNREACH;
1406d021c344SAndy King 			goto out;
1407d021c344SAndy King 		}
1408d021c344SAndy King 
1409b3a6dfe8SAsias He 		err = vsock_auto_bind(vsk);
1410b3a6dfe8SAsias He 		if (err)
1411d021c344SAndy King 			goto out;
1412d021c344SAndy King 
14133b4477d2SStefan Hajnoczi 		sk->sk_state = TCP_SYN_SENT;
1414d021c344SAndy King 
1415d021c344SAndy King 		err = transport->connect(vsk);
1416d021c344SAndy King 		if (err < 0)
1417d021c344SAndy King 			goto out;
1418d021c344SAndy King 
1419d021c344SAndy King 		/* Mark sock as connecting and set the error code to in
1420d021c344SAndy King 		 * progress in case this is a non-blocking connect.
1421d021c344SAndy King 		 */
1422d021c344SAndy King 		sock->state = SS_CONNECTING;
1423d021c344SAndy King 		err = -EINPROGRESS;
1424d021c344SAndy King 	}
1425d021c344SAndy King 
1426d021c344SAndy King 	/* The receive path will handle all communication until we are able to
1427d021c344SAndy King 	 * enter the connected state. Here we wait for the connection to be
1428d021c344SAndy King 	 * completed or a notification of an error.
1429d021c344SAndy King 	 */
1430d021c344SAndy King 	timeout = vsk->connect_timeout;
1431d021c344SAndy King 	prepare_to_wait(sk_sleep(sk), &wait, TASK_INTERRUPTIBLE);
1432d021c344SAndy King 
14333b4477d2SStefan Hajnoczi 	while (sk->sk_state != TCP_ESTABLISHED && sk->sk_err == 0) {
1434d021c344SAndy King 		if (flags & O_NONBLOCK) {
1435d021c344SAndy King 			/* If we're not going to block, we schedule a timeout
1436d021c344SAndy King 			 * function to generate a timeout on the connection
1437d021c344SAndy King 			 * attempt, in case the peer doesn't respond in a
1438d021c344SAndy King 			 * timely manner. We hold on to the socket until the
1439d021c344SAndy King 			 * timeout fires.
1440d021c344SAndy King 			 */
1441d021c344SAndy King 			sock_hold(sk);
14427e97cfedSPeilin Ye 
14437e97cfedSPeilin Ye 			/* If the timeout function is already scheduled,
14447e97cfedSPeilin Ye 			 * reschedule it, then ungrab the socket refcount to
14457e97cfedSPeilin Ye 			 * keep it balanced.
14467e97cfedSPeilin Ye 			 */
14477e97cfedSPeilin Ye 			if (mod_delayed_work(system_wq, &vsk->connect_work,
14487e97cfedSPeilin Ye 					     timeout))
14497e97cfedSPeilin Ye 				sock_put(sk);
1450d021c344SAndy King 
1451d021c344SAndy King 			/* Skip ahead to preserve error code set above. */
1452d021c344SAndy King 			goto out_wait;
1453d021c344SAndy King 		}
1454d021c344SAndy King 
1455d021c344SAndy King 		release_sock(sk);
1456d021c344SAndy King 		timeout = schedule_timeout(timeout);
1457d021c344SAndy King 		lock_sock(sk);
1458d021c344SAndy King 
1459d021c344SAndy King 		if (signal_pending(current)) {
1460d021c344SAndy King 			err = sock_intr_errno(timeout);
1461c7ff9cffSLongpeng(Mike) 			sk->sk_state = sk->sk_state == TCP_ESTABLISHED ? TCP_CLOSING : TCP_CLOSE;
1462f7f9b5e7SClaudio Imbrenda 			sock->state = SS_UNCONNECTED;
1463380feae0SPeng Tao 			vsock_transport_cancel_pkt(vsk);
1464b9208492SSeth Forshee 			vsock_remove_connected(vsk);
1465f7f9b5e7SClaudio Imbrenda 			goto out_wait;
14666d4486efSZhuang Shengen 		} else if ((sk->sk_state != TCP_ESTABLISHED) && (timeout == 0)) {
1467d021c344SAndy King 			err = -ETIMEDOUT;
14683b4477d2SStefan Hajnoczi 			sk->sk_state = TCP_CLOSE;
1469f7f9b5e7SClaudio Imbrenda 			sock->state = SS_UNCONNECTED;
1470380feae0SPeng Tao 			vsock_transport_cancel_pkt(vsk);
1471f7f9b5e7SClaudio Imbrenda 			goto out_wait;
1472d021c344SAndy King 		}
1473d021c344SAndy King 
1474d021c344SAndy King 		prepare_to_wait(sk_sleep(sk), &wait, TASK_INTERRUPTIBLE);
1475d021c344SAndy King 	}
1476d021c344SAndy King 
1477d021c344SAndy King 	if (sk->sk_err) {
1478d021c344SAndy King 		err = -sk->sk_err;
14793b4477d2SStefan Hajnoczi 		sk->sk_state = TCP_CLOSE;
1480f7f9b5e7SClaudio Imbrenda 		sock->state = SS_UNCONNECTED;
1481f7f9b5e7SClaudio Imbrenda 	} else {
1482d021c344SAndy King 		err = 0;
1483f7f9b5e7SClaudio Imbrenda 	}
1484d021c344SAndy King 
1485d021c344SAndy King out_wait:
1486d021c344SAndy King 	finish_wait(sk_sleep(sk), &wait);
1487d021c344SAndy King out:
1488d021c344SAndy King 	release_sock(sk);
1489d021c344SAndy King 	return err;
1490d021c344SAndy King }
1491d021c344SAndy King 
1492cdfbabfbSDavid Howells static int vsock_accept(struct socket *sock, struct socket *newsock, int flags,
1493cdfbabfbSDavid Howells 			bool kern)
1494d021c344SAndy King {
1495d021c344SAndy King 	struct sock *listener;
1496d021c344SAndy King 	int err;
1497d021c344SAndy King 	struct sock *connected;
1498d021c344SAndy King 	struct vsock_sock *vconnected;
1499d021c344SAndy King 	long timeout;
1500d021c344SAndy King 	DEFINE_WAIT(wait);
1501d021c344SAndy King 
1502d021c344SAndy King 	err = 0;
1503d021c344SAndy King 	listener = sock->sk;
1504d021c344SAndy King 
1505d021c344SAndy King 	lock_sock(listener);
1506d021c344SAndy King 
1507a9e29e55SArseny Krasnov 	if (!sock_type_connectible(sock->type)) {
1508d021c344SAndy King 		err = -EOPNOTSUPP;
1509d021c344SAndy King 		goto out;
1510d021c344SAndy King 	}
1511d021c344SAndy King 
15123b4477d2SStefan Hajnoczi 	if (listener->sk_state != TCP_LISTEN) {
1513d021c344SAndy King 		err = -EINVAL;
1514d021c344SAndy King 		goto out;
1515d021c344SAndy King 	}
1516d021c344SAndy King 
1517d021c344SAndy King 	/* Wait for child sockets to appear; these are the new sockets
1518d021c344SAndy King 	 * created upon connection establishment.
1519d021c344SAndy King 	 */
15207e0afbdfSStefano Garzarella 	timeout = sock_rcvtimeo(listener, flags & O_NONBLOCK);
1521d021c344SAndy King 	prepare_to_wait(sk_sleep(listener), &wait, TASK_INTERRUPTIBLE);
1522d021c344SAndy King 
1523d021c344SAndy King 	while ((connected = vsock_dequeue_accept(listener)) == NULL &&
1524d021c344SAndy King 	       listener->sk_err == 0) {
1525d021c344SAndy King 		release_sock(listener);
1526d021c344SAndy King 		timeout = schedule_timeout(timeout);
1527f7f9b5e7SClaudio Imbrenda 		finish_wait(sk_sleep(listener), &wait);
1528d021c344SAndy King 		lock_sock(listener);
1529d021c344SAndy King 
1530d021c344SAndy King 		if (signal_pending(current)) {
1531d021c344SAndy King 			err = sock_intr_errno(timeout);
1532f7f9b5e7SClaudio Imbrenda 			goto out;
1533d021c344SAndy King 		} else if (timeout == 0) {
1534d021c344SAndy King 			err = -EAGAIN;
1535f7f9b5e7SClaudio Imbrenda 			goto out;
1536d021c344SAndy King 		}
1537d021c344SAndy King 
1538d021c344SAndy King 		prepare_to_wait(sk_sleep(listener), &wait, TASK_INTERRUPTIBLE);
1539d021c344SAndy King 	}
1540f7f9b5e7SClaudio Imbrenda 	finish_wait(sk_sleep(listener), &wait);
1541d021c344SAndy King 
1542d021c344SAndy King 	if (listener->sk_err)
1543d021c344SAndy King 		err = -listener->sk_err;
1544d021c344SAndy King 
1545d021c344SAndy King 	if (connected) {
15467976a11bSEric Dumazet 		sk_acceptq_removed(listener);
1547d021c344SAndy King 
15484192f672SStefan Hajnoczi 		lock_sock_nested(connected, SINGLE_DEPTH_NESTING);
1549d021c344SAndy King 		vconnected = vsock_sk(connected);
1550d021c344SAndy King 
1551d021c344SAndy King 		/* If the listener socket has received an error, then we should
1552d021c344SAndy King 		 * reject this socket and return. Note that we simply mark the
1553d021c344SAndy King 		 * socket rejected, drop our reference, and let the cleanup
1554d021c344SAndy King 		 * function handle the cleanup; the fact that we found it in
1555d021c344SAndy King 		 * the listener's accept queue guarantees that the cleanup
1556d021c344SAndy King 		 * function hasn't run yet.
1557d021c344SAndy King 		 */
1558d021c344SAndy King 		if (err) {
1559d021c344SAndy King 			vconnected->rejected = true;
1560f7f9b5e7SClaudio Imbrenda 		} else {
1561d021c344SAndy King 			newsock->state = SS_CONNECTED;
1562d021c344SAndy King 			sock_graft(connected, newsock);
1563f7f9b5e7SClaudio Imbrenda 		}
1564f7f9b5e7SClaudio Imbrenda 
1565d021c344SAndy King 		release_sock(connected);
1566d021c344SAndy King 		sock_put(connected);
1567d021c344SAndy King 	}
1568d021c344SAndy King 
1569d021c344SAndy King out:
1570d021c344SAndy King 	release_sock(listener);
1571d021c344SAndy King 	return err;
1572d021c344SAndy King }
1573d021c344SAndy King 
1574d021c344SAndy King static int vsock_listen(struct socket *sock, int backlog)
1575d021c344SAndy King {
1576d021c344SAndy King 	int err;
1577d021c344SAndy King 	struct sock *sk;
1578d021c344SAndy King 	struct vsock_sock *vsk;
1579d021c344SAndy King 
1580d021c344SAndy King 	sk = sock->sk;
1581d021c344SAndy King 
1582d021c344SAndy King 	lock_sock(sk);
1583d021c344SAndy King 
1584a9e29e55SArseny Krasnov 	if (!sock_type_connectible(sk->sk_type)) {
1585d021c344SAndy King 		err = -EOPNOTSUPP;
1586d021c344SAndy King 		goto out;
1587d021c344SAndy King 	}
1588d021c344SAndy King 
1589d021c344SAndy King 	if (sock->state != SS_UNCONNECTED) {
1590d021c344SAndy King 		err = -EINVAL;
1591d021c344SAndy King 		goto out;
1592d021c344SAndy King 	}
1593d021c344SAndy King 
1594d021c344SAndy King 	vsk = vsock_sk(sk);
1595d021c344SAndy King 
1596d021c344SAndy King 	if (!vsock_addr_bound(&vsk->local_addr)) {
1597d021c344SAndy King 		err = -EINVAL;
1598d021c344SAndy King 		goto out;
1599d021c344SAndy King 	}
1600d021c344SAndy King 
1601d021c344SAndy King 	sk->sk_max_ack_backlog = backlog;
16023b4477d2SStefan Hajnoczi 	sk->sk_state = TCP_LISTEN;
1603d021c344SAndy King 
1604d021c344SAndy King 	err = 0;
1605d021c344SAndy King 
1606d021c344SAndy King out:
1607d021c344SAndy King 	release_sock(sk);
1608d021c344SAndy King 	return err;
1609d021c344SAndy King }
1610d021c344SAndy King 
1611b9f2b0ffSStefano Garzarella static void vsock_update_buffer_size(struct vsock_sock *vsk,
1612b9f2b0ffSStefano Garzarella 				     const struct vsock_transport *transport,
1613b9f2b0ffSStefano Garzarella 				     u64 val)
1614b9f2b0ffSStefano Garzarella {
1615b9f2b0ffSStefano Garzarella 	if (val > vsk->buffer_max_size)
1616b9f2b0ffSStefano Garzarella 		val = vsk->buffer_max_size;
1617b9f2b0ffSStefano Garzarella 
1618b9f2b0ffSStefano Garzarella 	if (val < vsk->buffer_min_size)
1619b9f2b0ffSStefano Garzarella 		val = vsk->buffer_min_size;
1620b9f2b0ffSStefano Garzarella 
1621b9f2b0ffSStefano Garzarella 	if (val != vsk->buffer_size &&
1622b9f2b0ffSStefano Garzarella 	    transport && transport->notify_buffer_size)
1623b9f2b0ffSStefano Garzarella 		transport->notify_buffer_size(vsk, &val);
1624b9f2b0ffSStefano Garzarella 
1625b9f2b0ffSStefano Garzarella 	vsk->buffer_size = val;
1626b9f2b0ffSStefano Garzarella }
1627b9f2b0ffSStefano Garzarella 
1628a9e29e55SArseny Krasnov static int vsock_connectible_setsockopt(struct socket *sock,
1629d021c344SAndy King 					int level,
1630d021c344SAndy King 					int optname,
1631a7b75c5aSChristoph Hellwig 					sockptr_t optval,
1632d021c344SAndy King 					unsigned int optlen)
1633d021c344SAndy King {
1634d021c344SAndy King 	int err;
1635d021c344SAndy King 	struct sock *sk;
1636d021c344SAndy King 	struct vsock_sock *vsk;
1637fe502c4aSStefano Garzarella 	const struct vsock_transport *transport;
1638d021c344SAndy King 	u64 val;
1639d021c344SAndy King 
1640d021c344SAndy King 	if (level != AF_VSOCK)
1641d021c344SAndy King 		return -ENOPROTOOPT;
1642d021c344SAndy King 
1643d021c344SAndy King #define COPY_IN(_v)       \
1644d021c344SAndy King 	do {						  \
1645d021c344SAndy King 		if (optlen < sizeof(_v)) {		  \
1646d021c344SAndy King 			err = -EINVAL;			  \
1647d021c344SAndy King 			goto exit;			  \
1648d021c344SAndy King 		}					  \
1649a7b75c5aSChristoph Hellwig 		if (copy_from_sockptr(&_v, optval, sizeof(_v)) != 0) {	\
1650d021c344SAndy King 			err = -EFAULT;					\
1651d021c344SAndy King 			goto exit;					\
1652d021c344SAndy King 		}						\
1653d021c344SAndy King 	} while (0)
1654d021c344SAndy King 
1655d021c344SAndy King 	err = 0;
1656d021c344SAndy King 	sk = sock->sk;
1657d021c344SAndy King 	vsk = vsock_sk(sk);
1658d021c344SAndy King 
1659d021c344SAndy King 	lock_sock(sk);
1660d021c344SAndy King 
1661c518adafSAlexander Popov 	transport = vsk->transport;
1662c518adafSAlexander Popov 
1663d021c344SAndy King 	switch (optname) {
1664d021c344SAndy King 	case SO_VM_SOCKETS_BUFFER_SIZE:
1665d021c344SAndy King 		COPY_IN(val);
1666b9f2b0ffSStefano Garzarella 		vsock_update_buffer_size(vsk, transport, val);
1667d021c344SAndy King 		break;
1668d021c344SAndy King 
1669d021c344SAndy King 	case SO_VM_SOCKETS_BUFFER_MAX_SIZE:
1670d021c344SAndy King 		COPY_IN(val);
1671b9f2b0ffSStefano Garzarella 		vsk->buffer_max_size = val;
1672b9f2b0ffSStefano Garzarella 		vsock_update_buffer_size(vsk, transport, vsk->buffer_size);
1673d021c344SAndy King 		break;
1674d021c344SAndy King 
1675d021c344SAndy King 	case SO_VM_SOCKETS_BUFFER_MIN_SIZE:
1676d021c344SAndy King 		COPY_IN(val);
1677b9f2b0ffSStefano Garzarella 		vsk->buffer_min_size = val;
1678b9f2b0ffSStefano Garzarella 		vsock_update_buffer_size(vsk, transport, vsk->buffer_size);
1679d021c344SAndy King 		break;
1680d021c344SAndy King 
16814c1e34c0SRichard Palethorpe 	case SO_VM_SOCKETS_CONNECT_TIMEOUT_NEW:
16824c1e34c0SRichard Palethorpe 	case SO_VM_SOCKETS_CONNECT_TIMEOUT_OLD: {
16834c1e34c0SRichard Palethorpe 		struct __kernel_sock_timeval tv;
16844c1e34c0SRichard Palethorpe 
16854c1e34c0SRichard Palethorpe 		err = sock_copy_user_timeval(&tv, optval, optlen,
16864c1e34c0SRichard Palethorpe 					     optname == SO_VM_SOCKETS_CONNECT_TIMEOUT_OLD);
16874c1e34c0SRichard Palethorpe 		if (err)
16884c1e34c0SRichard Palethorpe 			break;
1689d021c344SAndy King 		if (tv.tv_sec >= 0 && tv.tv_usec < USEC_PER_SEC &&
1690d021c344SAndy King 		    tv.tv_sec < (MAX_SCHEDULE_TIMEOUT / HZ - 1)) {
1691d021c344SAndy King 			vsk->connect_timeout = tv.tv_sec * HZ +
16924c1e34c0SRichard Palethorpe 				DIV_ROUND_UP((unsigned long)tv.tv_usec, (USEC_PER_SEC / HZ));
1693d021c344SAndy King 			if (vsk->connect_timeout == 0)
1694d021c344SAndy King 				vsk->connect_timeout =
1695d021c344SAndy King 				    VSOCK_DEFAULT_CONNECT_TIMEOUT;
1696d021c344SAndy King 
1697d021c344SAndy King 		} else {
1698d021c344SAndy King 			err = -ERANGE;
1699d021c344SAndy King 		}
1700d021c344SAndy King 		break;
1701d021c344SAndy King 	}
1702d021c344SAndy King 
1703d021c344SAndy King 	default:
1704d021c344SAndy King 		err = -ENOPROTOOPT;
1705d021c344SAndy King 		break;
1706d021c344SAndy King 	}
1707d021c344SAndy King 
1708d021c344SAndy King #undef COPY_IN
1709d021c344SAndy King 
1710d021c344SAndy King exit:
1711d021c344SAndy King 	release_sock(sk);
1712d021c344SAndy King 	return err;
1713d021c344SAndy King }
1714d021c344SAndy King 
1715a9e29e55SArseny Krasnov static int vsock_connectible_getsockopt(struct socket *sock,
1716d021c344SAndy King 					int level, int optname,
1717d021c344SAndy King 					char __user *optval,
1718d021c344SAndy King 					int __user *optlen)
1719d021c344SAndy King {
1720685c3f2fSRichard Palethorpe 	struct sock *sk = sock->sk;
1721685c3f2fSRichard Palethorpe 	struct vsock_sock *vsk = vsock_sk(sk);
1722685c3f2fSRichard Palethorpe 
1723685c3f2fSRichard Palethorpe 	union {
1724685c3f2fSRichard Palethorpe 		u64 val64;
17254c1e34c0SRichard Palethorpe 		struct old_timeval32 tm32;
1726685c3f2fSRichard Palethorpe 		struct __kernel_old_timeval tm;
17274c1e34c0SRichard Palethorpe 		struct __kernel_sock_timeval stm;
1728685c3f2fSRichard Palethorpe 	} v;
1729685c3f2fSRichard Palethorpe 
1730685c3f2fSRichard Palethorpe 	int lv = sizeof(v.val64);
1731d021c344SAndy King 	int len;
1732d021c344SAndy King 
1733d021c344SAndy King 	if (level != AF_VSOCK)
1734d021c344SAndy King 		return -ENOPROTOOPT;
1735d021c344SAndy King 
1736685c3f2fSRichard Palethorpe 	if (get_user(len, optlen))
1737685c3f2fSRichard Palethorpe 		return -EFAULT;
1738d021c344SAndy King 
1739685c3f2fSRichard Palethorpe 	memset(&v, 0, sizeof(v));
1740d021c344SAndy King 
1741d021c344SAndy King 	switch (optname) {
1742d021c344SAndy King 	case SO_VM_SOCKETS_BUFFER_SIZE:
1743685c3f2fSRichard Palethorpe 		v.val64 = vsk->buffer_size;
1744d021c344SAndy King 		break;
1745d021c344SAndy King 
1746d021c344SAndy King 	case SO_VM_SOCKETS_BUFFER_MAX_SIZE:
1747685c3f2fSRichard Palethorpe 		v.val64 = vsk->buffer_max_size;
1748d021c344SAndy King 		break;
1749d021c344SAndy King 
1750d021c344SAndy King 	case SO_VM_SOCKETS_BUFFER_MIN_SIZE:
1751685c3f2fSRichard Palethorpe 		v.val64 = vsk->buffer_min_size;
1752d021c344SAndy King 		break;
1753d021c344SAndy King 
17544c1e34c0SRichard Palethorpe 	case SO_VM_SOCKETS_CONNECT_TIMEOUT_NEW:
17554c1e34c0SRichard Palethorpe 	case SO_VM_SOCKETS_CONNECT_TIMEOUT_OLD:
17564c1e34c0SRichard Palethorpe 		lv = sock_get_timeout(vsk->connect_timeout, &v,
17574c1e34c0SRichard Palethorpe 				      optname == SO_VM_SOCKETS_CONNECT_TIMEOUT_OLD);
1758d021c344SAndy King 		break;
1759685c3f2fSRichard Palethorpe 
1760d021c344SAndy King 	default:
1761d021c344SAndy King 		return -ENOPROTOOPT;
1762d021c344SAndy King 	}
1763d021c344SAndy King 
1764685c3f2fSRichard Palethorpe 	if (len < lv)
1765685c3f2fSRichard Palethorpe 		return -EINVAL;
1766685c3f2fSRichard Palethorpe 	if (len > lv)
1767685c3f2fSRichard Palethorpe 		len = lv;
1768685c3f2fSRichard Palethorpe 	if (copy_to_user(optval, &v, len))
1769d021c344SAndy King 		return -EFAULT;
1770d021c344SAndy King 
1771685c3f2fSRichard Palethorpe 	if (put_user(len, optlen))
1772685c3f2fSRichard Palethorpe 		return -EFAULT;
1773d021c344SAndy King 
1774d021c344SAndy King 	return 0;
1775d021c344SAndy King }
1776d021c344SAndy King 
1777a9e29e55SArseny Krasnov static int vsock_connectible_sendmsg(struct socket *sock, struct msghdr *msg,
17781b784140SYing Xue 				     size_t len)
1779d021c344SAndy King {
1780d021c344SAndy King 	struct sock *sk;
1781d021c344SAndy King 	struct vsock_sock *vsk;
1782fe502c4aSStefano Garzarella 	const struct vsock_transport *transport;
1783d021c344SAndy King 	ssize_t total_written;
1784d021c344SAndy King 	long timeout;
1785d021c344SAndy King 	int err;
1786d021c344SAndy King 	struct vsock_transport_send_notify_data send_data;
1787499fde66SWANG Cong 	DEFINE_WAIT_FUNC(wait, woken_wake_function);
1788d021c344SAndy King 
1789d021c344SAndy King 	sk = sock->sk;
1790d021c344SAndy King 	vsk = vsock_sk(sk);
1791d021c344SAndy King 	total_written = 0;
1792d021c344SAndy King 	err = 0;
1793d021c344SAndy King 
1794d021c344SAndy King 	if (msg->msg_flags & MSG_OOB)
1795d021c344SAndy King 		return -EOPNOTSUPP;
1796d021c344SAndy King 
1797d021c344SAndy King 	lock_sock(sk);
1798d021c344SAndy King 
1799c518adafSAlexander Popov 	transport = vsk->transport;
1800c518adafSAlexander Popov 
18018cb48554SArseny Krasnov 	/* Callers should not provide a destination with connection oriented
18028cb48554SArseny Krasnov 	 * sockets.
18038cb48554SArseny Krasnov 	 */
1804d021c344SAndy King 	if (msg->msg_namelen) {
18053b4477d2SStefan Hajnoczi 		err = sk->sk_state == TCP_ESTABLISHED ? -EISCONN : -EOPNOTSUPP;
1806d021c344SAndy King 		goto out;
1807d021c344SAndy King 	}
1808d021c344SAndy King 
1809d021c344SAndy King 	/* Send data only if neither side has shut down in this direction. */
1810d021c344SAndy King 	if (sk->sk_shutdown & SEND_SHUTDOWN ||
1811d021c344SAndy King 	    vsk->peer_shutdown & RCV_SHUTDOWN) {
1812d021c344SAndy King 		err = -EPIPE;
1813d021c344SAndy King 		goto out;
1814d021c344SAndy King 	}
1815d021c344SAndy King 
1816c0cfa2d8SStefano Garzarella 	if (!transport || sk->sk_state != TCP_ESTABLISHED ||
1817d021c344SAndy King 	    !vsock_addr_bound(&vsk->local_addr)) {
1818d021c344SAndy King 		err = -ENOTCONN;
1819d021c344SAndy King 		goto out;
1820d021c344SAndy King 	}
1821d021c344SAndy King 
1822d021c344SAndy King 	if (!vsock_addr_bound(&vsk->remote_addr)) {
1823d021c344SAndy King 		err = -EDESTADDRREQ;
1824d021c344SAndy King 		goto out;
1825d021c344SAndy King 	}
1826d021c344SAndy King 
1827d021c344SAndy King 	/* Wait for room in the produce queue to enqueue our user's data. */
1828d021c344SAndy King 	timeout = sock_sndtimeo(sk, msg->msg_flags & MSG_DONTWAIT);
1829d021c344SAndy King 
1830d021c344SAndy King 	err = transport->notify_send_init(vsk, &send_data);
1831d021c344SAndy King 	if (err < 0)
1832d021c344SAndy King 		goto out;
1833d021c344SAndy King 
1834d021c344SAndy King 	while (total_written < len) {
1835d021c344SAndy King 		ssize_t written;
1836d021c344SAndy King 
1837499fde66SWANG Cong 		add_wait_queue(sk_sleep(sk), &wait);
1838d021c344SAndy King 		while (vsock_stream_has_space(vsk) == 0 &&
1839d021c344SAndy King 		       sk->sk_err == 0 &&
1840d021c344SAndy King 		       !(sk->sk_shutdown & SEND_SHUTDOWN) &&
1841d021c344SAndy King 		       !(vsk->peer_shutdown & RCV_SHUTDOWN)) {
1842d021c344SAndy King 
1843d021c344SAndy King 			/* Don't wait for non-blocking sockets. */
1844d021c344SAndy King 			if (timeout == 0) {
1845d021c344SAndy King 				err = -EAGAIN;
1846499fde66SWANG Cong 				remove_wait_queue(sk_sleep(sk), &wait);
1847f7f9b5e7SClaudio Imbrenda 				goto out_err;
1848d021c344SAndy King 			}
1849d021c344SAndy King 
1850d021c344SAndy King 			err = transport->notify_send_pre_block(vsk, &send_data);
1851f7f9b5e7SClaudio Imbrenda 			if (err < 0) {
1852499fde66SWANG Cong 				remove_wait_queue(sk_sleep(sk), &wait);
1853f7f9b5e7SClaudio Imbrenda 				goto out_err;
1854f7f9b5e7SClaudio Imbrenda 			}
1855d021c344SAndy King 
1856d021c344SAndy King 			release_sock(sk);
1857499fde66SWANG Cong 			timeout = wait_woken(&wait, TASK_INTERRUPTIBLE, timeout);
1858d021c344SAndy King 			lock_sock(sk);
1859d021c344SAndy King 			if (signal_pending(current)) {
1860d021c344SAndy King 				err = sock_intr_errno(timeout);
1861499fde66SWANG Cong 				remove_wait_queue(sk_sleep(sk), &wait);
1862f7f9b5e7SClaudio Imbrenda 				goto out_err;
1863d021c344SAndy King 			} else if (timeout == 0) {
1864d021c344SAndy King 				err = -EAGAIN;
1865499fde66SWANG Cong 				remove_wait_queue(sk_sleep(sk), &wait);
1866f7f9b5e7SClaudio Imbrenda 				goto out_err;
1867d021c344SAndy King 			}
1868d021c344SAndy King 		}
1869499fde66SWANG Cong 		remove_wait_queue(sk_sleep(sk), &wait);
1870d021c344SAndy King 
1871d021c344SAndy King 		/* These checks occur both as part of and after the loop
1872d021c344SAndy King 		 * conditional since we need to check before and after
1873d021c344SAndy King 		 * sleeping.
1874d021c344SAndy King 		 */
1875d021c344SAndy King 		if (sk->sk_err) {
1876d021c344SAndy King 			err = -sk->sk_err;
1877f7f9b5e7SClaudio Imbrenda 			goto out_err;
1878d021c344SAndy King 		} else if ((sk->sk_shutdown & SEND_SHUTDOWN) ||
1879d021c344SAndy King 			   (vsk->peer_shutdown & RCV_SHUTDOWN)) {
1880d021c344SAndy King 			err = -EPIPE;
1881f7f9b5e7SClaudio Imbrenda 			goto out_err;
1882d021c344SAndy King 		}
1883d021c344SAndy King 
1884d021c344SAndy King 		err = transport->notify_send_pre_enqueue(vsk, &send_data);
1885d021c344SAndy King 		if (err < 0)
1886f7f9b5e7SClaudio Imbrenda 			goto out_err;
1887d021c344SAndy King 
1888d021c344SAndy King 		/* Note that enqueue will only write as many bytes as are free
1889d021c344SAndy King 		 * in the produce queue, so we don't need to ensure len is
1890d021c344SAndy King 		 * smaller than the queue size. It is the caller's
1891d021c344SAndy King 		 * responsibility to check how many bytes we were able to send.
1892d021c344SAndy King 		 */
1893d021c344SAndy King 
1894fbe70c48SArseny Krasnov 		if (sk->sk_type == SOCK_SEQPACKET) {
1895fbe70c48SArseny Krasnov 			written = transport->seqpacket_enqueue(vsk,
1896fbe70c48SArseny Krasnov 						msg, len - total_written);
1897fbe70c48SArseny Krasnov 		} else {
1898fbe70c48SArseny Krasnov 			written = transport->stream_enqueue(vsk,
1899fbe70c48SArseny Krasnov 					msg, len - total_written);
1900fbe70c48SArseny Krasnov 		}
1901c43170b7SBobby Eshleman 
1902d021c344SAndy King 		if (written < 0) {
1903c43170b7SBobby Eshleman 			err = written;
1904f7f9b5e7SClaudio Imbrenda 			goto out_err;
1905d021c344SAndy King 		}
1906d021c344SAndy King 
1907d021c344SAndy King 		total_written += written;
1908d021c344SAndy King 
1909d021c344SAndy King 		err = transport->notify_send_post_enqueue(
1910d021c344SAndy King 				vsk, written, &send_data);
1911d021c344SAndy King 		if (err < 0)
1912f7f9b5e7SClaudio Imbrenda 			goto out_err;
1913d021c344SAndy King 
1914d021c344SAndy King 	}
1915d021c344SAndy King 
1916f7f9b5e7SClaudio Imbrenda out_err:
1917fbe70c48SArseny Krasnov 	if (total_written > 0) {
1918fbe70c48SArseny Krasnov 		/* Return number of written bytes only if:
1919fbe70c48SArseny Krasnov 		 * 1) SOCK_STREAM socket.
1920fbe70c48SArseny Krasnov 		 * 2) SOCK_SEQPACKET socket when whole buffer is sent.
1921fbe70c48SArseny Krasnov 		 */
1922fbe70c48SArseny Krasnov 		if (sk->sk_type == SOCK_STREAM || total_written == len)
1923d021c344SAndy King 			err = total_written;
1924fbe70c48SArseny Krasnov 	}
1925d021c344SAndy King out:
1926d021c344SAndy King 	release_sock(sk);
1927d021c344SAndy King 	return err;
1928d021c344SAndy King }
1929d021c344SAndy King 
19300de5b2e6SStefano Garzarella static int vsock_connectible_wait_data(struct sock *sk,
19310de5b2e6SStefano Garzarella 				       struct wait_queue_entry *wait,
1932b3f7fd54SArseny Krasnov 				       long timeout,
1933b3f7fd54SArseny Krasnov 				       struct vsock_transport_recv_notify_data *recv_data,
1934b3f7fd54SArseny Krasnov 				       size_t target)
1935b3f7fd54SArseny Krasnov {
1936b3f7fd54SArseny Krasnov 	const struct vsock_transport *transport;
1937b3f7fd54SArseny Krasnov 	struct vsock_sock *vsk;
1938b3f7fd54SArseny Krasnov 	s64 data;
1939b3f7fd54SArseny Krasnov 	int err;
1940b3f7fd54SArseny Krasnov 
1941b3f7fd54SArseny Krasnov 	vsk = vsock_sk(sk);
1942b3f7fd54SArseny Krasnov 	err = 0;
1943b3f7fd54SArseny Krasnov 	transport = vsk->transport;
1944b3f7fd54SArseny Krasnov 
1945466a8533SDexuan Cui 	while (1) {
1946b3f7fd54SArseny Krasnov 		prepare_to_wait(sk_sleep(sk), wait, TASK_INTERRUPTIBLE);
1947466a8533SDexuan Cui 		data = vsock_connectible_has_data(vsk);
1948466a8533SDexuan Cui 		if (data != 0)
1949466a8533SDexuan Cui 			break;
1950b3f7fd54SArseny Krasnov 
1951b3f7fd54SArseny Krasnov 		if (sk->sk_err != 0 ||
1952b3f7fd54SArseny Krasnov 		    (sk->sk_shutdown & RCV_SHUTDOWN) ||
1953b3f7fd54SArseny Krasnov 		    (vsk->peer_shutdown & SEND_SHUTDOWN)) {
1954b3f7fd54SArseny Krasnov 			break;
1955b3f7fd54SArseny Krasnov 		}
1956b3f7fd54SArseny Krasnov 
1957b3f7fd54SArseny Krasnov 		/* Don't wait for non-blocking sockets. */
1958b3f7fd54SArseny Krasnov 		if (timeout == 0) {
1959b3f7fd54SArseny Krasnov 			err = -EAGAIN;
1960b3f7fd54SArseny Krasnov 			break;
1961b3f7fd54SArseny Krasnov 		}
1962b3f7fd54SArseny Krasnov 
1963b3f7fd54SArseny Krasnov 		if (recv_data) {
1964b3f7fd54SArseny Krasnov 			err = transport->notify_recv_pre_block(vsk, target, recv_data);
1965b3f7fd54SArseny Krasnov 			if (err < 0)
1966b3f7fd54SArseny Krasnov 				break;
1967b3f7fd54SArseny Krasnov 		}
1968b3f7fd54SArseny Krasnov 
1969b3f7fd54SArseny Krasnov 		release_sock(sk);
1970b3f7fd54SArseny Krasnov 		timeout = schedule_timeout(timeout);
1971b3f7fd54SArseny Krasnov 		lock_sock(sk);
1972b3f7fd54SArseny Krasnov 
1973b3f7fd54SArseny Krasnov 		if (signal_pending(current)) {
1974b3f7fd54SArseny Krasnov 			err = sock_intr_errno(timeout);
1975b3f7fd54SArseny Krasnov 			break;
1976b3f7fd54SArseny Krasnov 		} else if (timeout == 0) {
1977b3f7fd54SArseny Krasnov 			err = -EAGAIN;
1978b3f7fd54SArseny Krasnov 			break;
1979b3f7fd54SArseny Krasnov 		}
1980b3f7fd54SArseny Krasnov 	}
1981b3f7fd54SArseny Krasnov 
1982b3f7fd54SArseny Krasnov 	finish_wait(sk_sleep(sk), wait);
1983b3f7fd54SArseny Krasnov 
1984b3f7fd54SArseny Krasnov 	if (err)
1985b3f7fd54SArseny Krasnov 		return err;
1986b3f7fd54SArseny Krasnov 
1987b3f7fd54SArseny Krasnov 	/* Internal transport error when checking for available
1988b3f7fd54SArseny Krasnov 	 * data. XXX This should be changed to a connection
1989b3f7fd54SArseny Krasnov 	 * reset in a later change.
1990b3f7fd54SArseny Krasnov 	 */
1991b3f7fd54SArseny Krasnov 	if (data < 0)
1992b3f7fd54SArseny Krasnov 		return -ENOMEM;
1993b3f7fd54SArseny Krasnov 
1994b3f7fd54SArseny Krasnov 	return data;
1995b3f7fd54SArseny Krasnov }
1996b3f7fd54SArseny Krasnov 
199719c1b90eSArseny Krasnov static int __vsock_stream_recvmsg(struct sock *sk, struct msghdr *msg,
199819c1b90eSArseny Krasnov 				  size_t len, int flags)
1999d021c344SAndy King {
2000d021c344SAndy King 	struct vsock_transport_recv_notify_data recv_data;
200119c1b90eSArseny Krasnov 	const struct vsock_transport *transport;
200219c1b90eSArseny Krasnov 	struct vsock_sock *vsk;
200319c1b90eSArseny Krasnov 	ssize_t copied;
200419c1b90eSArseny Krasnov 	size_t target;
200519c1b90eSArseny Krasnov 	long timeout;
200619c1b90eSArseny Krasnov 	int err;
2007d021c344SAndy King 
2008d021c344SAndy King 	DEFINE_WAIT(wait);
2009d021c344SAndy King 
2010d021c344SAndy King 	vsk = vsock_sk(sk);
2011c518adafSAlexander Popov 	transport = vsk->transport;
2012c518adafSAlexander Popov 
2013d021c344SAndy King 	/* We must not copy less than target bytes into the user's buffer
2014d021c344SAndy King 	 * before returning successfully, so we wait for the consume queue to
2015d021c344SAndy King 	 * have that much data to consume before dequeueing. Note that this
2016d021c344SAndy King 	 * makes it impossible to handle cases where target is greater than the
2017d021c344SAndy King 	 * queue size.
2018d021c344SAndy King 	 */
2019d021c344SAndy King 	target = sock_rcvlowat(sk, flags & MSG_WAITALL, len);
2020d021c344SAndy King 	if (target >= transport->stream_rcvhiwat(vsk)) {
2021d021c344SAndy King 		err = -ENOMEM;
2022d021c344SAndy King 		goto out;
2023d021c344SAndy King 	}
2024d021c344SAndy King 	timeout = sock_rcvtimeo(sk, flags & MSG_DONTWAIT);
2025d021c344SAndy King 	copied = 0;
2026d021c344SAndy King 
2027d021c344SAndy King 	err = transport->notify_recv_init(vsk, target, &recv_data);
2028d021c344SAndy King 	if (err < 0)
2029d021c344SAndy King 		goto out;
2030d021c344SAndy King 
2031d021c344SAndy King 
2032d021c344SAndy King 	while (1) {
2033f7f9b5e7SClaudio Imbrenda 		ssize_t read;
2034f7f9b5e7SClaudio Imbrenda 
20350de5b2e6SStefano Garzarella 		err = vsock_connectible_wait_data(sk, &wait, timeout,
20360de5b2e6SStefano Garzarella 						  &recv_data, target);
2037b3f7fd54SArseny Krasnov 		if (err <= 0)
2038b3f7fd54SArseny Krasnov 			break;
2039d021c344SAndy King 
2040b3f7fd54SArseny Krasnov 		err = transport->notify_recv_pre_dequeue(vsk, target,
2041b3f7fd54SArseny Krasnov 							 &recv_data);
2042d021c344SAndy King 		if (err < 0)
2043d021c344SAndy King 			break;
2044d021c344SAndy King 
2045b3f7fd54SArseny Krasnov 		read = transport->stream_dequeue(vsk, msg, len - copied, flags);
2046d021c344SAndy King 		if (read < 0) {
204702ab696fSArseniy Krasnov 			err = read;
2048d021c344SAndy King 			break;
2049d021c344SAndy King 		}
2050d021c344SAndy King 
2051d021c344SAndy King 		copied += read;
2052d021c344SAndy King 
2053b3f7fd54SArseny Krasnov 		err = transport->notify_recv_post_dequeue(vsk, target, read,
2054d021c344SAndy King 						!(flags & MSG_PEEK), &recv_data);
2055d021c344SAndy King 		if (err < 0)
2056f7f9b5e7SClaudio Imbrenda 			goto out;
2057d021c344SAndy King 
2058d021c344SAndy King 		if (read >= target || flags & MSG_PEEK)
2059d021c344SAndy King 			break;
2060d021c344SAndy King 
2061d021c344SAndy King 		target -= read;
2062d021c344SAndy King 	}
2063d021c344SAndy King 
2064d021c344SAndy King 	if (sk->sk_err)
2065d021c344SAndy King 		err = -sk->sk_err;
2066d021c344SAndy King 	else if (sk->sk_shutdown & RCV_SHUTDOWN)
2067d021c344SAndy King 		err = 0;
2068d021c344SAndy King 
2069dedc58e0SIan Campbell 	if (copied > 0)
2070d021c344SAndy King 		err = copied;
2071d021c344SAndy King 
2072d021c344SAndy King out:
207319c1b90eSArseny Krasnov 	return err;
207419c1b90eSArseny Krasnov }
207519c1b90eSArseny Krasnov 
20769942c192SArseny Krasnov static int __vsock_seqpacket_recvmsg(struct sock *sk, struct msghdr *msg,
20779942c192SArseny Krasnov 				     size_t len, int flags)
20789942c192SArseny Krasnov {
20799942c192SArseny Krasnov 	const struct vsock_transport *transport;
20809942c192SArseny Krasnov 	struct vsock_sock *vsk;
20818fc92b7cSArseny Krasnov 	ssize_t msg_len;
20829942c192SArseny Krasnov 	long timeout;
20839942c192SArseny Krasnov 	int err = 0;
20849942c192SArseny Krasnov 	DEFINE_WAIT(wait);
20859942c192SArseny Krasnov 
20869942c192SArseny Krasnov 	vsk = vsock_sk(sk);
20879942c192SArseny Krasnov 	transport = vsk->transport;
20889942c192SArseny Krasnov 
20899942c192SArseny Krasnov 	timeout = sock_rcvtimeo(sk, flags & MSG_DONTWAIT);
20909942c192SArseny Krasnov 
20910de5b2e6SStefano Garzarella 	err = vsock_connectible_wait_data(sk, &wait, timeout, NULL, 0);
20929942c192SArseny Krasnov 	if (err <= 0)
20939942c192SArseny Krasnov 		goto out;
20949942c192SArseny Krasnov 
20958fc92b7cSArseny Krasnov 	msg_len = transport->seqpacket_dequeue(vsk, msg, flags);
20969942c192SArseny Krasnov 
20978fc92b7cSArseny Krasnov 	if (msg_len < 0) {
209802ab696fSArseniy Krasnov 		err = msg_len;
20999942c192SArseny Krasnov 		goto out;
21009942c192SArseny Krasnov 	}
21019942c192SArseny Krasnov 
21029942c192SArseny Krasnov 	if (sk->sk_err) {
21039942c192SArseny Krasnov 		err = -sk->sk_err;
21049942c192SArseny Krasnov 	} else if (sk->sk_shutdown & RCV_SHUTDOWN) {
21059942c192SArseny Krasnov 		err = 0;
21069942c192SArseny Krasnov 	} else {
21079942c192SArseny Krasnov 		/* User sets MSG_TRUNC, so return real length of
21089942c192SArseny Krasnov 		 * packet.
21099942c192SArseny Krasnov */ 21109942c192SArseny Krasnov if (flags & MSG_TRUNC) 21118fc92b7cSArseny Krasnov err = msg_len; 21129942c192SArseny Krasnov else 21139942c192SArseny Krasnov err = len - msg_data_left(msg); 21149942c192SArseny Krasnov 21159942c192SArseny Krasnov /* Always set MSG_TRUNC if real length of packet is 21169942c192SArseny Krasnov * bigger than user's buffer. 21179942c192SArseny Krasnov */ 21188fc92b7cSArseny Krasnov if (msg_len > len) 21199942c192SArseny Krasnov msg->msg_flags |= MSG_TRUNC; 21209942c192SArseny Krasnov } 21219942c192SArseny Krasnov 21229942c192SArseny Krasnov out: 21239942c192SArseny Krasnov return err; 21249942c192SArseny Krasnov } 21259942c192SArseny Krasnov 2126634f1a71SBobby Eshleman int 212719c1b90eSArseny Krasnov vsock_connectible_recvmsg(struct socket *sock, struct msghdr *msg, size_t len, 212819c1b90eSArseny Krasnov int flags) 212919c1b90eSArseny Krasnov { 213019c1b90eSArseny Krasnov struct sock *sk; 213119c1b90eSArseny Krasnov struct vsock_sock *vsk; 213219c1b90eSArseny Krasnov const struct vsock_transport *transport; 2133634f1a71SBobby Eshleman #ifdef CONFIG_BPF_SYSCALL 2134634f1a71SBobby Eshleman const struct proto *prot; 2135634f1a71SBobby Eshleman #endif 213619c1b90eSArseny Krasnov int err; 213719c1b90eSArseny Krasnov 213819c1b90eSArseny Krasnov sk = sock->sk; 2139*d55a40a6SArseniy Krasnov 2140*d55a40a6SArseniy Krasnov if (unlikely(flags & MSG_ERRQUEUE)) 2141*d55a40a6SArseniy Krasnov return sock_recv_errqueue(sk, msg, len, SOL_VSOCK, VSOCK_RECVERR); 2142*d55a40a6SArseniy Krasnov 214319c1b90eSArseny Krasnov vsk = vsock_sk(sk); 214419c1b90eSArseny Krasnov err = 0; 214519c1b90eSArseny Krasnov 214619c1b90eSArseny Krasnov lock_sock(sk); 214719c1b90eSArseny Krasnov 214819c1b90eSArseny Krasnov transport = vsk->transport; 214919c1b90eSArseny Krasnov 215019c1b90eSArseny Krasnov if (!transport || sk->sk_state != TCP_ESTABLISHED) { 215119c1b90eSArseny Krasnov /* Recvmsg is supposed to return 0 if a peer performs an 
215219c1b90eSArseny Krasnov * orderly shutdown. Differentiate between that case and when a 215319c1b90eSArseny Krasnov * peer has not connected or a local shutdown occurred with the 215419c1b90eSArseny Krasnov * SOCK_DONE flag. 215519c1b90eSArseny Krasnov */ 215619c1b90eSArseny Krasnov if (sock_flag(sk, SOCK_DONE)) 215719c1b90eSArseny Krasnov err = 0; 215819c1b90eSArseny Krasnov else 215919c1b90eSArseny Krasnov err = -ENOTCONN; 216019c1b90eSArseny Krasnov 216119c1b90eSArseny Krasnov goto out; 216219c1b90eSArseny Krasnov } 216319c1b90eSArseny Krasnov 216419c1b90eSArseny Krasnov if (flags & MSG_OOB) { 216519c1b90eSArseny Krasnov err = -EOPNOTSUPP; 216619c1b90eSArseny Krasnov goto out; 216719c1b90eSArseny Krasnov } 216819c1b90eSArseny Krasnov 216919c1b90eSArseny Krasnov /* We don't check peer_shutdown flag here since peer may actually shut 217019c1b90eSArseny Krasnov * down, but there can be data in the queue that a local socket can 217119c1b90eSArseny Krasnov * receive. 217219c1b90eSArseny Krasnov */ 217319c1b90eSArseny Krasnov if (sk->sk_shutdown & RCV_SHUTDOWN) { 217419c1b90eSArseny Krasnov err = 0; 217519c1b90eSArseny Krasnov goto out; 217619c1b90eSArseny Krasnov } 217719c1b90eSArseny Krasnov 217819c1b90eSArseny Krasnov /* It is valid on Linux to pass in a zero-length receive buffer. This 217919c1b90eSArseny Krasnov * is not an error. We may as well bail out now. 
218019c1b90eSArseny Krasnov */ 218119c1b90eSArseny Krasnov if (!len) { 218219c1b90eSArseny Krasnov err = 0; 218319c1b90eSArseny Krasnov goto out; 218419c1b90eSArseny Krasnov } 218519c1b90eSArseny Krasnov 2186634f1a71SBobby Eshleman #ifdef CONFIG_BPF_SYSCALL 2187634f1a71SBobby Eshleman prot = READ_ONCE(sk->sk_prot); 2188634f1a71SBobby Eshleman if (prot != &vsock_proto) { 2189634f1a71SBobby Eshleman release_sock(sk); 2190634f1a71SBobby Eshleman return prot->recvmsg(sk, msg, len, flags, NULL); 2191634f1a71SBobby Eshleman } 2192634f1a71SBobby Eshleman #endif 2193634f1a71SBobby Eshleman 21949942c192SArseny Krasnov if (sk->sk_type == SOCK_STREAM) 219519c1b90eSArseny Krasnov err = __vsock_stream_recvmsg(sk, msg, len, flags); 21969942c192SArseny Krasnov else 21979942c192SArseny Krasnov err = __vsock_seqpacket_recvmsg(sk, msg, len, flags); 219819c1b90eSArseny Krasnov 219919c1b90eSArseny Krasnov out: 2200d021c344SAndy King release_sock(sk); 2201d021c344SAndy King return err; 2202d021c344SAndy King } 2203634f1a71SBobby Eshleman EXPORT_SYMBOL_GPL(vsock_connectible_recvmsg); 2204d021c344SAndy King 2205e38f22c8SArseniy Krasnov static int vsock_set_rcvlowat(struct sock *sk, int val) 2206e38f22c8SArseniy Krasnov { 2207e38f22c8SArseniy Krasnov const struct vsock_transport *transport; 2208e38f22c8SArseniy Krasnov struct vsock_sock *vsk; 2209e38f22c8SArseniy Krasnov 2210e38f22c8SArseniy Krasnov vsk = vsock_sk(sk); 2211e38f22c8SArseniy Krasnov 2212e38f22c8SArseniy Krasnov if (val > vsk->buffer_size) 2213e38f22c8SArseniy Krasnov return -EINVAL; 2214e38f22c8SArseniy Krasnov 2215e38f22c8SArseniy Krasnov transport = vsk->transport; 2216e38f22c8SArseniy Krasnov 2217e38f22c8SArseniy Krasnov if (transport && transport->set_rcvlowat) 2218e38f22c8SArseniy Krasnov return transport->set_rcvlowat(vsk, val); 2219e38f22c8SArseniy Krasnov 2220e38f22c8SArseniy Krasnov WRITE_ONCE(sk->sk_rcvlowat, val ? 
: 1); 2221e38f22c8SArseniy Krasnov return 0; 2222e38f22c8SArseniy Krasnov } 2223e38f22c8SArseniy Krasnov 2224d021c344SAndy King static const struct proto_ops vsock_stream_ops = { 2225d021c344SAndy King .family = PF_VSOCK, 2226d021c344SAndy King .owner = THIS_MODULE, 2227d021c344SAndy King .release = vsock_release, 2228d021c344SAndy King .bind = vsock_bind, 2229a9e29e55SArseny Krasnov .connect = vsock_connect, 2230d021c344SAndy King .socketpair = sock_no_socketpair, 2231d021c344SAndy King .accept = vsock_accept, 2232d021c344SAndy King .getname = vsock_getname, 2233a11e1d43SLinus Torvalds .poll = vsock_poll, 2234d021c344SAndy King .ioctl = sock_no_ioctl, 2235d021c344SAndy King .listen = vsock_listen, 2236d021c344SAndy King .shutdown = vsock_shutdown, 2237a9e29e55SArseny Krasnov .setsockopt = vsock_connectible_setsockopt, 2238a9e29e55SArseny Krasnov .getsockopt = vsock_connectible_getsockopt, 2239a9e29e55SArseny Krasnov .sendmsg = vsock_connectible_sendmsg, 2240a9e29e55SArseny Krasnov .recvmsg = vsock_connectible_recvmsg, 2241d021c344SAndy King .mmap = sock_no_mmap, 2242e38f22c8SArseniy Krasnov .set_rcvlowat = vsock_set_rcvlowat, 2243634f1a71SBobby Eshleman .read_skb = vsock_read_skb, 2244d021c344SAndy King }; 2245d021c344SAndy King 22460798e78bSArseny Krasnov static const struct proto_ops vsock_seqpacket_ops = { 22470798e78bSArseny Krasnov .family = PF_VSOCK, 22480798e78bSArseny Krasnov .owner = THIS_MODULE, 22490798e78bSArseny Krasnov .release = vsock_release, 22500798e78bSArseny Krasnov .bind = vsock_bind, 22510798e78bSArseny Krasnov .connect = vsock_connect, 22520798e78bSArseny Krasnov .socketpair = sock_no_socketpair, 22530798e78bSArseny Krasnov .accept = vsock_accept, 22540798e78bSArseny Krasnov .getname = vsock_getname, 22550798e78bSArseny Krasnov .poll = vsock_poll, 22560798e78bSArseny Krasnov .ioctl = sock_no_ioctl, 22570798e78bSArseny Krasnov .listen = vsock_listen, 22580798e78bSArseny Krasnov .shutdown = vsock_shutdown, 22590798e78bSArseny Krasnov 
.setsockopt = vsock_connectible_setsockopt, 22600798e78bSArseny Krasnov .getsockopt = vsock_connectible_getsockopt, 22610798e78bSArseny Krasnov .sendmsg = vsock_connectible_sendmsg, 22620798e78bSArseny Krasnov .recvmsg = vsock_connectible_recvmsg, 22630798e78bSArseny Krasnov .mmap = sock_no_mmap, 2264634f1a71SBobby Eshleman .read_skb = vsock_read_skb, 22650798e78bSArseny Krasnov }; 22660798e78bSArseny Krasnov 2267d021c344SAndy King static int vsock_create(struct net *net, struct socket *sock, 2268d021c344SAndy King int protocol, int kern) 2269d021c344SAndy King { 2270c0cfa2d8SStefano Garzarella struct vsock_sock *vsk; 227155f3e149SStefano Garzarella struct sock *sk; 2272c0cfa2d8SStefano Garzarella int ret; 227355f3e149SStefano Garzarella 2274d021c344SAndy King if (!sock) 2275d021c344SAndy King return -EINVAL; 2276d021c344SAndy King 22776cf1c5fcSAndy King if (protocol && protocol != PF_VSOCK) 2278d021c344SAndy King return -EPROTONOSUPPORT; 2279d021c344SAndy King 2280d021c344SAndy King switch (sock->type) { 2281d021c344SAndy King case SOCK_DGRAM: 2282d021c344SAndy King sock->ops = &vsock_dgram_ops; 2283d021c344SAndy King break; 2284d021c344SAndy King case SOCK_STREAM: 2285d021c344SAndy King sock->ops = &vsock_stream_ops; 2286d021c344SAndy King break; 22870798e78bSArseny Krasnov case SOCK_SEQPACKET: 22880798e78bSArseny Krasnov sock->ops = &vsock_seqpacket_ops; 22890798e78bSArseny Krasnov break; 2290d021c344SAndy King default: 2291d021c344SAndy King return -ESOCKTNOSUPPORT; 2292d021c344SAndy King } 2293d021c344SAndy King 2294d021c344SAndy King sock->state = SS_UNCONNECTED; 2295d021c344SAndy King 229655f3e149SStefano Garzarella sk = __vsock_create(net, sock, NULL, GFP_KERNEL, 0, kern); 229755f3e149SStefano Garzarella if (!sk) 229855f3e149SStefano Garzarella return -ENOMEM; 229955f3e149SStefano Garzarella 2300c0cfa2d8SStefano Garzarella vsk = vsock_sk(sk); 2301c0cfa2d8SStefano Garzarella 2302c0cfa2d8SStefano Garzarella if (sock->type == SOCK_DGRAM) { 2303c0cfa2d8SStefano 
Garzarella ret = vsock_assign_transport(vsk, NULL); 2304c0cfa2d8SStefano Garzarella if (ret < 0) { 2305c0cfa2d8SStefano Garzarella sock_put(sk); 2306c0cfa2d8SStefano Garzarella return ret; 2307c0cfa2d8SStefano Garzarella } 2308c0cfa2d8SStefano Garzarella } 2309c0cfa2d8SStefano Garzarella 2310c0cfa2d8SStefano Garzarella vsock_insert_unbound(vsk); 231155f3e149SStefano Garzarella 231255f3e149SStefano Garzarella return 0; 2313d021c344SAndy King } 2314d021c344SAndy King 2315d021c344SAndy King static const struct net_proto_family vsock_family_ops = { 2316d021c344SAndy King .family = AF_VSOCK, 2317d021c344SAndy King .create = vsock_create, 2318d021c344SAndy King .owner = THIS_MODULE, 2319d021c344SAndy King }; 2320d021c344SAndy King 2321d021c344SAndy King static long vsock_dev_do_ioctl(struct file *filp, 2322d021c344SAndy King unsigned int cmd, void __user *ptr) 2323d021c344SAndy King { 2324d021c344SAndy King u32 __user *p = ptr; 2325c0cfa2d8SStefano Garzarella u32 cid = VMADDR_CID_ANY; 2326d021c344SAndy King int retval = 0; 2327d021c344SAndy King 2328d021c344SAndy King switch (cmd) { 2329d021c344SAndy King case IOCTL_VM_SOCKETS_GET_LOCAL_CID: 2330c0cfa2d8SStefano Garzarella /* To be compatible with the VMCI behavior, we prioritize the 2331c0cfa2d8SStefano Garzarella * guest CID instead of well-know host CID (VMADDR_CID_HOST). 
2332c0cfa2d8SStefano Garzarella */ 2333c0cfa2d8SStefano Garzarella if (transport_g2h) 2334c0cfa2d8SStefano Garzarella cid = transport_g2h->get_local_cid(); 2335c0cfa2d8SStefano Garzarella else if (transport_h2g) 2336c0cfa2d8SStefano Garzarella cid = transport_h2g->get_local_cid(); 2337c0cfa2d8SStefano Garzarella 2338c0cfa2d8SStefano Garzarella if (put_user(cid, p) != 0) 2339d021c344SAndy King retval = -EFAULT; 2340d021c344SAndy King break; 2341d021c344SAndy King 2342d021c344SAndy King default: 2343c3e448cdSColin Ian King retval = -ENOIOCTLCMD; 2344d021c344SAndy King } 2345d021c344SAndy King 2346d021c344SAndy King return retval; 2347d021c344SAndy King } 2348d021c344SAndy King 2349d021c344SAndy King static long vsock_dev_ioctl(struct file *filp, 2350d021c344SAndy King unsigned int cmd, unsigned long arg) 2351d021c344SAndy King { 2352d021c344SAndy King return vsock_dev_do_ioctl(filp, cmd, (void __user *)arg); 2353d021c344SAndy King } 2354d021c344SAndy King 2355d021c344SAndy King #ifdef CONFIG_COMPAT 2356d021c344SAndy King static long vsock_dev_compat_ioctl(struct file *filp, 2357d021c344SAndy King unsigned int cmd, unsigned long arg) 2358d021c344SAndy King { 2359d021c344SAndy King return vsock_dev_do_ioctl(filp, cmd, compat_ptr(arg)); 2360d021c344SAndy King } 2361d021c344SAndy King #endif 2362d021c344SAndy King 2363d021c344SAndy King static const struct file_operations vsock_device_ops = { 2364d021c344SAndy King .owner = THIS_MODULE, 2365d021c344SAndy King .unlocked_ioctl = vsock_dev_ioctl, 2366d021c344SAndy King #ifdef CONFIG_COMPAT 2367d021c344SAndy King .compat_ioctl = vsock_dev_compat_ioctl, 2368d021c344SAndy King #endif 2369d021c344SAndy King .open = nonseekable_open, 2370d021c344SAndy King }; 2371d021c344SAndy King 2372d021c344SAndy King static struct miscdevice vsock_device = { 2373d021c344SAndy King .name = "vsock", 2374d021c344SAndy King .fops = &vsock_device_ops, 2375d021c344SAndy King }; 2376d021c344SAndy King 2377c0cfa2d8SStefano Garzarella static int 
__init vsock_init(void) 2378d021c344SAndy King { 2379c0cfa2d8SStefano Garzarella int err = 0; 23802c4a336eSAndy King 2381c0cfa2d8SStefano Garzarella vsock_init_tables(); 23822c4a336eSAndy King 2383c0cfa2d8SStefano Garzarella vsock_proto.owner = THIS_MODULE; 23846ad0b2f7SAsias He vsock_device.minor = MISC_DYNAMIC_MINOR; 2385d021c344SAndy King err = misc_register(&vsock_device); 2386d021c344SAndy King if (err) { 2387d021c344SAndy King pr_err("Failed to register misc device\n"); 2388f6a835bbSGao feng goto err_reset_transport; 2389d021c344SAndy King } 2390d021c344SAndy King 2391d021c344SAndy King err = proto_register(&vsock_proto, 1); /* we want our slab */ 2392d021c344SAndy King if (err) { 2393d021c344SAndy King pr_err("Cannot register vsock protocol\n"); 2394f6a835bbSGao feng goto err_deregister_misc; 2395d021c344SAndy King } 2396d021c344SAndy King 2397d021c344SAndy King err = sock_register(&vsock_family_ops); 2398d021c344SAndy King if (err) { 2399d021c344SAndy King pr_err("could not register af_vsock (%d) address family: %d\n", 2400d021c344SAndy King AF_VSOCK, err); 2401d021c344SAndy King goto err_unregister_proto; 2402d021c344SAndy King } 2403d021c344SAndy King 2404634f1a71SBobby Eshleman vsock_bpf_build_proto(); 2405634f1a71SBobby Eshleman 2406d021c344SAndy King return 0; 2407d021c344SAndy King 2408d021c344SAndy King err_unregister_proto: 2409d021c344SAndy King proto_unregister(&vsock_proto); 2410f6a835bbSGao feng err_deregister_misc: 2411d021c344SAndy King misc_deregister(&vsock_device); 2412f6a835bbSGao feng err_reset_transport: 2413d021c344SAndy King return err; 2414d021c344SAndy King } 2415d021c344SAndy King 2416c0cfa2d8SStefano Garzarella static void __exit vsock_exit(void) 2417d021c344SAndy King { 2418d021c344SAndy King misc_deregister(&vsock_device); 2419d021c344SAndy King sock_unregister(AF_VSOCK); 2420d021c344SAndy King proto_unregister(&vsock_proto); 2421d021c344SAndy King } 2422d021c344SAndy King 2423daabfbcaSStefano Garzarella const struct 
vsock_transport *vsock_core_get_transport(struct vsock_sock *vsk) 24240b01aeb3SStefan Hajnoczi { 2425daabfbcaSStefano Garzarella return vsk->transport; 24260b01aeb3SStefan Hajnoczi } 24270b01aeb3SStefan Hajnoczi EXPORT_SYMBOL_GPL(vsock_core_get_transport); 24280b01aeb3SStefan Hajnoczi 2429c0cfa2d8SStefano Garzarella int vsock_core_register(const struct vsock_transport *t, int features) 243005e489b1SStefan Hajnoczi { 24310e121905SStefano Garzarella const struct vsock_transport *t_h2g, *t_g2h, *t_dgram, *t_local; 2432c0cfa2d8SStefano Garzarella int err = mutex_lock_interruptible(&vsock_register_mutex); 2433c0cfa2d8SStefano Garzarella 2434c0cfa2d8SStefano Garzarella if (err) 2435c0cfa2d8SStefano Garzarella return err; 2436c0cfa2d8SStefano Garzarella 2437c0cfa2d8SStefano Garzarella t_h2g = transport_h2g; 2438c0cfa2d8SStefano Garzarella t_g2h = transport_g2h; 2439c0cfa2d8SStefano Garzarella t_dgram = transport_dgram; 24400e121905SStefano Garzarella t_local = transport_local; 2441c0cfa2d8SStefano Garzarella 2442c0cfa2d8SStefano Garzarella if (features & VSOCK_TRANSPORT_F_H2G) { 2443c0cfa2d8SStefano Garzarella if (t_h2g) { 2444c0cfa2d8SStefano Garzarella err = -EBUSY; 2445c0cfa2d8SStefano Garzarella goto err_busy; 2446c0cfa2d8SStefano Garzarella } 2447c0cfa2d8SStefano Garzarella t_h2g = t; 244805e489b1SStefan Hajnoczi } 244905e489b1SStefan Hajnoczi 2450c0cfa2d8SStefano Garzarella if (features & VSOCK_TRANSPORT_F_G2H) { 2451c0cfa2d8SStefano Garzarella if (t_g2h) { 2452c0cfa2d8SStefano Garzarella err = -EBUSY; 2453c0cfa2d8SStefano Garzarella goto err_busy; 2454c0cfa2d8SStefano Garzarella } 2455c0cfa2d8SStefano Garzarella t_g2h = t; 2456c0cfa2d8SStefano Garzarella } 2457c0cfa2d8SStefano Garzarella 2458c0cfa2d8SStefano Garzarella if (features & VSOCK_TRANSPORT_F_DGRAM) { 2459c0cfa2d8SStefano Garzarella if (t_dgram) { 2460c0cfa2d8SStefano Garzarella err = -EBUSY; 2461c0cfa2d8SStefano Garzarella goto err_busy; 2462c0cfa2d8SStefano Garzarella } 2463c0cfa2d8SStefano Garzarella 
t_dgram = t; 2464c0cfa2d8SStefano Garzarella } 2465c0cfa2d8SStefano Garzarella 24660e121905SStefano Garzarella if (features & VSOCK_TRANSPORT_F_LOCAL) { 24670e121905SStefano Garzarella if (t_local) { 24680e121905SStefano Garzarella err = -EBUSY; 24690e121905SStefano Garzarella goto err_busy; 24700e121905SStefano Garzarella } 24710e121905SStefano Garzarella t_local = t; 24720e121905SStefano Garzarella } 24730e121905SStefano Garzarella 2474c0cfa2d8SStefano Garzarella transport_h2g = t_h2g; 2475c0cfa2d8SStefano Garzarella transport_g2h = t_g2h; 2476c0cfa2d8SStefano Garzarella transport_dgram = t_dgram; 24770e121905SStefano Garzarella transport_local = t_local; 2478c0cfa2d8SStefano Garzarella 2479c0cfa2d8SStefano Garzarella err_busy: 2480c0cfa2d8SStefano Garzarella mutex_unlock(&vsock_register_mutex); 2481c0cfa2d8SStefano Garzarella return err; 2482c0cfa2d8SStefano Garzarella } 2483c0cfa2d8SStefano Garzarella EXPORT_SYMBOL_GPL(vsock_core_register); 2484c0cfa2d8SStefano Garzarella 2485c0cfa2d8SStefano Garzarella void vsock_core_unregister(const struct vsock_transport *t) 2486c0cfa2d8SStefano Garzarella { 2487c0cfa2d8SStefano Garzarella mutex_lock(&vsock_register_mutex); 2488c0cfa2d8SStefano Garzarella 2489c0cfa2d8SStefano Garzarella if (transport_h2g == t) 2490c0cfa2d8SStefano Garzarella transport_h2g = NULL; 2491c0cfa2d8SStefano Garzarella 2492c0cfa2d8SStefano Garzarella if (transport_g2h == t) 2493c0cfa2d8SStefano Garzarella transport_g2h = NULL; 2494c0cfa2d8SStefano Garzarella 2495c0cfa2d8SStefano Garzarella if (transport_dgram == t) 2496c0cfa2d8SStefano Garzarella transport_dgram = NULL; 2497c0cfa2d8SStefano Garzarella 24980e121905SStefano Garzarella if (transport_local == t) 24990e121905SStefano Garzarella transport_local = NULL; 25000e121905SStefano Garzarella 2501c0cfa2d8SStefano Garzarella mutex_unlock(&vsock_register_mutex); 2502c0cfa2d8SStefano Garzarella } 2503c0cfa2d8SStefano Garzarella EXPORT_SYMBOL_GPL(vsock_core_unregister); 2504c0cfa2d8SStefano 
Garzarella 2505c0cfa2d8SStefano Garzarella module_init(vsock_init); 250605e489b1SStefan Hajnoczi module_exit(vsock_exit); 2507c1eef220SCong Wang 2508d021c344SAndy King MODULE_AUTHOR("VMware, Inc."); 2509d021c344SAndy King MODULE_DESCRIPTION("VMware Virtual Socket Family"); 25101190cfdbSJorgen Hansen MODULE_VERSION("1.0.2.0-k"); 2511d021c344SAndy King MODULE_LICENSE("GPL v2"); 2512