UDP-WG Implementation
network.h
1#pragma once
2
3#include <poll.h> // For the poll function for timeouts.
4#include <map> // For our address -> fd map
5
6#include "wireguard.h" // For WireGuard
7
8using namespace shared;
9
23namespace network {
24
31 class queue {
32 private:
33
34 // It shouldn't be surprising that the Network Thread gets privileged
35 // access here. What sometimes happens is that a signal is received,
36 // say the user hitting Ctrl+C, while the Main Thread is listening
37 // for packets; the Main Thread is then yanked out of the queue function,
38 // leaving the mutex locked, which can lock up the entire application as
39 // both the Main and Network Threads try to flush the in/out queues.
40 //
41 // The solution? Let the Network Thread bypass the mutex, but ONLY
42 // on shutdown. Since we're already shutting down, a double-write/read
43 // isn't an issue when the alternative is a deadlock that hangs
44 // the program on abort.
45 friend void thread(port_t, wireguard::config);
46
47 // The mutex to mediate access.
48 std::mutex lock;
49
50 // A queue of packets. Why isn't this an actual queue? It's so our
51 // pop function doesn't need to mangle the structure to iterate through it. It
52 // was originally a std::queue, but pop required adding invalid packets to a
53 // separate queue, since there was no iteration support.
54 std::vector<udp::packet> packets;
55
56 // Through a lot of trial and error, I've found that the Network
57 // Thread can sometimes deadlock at the out stage while trying to
58 // determine whether the queue is empty, if that queue is simultaneously
59 // being polled for packets. The solution is to simply make empty/size
60 // non-blocking; to do this in a thread-safe manner, we make the length
61 // an atomic value and update it ourselves.
62 std::atomic<size_t> length = 0;
63
64 public:
65
66 // Initialize the objects.
67 queue() = default;
68
69
75 void enqueue(const udp::packet& in) {
76 std::lock_guard<std::mutex> guard(lock);
77 packets.emplace_back(in);
78 length = packets.size();
79 }
80
81
89 udp::packet pop(const tag type = NONE, const size_t& iterations = -1) {
91
92 // We continuously monitor the queue until we find a packet of the correct type.
93 for (size_t x = 0; x < iterations; ++x) {
94
95 // Lock the queue
96 lock.lock();
97
98 // Search.
99 for (auto iter = packets.begin(); iter != packets.end(); ++iter) {
100 if (type == NONE || iter->data()[0] == type) {
101 auto ret = *iter;
102 packets.erase(iter);
103 length = packets.size();
104 lock.unlock();
105 return ret;
106 }
107 }
108
109 // Unlock and sleep.
110 lock.unlock();
111 if (x != iterations -1) shared::sleep();
112 }
113 throw std::runtime_error("Timeout!");
114 }
115
116
123 udp::packet pop(const std::vector<uint8_t>& types, const size_t& iterations = -1) {
124 for (size_t x = 0; x < iterations; ++x) {
125 for (const auto& type : types) {
126 try {return pop(type, 1);}
127 catch (std::runtime_error&) {}
128 }
129 if (x != iterations - 1) shared::sleep(); // Don't busy-wait between rounds.
130 }
131 throw std::runtime_error("Timeout!");
132 }
133
134
139 bool empty() {return length == 0;}
140
141
146 size_t size() {return length;}
147
153 void flush() {
154 std::lock_guard<std::mutex> guard(lock);
155 packets.clear();
156 }
157 };
158
159
160 // Define the global in/out queue for the Network Thread.
161 queue in, out;
162
163
169 void thread(port_t port, wireguard::config wg = {}) {
170
171 /*
172 * Let me try my best to explain everything. Firstly, the network thread is
173 * broken up into lambda functions which handle essential operations,
174 * such as sending a packet across a FD, attaching to sockets,
175 * and brokering new connections. There were two main design
176 * philosophies:
177 * 1. It should be impossible to communicate across
178 * the network without doing it through the Network Thread.
179 * The network thread holds a monopoly on FDs, which
180 * means the main thread cannot actually talk to the peers;
181 * it can only send packets to the network thread and let
182 * the thread handle it. This significantly reduces
183 * the complexity needed in sending packets for the Main Thread.
184 * 2. The thread should be the only privileged component regarding
185 * UDP packets. The thread is the only part of the code that
186 * is friends with the packet, and is thus the only one that
187 * can modify the source information to something other than
188 * the host (For WireGuard).
189 * These decisions have made the networking of this application really
190 * nice to use, just give a destination and data and let the Network
191 * Thread handle it.
192 *
193 * Another thing that needs to be addressed is the two modes that a
194 * network thread can be in (Yes, there can be more than one). At the
195 * beginning of program execution, the main thread creates the
196 * special network_thread variable, which is distinct because it does
197 * not take a WireGuard configuration. This thread is the fundamental
198 * thread and will tear down the application if it fails, and
199 * communicates its status directly through stat. It also doesn't
200 * deal with any of the WireGuard communication, as it exists to
201 * send standard UDP, or broker a WireGuard connection.
202 *
203 * When a user wants to set up a WireGuard connection, the Main Network
204 * Thread will pass the initial packets across, at least until the
205 * server, or responder, spawns a new network thread that serves as
206 * the client's endpoint. What does this mean in practical terms?
207 * 1. A new WireGuard thread is spawned for each WireGuard connection
208 * on the machine where it is the server.
209 * 2. The WireGuard Thread binds to a random socket which acts as the endpoint
210 * for the client. Any packets sent to this port will be encrypted
211 * and sent to the client. Any packets the client sends to the
212 * endpoint will be decrypted, and the plaintext UDP will be sent
213 * to the intended target with the source spoofed back to the server.
214 * 3. The WireGuard thread is ephemeral. While the Network Thread can
215 * be communicated with via the global in/out queues, the WireGuard thread
216 * has its own internal queue that does nothing more than route packets to and from the client.
217 * 4. The WireGuard thread cannot affect the Main/Network threads. It is prohibited
218 * from modifying the stat, and any errors simply teardown the thread,
219 * leaving the rest of the program unchanged.
220 *
221 * To make it more clear, the documentation henceforth will refer to the Main Network
222 * Thread run on startup as The Network Thread, whereas all threads for WireGuard will
223 * be collectively called The WireGuard (WG) Threads.
224 */
225
226 /******************************************************************************
227 * ESSENTIALS *
228 *-----------------------------------------------------------------------------*
229 * This section contains all the things that need to be done immediately upon *
230 * running the thread, such as setting the stat for the Network Thread. *
231 * It also includes variable definitions for things that need to be captured *
232 * by the lambdas, and so must be defined before them. *
233 *******************************************************************************/
234
235 // The Main Thread waits until the Network Thread is finished initializing before doing anything.
236 if (!wg.on) shared::stat = INIT;
237
238 // For verbose output.
239 const std::string TNAME = wg.on ? "WIREGUARD" : "NETWORK";
240
241 // The WG thread doesn't do anything more than send packets
242 // to their intended target, so it uses a local queue.
243 // In this scenario the flow of packets is:
244 // Client (ENC) -> WG Thread's IN -> DECRYPT -> WG Thread's OUT -> Recipient.
245 // Peer -> WG Thread's IN -> ENCRYPT -> WG Thread's OUT -> Client (ENC).
246 //
247 // This is as opposed to the Network Thread, which is:
248 // Main Thread -> Network Thread's OUT -> Peer
249 // Peer -> Network Thread's IN -> Main Thread.
250 queue local_in, local_out;
251 queue& thread_in = wg.on ? local_in : in;
252 queue& thread_out = wg.on ? local_out: out;
253
254 // This keeps track of all active connections. The Main Thread
255 // only has access to a destination, so when it sends us a packet
256 // we look up whether that connection has an existing FD to send to.
257 std::map<con_t, fd_t> fds;
258
259 // The socket that we listen for new connections on.
260 int sock = -1;
261
262 // Store the last second we tried re-keying.
263 uint64_t last_second = 0;
264
265 // The thread needs to communicate using its real port.
266 auto thread_self = self;
267
268 // We send heartbeat packets on each iteration
269 // to ensure the connection is still up.
270 // This is just a packet with no destination.
271 auto heartbeat = udp::packet({}, "");
272
273
274 /******************************************************************************
275 * LAMBDA FUNCTIONS *
276 *----------------------------------------------------------------------------*
277 * This section contains all the helper utilities that we use within the *
278 * core loop. We use lambdas here not only so we don't clutter the namespace *
279 * with functions, but also to prevent sensitive functions, like those *
280 * related to FD manipulation, from being accessed outside the threads. *
281 ******************************************************************************/
282
283
290 auto send_fd = [&TNAME, &wg, &fds](const udp::packet& p, const fd_t& fd) {
291
292 // Create a buffer string of the UDP packet.
293 auto buffer = p.buffer();
294
295 // We are interested in OUT.
296 struct pollfd f;
297 f.fd = fd;
298 f.events = POLLOUT;
299 switch (poll(&f, 1, 100)) {
300 case -1: output("Send Error (poll)", TNAME, ERROR); break;
301 case 0:
302 close(fd);
303 fds.erase(p.destination().num);
304 break;
305 default:
306 if (send(fd, buffer.c_str(), buffer.length(), MSG_NOSIGNAL) < 1)
307 throw std::runtime_error("Failed to send packet.");
308 }
309 };
310
311
318 auto open_socket = [&TNAME, &wg](const port_t& p) {
319
320 // Create the socket.
321 output("Setting up Listening Socket...", TNAME, INFO);
322 auto sock = socket(AF_INET, SOCK_STREAM, 0);
323 if (sock == -1) {
324 output("Failed to initialize socket!", TNAME, ERROR);
325 if (!wg.on) shared::stat = TERMINATE;
326 return -1;
327 }
328 sockaddr_in server = {
329 .sin_family = AF_INET,
330 .sin_port = htons(p),
331 .sin_addr = {.s_addr = INADDR_ANY}
332 };
333
334 // Set a timeout, and allow reuse of the address (this way, a crash
335 // does not require us to wait the WAIT_TIME before the address is
336 // available to bind again).
337 struct timeval timeout = {.tv_usec = 100};
338 int yes = 1;
339 setsockopt(sock, SOL_SOCKET, SO_RCVTIMEO, &timeout, sizeof(timeout));
340 setsockopt(sock, SOL_SOCKET, SO_REUSEADDR, &yes, sizeof(yes));
341
342 // Bind.
343 if (bind(sock, (struct sockaddr*)&server, sizeof(server)) == -1) {
344 output("Failed to bind to socket!", TNAME, ERROR);
345 if (!wg.on) shared::stat = TERMINATE;
346 return -1;
347 }
348
349 // Return the FD.
350 return sock;
351 };
352
353
364 auto establish_connection = [&TNAME, &send_fd, &wg, &thread_self](const connection& dest) {
365 output("New destination! Establishing connection...", TNAME, INFO);
366
367 // Make the socket we'll communicate over.
368 auto fd = socket(AF_INET, SOCK_STREAM, 0);
369 if (fd == -1) {
370 output("Failed to create socket to destination!", TNAME, WARN);
371 return -1;
372 }
373 sockaddr_in connection = {
374 .sin_family = AF_INET,
375 .sin_port = htons(dest.pair.p),
376 .sin_addr = {.s_addr = dest.pair.a},
377 };
378
379 // Try and connect to them.
380 if (connect(fd, (struct sockaddr*)&connection, sizeof(connection)) == -1) {
381 output("Failed to connect to destination!", TNAME, WARN);
382 close(fd); return -1; // Close the socket so the FD doesn't leak.
383 }
384
385 // The FD should never block.
386 int flags = fcntl(fd, F_GETFL, 0);
387 fcntl(fd, F_SETFL, flags | O_NONBLOCK);
388
389 // Once we have a connection, send a packet to give them our IP information.
390 // This way, both of us can associate IP+Port to FD.
391 try {
392 send_fd({thread_self, dest, "Hello!"}, fd);
393 return fd;
394 }
395 catch (std::runtime_error& c) {return -1;}
396 };
397
398
405 auto cleanup = [&sock, &fds, &thread_in, &thread_out, &wg, &TNAME]() {
406 output("Shutting down...", TNAME, INFO);
407 close(sock);
408 for (auto f = fds.begin(); f != fds.end(); ++f) close(f->second);
409
410 // The Network Thread has the authority to directly manipulate
411 // the queues if we're on cleanup. This prevents a SIGNAL from
412 // interrupting a thread that is waiting, causing a deadlock.
413 // The WG thread is the sole owner of its queues, so this
414 // doesn't cause any concern, but it allows the Network Thread
415 // to free up the Main Thread if it's waiting when we abort.
416 thread_in.packets.clear(); thread_out.packets.clear();
417 thread_in.lock.unlock(); thread_out.lock.unlock();
418
419 output("Goodbye!", TNAME, INFO);
420 if (!wg.on) shared::stat = TERMINATE;
421 };
422
423
432 auto wait_for = [&thread_in](const fd_t& fd, const tag& tag) {
433 udp::packet response;
434 while (true) {
435 try {
436 response = udp::packet(fd);
437 if (response.data()[0] == tag) break;
438
439 // Invalid packets are enqueued for later processing.
440 else thread_in.enqueue(response);
441 }
442 catch (std::runtime_error&) {}
443 }
444 return response;
445 };
446
447
448 /******************************************************************************
449 * INIT *
450 *----------------------------------------------------------------------------*
451 * The core initialization. We need to bind to our listening socket, and for *
452 * the WireGuard Threads, ensure we have an FD for the client by sending and *
453 * receiving a message. *
454 ******************************************************************************/
455
456 // Hello!
457 output("Hello :)", TNAME, INFO);
458
459 // First, create our listening socket. This is used between network
460 // threads to establish an FD to communicate.
461 // The WG Threads don't specify a port, so we randomly assign one.
462 if (port == 0) {
463 std::uniform_int_distribution<std::mt19937::result_type> port_dist(2000,9000);
464 do {
465 port = port_dist(rng);
466 sock = open_socket(port);
467 } while (sock == -1);
468 }
469 else sock = open_socket(port);
470 if (sock == -1) {
471 output("Failed to open socket!", TNAME, ERROR);
472 cleanup(); return;
473 }
474 thread_self.pair.p = port;
475 int client_fd = -1;
476
477 // We can now listen for new connections.
478 output("Listening!", TNAME, INFO);
479 if (!wg.on) shared::stat = READY;
480
481
482 /*
483 * The WG Threads need to send a packet to the Client, not just so
484 * the Client learns the endpoint (although it is kinda fun to ping
485 * yourself by sending a packet to it), but more so because we need
486 * the WG Thread to assign the client an FD. This lets it tear down
487 * if the Client ever fails a heartbeat, and helps majorly with
488 * re-keying.
489 */
490 if (wg.on) {
491
492 // Establish the connection.
493 output("Sending location to client", TNAME, INFO);
494 client_fd = establish_connection(wg.src);
495 if (client_fd == -1) {cleanup(); return;} // Can't reach the client.
496 // Send the packet across the FD.
497 sys_packet info;
498 info.source = thread_self;
499 auto packet = udp::packet(wg.src, info, sizeof(sys_packet));
500 send_fd(packet, client_fd);
501
502 // Put the client in the database.
503 fds[wg.src.num] = client_fd;
504 }
505
506 /******************************************************************************
507 * MAIN LOOP *
508 *----------------------------------------------------------------------------*
509 * This is the core loop of the threads, where they will do most of *
510 * their work. We only terminate if the global stat tells us to, which *
511 * can only be changed by the Main Thread (Such as if the user exits), or the *
512 * Network Thread (Such as if it cannot bind to its listening socket). There *
513 * are 5 main steps that the thread performs on each iteration, going in *
514 * order: *
515 * 1. Sleep. There are no semaphores or other such threading objects for *
516 * waking the thread up when it needs to do something. We instead sleep *
517 * for 100 milliseconds for each iteration. Why at the start? So the re-key *
518 * stage works better. *
519 * 2. Re-Key: This is exclusive to the WG Threads, but if a connection has *
520 * either reached the rekey limit for messages sent or duration of connection *
521 * The WG Thread will initiate a key exchange to generate new Transport Keys, *
522 * identifiers, and reset the send/recv counters. It will also reset the *
523 * timestamp. The Reference stipulates that when the rekey limit is reached, *
524 * the server should just politely inform them every TIMEOUT, and only *
525 * refuse to communicate once the REJECT limit has been reached (See 6.1 and *
526 * 6.2). We abide by these constraints, and while the initial handshake *
527 * requires explicit confirmation, re-keys are done transparently without *
528 * confirmation. If you haven't noticed, rekeying changes the mode on the *
529 * home screen, so you can tell when a rekeying takes place. *
530 * 3. Listen: The thread checks its listening socket to see if any new *
531 * requests were made. These requests are made from other threads on remote *
532 * instances. While some aspects of the network, such as establishing a *
533 * WireGuard connection, require user consent, the threads lack the ability *
534 * to ask explicitly, and generate the FD behind the scenes. This is nice, *
535 * because if the user sends a packet to a destination that the thread *
536 * doesn't know, the endpoint will be resolved without hassle from either *
537 * side. *
538 * 4. Send: The thread cycles through every packet in the out queue. If the *
539 * destination on the UDP header is known, it grabs the associated FD and *
540 * sends the message across. If the destination is unknown, it sends a *
541 * request to the destination, which the destination will notice in step 3, *
542 * generate a FD, and then use that newly generated FD to send the packet. *
543 * 5. Receive: The thread cycles through all FDs that it currently knows, and *
544 * polls them for new packets sent by peers. If any are found, they are *
545 * placed in the in queue for the Main Thread's consumption. If the thread *
546 * is a WG Thread, these packets will be inspected. If they are originating *
547 * from the client, the server will decrypt them, read the contained UDP *
548 * packet, and will spoof the source so that it points to the endpoint, *
549 * rather than the actual client. That way, the recipient will have no idea *
550 * who the original sender was, and will reply back to the endpoint. *
551 * Likewise, packets that hit the endpoint that aren't from the source are *
552 * encrypted and sent in a TransportPacket back to the client so that they *
553 * can decrypt and read them. This works seamlessly so that two peers can *
554 * communicate without even realizing that the WireGuard endpoint was between *
555 * them. This step also involves the heartbeat, where the thread will send *
556 * a ping to every known connection. If any of them fail to respond, we *
557 * presume the connection to be lost, and close the FD and delete it from the *
558 * database. *
559 ******************************************************************************/
560 while(shared::stat != TERMINATE) {
561
562 /*
563 * Sleep, the easiest step ;)
564 */
565 sleep();
566
567 /*
568 * Re-Key.
569 *
570 * If you look at the Handshake function, you may be confused why
571 * this function doesn't just use that code, and why it apparently is
572 * more complicated. The problem is that Handshake() uses the in/out queue
573 * and those queues are populated by THIS thread. If we send a packet on
574 * the out queue, it will only be sent across the wire at step 4 of this
575 * loop, which means we cannot have any blocking behavior. This leaves us
576 * with two options:
577 * 1. Make all requests non-blocking, and orchestrate a dizzying
578 * amount of flags to know what step of the handshake we're on
579 * and pick up once we get the information.
580 * 2. Be mean and stop all communication until the peer responds.
581 * This behavior only applies to a WireGuard thread, which means
582 * the only communication is from the client and to the client. Therefore
583 * there are no qualms about simply stalling the client until they make
584 * up their mind. The wait_for() lambda does exactly this, it simply
585 * reads the FD of the client, pushing any irrelevant packets to the in
586 * queue, and only resumes normal behavior once the correct packet is sent.
587 * No packets are lost, but the thread isn't budging until the client plays
588 * nice.
589 */
590 if (wg.on) {
591
592 // We need to ensure we don't bombard the client with requests, so
593 // we only ping every REKEY_TOUT period.
594 auto current = Timestamp();
595 auto time_since = TimestampDiff(current, wg.timestamp);
596
597 // Save the connection, because the handshake clears it.
598 auto connection = wg.src;
599 bool fatal = time_since > wireguard::RJECT_TIME || wg.recv > wireguard::RJECT_MSGS;
600
601 // If we've passed the RKEY threshold, we need to re-key.
602 if (time_since > wireguard::REKEY_TIME || wg.recv > wireguard::REKEY_MSGS) {
603
604 // Figure out how many seconds have passed.
605 auto seconds = *reinterpret_cast<uint64_t*>(current.bytes());
606
607 // Only attempt a re-key within an acceptable frequency.
608 if (seconds % wireguard::REKEY_TOUT == 0 && seconds != last_second) {
609
610 // Update so we only send one packet for any given second.
611 last_second = seconds;
612
613 // Put our public key into a sys_packet, and send it across.
614 sys_packet info = {.source = thread_self, .rekey = true};
615 memcpy(&info.pub[0], wireguard::pair.pub().bytes(), 32);
616 auto packet = udp::packet(connection, info, sizeof(info));
617 auto fd = fds[wg.src.num];
618 auto peer = wg.src;
619 send_fd(packet, fd);
620
621 try {
622
623 // Wait until the client responds, and they better send their public key back.
624 // This is blocking, so the client HAS to respond, even if they reject the
625 // re-key.
626 auto pub = wait_for(fd, SYS).cast<sys_packet>();
627 crypto::string remote_pub = {&pub.pub[0], 32};
628
629 // If they agreed.
630 if (pub.source.num == connection.num) {
631
632 // This is just a slightly modified version of Handshake(), using wait_for and send_fd
633 // to account for the lack of queues. No cookies, either.
634 wireguard::InitPacket init_packet;
635 wireguard::ResponsePacket response_packet;
636
637 crypto::string C, H;
638 crypto::keypair ephemeral;
639 wireguard::Handshake1(ephemeral, remote_pub, wg, init_packet, true, C, H);
640 send_fd({peer, init_packet.Serialize()}, fd);
641
642 auto response = wait_for(fd, WG_HANDSHAKE);
643 response_packet = wireguard::ResponsePacket(response.data());
644 wireguard::Handshake2(ephemeral, remote_pub, wg, response_packet, true, C, H);
645 wg.src = peer;
646 packet = wireguard::TransportPacket::Create(wg, {peer, "Hello!"});
647 send_fd(packet, fd);
648
649 output("Completed Re-Exchange!", TNAME, INFO);
650 pick_mode();
651 wg.on = true;
652 wg.timestamp = current;
653
654 // Packets coming in can be encrypted with our new Transport Keys,
655 // but any communication that the client sent prior will be encrypted
656 // with now invalid Transport Keys, and thus are lost as random noise.
657 // We could save these packets, store them decrypted, and then re-encrypt
658 // them with new Transport Keys, but in truth I don't see how the client
659 // would ever be able to slip in a packet: they can't refuse a re-key,
660 // and it would require a purposely difficult client to actively refuse
661 // rekeys while sending more packets. In this situation, I think the
662 // server is perfectly reasonable in dropping those packets.
663 local_out.flush();
664
665 // On this continue, the timestamp has been updated, and the send/recv has been
666 // cleared, so we're good for another period.
667 continue;
668 }
669 }
670
671 // If they didn't agree or errored, make it clear in the log.
672 catch (std::runtime_error&) {}
673 if (fatal) {
674 output("Client refused re-key after exceeding the rejection limit! WireGuard connection is terminating!", TNAME, ERROR);
675 }
676 else {
677 output("Client refused re-key after exceeding the re-key threshold. Packets will not be processed until a re-key has been completed", TNAME, WARN);
678 }
679 }
680 }
681
682 // If we've only reached the rekey stage, we still process packets.
683 // However, once we're over fatal, we loop back up to the top, which
684 // means we do nothing but sleep, and check for a rekey. No
685 // other communication is sent.
686 if (fatal) continue;
687 }
688
689
690 /*
691 * Listen.
692 *
693 * Listening requires successfully returning from listen(), and then
694 * seeing if accept() returned a non -1 value. The returned value is the
695 * FD we can use. A failure to listen would mean an inability to accept
696 * new connections, and as such is considered a fatal error that
697 * tears down the thread.
698 */
699 if (listen(sock, 255) != 0) {
700 output("Failed to listen to socket!", TNAME, ERROR);
701 cleanup(); return;
702 }
703 else {
704
705 // Accept any connection.
706 sockaddr_in peer;
707 socklen_t size = sizeof(peer);
708 fd_t connection = accept(sock, (struct sockaddr*)&peer, &size);
709 if (connection != -1) {
710
711 // If we have a peer, get the FD. We don't know what
712 // the other process' IP/Port is, so we need them
713 // to send us a packet that contains it.
714 output("New connection!", TNAME, INFO);
715
716 // Get the packet.
717 try {
718
719 // Get the packet from the FD. This is the remote network thread
720 // sending us a packet so we can grab its IP and store it.
721 auto p = udp::packet(connection);
722
723 // Store the source for this associated IP
724 auto src = p.source();
725 fds[src.num] = connection;
726 output("Connected!", TNAME, INFO);
727 }
728
729 // If there's any failure, let them know.
730 catch (std::runtime_error& c) {output(c.what(), TNAME, ERROR);}
731 }
732 }
733
734
735 /*
736 * Send
737 *
738 * We iterate through every packet in thread_out (which is not frozen in
739 * place, but is mediated so that we can't read while the Main Thread
740 * writes), resolve the FD for whatever destination the Main Thread put on
741 * the UDP headers, and then send the packet to the intended target.
742 */
743 while (!thread_out.empty()) {
744
745 // Get the packet, and where we're supposed to send it to.
746 auto p = thread_out.pop();
747 auto dest = p.destination();
748
749 /*
750 * This is a special case. Sometimes, say the client of a WireGuard connection wants
751 * to terminate the connection, the Main Thread needs to be able to tell the Network Thread
752 * to close a connection. This would allow for the closure to be picked up by the server,
753 * who could then close the socket and WG thread gracefully. To do this, the Main Thread
754 * needs only to send an empty UDP packet with a 0'd source. If we detect this, we close
755 * whatever FD we have for the destination.
756 */
757 if (p.source().num == 0 && fds.count(dest.num)) {
758 output("Closing destination: " + connection_string(dest), TNAME, INFO);
759 close(fds[dest.num]);
760 fds.erase(dest.num);
761 continue;
762 }
763
764 // If the destination isn't known, establish the connection;
765 // if that fails, drop the packet rather than caching an invalid FD.
766 if (!fds.count(dest.num)) {
767 auto fd = establish_connection(dest); if (fd == -1) continue; fds[dest.num] = fd; }
768
769 // Send the packet.
770 auto fd = fds.at(dest.num);
771 try {
772 send_fd(p, fd);
773 output("Sent Packet to " + connection_string(dest), TNAME, INFO);
774 }
775 catch (std::runtime_error& c) {output(c.what(), TNAME, WARN);}
776 }
777
778
779 /*
780 * Receive
781 *
782 * This section is where most of the magic happens for the thread. Firstly,
783 * we iterate through every known connection, but before blindly placing them
784 * into the pollfd array to poll them, we send an empty heartbeat packet. This
785 * packet actually causes an exception on the other side, so nobody except
786 * for the threads will ever see it, but if the current thread is unable
787 * to send it, that means the connection is severed, and the thread
788 * gracefully closes the connection on its side.
789 *
790 * Once the heartbeat cleanup has run through, and we have a list of active
791 * connections, we poll them for any new packets. For the Network Thread,
792 * these new packets are placed on the in queue for the Main Thread to deal
793 * with, but if this is a WG Thread, we have special behavior. If the
794 * packet was sent by the client, that means it's an encrypted UDP packet
795 * within a plaintext UDP packet. We decrypt that internal packet with
796 * our shared transport key, and retrieve the packet the client wants to
797 * send. We then do something sneaky by forging the source to point back
798 * at ourselves, and then send it off. When the recipient receives the
799 * packet, it will be sourced from the End Point, and thus they will
800 * only be able to reply by sending a packet at us, without knowing
801 * the IP and Port of the original client. When a message that isn't
802 * from our client hits the endpoint, we first modify the destination
803 * so that it correctly points to the client, and then encrypt it
804 * with our Transport Key before shipping it back. That way, the client
805 * will receive a packet that is addressed to them, from the actual
806 * recipient, and one that has been encrypted in transit (At least
807 * across the VPN). If we wanted peer-to-peer encryption, the
808 * second peer would just need to establish their own WireGuard
809 * connection with the same server.
810 */
811 if (fds.size() != 0) {
812
813 // Generate a list of fds. (A variable-length array isn't standard C++.)
814 std::vector<struct pollfd> fs(fds.size());
815 size_t x = 0;
816
817 // Send a heartbeat to each connection, remove those that fail.
818 for (auto f = fds.begin(); f != fds.end();) {
819 try {
820 send_fd(heartbeat, f->second);
821
822 // If it passed, add it to the list.
823 fs[x].fd = f->second;
824 fs[x].events = POLLIN;
825 ++f; ++x;
826 }
827
828 // If the packet couldn't be sent.
829 catch (std::runtime_error& c) {
830 output("Heartbeat failure. Removing connection", TNAME, INFO);
831 close(f->second);
832
833 // Remember how we pinged the client at the beginning of the WG Thread setup,
834 // and how I said that was important. This is why. We need to figure out when
835 // the client has disappeared, because once they do, the whole thread needs to
836 // shut down.
837 if (f->first == wg.src.num) {
838 output("Client disconnected. Thank you for using WireGuard!", TNAME, INFO);
839 cleanup();
840 return;
841 }
842 fds.erase(f++);
843 }
844 }
845
846 // Poll our connections with POLLIN.
847 auto ready = poll(&fs[0], x, 100);
848
849 // A failure to poll prevents us from receiving packets. Fatal error.
850 if (ready == -1) {
851 output("Failed to poll connections!", TNAME, ERROR);
852 cleanup(); return;
853 }
854
855 // A positive means that one of our connections sent something.
856 else if (ready > 0) {
857 for (size_t y = 0; y < x; ++y) {
858 // If they sent something, get the packet!
859 if (fs[y].revents & POLLIN) {
860 try {
861 auto packet = udp::packet(fs[y].fd);
862
863 // If wireguard is on.
864 if (wg.on) {
865
866 // If it's from the source, decrypt, spoof source, send it to our out queue to be
867 // sent next iteration.
868 if (packet.source().num == wg.src.num) {
869 output("Received packet from client. Decrypting and routing", TNAME, INFO);
870 auto decrypted = wireguard::TransportPacket::Return(wg, packet);
871 decrypted.set_source(thread_self);
872 thread_out.enqueue(decrypted);
873 }
874
875 // If it's from someone else, send it across encrypted.
876 else {
877 output("Received packet from another peer. Sending to client", TNAME, INFO);
878 auto encrypted = wireguard::TransportPacket::Create(wg, packet);
879 thread_out.enqueue(encrypted);
880 }
881 }
882
883 // The Network Thread just sends the packets to the in queue.
884 else {
885 thread_in.enqueue(packet);
886 output("Received Packet from " + connection_string(packet.source()), TNAME, INFO);
887 }
888 }
889
890 // These happen when the heartbeat is sent, since it's a malformed packet.
891 // We can just ignore it.
892 catch (std::runtime_error& c) {}
893
894 // This happens if we read from a socket that has closed. Note that fds is
895 // keyed by connection, not FD, so we must erase the matching entry by value.
896 catch (std::length_error&) {
897 output("Failed to read from connection!", TNAME, WARN);
898 close(fs[y].fd);
899 for (auto f = fds.begin(); f != fds.end(); ++f) if (f->second == fs[y].fd) {fds.erase(f); break;}
900 }
900 }
901 }
902 }
903 }
904 }
905
906 // Cleanup and return.
907 cleanup();
908 return;
909 }
910}