d5fb584a | 03-Jun-2024 | Abhilash Raju <abhilash.kollam@gmail.com>
MTLS Client: Enabling mtls support in http_client
http_client currently does not use mTLS client certificates. This is a good feature for both authentication and authorization purposes. It will help external servers to trust the identity of the BMC for better security. This patch adds mTLS client certificate support for bmcweb.
This is a needed feature to support secure Redfish aggregation between BMCs. To support secure aggregation, BMCs should be provisioned with a CA-signed certificate that has an authorized username in the subject name field of the certificate. With support for strong mTLS authentication from the bmcweb server, we can use the mTLS path to enable secure Redfish aggregation among BMCs. This avoids the complexities and extra API calls needed for a token-based approach.
Tested by:
Aggregation Test1:
1) Set up two instances of a romulus qemu session on different ports. These act as two BMCs.
2) Installed CA root certificates at /etc/ssl/certs/authority in both BMCs.
3) Installed server.pem and client.pem entity certificates signed by the root CA in the /etc/ssl/certs/https folder in both BMCs.
4) Enabled aggregation for bmcweb.
5) Fired several Redfish queries to BMC1.
Result: Observed that the aggregation worked fine. A user session was created using the username mentioned in the CN field of the certificate.
Aggregation Test2:
Followed the same steps as in Aggregation Test 1, with a modification in step 3: installed only server.pem.
Result: Bmcweb ran as usual, but aggregation failed to collect resources from BMC2. No crash observed.
Redfish Event Test:
Subscribed for Redfish events using a test server. Fired Redfish test events from the BMC.
Result: Events reached server successfully.
Change-Id: Id8cccf9beec77da0f16adb72d52f3adf46347d06 Signed-off-by: Abhilash Raju <abhilash.kollam@gmail.com> Signed-off-by: Ed Tanous <etanous@nvidia.com>
099225cc | 28-Mar-2024 | Ed Tanous <ed@tanous.net>
Make cert generate for readonly directories
When run from a development PC, we shouldn't REQUIRE that the cert directory exists or is writable.
This commit reworks the SSL cert generation to generate a string with the certificate info, instead of writing it to disk and reading it back. This allows bmcweb to start up in read-only environments, or environments where there isn't access to the key information.
Tested: Launching the application on a dev desktop without an ssl directory present no longer crashes.
Change-Id: I0d44eb1ce8d298986c5560803ca2d72958d3707c Signed-off-by: Ed Tanous <ed@tanous.net>
2ecde74f | 01-Jun-2024 | Abhilash Raju <abhilash.kollam@gmail.com>
http_client: Fixing bug in retry after a close call
After a close call, http_client does not start with a fresh socket when restarting the connection. Send failures were observed in cases where a new connection is started from doResolve. Calling restartConnection instead of doResolve fixed the issue.
Tested By: Running developer tests on use cases such as Redfish aggregation, where the number of retries is small.
Change-Id: I12f6a73fbafd14f482807f34ffa1e02fad944fc1 Signed-off-by: Abhilash Raju <abhilash.kollam@gmail.com>
a3b9eb98 | 03-Jun-2024 | Ed Tanous <ed@tanous.net>
Make SSE pass
Redfish protocol validator is failing SSE. This is due to a clause in the Redfish specification that requires a "json" error to be returned when the SSE URI is hit with a standard request.
In what exists today, we return a 4XX (method not allowed), but because this is handled by the HTTP layer, it's not possible to return the correct Redfish payloads when that 4XX happens within the Redfish tree: there is in fact a route that matches, that route just doesn't support the request type that we need.
This commit rearranges the router such that there are now 4 classes of rules.
1. "verb" rules. These are GET/POST/PATCH type, and they are stored using the existing PerMethod array index.
2. "upgrade" rules. These are for websocket or SSE routes that we expect to upgrade to another route.
3. 404 routes. These are called in the case where no route exists with that given URI pattern, and no routes exist in the table for any verb.
4. 405 method not allowed. These are called in the case where routes exist in the tree for some method, but not for the method the user requested.
To accomplish this, some minor refactors are implemented to separate out the 4xx handlers to be their own variables, rather than just existing at an index at the end of the verb table. This in turn means that getRouteByIndex now changes to allow getting the route by PerMethod instance, rather than index.
Tested: unit tests pass (okish coverage) Redfish protocol validator passes (with the exception of #277, which fails identically before and after). SSE tests now pass. Redfish service validator passes.
Change-Id: I555c50f392cb12ecbc39fbadbae6a3d50f4d1b23 Signed-off-by: Ed Tanous <etanous@nvidia.com>
0242baff | 16-May-2024 | Ed Tanous <ed@tanous.net>
Implement Chunking for unix sockets
Response::openFd was added recently to allow handlers to pass in a file descriptor to be used to read. This worked great for files, but had some trouble with unix sockets. First, unix sockets have no known length that we can get. They are fed by another client until that client decides to stop sending data and sends an EOF. HTTP in general needs to set the Content-Length header before starting a reply, so the previous code just passes an error back.
HTTP has a concept of HTTP chunking, where a payload might not have a known size, but can still be downloaded in chunks. Beast has handling for this that we can enable that just deals with this at the protocol layer silently. This patch enables that.
In addition, a unix socket very likely might not have data, and will block on the read call. Blocking in an async reactor is bad, and especially bad when you don't know how large a payload to expect, since it's possible those bytes will never come. This commit sets all FDs into O_NONBLOCK[1] mode when they're sent to a response, then handles the subsequent EWOULDBLOCK and EAGAIN errors when beast propagates them to the http connection class. When these errors are received, the doWrite loop is simply re-executed directly, attempting to read from the socket again. For "slow" unix sockets, this very likely results in some wasted cycles where we read 0 bytes from the socket, so it shouldn't be used for eventing purposes, given that bmcweb is essentially in a spin loop while waiting for data. But given that this is generally used for chunking large generated payloads, and that other reactor operations can still progress while spinning, this seems like a reasonable compromise.
[1] https://www.gnu.org/software/libc/manual/html_node/Open_002dtime-Flags.html
Tested: The next patch in this series includes an example of explicitly adding a unix socket as a response target, using the CredentialsPipe that bmcweb already has. When this handler is present, curl shows the response data, including the newlines (when dumped to a file)
```
curl -vvvv -k --user "root:0penBmc" https://192.168.7.2/testpipe -o output.txt
```
Loading the webui works as expected, logging in produces the overview page as expected, and network console shows no failed requests.
Redfish service validator passes.
Change-Id: I8bd8586ae138f5b55033b78df95c798aa1d014db Signed-off-by: Ed Tanous <ed@tanous.net>
c8491cb0 | 06-May-2024 | Ed Tanous <ed@tanous.net>
Move under exception handler
Static analysis still sometimes flags that this throws, even though clang-tidy doesn't. Move it under the exception handler.
Tested: Logging still works.
Change-Id: I67425749b97b0a259746840c7b9a9b4834dfe52e Signed-off-by: Ed Tanous <ed@tanous.net>
83328316 | 09-May-2024 | Ed Tanous <ed@tanous.net>
Fix lesser used options
25b54dba775b31021a3a4677eb79e9771bcb97f7 missed several cases where we had ifndef instead of ifdef. Because these weren't the defaults, they didn't show up as failures when testing.
Tested: Redfish service validator passes. Inspection primarily. Mechanical change.
Change-Id: I3f6915a97eb44d071795aed76476c6bee7e8ed27 Signed-off-by: Ed Tanous <ed@tanous.net>
17c47245 | 08-Apr-2024 | Ed Tanous <ed@tanous.net>
Move logging args
Args captured by logging functions should be captured by rvalue reference, and use std::forward to get perfect forwarding. In addition, separate out the various std::cout lines.
While we're here, also try to optimize a little. We should ideally be writing each log line to the output once, and ideally not use iostreams, which induce a lot of overhead.
Similar to spdlog[1] (which at one point this codebase used), construct the string, then call fwrite and fflush once, rather than calling std::cout repeatedly.
Now that we don't have a dependency on iostreams anymore, we can remove it from the places where it has snuck in.
Tested: Logging still functions as before. Logs present.
[1] https://github.com/gabime/spdlog/blob/27cb4c76708608465c413f6d0e6b8d99a4d84302/include/spdlog/sinks/stdout_sinks-inl.h#L70C7-L70C13
Change-Id: I1dd4739e06eb506d68989a066d122109b71b92cd Signed-off-by: Ed Tanous <ed@tanous.net>
102a4cda | 15-Apr-2024 | Jonathan Doman <jonathan.doman@intel.com>
Manage Request with shared_ptr
This is an attempt to solve a class of use-after-move bugs on the Request objects which have popped up several times. This more clearly identifies code which owns the Request objects and has a need to keep it alive. Currently it's just the `Connection` (or `HTTP2Connection`) (which needs to access Request headers while sending the response), and the `validatePrivilege()` function (which needs to temporarily own the Request while doing an asynchronous D-Bus call). Route handlers are provided a non-owning `Request&` for immediate use and required to not hold the `Request&` for future use.
Tested: Redfish validator passes (with a few unrelated fails). Redfish URLs are sent to a browser as HTML instead of raw JSON.
Change-Id: Id581fda90b6bceddd08a5dc7ff0a04b91e7394bf Signed-off-by: Jonathan Doman <jonathan.doman@intel.com> Signed-off-by: Ed Tanous <ed@tanous.net>
e428b440 | 29-Mar-2024 | Ed Tanous <ed@tanous.net>
Increase the file buffer
When we added file buffer, this number was picked arbitrarily. Prior to the file body patch series, files were buffered entirely in ram, regardless of what size they were. While not doing that was an improvement, I suspect that we were overly conservative in the buffer size.
Nginx picks a default buffer size somewhere in the 8k - 64k range depending on what paths the code takes[1]. Using the higher end of that range seems like a better starting point; generally we have more ram on the bmc than we have users.
Increase the buffer to 64K.
Tested: Unit tests pass.
[1] https://docs.nginx.com/nginx-management-suite/acm/how-to/policies/http-backend-configuration/#buffers
Change-Id: Idb472ccae02a8519c0976aab07b45562e327ce9b Signed-off-by: Ed Tanous <ed@tanous.net>
25b54dba | 17-Apr-2024 | Ed Tanous <ed@tanous.net>
Bring consistency to config options
The configuration options that exist in bmcweb are an amalgamation of CROW options, CMake options using #define, pre-bmcweb ifdef mechanisms, and meson options using a config file. This history has led to a lot of different ways to configure code in the codebase itself, which has led to problems and issues in consistency.
ifdef options do no compile time checking of code not within the branch. This is good when you have optional dependencies, but not great when you're trying to ensure both options compile.
This commit moves all internal configuration options to:
1. A namespace called bmcweb.
2. A naming scheme matching the meson option: hyphens are replaced with underscores, and the option is uppercased. This consistent transform allows matching up option keys with their code counterparts, without naming changes.
3. All options are bool, true = enabled, and any options with _ENABLED or _DISABLED postfixes have those postfixes removed. (Note, there are still some options with disable in the name; those are left as-is.)
4. All options are now constexpr booleans, without an explicit compare.
To accomplish this, unfortunately an option list in config/meson.build is required, given that meson doesn't provide a way to dump all options, as is a manual entry in bmcweb_config.h.in, in addition to the meson_options. This obsoletes the map in the main meson.build, which helps some of the complexity.
Now that we've done this, we have some rules that will be documented:
1. Runtime behavior changes should be added as a constexpr bool to bmcweb_config.h.
2. Options that require optionally pulling in a dependency shall use an ifdef, defined in the primary meson.build. (Note, there are no options that currently meet this class, but it's included for completeness.)
Note, that this consolidation means that at configure time, all options are printed. This is a good thing and allows direct comparison of configs in log files.
Tested: Code compiles Server boots, and shows options configured in the default build. (HTTPS, log level, etc)
Change-Id: I94e79a56bcdc01755036e4e7278c7e69e25809ce Signed-off-by: Ed Tanous <ed@tanous.net>
88c7c427 | 06-Apr-2024 | Ed Tanous <ed@tanous.net>
Use fadvise to trigger sequential reading
Nginx and other webservers use fadvise to inform the kernel of in-order reading. We should do the same.
Tested: Webui loads correctly, no direct performance benefits immediately obvious, but likely would operate better under load.
Change-Id: I4acce316c719df7df012cea8cb89237b28932c15 Signed-off-by: Ed Tanous <ed@tanous.net>
95c6307a | 26-Mar-2024 | Ed Tanous <ed@tanous.net>
Break out formatters
In the change made to move to std::format, we defined some custom type formatters in logging.hpp. This had the unintended effect of making all compile units pull in the majority of boost::url, and nlohmann::json as includes.
This commit breaks out boost and json formatters into their own separate includes.
Tested: Code compiles. Logging changes only.
Change-Id: I6a788533169f10e19130a1910cd3be0cc729b020 Signed-off-by: Ed Tanous <ed@tanous.net>
499b5b4d | 06-Apr-2024 | Ed Tanous <ed@tanous.net>
Add static webpack etag support
Webpack (which is what vue uses to compress its HTML) is capable of generating hashes of files when it produces the dist files[1].
This gets generated in the form of <filename>.<hash>.<extension>
This commit attempts to detect these patterns, and enable etag caching to speed up webui load times. It detects these patterns, grabs the hash for the file, and returns it in the Etag header[2].
The behavior is implemented such that: If the file has an etag, the etag header is returned. If the request has an If-None-Match header, and that header matches, only 304 is returned.
Tested: Tests were run on qemu S7106 bmcweb with default error logging level, and HTTP/2 enabled, along with svg optimization patches.
Run scripts/generate_auth_certificate.py to set up TLS certificates. (valid TLS certs are required for HTTP caching to work properly in some browsers). Load the webui. Note that DOM load takes 1.10 seconds, Load takes 1.10 seconds, and all requests return 200 OK. Refresh the GUI. Note that most resources now return 304, and DOM time is reduced to 279 milliseconds and load is reduced to 280 milliseconds. DOM load (which is what the BMC has control over) is decreased by a factor of 3-4X. Setting chrome to "Fast 5g" throttling in the network tab shows a more pronounced difference, 1.28S load time vs 3.96S.
BMC also shows 477KB transferred on the wire, versus 2.3KB transferred on the wire. This has the potential to significantly reduce the load on the BMC when the webui refreshes.
[1] https://webpack.js.org/guides/caching/ [2] https://developer.mozilla.org/en-US/docs/Web/HTTP/Headers/ETag
Change-Id: I68aa7ef75533506d98e8fce10bb04a494dc49669 Signed-off-by: Ed Tanous <ed@tanous.net>
1d1d7784 | 09-Apr-2024 | Ed Tanous <ed@tanous.net>
Fix large content error codes
When pushing multi-part payloads, it's quite helpful if the server supports the header field "Expect: 100-continue". On a large file push, this allows the server to possibly reject a request before the payload is actually sent, thereby saving bandwidth and giving the user more information.
Bmcweb, since commit 3909dc82a003893812f598434d6c4558107afa28 by James (merged July 2020) has simply closed the connection if a user attempts to send too much data, thereby making the bmcweb implementation simpler.
Unfortunately, to a security tester, this has the appearance on the network as a crash, which will likely then get filed as a "verify this isn't failing" bug.
In addition, the default args on curl multipart upload enable the Expect: 100-Continue behavior, so folks testing must've just been disabling that behavior.
Bmcweb should just support the right thing here. Unfortunately, closing a connection uncleanly is easy. Closing a connection cleanly is difficult. This requires a pretty large refactor of the http connection class to accomplish.
Tested: Created files of various sizes (note, the default body limit is 30 MB) and uploaded them with and without a username.
```
dd if=/dev/zero bs=1048576 count=16 of=16mb.txt

curl -k --location -X POST https://192.168.7.2/redfish/v1/UpdateService/update \
    -F 'UpdateParameters={"Targets":["/redfish/v1/Managers/bmc"]} ;type=application/json' \
    -F UpdateFile=@32mb.txt -v
```
No username:
32MB returns < HTTP/1.1 413 Payload Too Large
16MB returns < HTTP/1.1 401 Unauthorized

With username:
32MB returns < HTTP/1.1 413 Payload Too Large
16MB returns < HTTP/1.1 400 Bad Request
Note, in all cases except the last one, the payload is never sent from curl.
Redfish protocol validator fails no new tests (SSE failure still present).
Redfish service validator passes.
Change-Id: I72bc8bbc49a05555c31dc7209292f846ec411d43 Signed-off-by: Ed Tanous <ed@tanous.net>
52c15028 | 19-Apr-2024 | Ed Tanous <ed@tanous.net>
Fix http2 use after free bug
In the below code, we move out of Response, then use it to set unauthorized, which never gets returned to the user. This results in the browser showing an empty 200 OK response, because while the request was rejected, the 401 error code didn't get propagated to the user.
Tested: If not logged in on a chrome browser: /redfish/v1 -> Returns the UI. /redfish/v1/AccountService -> returns a forward to the webui login page.
If logged into the webui. /redfish/v1/AccountService now returns the expected HTML redfish representation of the json response.
Change-Id: I2c906f818367ebb253b3e6097e6787ba4c215e0a Signed-off-by: Ed Tanous <ed@tanous.net>
003301a2 | 16-Apr-2024 | Ed Tanous <ed@tanous.net>
Change ssl stream implementations
Boost beast ssl_stream is just a wrapper around asio ssl_stream, and aims to optimize the case where we're writing small payloads (one or two bytes), which needs special handling to be efficient in SSL.
bmcweb never writes one or two bytes, we almost always write the full payload of what we received, so there's no reason to take the binary size overhead, and additional boost headers that this implementation requires.
Tested: This drops the on-target binary size by 2.6%
Redfish service validator passes.
Change-Id: Ie1ae6f197f8e5ed70cf4abc6be9b1b382c42d64d Signed-off-by: Ed Tanous <ed@tanous.net>
5b90429a | 16-Apr-2024 | Ed Tanous <ed@tanous.net>
Add missing headers
Most of these were found by breaking every redfish class handler into its own compile unit[1]:
When that's done, these missing headers become compile errors. We should just fix them.
In addition, this allows us to enable automatic header checking in clang-tidy using misc-header-cleaner. Because the compiler can now "see" all the defines, it no longer tries to remove headers that it thinks are unused.
[1] https://github.com/openbmc/bmcweb/commit/4fdee9e39e9f03122ee16a6fb251a380681f56ac
Tested: Code compiles.
Change-Id: Ifa27ac4a512362b7ded7cc3068648dc4aea6ad7b Signed-off-by: Ed Tanous <ed@tanous.net>
8db83747 | 13-Apr-2024 | Ed Tanous <ed@tanous.net>
Clean up BMCWEB_ENABLE_SSL
This macro came originally from CROW_ENABLE_SSL, and was used as a macro to optionally compile without openssl being required.
OpenSSL has been pulled into many other dependencies, and has been functionally required to be included for a long time, so there's no reason to hold onto this macro.
Remove most uses of the macro, and for the couple functional places the macro is used, transition to a constexpr if to enable the TLS paths.
This allows a large simplification of code in some places.
Tested: Redfish service validator passes.
Change-Id: Iebd46a68e5e417b6031479e24be3c21bef782f4c Signed-off-by: Ed Tanous <ed@tanous.net>
6dbe9bea | 14-Apr-2024 | Ed Tanous <ed@tanous.net>
Remove OpenSSL warnings ignore
If we include OpenSSL in extern "C" blocks consistently, c++ warnings no longer appear. This means we can remove the special case from meson.
Tested: Code compiles when built locally on an ubuntu 22.04 system.
Change-Id: I5add4113b32cd88b7fdd874174c845425a7c287a Signed-off-by: Ed Tanous <ed@tanous.net>
4d69861f | 06-Feb-2024 | Ed Tanous <ed@tanous.net>
Use beast message_generator
Beast 331 added the message_generator class, which allows deduplicating some templated code for the HTTP parser. When we use it, we can drop our binary size, and ensure that we have code reuse.
This saves 2.2% on the compressed binary size.
Tested: Redfish service validator passes.
Change-Id: I5540d52dc256adfb62507c67ea642a9ea86d27ee Signed-off-by: Ed Tanous <ed@tanous.net>
8e8245db | 12-Apr-2024 | Ed Tanous <ed@tanous.net>
Fix nullptr failures for image upload
Several places that call *req.ioService were missing nullptr checks. Add them, and fix the one case where it might not be filled in.
Tested: With HTTP2 enabled, the following command succeeds.
```
curl -k https://192.168.7.2/redfish/v1/UpdateService/update -F 'UpdateParameters={"Targets":["/redfish/v1/Managers/bmc"]} ;type=application/json' --user "root:0penBmc" -F UpdateFile=@/home/ed/bmcweb/16mb.txt -v -H "Expect:"
```
Change-Id: I81e7944c22f5922d461bf5d231086c7468a16e62 Signed-off-by: Ed Tanous <ed@tanous.net>
44106f34 | 06-Apr-2024 | Ed Tanous <ed@tanous.net>
Fix buffer_copy
boost::asio::buffer_copy returns the number of bytes copied. Some static analysis tools mark that value as nodiscard, although it should never fail. Audit all uses of buffer_copy, and make sure that they're using the return value. In theory this should have no change on the behavior.
Change-Id: I6af39b5347954c2932cf3d4e48e96ff9ae01583a Signed-off-by: Ed Tanous <ed@tanous.net>
4a7fbefd | 06-Apr-2024 | Ed Tanous <ed@tanous.net>
Fix large copies with url_view and segments_view
Despite these objects being called "view" they are still relatively large, as clang-tidy correctly flags, and we ignore.
Change all function uses to capture by: const boost::urls::url_view_base&
Which is the base class of all boost URL types, and any class (url, url_view, etc) is convertible to that base.
Change-Id: I8ee2ea3f4cfba38331303a7e4eb520a2b6f8ba92 Signed-off-by: Ed Tanous <ed@tanous.net>
d9e89dfd | 27-Mar-2024 | Ed Tanous <ed@tanous.net>
Simplify router
Now that we only support string types in the router we no longer need to build a "Tag" to be used for constructing argument types. Now, we can just track the number of arguments, which simplifies the code significantly, and removes the need to convert to and from the tag to parameter counts.
This in turn deletes a lot of code in the router, removing the need for tracking tag types.
Tested: Redfish service validator passes. Unit tests pass.
Change-Id: Ide1d665dc1984552681e8c05952b38073d5e32dd Signed-off-by: Ed Tanous <ed@tanous.net>