
feat(p3): implement wasi:tls #12174

Open
rvolosatovs wants to merge 7 commits into bytecodealliance:main from rvolosatovs:feat/wasip3-tls

Conversation

@rvolosatovs
Member

@rvolosatovs rvolosatovs commented Dec 16, 2025

This is the implementation of the current p3 draft, WebAssembly/wasi-tls#17

refs #12102

@github-actions github-actions bot added the wasmtime:api Related to the API of the `wasmtime` crate itself label Dec 16, 2025
@rvolosatovs rvolosatovs force-pushed the feat/wasip3-tls branch 3 times, most recently from 4c9f3b8 to 187b386 on February 17, 2026 12:11
@rvolosatovs rvolosatovs marked this pull request as ready for review February 17, 2026 12:15
@rvolosatovs rvolosatovs requested review from a team as code owners February 17, 2026 12:15
@rvolosatovs rvolosatovs requested review from cfallin and removed request for a team February 17, 2026 12:15
Member

@alexcrichton alexcrichton left a comment


I'm finding host/mod.rs pretty gnarly here, especially with the Waker logic and such, to the point that I'd have to dig more into what rustls is offering to double-check all the logic. That being said, I'd also be fine to defer to @badeend for review on that.

I'll note, though, that this is a pretty beefy implementation with relatively light testing. Would it be possible to enhance the tests here, or does the current test basically not pass unless all the bits and bobs are present?

Member


I realize that this duplicates the p2 tests already present, but given that this is basically modeled as "run the thing", it could also be modeled as a p3_cli_* test run as part of tests/all/cli_tests.rs, where it bottoms out in wasmtime run -S... foo.wasm.

I might recommend moving more in that direction rather than having a librarified test here, to make running tests a bit more uniform.

mod client;
mod types;

macro_rules! mk_push {
Member


I'm not sure if this was copied from elsewhere, but personally I'd say that these macros are probably overkill given that they're mostly one-liners around table.$method(thing), and the type annotations on Resource<T> are typically enough to guide everything type-inference-wise. The benefit of these macros would be the extra error context information, but given how rarely these will be triggered, I'm not sure it's worth the complexity.

))
}

async fn connect<T>(
Member


connect should perform the handshake and not return before that has succeeded or failed. But in its current form this method doesn't do any I/O.

Member


In its new form connect awaits the handshake, but doesn't report the status.
Failures during the handshake phase are reported as "success" and only surfaced later on its streams.
Maybe the easiest way to validate the intended behavior is to update test_tls_invalid_certificate to rely only on the result returned by connect and not attempt any I/O on the streams/futures afterwards.

Member Author


Indeed. I would also prefer connect to return an error on connection failures - the only "simple" way of doing that with the existing API I can come up with would be changing the handshake oneshot's element value to be a Result; would that address your concern?

connect and don't attempt any I/O on the streams/futures afterwards

In the current design the guest drives the I/O, so the receive stream must be polled for connect to be able to do work. With the existing Wasmtime API, the only way for the host to consume data from a StreamReader is to pipe it into a consumer implementation; that's what receive and send do now - they "register" a consumer. There is no way to deregister a consumer at a later point, and there is an additional hazard in that dropping an unpiped StreamReader would leak it. So even if the relevant StreamReaders were plumbed through to the connect implementation and proper bookkeeping were implemented in drop, we would not be able to directly read bytes from the stream in connect.

The only other option then is for the consumer itself to spawn a (Tokio) task - given the wasip3 implementation direction and design so far, I believe we want to avoid having to do that unless absolutely necessary. In my personal opinion, if we cannot use Rust's async machinery directly in the host implementation without relying on Tokio tasks and message-passing, it suggests that we should probably revisit Wasmtime's async support APIs to fix that. I don't think we should be reimplementing https://github.com/bytecodealliance/wasmtime/blob/d42d0b6df22d8d7be21212899a9cae21767c6992/crates/wasi/src/p2/write_stream.rs in wasip3

Member


The WIT file mentions:

Closing the cleartext stream will cause a close_notify packet to be emitted on the returned output stream.

I don't see where graceful shutdown is handled in the current implementation. I would expect a call to send_close_notify somewhere.

Member Author


That is what I originally had, but eventually removed and then brought back. The challenge is that send_close_notify can only be called after the handshake has finished. I made sure that's the case by carefully arranging the oneshot sends from connect.

Member


The wasi-tls crate already has a bunch of these types in crates/wasi-tls/src/lib.rs, e.g. WasiTls<'a>, WasiTlsCtx, WasiTlsCtxBuilder. Can the P3 implementation make use of those existing ones?

Member Author


Eventually, these should be unified. The current approach is consistent with all other p3 interface implementations, and we do not know yet what will be necessary to implement #12174 (comment)

return Poll::Pending;
}

let state = match conn.process_new_packets() {
Member


I notice process_new_packets appears multiple times in this file.

The rustls documentation mentions that this method should only be called after a successful call to Connection::read_tls. This lines up with how e.g. tokio-rustls does it, where the term process_new_packets appears only once in the entire crate, right after the call to read_tls.

I think only CiphertextConsumer has to use process_new_packets, and the other ones can get by without:

  • The returned state variable is used as a heuristic for the capacity of .as_direct(..). A fixed size could work too, optionally bounded by dst.remaining().
  • peer_has_closed is also surfaced from the regular read call, when it returns Ok(0)

Comment on lines 231 to 233
ciphertext_producer.take().map(Waker::wake);
plaintext_consumer.take().map(Waker::wake);
plaintext_producer.take().map(Waker::wake);
Member


The error returned by process_new_packets only affects the read_tls side. The other directions may still continue to work. From rustls docs:

After an error is received from process_new_packets, you should not call read_tls any more (it will fill up buffers to no purpose). However, you may call the other methods on the connection, including write, send_close_notify, and write_tls. Most likely you will want to call write_tls to send any alerts queued by the error and then close the underlying connection.

So I don't know if waking everybody up is the right thing to do.

Signed-off-by: Roman Volosatovs <rvolosatovs@riseup.net>
Member

@badeend badeend left a comment


@alexcrichton I've given mod.rs & client.rs two review passes by now.

Once the remaining comments are resolved, I think it mostly checks out, albeit still quite verbose with all the state-machine-y & waker stuff. I don't know how much time we should spend polishing this right now, because if we want to support other backends (see comment above) much of this will need to be revisited anyway.

I'll leave it up for you to decide.


let Connector {
receive_tx: Some((receive_prod_tx, receive_cons_tx, receive_err_tx)),
send_tx: Some((send_prod_tx, send_cons_tx, _send_err_tx)),
Member


Is it intentional for _send_err_tx to be dropped here?

Member Author


There is no scenario in which an error could be sent on the future returned by send, so yes.


Member Author

@rvolosatovs rvolosatovs left a comment


I was intending to follow up on this later in the week, but as I received another review from @badeend I'll just go ahead and write a reply now:

First, for context, the implementation was done for the original version of the interface I proposed in WebAssembly/wasi-tls@91f93ce - it was later hastily adapted to match the OOP-centric design that was suggested in WebAssembly/wasi-tls#17 (comment)

The intention of this PR was only to build a PoC - that has now been achieved.

In its current state the implementation is very brittle and PoC quality at best. That said, I struggle to see a way to implement this "nicely" given the APIs we have today. I do believe that the original interface version I proposed in WebAssembly/wasi-tls@91f93ce would allow for a much nicer and simpler host implementation.

Short-term, I will not have the time or incentive to fully address feedback on this PR and/or maintain this code. I don't think I will work on this PR any time soon (if ever).

I would not like to stall progress on this; therefore, I see a few options:

  • Merge and take care of the rest in follow-ups
  • I close this PR and someone (@badeend ?) opens a different one either based on this implementation or a completely fresh one
  • Someone (@badeend ?) takes over this PR and gets it over the line





@alexcrichton
Member

I'm not familiar with the current design of wasi-tls, much less the async-aware wasip3 version, so I can only comment at a sort of high level as opposed to offering thoughts on specifics. From that perspective I'd personally lean towards leaving this as a PR to avoid having too much mostly-unmaintained code in-tree. It sounds like @badeend you'd ideally see this built on the preexisting abstractions, and @rvolosatovs you don't have the time/motivation right now to pursue such a refactoring. I'd personally prefer to avoid an interim state landing in Wasmtime where there are no plans for improvement and no one's quite satisfied with the current implementation.

Does that sound reasonable enough to leave this as a PR for now, and revisit once time/motivation/energy have been unblocked?


Labels

wasmtime:api Related to the API of the `wasmtime` crate itself
