Struct PeerManager
pub struct PeerManager<Descriptor, CM, RM, OM, L, CMH, NS> where
    Descriptor: SocketDescriptor,
    CM: Deref,
    RM: Deref,
    OM: Deref,
    L: Deref,
    CMH: Deref,
    NS: Deref,
    <CM as Deref>::Target: ChannelMessageHandler,
    <RM as Deref>::Target: RoutingMessageHandler,
    <OM as Deref>::Target: OnionMessageHandler,
    <L as Deref>::Target: Logger,
    <CMH as Deref>::Target: CustomMessageHandler,
    <NS as Deref>::Target: NodeSigner,
{ /* private fields */ }
A PeerManager manages a set of peers, described by their SocketDescriptor, and marshals socket events into messages which it passes on to its MessageHandler.

Locks are taken internally, so you must never assume that reentrancy from a SocketDescriptor call back into PeerManager methods will not deadlock.

Calls to read_event will decode relevant messages and pass them to the ChannelMessageHandler, likely doing message processing in-line. Thus, the primary form of parallelism in Rust-Lightning is in calls to read_event. Note, however, that calls to any PeerManager functions related to the same connection must occur only in serial, making new calls only after previous ones have returned.

Rather than using a plain PeerManager, it is preferable to use either a SimpleArcPeerManager or a SimpleRefPeerManager, for conciseness. See their documentation for more details, but essentially you should default to using a SimpleRefPeerManager, and use a SimpleArcPeerManager when you require a PeerManager with a static lifetime, such as when you’re using lightning-net-tokio.
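To illustrate the serial-calls rule above, here is a minimal sketch of per-connection state an I/O driver might keep; the Connection type and its fields are hypothetical glue (lightning-net-tokio already implements the equivalent for you):

use std::sync::Mutex;
use lightning::ln::peer_handler::SocketDescriptor;

// Hypothetical per-connection state in your I/O layer. Every PeerManager
// call touching this connection is made with `call_guard` held, so such
// calls occur strictly in serial; calls for other connections proceed in
// parallel, which is where read_event's parallelism comes from.
struct Connection<D: SocketDescriptor> {
    descriptor: D,
    call_guard: Mutex<()>,
}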
Implementations
impl<Descriptor, CM, OM, L, NS> PeerManager<Descriptor, CM, IgnoringMessageHandler, OM, L, IgnoringMessageHandler, NS> where
    Descriptor: SocketDescriptor,
    CM: Deref,
    OM: Deref,
    L: Deref,
    NS: Deref,
    <CM as Deref>::Target: ChannelMessageHandler,
    <OM as Deref>::Target: OnionMessageHandler,
    <L as Deref>::Target: Logger,
    <NS as Deref>::Target: NodeSigner,
pub fn new_channel_only(
    channel_message_handler: CM,
    onion_message_handler: OM,
    current_time: u32,
    ephemeral_random_data: &[u8; 32],
    logger: L,
    node_signer: NS,
) -> PeerManager<Descriptor, CM, IgnoringMessageHandler, OM, L, IgnoringMessageHandler, NS>
Constructs a new PeerManager with the given ChannelMessageHandler and OnionMessageHandler. No routing message handler is used and network graph messages are ignored.

ephemeral_random_data is used to derive per-connection ephemeral keys and must be cryptographically secure random bytes.

current_time is used as an always-increasing counter that survives across restarts and is incremented irregularly internally. In general it is best to simply use the current UNIX timestamp; however, if it is not available, a persistent counter that increases once per minute should suffice.

This is not exported to bindings users as we can’t export a PeerManager with a dummy route handler.
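As a hedged construction sketch, assume channel_manager, onion_messenger, logger, and keys_manager are already-built components of your node (hypothetical names; anything satisfying the trait bounds works) and that ephemeral_bytes holds fresh CSPRNG output:

use std::time::{SystemTime, UNIX_EPOCH};

// The docs above suggest the current UNIX timestamp for `current_time`.
let current_time = SystemTime::now()
    .duration_since(UNIX_EPOCH)
    .expect("system clock before 1970")
    .as_secs() as u32;

let peer_manager = PeerManager::new_channel_only(
    channel_manager.clone(),  // your ChannelMessageHandler
    onion_messenger.clone(),  // your OnionMessageHandler
    current_time,
    &ephemeral_bytes,         // [u8; 32] from a secure RNG
    logger.clone(),
    keys_manager.clone(),     // your NodeSigner
);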
impl<Descriptor, RM, L, NS> PeerManager<Descriptor, ErroringMessageHandler, RM, IgnoringMessageHandler, L, IgnoringMessageHandler, NS> where
    Descriptor: SocketDescriptor,
    RM: Deref,
    L: Deref,
    NS: Deref,
    <RM as Deref>::Target: RoutingMessageHandler,
    <L as Deref>::Target: Logger,
    <NS as Deref>::Target: NodeSigner,
pub fn new_routing_only(
    routing_message_handler: RM,
    current_time: u32,
    ephemeral_random_data: &[u8; 32],
    logger: L,
    node_signer: NS,
) -> PeerManager<Descriptor, ErroringMessageHandler, RM, IgnoringMessageHandler, L, IgnoringMessageHandler, NS>
Constructs a new PeerManager with the given RoutingMessageHandler. No channel message handler or onion message handler is used, and onion and channel messages will be ignored (or generate error messages). Note that some other lightning implementations time out connections after some time if no channel is built with the peer.

current_time is used as an always-increasing counter that survives across restarts and is incremented irregularly internally. In general it is best to simply use the current UNIX timestamp; however, if it is not available, a persistent counter that increases once per minute should suffice.

ephemeral_random_data is used to derive per-connection ephemeral keys and must be cryptographically secure random bytes.

This is not exported to bindings users as we can’t export a PeerManager with a dummy channel handler.
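A similar sketch for a gossip-only node, assuming gossip_sync is an already-built RoutingMessageHandler (e.g. an Arc-wrapped P2PGossipSync) and current_time and ephemeral_bytes are prepared as in the example above:

// Channel messages generate errors and onion messages are ignored, per the
// ErroringMessageHandler/IgnoringMessageHandler slots in the return type.
let peer_manager = PeerManager::new_routing_only(
    gossip_sync.clone(),
    current_time,
    &ephemeral_bytes,
    logger.clone(),
    keys_manager.clone(),
);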
impl<Descriptor, CM, RM, OM, L, CMH, NS> PeerManager<Descriptor, CM, RM, OM, L, CMH, NS> where
    Descriptor: SocketDescriptor,
    CM: Deref,
    RM: Deref,
    OM: Deref,
    L: Deref,
    CMH: Deref,
    NS: Deref,
    <CM as Deref>::Target: ChannelMessageHandler,
    <RM as Deref>::Target: RoutingMessageHandler,
    <OM as Deref>::Target: OnionMessageHandler,
    <L as Deref>::Target: Logger,
    <CMH as Deref>::Target: CustomMessageHandler,
    <NS as Deref>::Target: NodeSigner,
pub fn new(
    message_handler: MessageHandler<CM, RM, OM, CMH>,
    current_time: u32,
    ephemeral_random_data: &[u8; 32],
    logger: L,
    node_signer: NS,
) -> PeerManager<Descriptor, CM, RM, OM, L, CMH, NS>
Constructs a new PeerManager with the given message handlers.

ephemeral_random_data is used to derive per-connection ephemeral keys and must be cryptographically secure random bytes.

current_time is used as an always-increasing counter that survives across restarts and is incremented irregularly internally. In general it is best to simply use the current UNIX timestamp; however, if it is not available, a persistent counter that increases once per minute should suffice.
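A sketch of this general constructor, wiring all four handlers through the MessageHandler struct; the handler values are hypothetical, and a node with no custom wire protocol can plug IgnoringMessageHandler into the custom slot:

use lightning::ln::peer_handler::{IgnoringMessageHandler, MessageHandler, PeerManager};

let message_handler = MessageHandler {
    chan_handler: channel_manager.clone(),
    route_handler: gossip_sync.clone(),
    onion_message_handler: onion_messenger.clone(),
    custom_message_handler: IgnoringMessageHandler {},
};

let peer_manager = PeerManager::new(
    message_handler,
    current_time,      // UNIX timestamp, per the note above
    &ephemeral_bytes,  // fresh CSPRNG output
    logger.clone(),
    keys_manager.clone(),
);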
pub fn get_peer_node_ids(&self) -> Vec<(PublicKey, Option<SocketAddress>)>
Get a list of tuples mapping from node id to network addresses for peers which have completed the initial handshake.

For outbound connections, the PublicKey will be the same as the their_node_id parameter passed in to Self::new_outbound_connection; however, entries will only appear once the initial handshake has completed and we are sure the remote peer has the private key for the given PublicKey.

The returned Options will only be Some if an address had been previously given via Self::new_outbound_connection or Self::new_inbound_connection.
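For example, to log the fully-handshaked peer set (both PublicKey and SocketAddress implement Display in recent LDK versions):

for (node_id, addr) in peer_manager.get_peer_node_ids() {
    match addr {
        Some(addr) => println!("peer {} at {}", node_id, addr),
        None => println!("peer {} (no address recorded)", node_id),
    }
}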
pub fn new_outbound_connection(
    &self,
    their_node_id: PublicKey,
    descriptor: Descriptor,
    remote_network_address: Option<SocketAddress>,
) -> Result<Vec<u8>, PeerHandleError>
Indicates a new outbound connection has been established to a node with the given node_id and an optional remote network address.

The remote network address adds the option to report a remote IP address back to a connecting peer using the init message. The user should pass the remote network address of the host they are connected to.

If an Err is returned here you must disconnect the connection immediately.

Returns a small number of bytes to send to the remote node (currently always 50).

Panics if descriptor is duplicative with some other descriptor which has not yet been socket_disconnected.
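A sketch of the outbound flow; descriptor is your mutable SocketDescriptor wrapper around a freshly connected TCP stream and remote_addr is the peer's address (both hypothetical):

match peer_manager.new_outbound_connection(their_node_id, descriptor.clone(), remote_addr) {
    Ok(initial_bytes) => {
        // The outbound side speaks first: push the ~50 handshake bytes out.
        // send_data returns how many bytes were queued; a real driver must
        // buffer any remainder and drain it via write_buffer_space_avail.
        descriptor.send_data(&initial_bytes, true);
    }
    // Per the docs: on Err, disconnect the connection immediately.
    Err(_) => descriptor.disconnect_socket(),
}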
pub fn new_inbound_connection(
    &self,
    descriptor: Descriptor,
    remote_network_address: Option<SocketAddress>,
) -> Result<(), PeerHandleError>
Indicates a new inbound connection has been established to a node with an optional remote network address.

The remote network address adds the option to report a remote IP address back to a connecting peer using the init message. The user should pass the remote network address of the host they are connected to.

May refuse the connection by returning an Err, but will never write bytes to the remote end (outbound connector always speaks first). If an Err is returned here you must disconnect the connection immediately.

Panics if descriptor is duplicative with some other descriptor which has not yet been socket_disconnected.
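The inbound counterpart; since the outbound connector speaks first, there is nothing to write on success:

if peer_manager.new_inbound_connection(descriptor.clone(), remote_addr).is_err() {
    // Connection refused: per the docs, disconnect immediately. No bytes
    // were written to the remote end.
    descriptor.disconnect_socket();
}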
pub fn write_buffer_space_avail(
    &self,
    descriptor: &mut Descriptor,
) -> Result<(), PeerHandleError>
Indicates that there is room to write data to the given socket descriptor.

May return an Err to indicate that the connection should be closed.

May call send_data on the descriptor passed in (or an equal descriptor) before returning. Thus, be very careful with reentrancy issues! The invariants around calling write_buffer_space_avail in case a write did not fully complete must still hold: be ready to call write_buffer_space_avail again if a write call generated here isn’t sufficient!
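A sketch of the writable-socket callback in an event-loop driver (descriptor and the loop around this are hypothetical):

// Invoked when the OS reports the socket has buffer space again.
if peer_manager.write_buffer_space_avail(&mut descriptor).is_err() {
    // Err means the connection should be closed.
    descriptor.disconnect_socket();
}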
pub fn read_event(
    &self,
    peer_descriptor: &mut Descriptor,
    data: &[u8],
) -> Result<bool, PeerHandleError>
Indicates that data was read from the given socket descriptor.

May return an Err to indicate that the connection should be closed.

Will not call back into send_data on any descriptors to avoid reentrancy complexity. Thus, however, you should call process_events after any read_event to generate send_data calls to handle responses.

If Ok(true) is returned, further read_events should not be triggered until a send_data call on this descriptor has resume_read set (preventing DoS issues in the send buffer).

In order to avoid processing too many messages at once per peer, data should be on the order of 4KiB.
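Putting the read-path rules together in one hedged sketch (buf, bytes_read, and the reads_paused flag are hypothetical driver state):

match peer_manager.read_event(&mut descriptor, &buf[..bytes_read]) {
    Ok(pause_read) => {
        // read_event never writes; flush any responses it queued.
        peer_manager.process_events();
        if pause_read {
            // Backpressure: stop feeding this descriptor more reads until
            // a send_data call for it passes resume_read = true.
            reads_paused = true;
        }
    }
    // Err means the connection should be closed.
    Err(_) => descriptor.disconnect_socket(),
}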
pub fn process_events(&self)
Checks for any events generated by our handlers and processes them. Includes sending most response messages as well as messages generated by calls to handler functions directly (e.g. functions like ChannelManager::process_pending_htlc_forwards or send_payment).

May call send_data on SocketDescriptors. Thus, be very careful with reentrancy issues!

You don’t have to call this function explicitly if you are using lightning-net-tokio or one of the other clients provided in our language bindings.

Note that if there are any other calls to this function waiting on lock(s) this may return without doing any work. All available events that need handling will be handled before the other calls return.
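For example, when driving a handler directly outside of read_event, flush the resulting messages yourself (unnecessary under lightning-net-tokio, which calls this for you):

// After invoking a handler function directly...
channel_manager.process_pending_htlc_forwards();
// ...push the messages it queued out to peers via send_data.
peer_manager.process_events();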
pub fn socket_disconnected(&self, descriptor: &Descriptor)
Indicates that the given socket descriptor’s connection is now closed.
pub fn disconnect_by_node_id(&self, node_id: PublicKey)
Disconnect a peer given its node id.

If a peer is connected, this will call disconnect_socket on the descriptor for the peer. Thus, be very careful about reentrancy issues.
pub fn disconnect_all_peers(&self)
Disconnects all currently-connected peers. This is useful on platforms where there may be an indication that TCP sockets have stalled even if we weren’t around to time them out using regular ping/pongs.
pub fn timer_tick_occurred(&self)
Send pings to each peer and disconnect those which did not respond to the last round of pings.

This may be called on any timescale you want; however, roughly once every ten seconds is preferred. The call rate determines both how often we send a ping to our peers and how much time they have to respond before we disconnect them.

May call send_data on all SocketDescriptors. Thus, be very careful with reentrancy issues!
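A sketch of driving the timer from a plain background thread, assuming pm is a clone of an Arc-wrapped PeerManager (e.g. a SimpleArcPeerManager):

use std::{thread, time::Duration};

thread::spawn(move || loop {
    // Roughly every ten seconds, per the guidance above.
    thread::sleep(Duration::from_secs(10));
    pm.timer_tick_occurred();
});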
pub fn broadcast_node_announcement(
    &self,
    rgb: [u8; 3],
    alias: [u8; 32],
    addresses: Vec<SocketAddress>,
)
Generates a signed node_announcement from the given arguments, sending it to all connected peers. Note that peers will likely ignore this message unless we have at least one public channel which has at least six confirmations on-chain.

rgb is a node “color” and alias is a printable human-readable string to describe this node to humans. They carry no in-protocol meaning.

addresses represent the set (possibly empty) of socket addresses on which this node accepts incoming connections. These will be included in the node_announcement, publicly tying these addresses together and to this node. If you wish to preserve user privacy, addresses should likely contain only Tor Onion addresses.

Panics if addresses is absurdly large (more than 100).
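For example (the address is illustrative, and SocketAddress::TcpIpV4 is assumed to be the plain-IPv4 variant, as in recent LDK):

use lightning::ln::msgs::SocketAddress;

// The alias is exactly 32 bytes; shorter names are zero-padded.
let mut alias = [0u8; 32];
alias[..8].copy_from_slice(b"my-node!");

peer_manager.broadcast_node_announcement(
    [0x22, 0x77, 0xcc], // node "color"; no in-protocol meaning
    alias,
    vec![SocketAddress::TcpIpV4 { addr: [203, 0, 113, 1], port: 9735 }],
);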
Trait Implementations
impl<Descriptor, CM, RM, OM, L, CMH, NS> APeerManager for PeerManager<Descriptor, CM, RM, OM, L, CMH, NS> where
    Descriptor: SocketDescriptor,
    CM: Deref,
    RM: Deref,
    OM: Deref,
    L: Deref,
    CMH: Deref,
    NS: Deref,
    <CM as Deref>::Target: ChannelMessageHandler,
    <RM as Deref>::Target: RoutingMessageHandler,
    <OM as Deref>::Target: OnionMessageHandler,
    <L as Deref>::Target: Logger,
    <CMH as Deref>::Target: CustomMessageHandler,
    <NS as Deref>::Target: NodeSigner,
type Descriptor = Descriptor
type CMT = <CM as Deref>::Target
type CM = CM
type RMT = <RM as Deref>::Target
type RM = RM
type OMT = <OM as Deref>::Target
type OM = OM
type LT = <L as Deref>::Target
type L = L
type CMHT = <CMH as Deref>::Target
type CMH = CMH
type NST = <NS as Deref>::Target
type NS = NS
fn as_ref(&self) -> &PeerManager<Descriptor, CM, RM, OM, L, CMH, NS>
Gets a reference to the underlying PeerManager.
Auto Trait Implementations
impl<Descriptor, CM, RM, OM, L, CMH, NS> !Freeze for PeerManager<Descriptor, CM, RM, OM, L, CMH, NS>
impl<Descriptor, CM, RM, OM, L, CMH, NS> RefUnwindSafe for PeerManager<Descriptor, CM, RM, OM, L, CMH, NS> where
    NS: RefUnwindSafe,
    L: RefUnwindSafe,
    CM: RefUnwindSafe,
    RM: RefUnwindSafe,
    OM: RefUnwindSafe,
    CMH: RefUnwindSafe,
impl<Descriptor, CM, RM, OM, L, CMH, NS> Send for PeerManager<Descriptor, CM, RM, OM, L, CMH, NS>
impl<Descriptor, CM, RM, OM, L, CMH, NS> Sync for PeerManager<Descriptor, CM, RM, OM, L, CMH, NS>
impl<Descriptor, CM, RM, OM, L, CMH, NS> Unpin for PeerManager<Descriptor, CM, RM, OM, L, CMH, NS>
impl<Descriptor, CM, RM, OM, L, CMH, NS> UnwindSafe for PeerManager<Descriptor, CM, RM, OM, L, CMH, NS> where
    NS: UnwindSafe,
    L: UnwindSafe,
    CM: UnwindSafe,
    RM: UnwindSafe,
    OM: UnwindSafe,
    CMH: UnwindSafe,
Blanket Implementations

impl<'a, T, E> AsTaggedExplicit<'a, E> for T where
    T: 'a,

impl<'a, T, E> AsTaggedImplicit<'a, E> for T where
    T: 'a,

impl<T> BorrowMut<T> for T where
    T: ?Sized,
fn borrow_mut(&mut self) -> &mut T

impl<T> Instrument for T
fn instrument(self, span: Span) -> Instrumented<Self>

impl<T> IntoRequest<T> for T
fn into_request(self) -> Request<T>
Wraps the input message T in a tonic::Request.